CN105550636A - Method and device for identifying target types - Google Patents

Method and device for identifying target types

Info

Publication number
CN105550636A
CN105550636A (application CN201510884182.8A)
Authority
CN
China
Prior art keywords
target
acoustical signal
confidence
type
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510884182.8A
Other languages
Chinese (zh)
Other versions
CN105550636B (en)
Inventor
王志峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 3 Research Institute
Original Assignee
CETC 3 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 3 Research Institute filed Critical CETC 3 Research Institute
Priority to CN201510884182.8A priority Critical patent/CN105550636B/en
Publication of CN105550636A publication Critical patent/CN105550636A/en
Application granted granted Critical
Publication of CN105550636B publication Critical patent/CN105550636B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 - Feature extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02 - Preprocessing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a method and a device for identifying target types. In the method, an acoustic signal of a target is acquired through a planar microphone array, the target being any one of a low-altitude unmanned aerial vehicle, a light aircraft, a powered delta wing, a cruise missile, and a powered paraglider. The direction of the acoustic signal is estimated, and spatial filtering is applied to the signal after direction estimation. After spatial filtering, features of the acoustic signal, feature vectors corresponding to those features, and features of the target are extracted. Type confidences of the target are determined from the features of the acoustic signal, the feature vectors of the acoustic signal, and the features of the target, and the type confidences are then identified through fusion recognition to determine the type of the target. The method improves the accuracy of target type identification.

Description

Method and device for identifying target types
Technical field
The present invention relates to the field of detection technology, and in particular to a method and device for identifying target types.
Background technology
China has a vast territory, with important defense targets distributed across the country that are vulnerable to attack by high-technology aircraft such as armed helicopters, cruise missiles, and unmanned aerial vehicles. There is thus an urgent need to establish low-altitude target detection and early-warning systems protecting important areas. Low-flying targets are difficult for ordinary radar to find because of their low flight altitude and small radar cross section (RCS), but the sound these targets radiate in flight is difficult to eliminate. Using this sound to locate and identify low-flying aircraft is a key technology of passive acoustic detection and a common concern of researchers today.
Because different target types radiate different acoustic signals, the voiceprint features of the acoustic signals of different targets can be extracted and classified with pattern recognition methods, achieving the goal of distinguishing different targets. Specifically, in the prior art, identifying a target type requires collecting acoustic signal data of the target, performing feature extraction and analysis on the collected data, and then identifying the type of the target with classification techniques including template matching and neural networks.
However, template matching can only be used under specific conditions, and neural network techniques require sufficient sample data and full training, with the network weight coefficients converging to a globally optimal solution; otherwise the neural network performs poorly in practice. The classification techniques adopted in the prior art therefore struggle to achieve the desired effect in actual use, affecting the accuracy of the recognition results.
Summary of the invention
Embodiments of the present invention provide a method for identifying target types that can effectively improve the accuracy of target identification.
In a first aspect, an embodiment of the present invention provides a method for identifying target types, comprising:
acquiring an acoustic signal of a target through a planar microphone array, the target being any one of a low-altitude unmanned aerial vehicle, a light aircraft, a powered delta wing, a cruise missile, and a powered paraglider;
estimating the direction of the acoustic signal, and performing spatial filtering on the acoustic signal after direction estimation;
after spatial filtering, extracting features of the acoustic signal, feature vectors corresponding to the features of the acoustic signal, and features of the target;
determining type confidences of the target according to the features of the acoustic signal, the feature vectors of the acoustic signal, and the features of the target;
identifying the type confidences through fusion recognition to determine the type of the target.
Optionally, after acquiring the acoustic signal of the target through the planar microphone array, and before estimating the direction of the acoustic signal and performing spatial filtering on the signal after direction estimation, the method further comprises:
performing adaptive noise suppression on the acoustic signal.
Optionally, performing adaptive noise suppression on the acoustic signal comprises:
suppressing noise in the acoustic signal using an adaptive noise suppression technique based on wavelet decomposition.
Optionally, after the direction of the acoustic signal is estimated, the flight track of the target is determined from its direction at each moment.
Optionally, extracting the features of the acoustic signal comprises:
when the fundamental frequency of the line spectrum of the radiated sound power spectrum is greater than a threshold, analyzing the frequency-domain characteristics with a helicopter line-spectrum and harmonic-set detection algorithm to obtain line-spectrum frequency features.
Extracting the feature vectors of the acoustic signal comprises:
calculating autocorrelation coefficients and cepstrum coefficients;
combining the autocorrelation coefficients and the cepstrum coefficients to obtain a time-domain feature vector of a certain dimension;
performing non-parametric power spectrum analysis to calculate the power spectrum;
using the calculated power spectrum to obtain a frequency-domain feature vector of a certain dimension;
calculating the energy, standard deviation, spectral centroid, and wavelet-packet sample entropy features of each frequency band of the acoustic signal;
combining the energy, standard deviation, spectral centroid, and wavelet-packet sample entropy features to obtain a wavelet-packet feature vector of a certain dimension.
Extracting the features of the target comprises:
performing tracking prediction on the azimuth of the target and its rate of change to obtain the track dynamic features of the target.
Optionally, determining the type confidence of the target according to the features of the acoustic signal comprises:
establishing a characteristic-frequency library in advance from the line-spectrum frequency features;
extracting the characteristic frequencies of the acoustic signal in real time;
determining a first type confidence of the target according to how the extracted characteristic frequencies match the characteristic-frequency library.
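As a concrete illustration of this matching step, the following is a minimal sketch of deriving a first type confidence from a characteristic-frequency library. The library contents, the matching tolerance, and all names are hypothetical; the patent only states that a library is built in advance and matched against frequencies extracted in real time.

```python
# Hypothetical characteristic-frequency library: target type -> line-spectrum
# frequencies (Hz) previously observed for that type (illustrative values only).
FREQ_LIBRARY = {
    "uav": [85.0, 170.0, 255.0],
    "helicopter": [110.0, 220.0, 330.0],
}

def match_confidence(extracted_hz, library_hz, tol_hz=2.0):
    """Fraction of library frequencies matched by some extracted frequency
    within a tolerance (the tolerance value is an assumption)."""
    hits = 0
    for f in library_hz:
        if any(abs(f - g) <= tol_hz for g in extracted_hz):
            hits += 1
    return hits / len(library_hz)

def first_type_confidence(extracted_hz):
    """Per-type confidence from template matching of characteristic frequencies."""
    return {t: match_confidence(extracted_hz, fs) for t, fs in FREQ_LIBRARY.items()}

# Frequencies extracted in real time (illustrative): close to the UAV entry.
conf = first_type_confidence([84.6, 169.5, 256.1])
```

A match fraction of 1.0 then reads as full confidence that the observed line spectrum belongs to that library entry.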
Determining the type confidence of the target according to the feature vectors of the acoustic signal comprises:
determining the sizes of the input and output layers of a time-domain feature sub-neural-network from the dimension of the time-domain feature vector and the number of target types, respectively;
determining a second type confidence of the target with the time-domain feature sub-neural-network;
determining the sizes of the input and output layers of a frequency-domain feature sub-neural-network from the dimension of the frequency-domain feature vector and the number of target types, respectively;
determining a third type confidence of the target with the frequency-domain feature sub-neural-network;
determining the sizes of the input and output layers of a wavelet-packet feature sub-neural-network from the dimension of the wavelet-packet energy feature vector and the number of target types, respectively;
determining a fourth type confidence of the target with the wavelet-packet feature sub-neural-network.
Determining the type confidence of the target according to the features of the target comprises:
associating the flight track with the track dynamic features through track-association recognition to determine a fifth type confidence of the target.
Optionally, identifying the type confidences through fusion recognition to determine the type of the target comprises:
calculating a total type-confidence value from the first, second, third, fourth, and fifth type confidences and their weights, and choosing the type corresponding to the maximum value as the type of the target.
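The weighted fusion described above can be sketched as follows. The five confidence vectors and the equal weights are illustrative assumptions; the patent does not specify how the weights are chosen.

```python
def fuse_confidences(conf_lists, weights):
    """Weighted sum of the five per-type confidence vectors; the target type
    with the largest fused score is chosen."""
    n_types = len(conf_lists[0])
    total = [0.0] * n_types
    for w, conf in zip(weights, conf_lists):
        for i in range(n_types):
            total[i] += w * conf[i]
    best = max(range(n_types), key=lambda i: total[i])
    return total, best

types = ["uav", "light_aircraft", "delta_wing", "cruise_missile", "paraglider"]
# Five confidence vectors (frequency match, time-domain NN, frequency-domain NN,
# wavelet-packet NN, track association) -- illustrative values only.
confs = [
    [0.8, 0.1, 0.0, 0.1, 0.0],
    [0.6, 0.2, 0.1, 0.1, 0.0],
    [0.7, 0.1, 0.1, 0.1, 0.0],
    [0.5, 0.3, 0.1, 0.1, 0.0],
    [0.9, 0.0, 0.0, 0.1, 0.0],
]
weights = [0.2, 0.2, 0.2, 0.2, 0.2]   # assumed equal weighting
total, best = fuse_confidences(confs, weights)
```

Here all five sources agree on the first type, so the fused maximum selects it; in practice the weights would reflect the reliability of each recognition channel.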
Optionally, before identifying the type confidences through fusion recognition to determine the type of the target, the method further comprises:
globally optimizing the neural-network weight coefficients with a classification method combining a genetic algorithm and a neural network.
Optionally, when the target is a low-altitude, low-speed small target, the acoustic signal of the target is acquired through a cross-shaped microphone array comprising 12 microphone elements on 4 orthogonal horizontal bars.
In a second aspect, an embodiment of the present invention provides a device for identifying target types, comprising:
an acquisition module for acquiring an acoustic signal of a target through a planar microphone array, the target being any one of a low-altitude unmanned aerial vehicle, a light aircraft, a powered delta wing, a cruise missile, and a powered paraglider;
a first processing module for estimating the direction of the acoustic signal and performing spatial filtering on the acoustic signal after direction estimation;
an extraction module for extracting, after spatial filtering, features of the acoustic signal, feature vectors corresponding to the features of the acoustic signal, and features of the target;
a first determination module for determining type confidences of the target according to the features of the acoustic signal, the feature vectors of the acoustic signal, and the features of the target;
an identification module for identifying the type confidences through fusion recognition to determine the type of the target.
Optionally, the device further comprises:
a second processing module for performing adaptive noise suppression on the acoustic signal after it is acquired through the planar microphone array and before direction estimation and spatial filtering.
Optionally, the second processing module is specifically configured to suppress noise in the acoustic signal using an adaptive noise suppression technique based on wavelet decomposition.
Optionally, the device further comprises:
a second determination module for determining, after the direction of the acoustic signal is estimated, the flight track of the target from its direction at each moment.
Optionally, the extraction module is specifically configured to:
when the fundamental frequency of the line spectrum of the radiated sound power spectrum is greater than a threshold, analyze the frequency-domain characteristics with a helicopter line-spectrum and harmonic-set detection algorithm to obtain line-spectrum frequency features;
calculate autocorrelation coefficients and cepstrum coefficients;
combine the autocorrelation coefficients and the cepstrum coefficients to obtain a time-domain feature vector of a certain dimension;
perform non-parametric power spectrum analysis to calculate the power spectrum;
use the calculated power spectrum to obtain a frequency-domain feature vector of a certain dimension;
calculate the energy, standard deviation, spectral centroid, and wavelet-packet sample entropy features of each frequency band of the acoustic signal;
combine the energy, standard deviation, spectral centroid, and wavelet-packet sample entropy features to obtain a wavelet-packet feature vector of a certain dimension;
perform tracking prediction on the azimuth of the target and its rate of change to obtain the track dynamic features of the target.
Optionally, the first determination module is specifically configured to:
establish a characteristic-frequency library in advance from the line-spectrum frequency features;
extract the characteristic frequencies of the acoustic signal in real time;
determine a first type confidence of the target according to how the extracted characteristic frequencies match the characteristic-frequency library;
determine the sizes of the input and output layers of a time-domain feature sub-neural-network from the dimension of the time-domain feature vector and the number of target types, respectively;
determine a second type confidence of the target with the time-domain feature sub-neural-network;
determine the sizes of the input and output layers of a frequency-domain feature sub-neural-network from the dimension of the frequency-domain feature vector and the number of target types, respectively;
determine a third type confidence of the target with the frequency-domain feature sub-neural-network;
determine the sizes of the input and output layers of a wavelet-packet feature sub-neural-network from the dimension of the wavelet-packet energy feature vector and the number of target types, respectively;
determine a fourth type confidence of the target with the wavelet-packet feature sub-neural-network;
associate the flight track with the track dynamic features through track-association recognition to determine a fifth type confidence of the target.
Optionally, the identification module is specifically configured to:
calculate a total type-confidence value from the first, second, third, fourth, and fifth type confidences and their weights, and choose the type corresponding to the maximum value as the type of the target.
Optionally, the device further comprises:
a global optimization module for globally optimizing the neural-network weight coefficients, before the type confidences are identified through fusion recognition to determine the type of the target, with a classification method combining a genetic algorithm and a neural network.
Optionally, the acquisition module is specifically configured, when the target is a low-altitude, low-speed small target, to acquire the acoustic signal of the target through a cross-shaped microphone array comprising 12 microphone elements on 4 orthogonal horizontal bars.
As can be seen from the above technical solutions, the embodiments of the present invention have the following advantages:
In the embodiments of the present invention, an acoustic signal of a target is acquired through a planar microphone array, the target being any one of a low-altitude unmanned aerial vehicle, a light aircraft, a powered delta wing, a cruise missile, and a powered paraglider; the direction of the acoustic signal is estimated, and spatial filtering is performed on the acoustic signal after direction estimation; after spatial filtering, features of the acoustic signal, feature vectors corresponding to those features, and features of the target are extracted; type confidences of the target are determined according to the features of the acoustic signal, the feature vectors of the acoustic signal, and the features of the target; and the type confidences are identified through fusion recognition to determine the type of the target. By determining the type confidences of the target from the features of the acoustic signal, the feature vectors of the acoustic signal, and the features of the target, and then identifying those confidences through fusion recognition, the embodiments of the present invention effectively improve the accuracy of target type identification.
Brief description of the drawings
Fig. 1 is a schematic diagram of an embodiment of the method for identifying target types in an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of the microphone array in an embodiment of the present invention;
Fig. 3 is a schematic flowchart of the wavelet-decomposition adaptive noise suppression algorithm in an embodiment of the present invention;
Fig. 4 is a schematic flowchart of characteristic-frequency template matching in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the device for identifying target types in an embodiment of the present invention.
Detailed description
The embodiments of the present invention provide a method for identifying target types that improves the accuracy of target type identification.
To enable those skilled in the art to better understand the present solution, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the scope of protection of the present invention.
The terms "first", "second", and the like (if present) in the specification, claims, and drawings of the present invention are used to distinguish similar objects and need not describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments described here can be implemented in orders other than those illustrated or described. In addition, the terms "comprising" and "having" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to the process, method, product, or device.
An embodiment of the method for identifying target types in the embodiments of the present invention is introduced below.
Referring to Fig. 1, an embodiment of the method for identifying target types in the embodiments of the present invention comprises:
101. Acquire an acoustic signal of a target through a planar microphone array, the target being any one of a low-altitude unmanned aerial vehicle, a light aircraft, a powered delta wing, a cruise missile, and a powered paraglider.
Optionally, as shown in Fig. 2, when the target is a low-altitude, low-speed small target, the acoustic signal of the target is acquired through a cross-shaped microphone array comprising 12 microphone elements on 4 orthogonal horizontal bars.
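The geometry of such a cross array can be sketched as follows; the element spacing is an assumed value, since the patent gives only the element count (12) and the four orthogonal arms, not the pitch.

```python
import numpy as np

def cross_array_positions(n_arms=4, elems_per_arm=3, spacing_m=0.5):
    """Element coordinates of a planar cross array: 4 orthogonal horizontal
    arms with 3 microphones each (12 elements total). The 0.5 m spacing is
    an assumption for illustration."""
    positions = []
    for arm in range(n_arms):
        angle = arm * np.pi / 2.0          # arms at 0, 90, 180, 270 degrees
        ux, uy = np.cos(angle), np.sin(angle)
        for k in range(1, elems_per_arm + 1):
            positions.append((k * spacing_m * ux, k * spacing_m * uy))
    return np.array(positions)

pos = cross_array_positions()   # (12, 2) array, symmetric about the center
```

The symmetry of the four arms puts the array centroid at the origin, which is convenient for the direction estimation and beamforming steps that follow.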
102. Estimate the direction of the acoustic signal, and perform spatial filtering on the acoustic signal after direction estimation.
It can be understood that in the embodiments of the present invention, detection and orientation algorithms can be used to orient the time-domain-filtered signal and estimate the target azimuth; the orientation results at each moment form the flight track of the target.
Target detection and orientation use general-purpose algorithms, and spatial filtering uses a general beamforming algorithm, which the present invention does not describe further.
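The patent leaves the beamformer generic; as one plausible instance of the spatial filtering step, a plain frequency-domain delay-and-sum beamformer could look like the sketch below. The sound speed and the steering sign convention are assumptions.

```python
import numpy as np

def delay_and_sum(signals, mic_xy, azimuth_rad, fs, c=343.0):
    """Frequency-domain delay-and-sum beamformer steered to a given azimuth.
    signals: (n_mics, n_samples); mic_xy: (n_mics, 2) element positions in
    meters. A sketch only; the patent just says a general beamforming
    algorithm is used."""
    direction = np.array([np.cos(azimuth_rad), np.sin(azimuth_rad)])
    delays = mic_xy @ direction / c               # per-microphone delay, seconds
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, tau in zip(signals, delays):
        # advance each channel by its geometric delay, then average
        spec = np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau)
        out += np.fft.irfft(spec, n)
    return out / len(signals)

# sanity demo: co-located microphones leave the signal unchanged
fs = 8000
t = np.arange(1024) / fs
s = np.sin(2 * np.pi * 440 * t)
out = delay_and_sum(np.stack([s, s]), np.zeros((2, 2)), azimuth_rad=0.0, fs=fs)
```

Steering toward the estimated azimuth adds the target signal coherently across channels while averaging down uncorrelated noise, which is the spatial filtering effect step 102 relies on.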
Optionally, between steps 101 and 102 of the present embodiment, the method may further comprise: performing adaptive noise suppression on the acoustic signal.
It can be understood that the target acoustic signal of each channel includes other noise such as ambient noise and wind noise, some of which is white noise and some colored noise. In the present embodiment, performing adaptive noise suppression on the acoustic signal, that is, filtering each channel signal in the time domain, improves the signal-to-noise ratio.
Specifically, in the present embodiment, adaptive noise suppression based on wavelet decomposition is applied to the acoustic signal, as shown in Fig. 3. In this technique the wavelet basis is the db6 wavelet. First, the target acoustic signal collected at each element of the microphone array undergoes 6-level wavelet decomposition, and each layer of data is reconstructed separately. A precondition is assumed: the ambient noise of each channel is uncorrelated or only weakly correlated, so after wavelet decomposition the channels of the layer containing the noise data are also uncorrelated or only weakly correlated. According to this precondition, cross-correlation is calculated on the decomposed data of each channel with a cross-correlation coefficient threshold of 0.3; when the cross-correlation coefficient of the channels of a layer is less than 0.3, that layer is considered a noise layer. All noise layers are extracted and reconstructed to obtain the noise data, which is input to a general adaptive filter for noise suppression.
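The noise-layer decision in this procedure can be sketched as follows. The 6-level db6 decomposition and per-layer reconstruction are assumed already done (e.g. with a wavelet library) and are not shown; the sketch only applies the 0.3 cross-correlation threshold to the per-layer channel signals.

```python
import numpy as np

def noise_layers(layer_signals, threshold=0.3):
    """Flag noise layers per the stated precondition: a layer whose channels
    are nearly uncorrelated (mean absolute pairwise correlation coefficient
    below 0.3) is treated as noise.
    layer_signals: (n_layers, n_channels, n_samples) reconstructed layers."""
    flagged = []
    for idx, chans in enumerate(layer_signals):
        r = np.corrcoef(chans)                           # channel correlation matrix
        off_diag = r[~np.eye(len(chans), dtype=bool)]
        if np.mean(np.abs(off_diag)) < threshold:
            flagged.append(idx)
    return flagged

# demo: one coherent layer (same tone on all channels) and one noise layer
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
coherent = np.sin(2 * np.pi * 50 * t)
layer_sig = np.stack([np.stack([coherent] * 3),          # correlated target layer
                      rng.standard_normal((3, 2000))])   # uncorrelated noise layer
flagged = noise_layers(layer_sig)
```

The flagged layers would then be summed back into a noise reference and fed to the adaptive filter, as the procedure above describes.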
103. After spatial filtering, extract the features of the acoustic signal, the feature vectors corresponding to the features of the acoustic signal, and the features of the target.
Optionally, when the extracted feature is a frequency feature, extracting the features of the acoustic signal comprises: when the fundamental frequency of the line spectrum of the radiated sound power spectrum is greater than a threshold, analyzing the frequency-domain characteristics with a helicopter line-spectrum and harmonic-set detection algorithm to obtain line-spectrum frequency features.
It can be understood that, in terms of the power spectrum, small low-flying targets mostly show obvious line-spectrum or narrowband frequency features, and broadband features like those of jet aircraft are rare. Because the engines generally used are small, their rotational speed is usually high in order to reach a certain power, and the fundamental frequency of the line spectrum of the radiated sound power spectrum lies at 70 Hz to 160 Hz. A helicopter line-spectrum and harmonic-set detection algorithm is used to extract line-spectrum frequency features above 50 Hz.
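The patent names a helicopter line-spectrum and harmonic-set detection algorithm without detailing it; the sketch below is a generic stand-in that scores candidate fundamentals by how many of their first harmonics coincide with spectral peaks. The peak-picking threshold, tolerance, and candidate range are assumptions.

```python
import numpy as np

def harmonic_set_fundamental(signal, fs, fmin=50.0, fmax=400.0):
    """Rough harmonic-set detection: pick strong spectral peaks, then score
    each candidate fundamental by how many of its first 4 harmonics land
    within 2 Hz of a peak. A stand-in, not the patent's exact algorithm."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    peak_freqs = freqs[spec > 10.0 * spec.mean()]        # crude peak picking
    if len(peak_freqs) == 0:
        return None
    best_f0, best_hits = None, 0
    for f0 in np.arange(fmin, fmax, 1.0):
        hits = sum(np.min(np.abs(peak_freqs - k * f0)) < 2.0 for k in range(1, 5))
        if hits > best_hits:
            best_f0, best_hits = f0, hits
    return best_f0

# demo: harmonic series at 100, 200, 300, 400 Hz
fs = 4000
t = np.arange(fs) / fs
sig = sum(np.sin(2 * np.pi * f * t) for f in (100, 200, 300, 400))
f0 = harmonic_set_fundamental(sig, fs)
```

A fundamental recovered this way, together with its harmonic positions, is the kind of line-spectrum frequency feature the matching step in Fig. 4 compares against the library.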
Optionally, extracting the feature vectors of the acoustic signal comprises the following several approaches:
First, for the time-domain features of the acoustic signal, calculate the autocorrelation coefficients and cepstrum coefficients, and combine them to obtain a time-domain feature vector of a certain dimension.
Specifically, when there are 20 autocorrelation coefficients and 20 cepstrum coefficients, a 40-dimensional time-domain feature vector is obtained. The autocorrelation coefficients can be obtained by the following method:
For a small low-altitude, low-speed target, the single-channel sound signal after beamforming can be regarded as a stationary random signal, whose time-domain autocorrelation function can be approximated as:

$R_{xx}(\tau) = \sum_{n=-N}^{N} x(n)\,x(n+\tau) \qquad (1)$

where $R_{xx}$ is the computed time-domain autocorrelation, $x(n)$ is the beamformed time-domain signal, and $\tau$ is the time delay. After the autocorrelation coefficients are obtained, 99 values to the left of the zero-delay point and 100 values to the right are taken, and every 10 consecutive points from left to right are averaged in turn, giving 20 feature points in total; the differences between the autocorrelation feature points of different target types are used to identify the target type.
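The averaging described above can be sketched as follows, assuming the autocorrelation is estimated with `np.correlate`; the exact estimator is not specified in the text.

```python
import numpy as np

def autocorr_features(x):
    """Time-domain autocorrelation features: estimate R_xx, keep 99 lags left
    of zero and 100 to the right (200 points including zero lag), then average
    each consecutive run of 10 points to obtain 20 feature values."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    full = np.correlate(x, x, mode="full")     # lags -(n-1) .. (n-1)
    zero = n - 1                               # index of the zero-delay point
    window = full[zero - 99 : zero + 101]      # 200 autocorrelation values
    return window.reshape(20, 10).mean(axis=1)

# demo on a 50 Hz tone sampled at 1 kHz
feats = autocorr_features(np.sin(2 * np.pi * 50 * np.arange(2000) / 1000.0))
```

The resulting 20 values form the $r_1,\dots,r_{20}$ half of the time-domain feature vector $F_t$ defined below.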
The cepstrum coefficients can be obtained by the following method:
Assume the target acoustic signal is limited in frequency and amplitude and lies entirely within the microphone's designed measurement range, so the microphone can be regarded as a causal stable system describable by a transfer function in rational form. This system satisfies the minimum-phase condition, so a linear prediction model can be used to model it, and its linear-prediction cepstrum coefficients can then be obtained. Different target signal types generally have different cepstrum coefficients, so these coefficients can be used to identify the target type.
The linear prediction model expresses the output value at the current time as an estimate from the output values over a past period, each multiplied by a corresponding coefficient:

$\hat{x}(n) = a_1 x(n-1) + a_2 x(n-2) + \dots + a_p x(n-p) = \sum_{k=1}^{p} a_k x(n-k) \qquad (2)$

where $\hat{x}(n)$ is the predicted value at the current time, $x(n)$ is the actual value at the current time, $a_i,\ i = 1, 2, \dots, p$ are the coefficients of the linear prediction model, and $p$ is the model order. The coefficients of the linear prediction model can be estimated by the minimum mean-square-error method. After the linear prediction coefficients are obtained, the cepstrum coefficients are calculated as follows:
$c_i = a_i + \sum_{k=1}^{i-1} \frac{k}{i}\, c_k\, a_{i-k}, \quad i = 1, 2, \dots, p \qquad (3)$

where $c_i$ are the cepstrum coefficients and the other symbols are as in Eq. (2).
In actual computation the order is set to 20, so 20 cepstrum coefficients are estimated as target features; combined with the 20 autocorrelation coefficients, 40 parameters in total are extracted as the target time-domain feature vector $F_t = (r_1, r_2, \dots, r_{20}, m_1, m_2, \dots, m_{20})$.
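A minimal sketch of Eqs. (2)-(3): linear prediction coefficients via the Levinson-Durbin recursion on the sample autocorrelation, followed by the cepstrum recursion. Levinson-Durbin is one standard way to satisfy the minimum mean-square-error criterion the text names; the patent does not mandate a particular solver.

```python
import numpy as np

def lpc_coeffs(x, p):
    """Predictor coefficients a_1..a_p of Eq. (2), autocorrelation method
    solved with the Levinson-Durbin recursion."""
    x = np.asarray(x, dtype=float)
    r = np.array([x[: len(x) - k] @ x[k:] for k in range(p + 1)])
    a = np.zeros(p + 1)            # A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p
    a[0] = 1.0
    err = r[0]
    for i in range(1, p + 1):
        kappa = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err
        a[1:i] = a[1:i] + kappa * a[i - 1:0:-1]
        a[i] = kappa
        err *= 1.0 - kappa * kappa
    return -a[1:]                  # a_k such that x_hat(n) = sum a_k x(n-k)

def lpc_cepstrum(a, p):
    """Cepstrum coefficients from the LPC coefficients via Eq. (3)."""
    c = np.zeros(p + 1)            # c[0] unused; 1-based indexing as in Eq. (3)
    for i in range(1, p + 1):
        c[i] = a[i - 1] + sum((k / i) * c[k] * a[i - k - 1] for k in range(1, i))
    return c[1:]

# demo: 20 cepstrum features of a noisy tone (seeded noise keeps LPC well posed)
rng = np.random.default_rng(0)
sig = np.sin(0.3 * np.arange(1000)) + 0.1 * rng.standard_normal(1000)
cep20 = lpc_cepstrum(lpc_coeffs(sig, 20), 20)
```

These 20 values are the $m_1,\dots,m_{20}$ half of the time-domain feature vector $F_t$.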
Second, for the frequency-domain features of the acoustic signal, perform non-parametric power spectrum analysis to calculate the power spectrum, and use the calculated power spectrum to obtain a frequency-domain feature vector of a certain dimension.
Specifically, the frequency-domain feature vector can be obtained as follows:
Non-parametric power spectrum analysis is performed on the beamformed acoustic signal, generally using the modified periodogram method, computed as:
$P_X(\omega) = \frac{1}{NW}\left|\sum_{n=0}^{N-1} x(n)\,c(n)\,e^{-jn\omega}\right|^2, \qquad W = \frac{1}{N}\sum_{n=0}^{N-1}|c(n)|^2 \qquad (4)$

where $x(n)$ is the time-domain sample sequence, $P_X(\omega)$ is the resulting power spectrum, and $c(n)$ is the window function.
After the power spectrum analysis of the beamformed signal, the 10 Hz to 650 Hz portion is taken and divided into 40 segments; averaging the energy of each segment yields the 40-dimensional target frequency-domain feature vector.
Three: when the acoustic-signal feature is a wavelet-packet feature, compute the energy, standard deviation, spectral centroid, and wavelet-packet sample-entropy features of each frequency band of the acoustic signal, and combine these four features to obtain a wavelet-packet feature vector of a given dimension.
Specifically, in practical applications these four kinds of features can be used to compute a 32-dimensional wavelet-packet feature vector, as follows:
Wavelet-packet decomposition is an effective time-frequency analysis method that divides the signal into multiple levels from low frequency to high. At each level, features such as band energy, spectral centroid, second moment, and wavelet entropy fully characterize this class of targets. The wavelet-packet feature extraction steps are as follows:
1) Signal decomposition: apply a 3-level wavelet-packet decomposition with the sym6 wavelet to the beam-output data, yielding wavelet-packet coefficients for 8 frequency bands.
2) Signal reconstruction: reconstruct the 8 bands of wavelet-packet coefficients obtained in step 1), yielding 8 reconstructed band time-domain waveforms.
3) Band feature computation: let the sampling length of the signal be N, and compute each feature from the signals obtained in step 2) as follows:
A) Energy feature
The energy of each frequency band is computed as follows:
E'_i = Σ_{k=1}^{N} |x_i(k)|²,  i = 1, 2, …, 8    (5)
Energy normalization:
E_i = E'_i / Σ_{j=1}^{8} E'_j    (6)
B) Standard deviation feature
V_i = ( (1/(N−1)) · Σ_{m=1}^{N} (x_i(m) − x̄_i)² )^{1/2}    (7)
C) Spectral centroid feature
Compute the power spectrum of each frequency band:
Using formula (4), compute the power spectrum of each band to obtain the amplitude and phase at each discrete frequency (formula (8)).
In this formula, fs is the sampling rate, k the discrete frequency index, and A_i(k) the amplitude at the corresponding frequency; the remaining term is the phase at that frequency.
Compute the spectral centroid of each band:
C_i = Σ_{m=1}^{N} A_i(m)·k(m) / Σ_{m=1}^{N} A_i(m)    (9)
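The per-band quantities of formulas (5)–(9) can be sketched as follows. The band waveforms are taken as given (e.g., from the wavelet-packet reconstruction of step 2), and the centroid is returned in discrete-frequency-bin units, as in formula (9).

```python
import numpy as np

def band_features(bands):
    """Per-band features: normalised energy E_i (formulas (5)-(6)),
    standard deviation V_i (formula (7)), and spectral centroid C_i
    (formula (9)) for a list of reconstructed band waveforms x_i(k)."""
    E_raw = np.array([np.sum(b ** 2) for b in bands])        # formula (5)
    E = E_raw / E_raw.sum()                                  # formula (6)
    V = np.array([np.std(b, ddof=1) for b in bands])         # formula (7)
    C = []
    for b in bands:
        A = np.abs(np.fft.rfft(b))                           # amplitude spectrum
        k = np.arange(len(A))
        C.append(np.sum(A * k) / np.sum(A))                  # formula (9)
    return E, V, np.array(C)
```

A band dominated by a higher frequency should receive a larger centroid, and the normalised energies should sum to one.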
D) Wavelet-packet sample-entropy feature
The sample entropy of each band is computed as follows:
S1: Given the pattern dimension d = N/10, form d-dimensional vectors from each band sequence:
X_i(m) = [x_i(m), x_i(m+1), …, x_i(m+d−1)],  m = 1, 2, …, N−d+1    (10)
S2: Compute the distance between X_i(m) and X_i(g):
L_i(m, g) = max_b |x_i(m+b) − x_i(g+b)|,  b = 1, 2, …, d−1,  m ≠ g    (11)
S3: Given a threshold r, count for each m the number P of distances L_i(m, g) < r, and compute:
B_m^d(r) = P / (N − d + 1)    (12)
Average over all m:
B^d(r) = (1/(N−d+1)) · Σ_{m=1}^{N−d+1} B_m^d(r)    (13)
S4: Set d = d + 1 and repeat steps S1–S3 to obtain B^{d+1}(r).
S5: The sample entropy is then:
S_i = −ln( B^{d+1}(r) / B^d(r) )    (14)
The four kinds of features are combined into the 32-dimensional wavelet-packet feature vector: F_w = (E_1, E_2, …, E_8, V_1, V_2, …, V_8, C_1, C_2, …, C_8, S_1, S_2, …, S_8).
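Steps S1–S5 can be sketched as below. The pattern dimension d is left as a parameter (the text's choice d = N/10 can be passed in); for illustration the conventional small d and a tolerance r proportional to the signal's standard deviation are used, which are assumptions of this sketch.

```python
import numpy as np

def sample_entropy(x, d=2, r=0.2):
    """Sample entropy per steps S1-S5 / formulas (10)-(14): embed the series
    into d-dimensional vectors, count Chebyshev-distance matches below r at
    dimensions d and d+1, and return -ln(B^{d+1} / B^d)."""
    x = np.asarray(x, dtype=float)

    def match_fraction(m):
        n = len(x) - m + 1
        # embedding matrix: rows are X(i) = [x(i), ..., x(i+m-1)]  (formula (10))
        emb = np.array([x[i:i + m] for i in range(n)])
        count, total = 0, 0
        for i in range(n - 1):
            # Chebyshev distance to all later vectors (formula (11))
            dist = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            count += int(np.sum(dist < r))
            total += len(dist)
        return count / total                      # formulas (12)-(13)

    return -np.log(match_fraction(d + 1) / match_fraction(d))  # formula (14)
```

A smooth periodic signal should score a much lower sample entropy than white noise, which is what makes the feature discriminative per band.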
Four: when the feature is a motion feature, extracting the target feature comprises: performing tracking prediction on the target's bearing and its rate of change to obtain the target's dynamic track feature.
After the microphone array receives the target's acoustic signal, the target's bearing relative to the array can be computed by a direction-finding algorithm, and the bearing rate of change can be derived from the target bearings at successive moments. The reliably observable target information therefore comprises the target bearing and its rate of change. A Kalman filter is used to model them and perform tracking prediction on the bearing and its rate of change, yielding the target's dynamic track feature.
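Since both the bearing and its rate of change are observed, a minimal constant-rate Kalman filter over the state [azimuth, azimuth rate] can serve as the tracking predictor described above. The process and measurement noise levels below are illustrative placeholders, not values from the text.

```python
import numpy as np

def track_bearing(measurements, dt=1.0, q=1e-4, r_az=1e-2, r_rate=1e-2):
    """Kalman filter on the state s = [azimuth, azimuth_rate]; both components
    are measured, so H is the identity. Returns the filtered track."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-rate transition
    H = np.eye(2)                                  # both quantities observed
    Q = q * np.eye(2)                              # process noise (placeholder)
    R = np.diag([r_az, r_rate])                    # measurement noise (placeholder)
    s = np.array(measurements[0], dtype=float)     # initialise from first fix
    P = np.eye(2)
    track = [s.copy()]
    for z in measurements[1:]:
        s = F @ s                                  # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        s = s + K @ (np.asarray(z, dtype=float) - H @ s)  # update
        P = (np.eye(2) - K @ H) @ P
        track.append(s.copy())
    return np.array(track)
```

The filtered track is the "dynamic track feature" later compared against fresh measurements in the track-association step.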
104: Determine the target's type confidence according to the acoustic-signal feature, the acoustic-signal feature vector, and the target feature.
Determining the target's type confidence according to the acoustic-signal feature comprises: using template matching, building a characteristic-frequency library in advance from the line-spectrum frequency feature, extracting the characteristic frequencies of the acoustic signal in real time, and determining the target's first type confidence according to how well the extracted characteristic frequencies match the library.
Specifically, as shown in Figure 4, the sound radiated by most low-flying propeller targets has a distinct line-spectrum fundamental. The sounds of these targets in various flight states are collected and analyzed in advance, their frequency characteristics summarized, and a corresponding characteristic-frequency library built as the template for recognizing these targets. In practical application, the characteristic frequencies of the target sound are extracted in real time and compared against the library: if a good match is found, a high confidence for that target type is output; if no match is found in the library, a low confidence for that target class is output.
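A schematic of the characteristic-frequency template match described above. The library contents, target-type labels, and matching tolerance are illustrative assumptions, not values from the text.

```python
def line_spectrum_confidence(observed_freqs, template_library, tol_hz=2.0):
    """Match observed line-spectrum frequencies (Hz) against per-type template
    libraries; the confidence for a type is the fraction of its template lines
    found within tol_hz of some observed frequency."""
    scores = {}
    for target_type, template in template_library.items():
        hits = sum(
            any(abs(f - t) <= tol_hz for f in observed_freqs) for t in template
        )
        scores[target_type] = hits / len(template)
    return scores
```

A full match yields confidence 1.0 for the matching type; a type with no template lines observed scores 0.0, corresponding to the high/low confidence outputs of the text.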
When determining the target's type confidence from the acoustic-signal feature vector, sub-neural-network design can be used in combination with the acoustic-signal feature vector.
The input and output layer sizes of the time-domain-feature sub-network are determined by the dimension of the time-domain feature vector and the number of target classes. For example, when the targets comprise six classes — propeller targets, powered paragliders, cruise missiles, jet targets, other targets, and background noise — the output layer size is set to 6 according to the number of classes. The number of hidden neurons is obtained from the empirical formula n_k = sqrt(n_i + n_o) + a, where n_k is the number of hidden neurons, n_i the number of input neurons, n_o the number of output neurons, and a a constant between 1 and 10; the hidden-layer size is also tuned according to repeated test results. The input layer uses a constant (unity) transfer function, the hidden layer a hyperbolic transfer function, and the output layer a proportional (linear) transfer function.
It can be understood that using sub-neural-network design with the acoustic-signal feature vector to determine the target's type confidence includes the following cases:
One: determine the input and output layer sizes of the time-domain-feature sub-network from the dimension of the time-domain feature vector and the number of target classes, and determine the target's second type confidence with the time-domain-feature sub-network.
It can be understood that when the time-domain feature vector has 40 dimensions, the input layer has 40 constant-transfer-function neurons, the hidden layer has 26 hyperbolic-function neurons (per the empirical formula and test results), and the output layer has 6 neurons.
Two: determine the input and output layer sizes of the frequency-domain-feature sub-network from the dimension of the frequency-domain feature vector and the number of target classes, and determine the target's third type confidence with the frequency-domain-feature sub-network.
It can be understood that when the frequency-domain feature vector has 40 dimensions, the input layer has 40 constant-transfer-function neurons, the hidden layer has 26 hyperbolic-function neurons (per the empirical formula and test results), and the output layer has 6 neurons.
Three: determine the input and output layer sizes of the wavelet-packet-feature sub-network from the dimension of the wavelet-packet feature vector and the number of target classes, and determine the target's fourth type confidence with the wavelet-packet-feature sub-network.
It can be understood that when the wavelet-packet feature vector has 32 dimensions, the input layer has 32 constant-transfer-function neurons, the hidden layer has 26 hyperbolic-function neurons (per the empirical formula and test results), and the output layer has 6 neurons.
The outputs of each sub-network above serve as the confidences of the corresponding target types.
In this embodiment, determining the target's type confidence according to the target feature vector comprises: associating the track with the dynamic track feature by track-association recognition to determine the target's fifth type confidence.
It can be understood that, according to the result of the motion sub-decision: if the bearing and bearing rate measured at the current time match those output by the prediction algorithm, the current target is the same target as in the previous moment or period, and its type is consistent with the previous type, so a high confidence for that type is output; otherwise a low confidence is output.
Optionally, before step 104 the embodiment of the present invention may further comprise: globally optimizing the neural-network weight coefficients with a recognition-and-classification method that combines a genetic algorithm and the neural network.
The genetic algorithm is combined with the neural network: the global cost function of error back-propagation serves as the genetic algorithm's fitness function, and the network weight coefficients are globally optimized so that the trained network generalizes better. The concrete steps are as follows:
1) Arrange the neural-network weight coefficients into weight matrices: a hidden-layer weight matrix and an output-layer weight matrix;
2) Initialize the weight matrices according to their dimensions to obtain an initial population of size M, and set the generation counter g = 0;
3) Feed each chromosome in the population into the neural network, compute each chromosome's network output by forward propagation, and evaluate the fitness function to obtain each chromosome's fitness value F;
4) Copy the fittest individual of the current population directly into the next generation, and directly eliminate the least-fit chromosome;
5) From the remaining individuals, select M individuals by roulette-wheel proportional selection to form the new population;
6) In the new population, apply single-point crossover with a given probability to update individuals;
7) Mutate individuals in the population with a given probability, updating the population;
8) Compute the fitness of the individuals in the population and check whether the best fitness meets the accuracy requirement, or whether the maximum number of generations has been reached; if either condition is met, terminate, otherwise return to step 3). By setting suitable accuracy conditions and generation limits, the globally optimal neural-network weights are obtained.
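Steps 1)–8) can be sketched over a flat weight vector (a network's weight matrices concatenated). The fitness function is left abstract so any network cost can be plugged in; population size, crossover, and mutation rates are illustrative placeholders.

```python
import numpy as np

def ga_optimize(fitness, dim, pop_size=30, generations=60,
                p_cross=0.7, p_mut=0.1, seed=0):
    """Genetic search following steps 1)-8): elitism for the fittest
    chromosome, removal of the worst, roulette-wheel selection, single-point
    crossover, and per-gene mutation. fitness is maximized."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    for _ in range(generations):
        fit = np.array([fitness(c) for c in pop])      # step 3)
        order = np.argsort(fit)[::-1]
        best = pop[order[0]].copy()                    # step 4): keep the best
        survivors = pop[order[:-1]]                    # drop the worst
        w = fit[order[:-1]] - fit[order[:-1]].min() + 1e-9
        idx = rng.choice(len(survivors), size=pop_size - 1, p=w / w.sum())
        new_pop = survivors[idx].copy()                # step 5): roulette wheel
        for i in range(0, pop_size - 2, 2):            # step 6): crossover
            if dim > 1 and rng.random() < p_cross:
                cp = int(rng.integers(1, dim))
                tmp = new_pop[i, cp:].copy()
                new_pop[i, cp:] = new_pop[i + 1, cp:]
                new_pop[i + 1, cp:] = tmp
        mask = rng.random(new_pop.shape) < p_mut       # step 7): mutation
        new_pop[mask] = rng.uniform(-1.0, 1.0, size=int(mask.sum()))
        pop = np.vstack([best[None, :], new_pop])
    fit = np.array([fitness(c) for c in pop])          # step 8)
    return pop[np.argmax(fit)]
```

For a neural network, `fitness` would be the negated global back-propagation cost over the training set; elitism guarantees the best solution never regresses between generations.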
105: Recognize the type confidences by fusion recognition to determine the type of the target.
The first, second, third, fourth, and fifth type confidences obtained in step 104 each comprise confidences for the 6 target classes, and can be expressed as follows:
B = [α_1·B_1, α_2·B_2, α_3·B_3, α_4·B_4, α_5·B_5]^T
  = | b_11 b_12 b_13 b_14 b_15 b_16 |
    | b_21 b_22 b_23 b_24 b_25 b_26 |
    | b_31 b_32 b_33 b_34 b_35 b_36 |
    | b_41 b_42 b_43 b_44 b_45 b_46 |
    | b_51 b_52 b_53 b_54 b_55 b_56 |    (16)
In this formula, B_i denotes the confidence vector output by the i-th method, α_i the weight applied to the i-th method's confidence vector, and b_ij the class-j target confidence output by the i-th method. Extensive experimental validation gives the design weights α = [0.5, 0.8, 0.6, 0.6, 0.8]. The total confidence of class j is then:
BT_j = Σ_{i=1}^{5} b_ij,  j = 1, 2, …, 6    (17)
Finally, the target type corresponding to the maximum total confidence BT_j (j = 1, 2, …, 6) is the output of the overall algorithm.
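Formulas (16)–(17) and the final decision can be sketched as follows; the six class labels are illustrative stand-ins for the classes named earlier in the text.

```python
import numpy as np

def fuse_confidences(B_rows, alpha=(0.5, 0.8, 0.6, 0.6, 0.8),
                     types=("propeller", "paraglider", "cruise missile",
                            "jet", "other", "background")):
    """Weighted fusion per formulas (16)-(17): scale each method's 6-class
    confidence row by its weight, sum per class, and return the winning type
    along with the total-confidence vector BT."""
    B = np.asarray(B_rows, dtype=float) * np.asarray(alpha)[:, None]  # (16)
    BT = B.sum(axis=0)                                                # (17)
    return types[int(np.argmax(BT))], BT
```

With all five methods agreeing on one class, that class's total confidence is simply its raw confidence times the sum of the weights (3.3 here).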
In the embodiment of the present invention, the target's acoustic signal is acquired with a planar microphone array, the target comprising any one of a low-flying unmanned aerial vehicle, light aircraft, powered delta-wing, cruise missile, or powered paraglider. Direction estimation is performed on the acoustic signal and spatial filtering is applied to the direction-estimated signal; after spatial filtering, the acoustic-signal feature, the feature vector corresponding to the acoustic-signal feature, and the target feature are extracted. The target's type confidence is determined from the acoustic-signal feature, the acoustic-signal feature vector, and the target feature, and fusion recognition is applied to the type confidences to determine the target's type. By determining the target's type confidences from these features and then fusing them, the embodiment of the present invention effectively improves the accuracy of target type recognition.
An embodiment of the target-type-recognition device of the present invention is introduced below.
Referring to Fig. 5, an embodiment of the target-type-recognition device 500 of the present invention comprises:
Acquisition module 501, configured to acquire the target's acoustic signal with a planar microphone array, the target comprising any one of a low-flying unmanned aerial vehicle, light aircraft, powered delta-wing, cruise missile, or powered paraglider;
First processing module 502, configured to perform direction estimation on the acoustic signal and apply spatial filtering to the direction-estimated acoustic signal;
Extraction module 503, configured to extract, after spatial filtering, the acoustic-signal feature, the feature vector corresponding to the acoustic-signal feature, and the target feature;
First determination module 504, configured to determine the target's type confidence according to the acoustic-signal feature, the acoustic-signal feature vector, and the target feature;
Identification module 505, configured to recognize the type confidence by fusion recognition to determine the type of the target.
In the embodiment of the present invention, the acquisition module 501 acquires the target's acoustic signal with a planar microphone array, the target comprising any one of a low-flying unmanned aerial vehicle, light aircraft, powered delta-wing, cruise missile, or powered paraglider; the first processing module 502 performs direction estimation on the acoustic signal and applies spatial filtering to the direction-estimated signal; after spatial filtering, the extraction module 503 extracts the acoustic-signal feature, the corresponding feature vector, and the target feature; the first determination module 504 determines the target's type confidence from these; and the identification module 505 recognizes the type confidence by fusion recognition to determine the target's type. By determining the target's type confidences from the acoustic-signal feature, the feature vector, and the target feature, and then fusing them, the embodiment effectively improves the accuracy of target type recognition.
Optionally, the device 500 may further comprise a second processing module 506, configured to apply adaptive noise suppression to the acoustic signal after the target's acoustic signal is acquired by the planar microphone array and before direction estimation and spatial filtering are performed.
Optionally, the second processing module 506 is specifically configured to suppress noise in the acoustic signal using a wavelet-decomposition-based adaptive noise suppression technique.
Optionally, the device further comprises:
Second determination module 507, configured to determine, after the direction of the acoustic signal is estimated, the track of the target according to the directions of the acoustic signal at successive moments.
Optionally, the extraction module 503 is specifically configured to: when the line-spectrum fundamental of the radiated sound power spectrum exceeds a threshold, analyze the frequency-domain feature with a helicopter line-spectrum and harmonic-set detection algorithm to obtain a line-spectrum frequency feature; compute autocorrelation and cepstrum coefficients and combine them into a time-domain feature vector of a given dimension; perform nonparametric power spectrum analysis to compute the power spectrum, and obtain from it a frequency-domain feature vector of a given dimension; compute the energy, standard deviation, spectral centroid, and wavelet-packet sample-entropy features of each frequency band of the acoustic signal and combine them into a wavelet-packet feature vector of a given dimension; and perform tracking prediction on the target's bearing and its rate of change to obtain the target's dynamic track feature.
Optionally, the first determination module 504 is specifically configured to: build a characteristic-frequency library in advance from the line-spectrum frequency feature; extract the characteristic frequencies of the acoustic signal in real time; determine the target's first type confidence according to the match between the extracted characteristic frequencies and the library; determine the input and output layer sizes of the time-domain-feature sub-network from the dimension of the time-domain feature vector and the number of target classes, and determine the second type confidence with the time-domain-feature sub-network; determine the input and output layer sizes of the frequency-domain-feature sub-network from the dimension of the frequency-domain feature vector and the number of target classes, and determine the third type confidence with the frequency-domain-feature sub-network; determine the input and output layer sizes of the wavelet-packet-feature sub-network from the dimension of the wavelet-packet feature vector and the number of target classes, and determine the fourth type confidence with the wavelet-packet-feature sub-network; and associate the track with the dynamic track feature by track-association recognition to determine the fifth type confidence.
Optionally, the device further comprises a global optimization module 508, configured to globally optimize the neural-network weight coefficients, before the type confidences are recognized by fusion recognition to determine the target's type, using a recognition-and-classification method combining a genetic algorithm and the neural network.
Optionally, the acquisition module is specifically configured to acquire the target's acoustic signal with a cross-shaped microphone array when the target is a low-altitude, low-speed, small target, the cross-shaped array comprising 12 microphone elements and 4 orthogonal horizontal bars.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be through interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or of other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may exist physically as separate units, or two or more units may be integrated into one unit. The integrated unit may be implemented in hardware or as a software functional unit.
If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, portable hard drive, read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disc.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described therein may still be modified, or some of their technical features replaced by equivalents, without departing in essence from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A target type recognition method, characterized by comprising:
acquiring an acoustic signal of a target with a planar microphone array, the target comprising any one of a low-flying unmanned aerial vehicle, light aircraft, powered delta-wing, cruise missile, or powered paraglider;
performing direction estimation on the acoustic signal, and applying spatial filtering to the direction-estimated acoustic signal;
after spatial filtering, extracting a feature of the acoustic signal, a feature vector corresponding to the acoustic-signal feature, and a target feature;
determining a type confidence of the target according to the acoustic-signal feature, the acoustic-signal feature vector, and the target feature;
recognizing the type confidence by fusion recognition to determine the type of the target.
2. The method according to claim 1, characterized in that after acquiring the acoustic signal of the target with the planar microphone array, and before performing direction estimation on the acoustic signal and applying spatial filtering to the direction-estimated signal, the method further comprises:
applying adaptive noise suppression to the acoustic signal.
3. The method according to claim 2, characterized in that applying adaptive noise suppression to the acoustic signal comprises:
suppressing noise in the acoustic signal using a wavelet-decomposition-based adaptive noise suppression technique.
4. The method according to claim 1, characterized by further comprising:
after estimating the direction of the acoustic signal, determining the track of the target according to the directions of the acoustic signal at successive moments.
5. The method according to claim 4, characterized in that
extracting the feature of the acoustic signal comprises:
when the line-spectrum fundamental of the radiated sound power spectrum exceeds a threshold, analyzing the frequency-domain feature with a helicopter line-spectrum and harmonic-set detection algorithm to obtain a line-spectrum frequency feature;
extracting the feature vector of the acoustic signal comprises:
computing autocorrelation coefficients and cepstrum coefficients;
combining the autocorrelation coefficients and the cepstrum coefficients to obtain a time-domain feature vector of a given dimension;
performing nonparametric power spectrum analysis to compute a power spectrum;
obtaining a frequency-domain feature vector of a given dimension from the computed power spectrum;
computing the energy, standard deviation, spectral centroid, and wavelet-packet sample-entropy features of each frequency band of the acoustic signal;
combining the energy, standard deviation, spectral centroid, and wavelet-packet sample-entropy features to obtain a wavelet-packet feature vector of a given dimension;
extracting the target feature comprises:
performing tracking prediction on the bearing of the target and its rate of change to obtain a dynamic track feature of the target.
6. The method according to claim 5, characterized in that
determining the type confidence of the target according to the acoustic-signal feature comprises:
building a characteristic-frequency library in advance from the line-spectrum frequency feature;
extracting the characteristic frequencies of the acoustic signal in real time;
determining a first type confidence of the target according to the match between the extracted characteristic frequencies and the characteristic-frequency library;
determining the type confidence of the target according to the acoustic-signal feature vector comprises:
determining the input and output layer sizes of a time-domain-feature sub-neural-network from the dimension of the time-domain feature vector and the number of target classes;
determining a second type confidence of the target with the time-domain-feature sub-neural-network;
determining the input and output layer sizes of a frequency-domain-feature sub-neural-network from the dimension of the frequency-domain feature vector and the number of target classes;
determining a third type confidence of the target with the frequency-domain-feature sub-neural-network;
determining the input and output layer sizes of a wavelet-packet-feature sub-neural-network from the dimension of the wavelet-packet feature vector and the number of target classes;
determining a fourth type confidence of the target with the wavelet-packet-feature sub-neural-network;
determining the type confidence of the target according to the target feature vector comprises:
associating the track with the dynamic track feature by track-association recognition to determine a fifth type confidence of the target.
7. The method according to claim 6, wherein the identifying the type confidences by a fusion recognition technique to determine the type of the target comprises:
calculating a total type-confidence value from the first, second, third, fourth and fifth type confidences and their weights, and selecting the type corresponding to the maximum total as the type of the target.
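The weighted fusion rule of claim 7 can be sketched as follows; the equal weights, the two class names, and the toy confidence vectors are illustrative only.

```python
import numpy as np

def fuse_confidences(conf_vectors, weights, target_names):
    """Weighted sum of the per-class confidence vectors; the class with the
    largest total confidence is taken as the recognized target type."""
    total = np.zeros(len(target_names))
    for w, conf in zip(weights, conf_vectors):
        total += w * np.asarray(conf, dtype=float)
    return target_names[int(np.argmax(total))], total

# Toy run: two classes, five equally weighted confidence sources
# (frequency match, time-domain net, frequency-domain net, wavelet net, track).
names = ["uav", "light_aircraft"]
confs = [[0.6, 0.4], [0.6, 0.4], [0.6, 0.4], [0.2, 0.8], [0.2, 0.8]]
decided, total = fuse_confidences(confs, [0.2] * 5, names)
```

Here three sources favor the first class and two favor the second, but the totals work out to 0.44 versus 0.56, so the fused decision is the second class.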
8. The method according to claim 6, wherein before the identifying the type confidences by the fusion recognition technique to determine the type of the target, the method further comprises:
globally optimizing the neural network weight coefficients with a classification and recognition method that combines a genetic algorithm with the neural network.
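A minimal sketch of a genetic-algorithm global search over network weight coefficients. The selection/crossover/mutation scheme and the toy quadratic loss standing in for a network loss surface are assumptions, not the patent's disclosed procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_optimize(loss, dim, pop_size=30, generations=40, sigma=0.3):
    """Truncation selection + uniform crossover + Gaussian mutation, with
    elitism, searching the weight space globally rather than by gradients."""
    pop = rng.standard_normal((pop_size, dim))
    for _ in range(generations):
        fitness = np.array([loss(ind) for ind in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 2]]        # best half
        a = elite[rng.integers(0, len(elite), pop_size)]
        b = elite[rng.integers(0, len(elite), pop_size)]
        mask = rng.random((pop_size, dim)) < 0.5                 # uniform crossover
        pop = np.where(mask, a, b) + rng.normal(0.0, sigma, (pop_size, dim))
        pop[0] = elite[0]                                        # elitism
        sigma *= 0.95                                            # anneal mutation
    fitness = np.array([loss(ind) for ind in pop])
    return pop[np.argmin(fitness)]

# Toy stand-in for a network loss surface: squared distance to a known vector.
w_true = np.array([1.0, -2.0, 0.5])
best = ga_optimize(lambda w: float(np.sum((w - w_true) ** 2)), dim=3)
```

In the combined scheme the GA supplies good starting weights across the whole search space, after which conventional neural-network training can fine-tune them locally.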
9. The method according to any one of claims 1 to 7, wherein when the target is a low-altitude, slow-speed small target, the acoustical signal of the target is collected by a cross-shaped acoustic array, the cross-shaped acoustic array comprising 12 acoustic sensing elements and 4 mutually orthogonal horizontal bars.
10. A target type identification device, comprising:
an acquisition module, configured to collect the acoustical signal of a target through a planar acoustic array, the target comprising any one of a low-altitude unmanned aerial vehicle, a light aircraft, a powered delta wing, a cruise missile and a powered paraglider;
a first processing module, configured to perform direction estimation on the acoustical signal and to apply spatial filtering to the acoustical signal after the direction estimation;
an extraction module, configured to extract, after the spatial filtering, the feature of the acoustical signal, the feature vector corresponding to the feature of the acoustical signal, and the feature of the target;
a first determination module, configured to determine the type confidence of the target according to the feature of the acoustical signal, the feature vector of the acoustical signal and the feature of the target; and
an identification module, configured to identify the type confidences by a fusion recognition technique to determine the type of the target.
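The acquisition → extraction → determination chain of claim 10 can be illustrated end to end with the first-confidence path (characteristic-frequency matching against a library). The peak-picking and the match-fraction score below are hypothetical, since the patent does not disclose its matching metric.

```python
import numpy as np

def characteristic_frequencies(frame, fs, n_peaks=3):
    """Extraction module: strongest spectral peaks (Hz) of one signal frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return np.sort(freqs[np.argsort(spectrum)[::-1][:n_peaks]])

def first_type_confidence(extracted, library, tol=5.0):
    """Determination module: per-class confidence = fraction of that class's
    library frequencies matched within +/- tol Hz by an extracted peak."""
    return {
        target: sum(any(abs(f - e) <= tol for e in extracted) for f in ref) / len(ref)
        for target, ref in library.items()
    }

# Toy run: a 100 Hz + 200 Hz tone pair against a two-class frequency library
# (the library values are placeholders, not data from the patent).
fs = 1000
t = np.arange(fs) / fs
frame = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 200 * t)
library = {"uav": [100.0, 200.0], "cruise_missile": [400.0]}
conf = first_type_confidence(characteristic_frequencies(frame, fs), library)
```

The synthetic frame matches both library frequencies of the first class and neither frequency of the second, so the first-confidence vector already separates the two.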
CN201510884182.8A 2015-12-04 2015-12-04 Method and device for identifying target types Active CN105550636B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510884182.8A CN105550636B (en) 2015-12-04 2015-12-04 Method and device for identifying target types

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510884182.8A CN105550636B (en) 2015-12-04 2015-12-04 Method and device for identifying target types

Publications (2)

Publication Number Publication Date
CN105550636A true CN105550636A (en) 2016-05-04
CN105550636B CN105550636B (en) 2019-03-01

Family

ID=55829819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510884182.8A Active CN105550636B (en) 2015-12-04 2015-12-04 A kind of method and device of target type discrimination

Country Status (1)

Country Link
CN (1) CN105550636B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7286703B2 (en) * 2004-09-30 2007-10-23 Fujifilm Corporation Image correction apparatus, method and program
CN102928435A (en) * 2012-10-15 2013-02-13 南京航空航天大学 Aircraft skin damage identification method and device based on image and ultrasound information fusion
CN103245524A (en) * 2013-05-24 2013-08-14 南京大学 Acoustic fault diagnosis method based on neural network
CN103679746A (en) * 2012-09-24 2014-03-26 中国航天科工集团第二研究院二O七所 object tracking method based on multi-information fusion

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHANG XUELEI et al.: "Application of acoustic detection technology in the opening of low-altitude airspace", Audio Engineering *
FAN HAINING et al.: "Acoustic signal feature extraction method based on wavelet packet decomposition", Modern Electronics Technique *
QIAN HANMING et al.: "Feature extraction and recognition of aircraft acoustic signals", Journal of Detection & Control *
CHEN HUHU et al.: "Acoustic recognition of low-altitude flying targets based on support vector machines", Systems Engineering and Electronics *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106157952A (en) * 2016-08-30 2016-11-23 北京小米移动软件有限公司 Sound identification method and device
CN106842179B (en) * 2016-12-23 2019-11-26 成都赫尔墨斯科技股份有限公司 A kind of anti-UAV system based on acoustic detection
CN106842179A (en) * 2016-12-23 2017-06-13 成都赫尔墨斯科技有限公司 A kind of anti-UAS based on acoustic detection
CN106873626A (en) * 2017-03-31 2017-06-20 芜湖博高光电科技股份有限公司 A kind of target-seeking system of passive location
CN108965789A (en) * 2017-05-17 2018-12-07 杭州海康威视数字技术股份有限公司 A kind of unmanned plane monitoring method and audio/video linkage device
CN108965789B (en) * 2017-05-17 2021-03-12 杭州海康威视数字技术股份有限公司 Unmanned aerial vehicle monitoring method and audio-video linkage device
CN110832408B (en) * 2017-07-03 2022-03-25 深圳市大疆创新科技有限公司 Neural network based image target tracking by aircraft
CN110832408A (en) * 2017-07-03 2020-02-21 深圳市大疆创新科技有限公司 Neural network based image target tracking by aircraft
US11513205B2 (en) 2017-10-30 2022-11-29 The Research Foundation For The State University Of New York System and method associated with user authentication based on an acoustic-based echo-signature
CN108280395A (en) * 2017-12-22 2018-07-13 中国电子科技集团公司第三十研究所 A kind of efficient identification method flying control signal to low small slow unmanned plane
CN108280395B (en) * 2017-12-22 2021-12-17 中国电子科技集团公司第三十研究所 Efficient identification method for flight control signals of low-small-slow unmanned aerial vehicle
CN110826583A (en) * 2018-08-14 2020-02-21 珠海格力电器股份有限公司 Fault determination method and device, storage medium and electronic device
CN109270520A (en) * 2018-10-18 2019-01-25 四川九洲空管科技有限责任公司 The processing method of secondary radar response target identities code is obtained based on amplitude information
CN109375204A (en) * 2018-10-26 2019-02-22 中电科仪器仪表有限公司 Object detection method, system, equipment and medium based on radar
CN109658944A (en) * 2018-12-14 2019-04-19 中国电子科技集团公司第三研究所 Helicopter acoustic signal Enhancement Method and device
CN109864740A (en) * 2018-12-25 2019-06-11 北京津发科技股份有限公司 A kind of the surface electromyogram signal acquisition sensor and equipment of motion state
CN110084094A (en) * 2019-03-06 2019-08-02 中国电子科技集团公司第三十八研究所 A kind of unmanned plane target identification classification method based on deep learning
CN110020685A (en) * 2019-04-09 2019-07-16 山东超越数控电子股份有限公司 A kind of preprocess method based on adaptive-filtering and limited Boltzmann machine, terminal and readable storage medium storing program for executing
CN110209993A (en) * 2019-06-17 2019-09-06 中国电子科技集团公司信息科学研究院 A kind of information extraction method and system detecting target
CN110209993B (en) * 2019-06-17 2023-05-05 中国电子科技集团公司信息科学研究院 Information extraction method and system for detection target
CN110390949A (en) * 2019-07-22 2019-10-29 苏州大学 Acoustic Object intelligent identification Method based on big data
CN110390949B (en) * 2019-07-22 2021-06-15 苏州大学 Underwater sound target intelligent identification method based on big data
CN111626093A (en) * 2020-03-27 2020-09-04 国网江西省电力有限公司电力科学研究院 Electric transmission line related bird species identification method based on sound power spectral density
CN111626093B (en) * 2020-03-27 2023-12-26 国网江西省电力有限公司电力科学研究院 Method for identifying related bird species of power transmission line based on sound power spectral density
CN113420743A (en) * 2021-08-25 2021-09-21 南京隼眼电子科技有限公司 Radar-based target classification method, system and storage medium
CN114113837A (en) * 2021-11-15 2022-03-01 国网辽宁省电力有限公司朝阳供电公司 Acoustic feature-based transformer live-line detection method and system
CN114113837B (en) * 2021-11-15 2024-04-30 国网辽宁省电力有限公司朝阳供电公司 Transformer live detection method and system based on acoustic characteristics
CN114999529A (en) * 2022-08-05 2022-09-02 中国民航大学 Model classification method for airport aviation noise
CN114999529B (en) * 2022-08-05 2022-11-01 中国民航大学 Airplane type classification method for airport aviation noise
CN116381406A (en) * 2023-03-16 2023-07-04 武汉船舶职业技术学院 Ship power grid fault positioning method, device, equipment and readable storage medium
CN116381406B (en) * 2023-03-16 2024-06-04 武汉船舶职业技术学院 Ship power grid fault positioning method, device, equipment and readable storage medium
CN116150594A (en) * 2023-04-18 2023-05-23 长鹰恒容电磁科技(成都)有限公司 Method for identifying switch element characteristics in spectrum test data

Also Published As

Publication number Publication date
CN105550636B (en) 2019-03-01

Similar Documents

Publication Publication Date Title
CN105550636A (en) Method and device for identifying target types
CN108280395B (en) Efficient identification method for flight control signals of low-small-slow unmanned aerial vehicle
CN106529428A (en) Underwater target recognition method based on deep learning
Guo et al. One-dimensional frequency-domain features for aircraft recognition from radar range profiles
CN104714225B (en) Dynamic programming tracking-before-detection method based on generalized likelihood ratios
CN102866388B (en) Iterative computation method for self-adaptive weight number in space time adaptive processing (STAP)
CN104239901B (en) Classification of Polarimetric SAR Image method based on Fuzzy particle swarm artificial and goal decomposition
CN107315996A (en) A kind of noise characteristic extracting method of ships under water based on IMF Energy-Entropies and PCA
CN113640768B (en) Low-resolution radar target identification method based on wavelet transformation
Li et al. Radar signal recognition algorithm based on entropy theory
Darzikolaei et al. Classification of radar clutters with artificial neural network
KR20190019713A (en) System and method for classifying based on support vector machine for uav sound identification
CN112666533B (en) Repetition frequency change steady target identification method based on spatial pyramid pooling network
Zhao et al. Mutation grey wolf elite PSO balanced XGBoost for radar emitter individual identification based on measured signals
CN111812598B (en) Time domain and frequency domain multi-feature-based ground and sea clutter classification method
CN104732970A (en) Ship radiation noise recognition method based on comprehensive features
RU2579353C1 Method of tracking aerial target from "turbojet aircraft" class under effect of velocity deflecting noise
CN110501683A (en) A kind of extra large land Clutter Classification method based on 4 D data feature
Greenwood II Fundamental rotorcraft acoustic modeling from experiments (FRAME)
CN103994820A (en) Moving target identification method based on micro-aperture microphone array
CN102279399A (en) Dim target frequency spectrum tracking method based on dynamic programming
CN115061094B (en) Radar target recognition method based on neural network and SVM
CN105911546A (en) Sea clutter identification method and device
CN112801065B (en) Space-time multi-feature information-based passive sonar target detection method and device
Farrokhrooz et al. Ship noise classification using probabilistic neural network and AR model coefficients

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant