CN107527625A - Dolphin whistle signal aural signature extracting method based on analog cochlea in bionical auditory system - Google Patents

Dolphin whistle signal aural signature extracting method based on analog cochlea in bionical auditory system Download PDF

Info

Publication number
CN107527625A
CN107527625A CN201710793362.4A
Authority
CN
China
Prior art keywords
neurotransmitter
amount
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710793362.4A
Other languages
Chinese (zh)
Inventor
孙金涛
生雪莉
郭龙祥
殷敬伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201710793362.4A priority Critical patent/CN107527625A/en
Publication of CN107527625A publication Critical patent/CN107527625A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03: characterised by the type of extracted parameters
    • G10L25/18: the extracted parameters being spectral information of each sub-band
    • G10L25/27: characterised by the analysis technique
    • G10L25/30: using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

It is an object of the invention to provide a dolphin whistle signal auditory feature extraction method based on the simulated cochlea in a bionic auditory system, comprising the following steps: (1) normalize the signal as pre-processing; (2) pass the signal obtained in step 1 through a Gammatone auditory filter bank to obtain M subband signals; (3) apply a fast Fourier transform (FFT) to the subband signals of step 2, pass them through a low-pass filter, and generate the auditory spectrum; (4) adaptively process the subband auditory spectra of step 3 with the Meddis inner-hair-cell model to obtain enhanced auditory spectra; (5) compute the auditory spectral energy of each subband to obtain an M-dimensional feature vector. The invention addresses the following problems of the prior art: heavy computation that prevents rapid feature extraction; unsatisfactory performance on nonlinear and non-stationary signals; and strong dependence on ambient noise, which limits application.

Description

Dolphin whistle signal auditory feature extraction method based on the simulated cochlea in a bionic auditory system
Technical field
The present invention relates to a signal pattern recognition method, and more particularly to a pattern recognition method for aquatic-organism signals.
Background technology
Underwater acoustic signal recognition mainly comprises two parts: feature extraction and classifier design. The task of feature extraction is to select effective, reliable, and stable features that characterize the target; it is one of the key components of signal recognition and directly affects the final result of underwater acoustic signal recognition. The visual range in the underwater world where dolphins live is limited, and through long-term evolution dolphins have developed an excellent sonar system. With this sonar system, dolphins can accomplish behaviors such as individual recognition within a population, predation, and evading natural enemies. In dolphin sonar, whistle signals carry the task of communication between individuals, and the classification and recognition of whistle signals has become one of the key links in dolphin research.
At present, feature extraction methods for dolphin whistle signals are based mainly on the time-frequency domain, including waveform-structure feature extraction in the time domain, spectral estimation in the frequency domain, and time-frequency methods that extract signal features via various signal transforms. Traditional feature extraction methods have achieved good results through long exploitation. Time-domain waveform features have the advantages of simple methods and good real-time performance, but underwater signal waveforms are complex, making it difficult to extract waveform-structure features with high discriminative power. Frequency-domain analysis benefits from mature technology and simple methods, and spectral information carries clear physical meaning, but it is suited to processing linear, stationary signals. Features extracted by time-frequency analysis better reflect both the time-domain and frequency-domain characteristics of a signal, but time-frequency analysis is more complex, with large computation and storage requirements, slow computation, and poor real-time performance. In real underwater acoustic environments, affected by varying hydrological conditions, traditional time-frequency feature extraction often fails to achieve satisfactory results.
Summary of the invention
It is an object of the invention to provide a dolphin whistle signal auditory feature extraction method based on the simulated cochlea in a bionic auditory system that solves problems of the prior art: heavy computation that prevents rapid feature extraction, unsatisfactory performance on nonlinear and non-stationary signals, and strong dependence on ambient noise, which limits application.
The object of the present invention is achieved as follows:
The dolphin whistle signal auditory feature extraction method of the invention, based on the simulated cochlea in a bionic auditory system, is characterized by:
(1) sample the dolphin call to obtain the recorded signal s(n), and normalize s(n) as pre-processing:
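Step (1) amounts to dividing the recording by its peak absolute amplitude. A minimal sketch in Python (the function name `normalize` is ours, not from the patent):

```python
def normalize(s):
    # Peak-normalize so that max |s(n)| = 1 (step (1) pre-processing).
    # `s` is a list of samples.
    peak = max(abs(v) for v in s)
    return [v / peak for v in s]
```

For example, `normalize([0.5, -2.0, 1.0])` yields `[0.25, -1.0, 0.5]`.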
(2) pass the signal obtained in step (1) through a Gammatone auditory filter bank to obtain M subband signals, where M is the number of filters, and apply a fast Fourier transform (FFT) to each subband signal;
The impulse response of the Gammatone filter is:

g(t) = A t^{n-1} e^{-2\pi b \mathrm{ERB}(f_c) t} \cos(2\pi f_c t + \phi) u(t)

where A is the amplitude factor of the gammatone filter, n is the filter order, f_c is the center frequency, \phi is the initial phase, 2\pi b \mathrm{ERB}(f_c) is the damping factor, and u(t) is the unit step function;
ERB denotes the equivalent rectangular bandwidth, given by \mathrm{ERB}(f_c) = 24.7 + 0.108 f_c.
Taking the Laplace transform of the gammatone impulse response gives:

GT(s) = \frac{A (n-1)!}{2} \left[ \frac{e^{j\phi}}{(s + b - jw)^{n}} + \frac{e^{-j\phi}}{(s + b + jw)^{n}} \right]

where A is the filter gain, n is the filter order, f_c is the center frequency, \phi is the phase, b = 2\pi \mathrm{ERB}(f_c), and w = 2\pi f_c;
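The impulse response above can be sketched directly in Python. The order n = 4 and bandwidth scale b = 1.019 are conventional auditory-modelling defaults assumed here, not fixed by the patent, and the truncated direct convolution is illustrative only:

```python
import math

def erb(fc):
    # Equivalent rectangular bandwidth as given in the text: ERB(fc) = 24.7 + 0.108*fc
    return 24.7 + 0.108 * fc

def gammatone_ir(t, fc, A=1.0, n=4, b=1.019, phi=0.0):
    # g(t) = A * t^(n-1) * exp(-2*pi*b*ERB(fc)*t) * cos(2*pi*fc*t + phi) * u(t)
    # n = 4 and b = 1.019 are assumed defaults from the auditory-modelling literature.
    if t < 0:
        return 0.0  # u(t): the filter is causal
    return (A * t ** (n - 1)
            * math.exp(-2.0 * math.pi * b * erb(fc) * t)
            * math.cos(2.0 * math.pi * fc * t + phi))

def gammatone_filterbank(signal, fs, centers, ir_len=64):
    # Convolve the input with one truncated gammatone impulse response per
    # center frequency, yielding M subband signals (step (2)).
    subbands = []
    for fc in centers:
        ir = [gammatone_ir(k / fs, fc) for k in range(ir_len)]
        sub = [sum(ir[j] * signal[i - j] for j in range(min(i + 1, ir_len)))
               for i in range(len(signal))]
        subbands.append(sub)
    return subbands
```

For instance, a unit impulse at fs = 128 kHz passed through centers `[5000.0, 7500.0, 10000.0]` returns three subband signals, each the corresponding truncated impulse response.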
(3) apply the fast Fourier transform to the subband signals of step (2), pass them through a low-pass filter, and generate the auditory spectrum;
(4) apply half-wave rectification to the filtered signals of step (2) via the Meddis inner-hair-cell model to generate an auditory spectrum simulating human-ear perception; compute the band energy of each channel of the auditory spectrum to form a feature vector containing auditory features;
(5) compute the band energy of each channel of each auditory spectrum to form the feature vector containing auditory features:
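Given the M subband signals, the feature vector of step (5) is simply the per-channel mean energy. A sketch (function name ours, not from the patent):

```python
def subband_energies(subbands):
    # E_m = (1/N) * sum over n of |x_m(n)|^2, one value per subband,
    # giving the M-dimensional auditory feature vector of step (5).
    return [sum(v * v for v in sub) / len(sub) for sub in subbands]
```

For example, `subband_energies([[1.0, 1.0], [2.0, 0.0], [0.0, 0.0]])` yields `[1.0, 2.0, 0.0]`.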
The present invention may further include:
1. The Meddis model comprises five physical quantities: cell membrane permeability, the amount of neurotransmitter in the inner hair cell, the amount of neurotransmitter in the cleft between hair cells, the amount of neurotransmitter in the reprocessing store, and the firing probability:
(1) cell membrane permeability:
The permeability, which reflects the ability of neurotransmitter to pass from the inner hair cell into the cleft between hair cells, is described as follows:

k(t) = \begin{cases} \frac{A + \mathrm{stim}(t)}{A + B + \mathrm{stim}(t)} g, & A + \mathrm{stim}(t) \ge 0 \\ 0, & A + \mathrm{stim}(t) < 0 \end{cases}

where k(t) is the cell membrane permeability, \mathrm{stim}(t) is the instantaneous amplitude of the input sound wave, and A, B, g are cell parameters; k(t) = gA/(A+B) gives the spontaneous response of the cell membrane;
(2) amount of neurotransmitter in the inner hair cell:
The rate of change of the amount q(t) of neurotransmitter in the inner hair cell can be expressed as:

\frac{dq(t)}{dt} = y(1 - q(t)) - k(t) q(t) + x w(t)

where y(1 - q(t)) is the amount of neurotransmitter the factory adds to the hair cell, x w(t) is the amount of neurotransmitter flowing back into the hair cell from the reprocessing store, and -k(t) q(t) is the amount of neurotransmitter flowing out of the hair cell into the cleft;
(3) amount of neurotransmitter in the cleft:
The change over time of the amount c(t) of neurotransmitter in the cleft is described by:

\frac{dc(t)}{dt} = k(t) q(t) - l c(t) - r c(t)

where k(t) q(t) is the neurotransmitter flowing from the inner hair cell into the cleft, -l c(t) is the neurotransmitter lost from the cleft, and -r c(t) is the neurotransmitter returned from the cleft to the reprocessing store;
(4) amount of neurotransmitter in the reprocessing store:
The rate of change of the amount w(t) of neurotransmitter in the reprocessing store is:

\frac{dw(t)}{dt} = r c(t) - x w(t);
(5) nerve fiber firing probability:
The amount c(t) of neurotransmitter finally present in the cleft determines the probability that the downstream nerve fiber fires; with scale factor h it is expressed as:
p(t) = h c(t) dt.
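The five quantities above can be integrated numerically, for example with one forward-Euler step per audio sample. The rate constants below are commonly quoted values from the auditory-modelling literature, assumed for illustration and not specified by the patent:

```python
def meddis_step(q, c, w, stim, dt,
                A=5.0, B=300.0, g=2000.0, y=5.05,
                l=2500.0, r=6580.0, x=66.31, h=50000.0):
    # One forward-Euler step of the Meddis inner-hair-cell model.
    # Parameter values are illustrative literature defaults, not from the patent.
    # (1) membrane permeability k(t); the clamp at A + stim < 0 gives the
    #     model its half-wave-rectifying behaviour.
    if A + stim >= 0.0:
        k = g * (A + stim) / (A + B + stim)
    else:
        k = 0.0
    # (2) transmitter in the hair cell: dq/dt = y(1 - q) - k q + x w
    dq = y * (1.0 - q) - k * q + x * w
    # (3) transmitter in the cleft:     dc/dt = k q - l c - r c
    dc = k * q - l * c - r * c
    # (4) reprocessing store:           dw/dt = r c - x w
    dw = r * c - x * w
    q, c, w = q + dq * dt, c + dc * dt, w + dw * dt
    # (5) firing probability in this step: p(t) = h c(t) dt
    return q, c, w, h * c * dt
```

Driving `meddis_step` sample by sample with a subband signal as `stim` and dt = 1/fs yields the rectified, adaptively compressed channel envelope used to build the enhanced auditory spectrum.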
The advantages of the invention are as follows. Addressing the drawbacks of traditional feature extraction methods (heavy computation, complex implementation, and susceptibility to environmental noise), the invention proposes a dolphin whistle signal auditory feature extraction method based on the simulated cochlea in a bionic auditory system, and describes in detail the construction of the whole model, including band-splitting filtering by the basilar membrane based on human hearing characteristics, inner-hair-cell signal transduction on the basilar membrane, and computation of the feature vector from the auditory nerve firing rate. The Gammatone filter has sharp frequency selectivity; the attenuation at the filter edges is gradual, effectively avoiding energy leakage between adjacent bands; it can process acoustic signals quickly, and its frequency response is consistent with the filtering characteristic of the human basilar membrane; at the same time, the filter needs only a few parameters to simulate the human auditory system well and is easy to implement. The Meddis model simulates the inner-hair-cell processing mechanism, applying adaptive processing to the input signal, which improves the correct recognition rate of acoustic features and provides some suppression of noise. Through this simulation of the frequency-selection mechanism of the human basilar membrane and the transduction mechanism of the inner hair cells, the feature extraction method based on the Gammatone filter bank and the Meddis model requires less computation and is easier to implement than conventional methods, while exhibiting good noise immunity and robustness.
Brief description of the drawings
Fig. 1 is the impulse response waveform of the Gammatone filter;
Fig. 2 is the amplitude-frequency response of the Gammatone filter;
Fig. 3 is a diagram of the inner hair cell model;
Fig. 4 is a flow chart of the method of the invention.
Embodiment
The present invention is described in more detail below with reference to the accompanying drawings.
Referring to Figs. 1-4, the dolphin calls were recorded at a sampling rate of 128 kHz.
For the recorded signal:
Step 1: normalize the recorded marine signal s(n) as pre-processing;
Step 2: pass the signal obtained in step 1 through the Gammatone auditory filter bank to obtain M subband signals (M being the number of filters), and apply a fast Fourier transform to each subband signal. The impulse response of the Gammatone filter is:

g(t) = A t^{n-1} e^{-2\pi b \mathrm{ERB}(f_c) t} \cos(2\pi f_c t + \phi) u(t)

where A is the amplitude factor of the gammatone filter; n is the filter order; f_c is the center frequency; \phi is the initial phase; 2\pi b \mathrm{ERB}(f_c) is the damping factor; u(t) is the unit step function;
ERB denotes the equivalent rectangular bandwidth, given by \mathrm{ERB}(f_c) = 24.7 + 0.108 f_c.
Taking the Laplace transform of the gammatone impulse response gives:

GT(s) = \frac{A (n-1)!}{2} \left[ \frac{e^{j\phi}}{(s + b - jw)^{n}} + \frac{e^{-j\phi}}{(s + b + jw)^{n}} \right]

where A is the filter gain, n is the filter order, f_c is the center frequency, \phi is the phase, b = 2\pi \mathrm{ERB}(f_c), and w = 2\pi f_c.
Step 3: apply the fast Fourier transform to the subband signals of step 2, pass them through a low-pass filter, and generate the auditory spectrum.
Step 4: apply half-wave rectification to the filtered signals of step 2 via the Meddis inner-hair-cell model, generate the auditory spectrum simulating auditory perception, compute the band energy of each channel of the auditory spectrum, and form the feature vector containing auditory features. The Meddis inner-hair-cell model is shown in Fig. 3.
The Meddis model mainly comprises five physical quantities: permeability, the amount of neurotransmitter in the inner hair cell, the amount of neurotransmitter in the cleft between hair cells, the amount of neurotransmitter in the reprocessing store, and the firing probability.
(1) Cell membrane permeability
The permeability reflects the ability of neurotransmitter to pass from the inner hair cell into the cleft, and can be described as follows:

k(t) = \begin{cases} \frac{A + \mathrm{stim}(t)}{A + B + \mathrm{stim}(t)} g, & A + \mathrm{stim}(t) \ge 0 \\ 0, & A + \mathrm{stim}(t) < 0 \end{cases}

k(t) is the cell membrane permeability, \mathrm{stim}(t) is the instantaneous amplitude of the input sound wave, and A, B, g are cell parameters. k(t) = gA/(A+B) gives the spontaneous response of the cell membrane and describes a non-linear process.
(2) Amount of neurotransmitter in the inner hair cell
The rate of change of the amount q(t) of neurotransmitter in the inner hair cell can be expressed as:

\frac{dq(t)}{dt} = y(1 - q(t)) - k(t) q(t) + x w(t)

where y(1 - q(t)) is the amount of neurotransmitter the factory adds to the hair cell, x w(t) is the amount of neurotransmitter flowing back into the hair cell from the reprocessing store, and -k(t) q(t) is the amount of neurotransmitter flowing out of the hair cell into the cleft. Together these determine the rate of change over time of the amount of neurotransmitter in the inner hair cell.
(3) Amount of neurotransmitter in the cleft
The change over time of the amount c(t) of neurotransmitter in the cleft is described by:

\frac{dc(t)}{dt} = k(t) q(t) - l c(t) - r c(t)

where k(t) q(t) is the neurotransmitter flowing from the inner hair cell into the cleft, -l c(t) is the neurotransmitter lost from the cleft, and -r c(t) is the neurotransmitter returned from the cleft to the reprocessing store; the rate of change over time of the amount of neurotransmitter in the cleft is thus determined by these three terms.
(4) Amount of neurotransmitter in the reprocessing store
The rate of change of the amount w(t) of neurotransmitter in the reprocessing store is:

\frac{dw(t)}{dt} = r c(t) - x w(t)
(5) Nerve fiber firing probability
The amount c(t) of neurotransmitter finally present in the cleft determines the probability that the downstream nerve fiber fires; with scale factor h it is expressed as:
p(t) = h c(t) dt
Step 5: compute the band energy of each channel of each auditory spectrum to form the feature vector containing auditory features.
In the above example, the center frequencies were set from 5 kHz to 10 kHz; the extracted dolphin whistle signal feature vectors were analyzed and compared against feature vectors extracted under different levels of underwater ambient noise. The results show that the extracted features achieve a high correct recognition rate and good noise robustness, demonstrating the effectiveness of the invention.
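Putting the stages together, the embodiment's 5-10 kHz configuration can be sketched end to end. The linear center-frequency spacing, the 64-sample truncated impulse response, and the plain half-wave rectifier standing in for the Meddis stage are our simplifications for illustration, not mandated by the patent:

```python
import math

def extract_features(s, fs=128000, M=8, f_lo=5000.0, f_hi=10000.0):
    # Sketch of the whole chain on one recorded frame:
    # normalize -> gammatone filterbank (M channels spanning f_lo..f_hi) ->
    # half-wave rectification (crude stand-in for the Meddis stage) ->
    # mean energy per channel (the M-dimensional feature vector).
    peak = max(abs(v) for v in s) or 1.0
    s = [v / peak for v in s]                                   # step 1
    centers = [f_lo + i * (f_hi - f_lo) / (M - 1) for i in range(M)]
    L = 64                                                      # truncated IR
    feats = []
    for fc in centers:                                          # step 2
        erb = 24.7 + 0.108 * fc
        ir = [(k / fs) ** 3
              * math.exp(-2.0 * math.pi * 1.019 * erb * k / fs)
              * math.cos(2.0 * math.pi * fc * k / fs)
              for k in range(L)]
        sub = [sum(ir[j] * s[i - j] for j in range(min(i + 1, L)))
               for i in range(len(s))]
        sub = [max(v, 0.0) for v in sub]                        # step 4: rectify
        feats.append(sum(v * v for v in sub) / len(sub))        # step 5: energy
    return feats
```

The returned list is the M-dimensional feature vector handed to the classifier.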

Claims (2)

1. A dolphin whistle signal auditory feature extraction method based on the simulated cochlea in a bionic auditory system, characterized in that:
(1) the dolphin call is sampled to obtain a recorded signal s(n), and s(n) is normalized as pre-processing:
\bar{s}(n) = \frac{s(n)}{|s(n)_{\max}|};
(2) the signal obtained in step (1) is passed through a Gammatone auditory filter bank to obtain M subband signals, M being the number of filters, and a fast Fourier transform is applied to each subband signal;
The impulse response of the Gammatone filter is:

g(t) = A t^{n-1} e^{-2\pi b \mathrm{ERB}(f_c) t} \cos(2\pi f_c t + \phi) u(t)

where A is the amplitude factor of the gammatone filter, n is the filter order, f_c is the center frequency, \phi is the initial phase, 2\pi b \mathrm{ERB}(f_c) is the damping factor, and u(t) is the unit step function;
ERB denotes the equivalent rectangular bandwidth, given by \mathrm{ERB}(f_c) = 24.7 + 0.108 f_c;
Taking the Laplace transform of the gammatone impulse response gives:

GT(s) = \frac{A (n-1)!}{2} \left[ \frac{e^{j\phi}}{(s + b - jw)^{n}} + \frac{e^{-j\phi}}{(s + b + jw)^{n}} \right]

where A is the filter gain, n is the filter order, f_c is the center frequency, \phi is the phase, b = 2\pi \mathrm{ERB}(f_c), and w = 2\pi f_c;
(3) a fast Fourier transform is applied to the subband signals of step (2), which are passed through a low-pass filter, and the auditory spectrum is generated;
(4) the filtered signals of step (2) are half-wave rectified via the Meddis inner-hair-cell model to generate an auditory spectrum simulating human-ear perception; the band energy of each channel of the auditory spectrum is computed to form a feature vector containing auditory features;
(5) the band energy of each channel of each auditory spectrum is computed to form the feature vector containing auditory features:
E_m = \frac{1}{N} \sum_{n=1}^{N} |x(n)|^{2}, \quad 1 \le m \le N.
2. The dolphin whistle signal auditory feature extraction method based on the simulated cochlea in a bionic auditory system according to claim 1, characterized in that the Meddis model comprises five physical quantities: cell membrane permeability, the amount of neurotransmitter in the inner hair cell, the amount of neurotransmitter in the cleft between hair cells, the amount of neurotransmitter in the reprocessing store, and the firing probability:
(1) cell membrane permeability:
The permeability, which reflects the ability of neurotransmitter to pass from the inner hair cell into the cleft between hair cells, is described as follows:

k(t) = \begin{cases} \frac{A + \mathrm{stim}(t)}{A + B + \mathrm{stim}(t)} g, & A + \mathrm{stim}(t) \ge 0 \\ 0, & A + \mathrm{stim}(t) < 0 \end{cases}

where k(t) is the cell membrane permeability, \mathrm{stim}(t) is the instantaneous amplitude of the input sound wave, and A, B, g are cell parameters; k(t) = gA/(A+B) gives the spontaneous response of the cell membrane;
(2) amount of neurotransmitter in the inner hair cell:
The rate of change of the amount q(t) of neurotransmitter in the inner hair cell can be expressed as:

\frac{dq(t)}{dt} = y(1 - q(t)) - k(t) q(t) + x w(t)

where y(1 - q(t)) is the amount of neurotransmitter the factory adds to the hair cell, x w(t) is the amount of neurotransmitter flowing back into the hair cell from the reprocessing store, and -k(t) q(t) is the amount of neurotransmitter flowing out of the hair cell into the cleft;
(3) amount of neurotransmitter in the cleft:
The change over time of the amount c(t) of neurotransmitter in the cleft is described by:

\frac{dc(t)}{dt} = k(t) q(t) - l c(t) - r c(t)

where k(t) q(t) is the neurotransmitter flowing from the inner hair cell into the cleft, -l c(t) is the neurotransmitter lost from the cleft, and -r c(t) is the neurotransmitter returned from the cleft to the reprocessing store;
(4) amount of neurotransmitter in the reprocessing store:
The rate of change of the amount w(t) of neurotransmitter in the reprocessing store is:

\frac{dw(t)}{dt} = r c(t) - x w(t);
(5) nerve fiber firing probability:
The amount c(t) of neurotransmitter finally present in the cleft determines the probability that the downstream nerve fiber fires; with scale factor h it is expressed as:
p(t) = h c(t) dt.
CN201710793362.4A 2017-09-06 2017-09-06 Dolphin whistle signal aural signature extracting method based on analog cochlea in bionical auditory system Pending CN107527625A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710793362.4A CN107527625A (en) 2017-09-06 2017-09-06 Dolphin whistle signal aural signature extracting method based on analog cochlea in bionical auditory system


Publications (1)

Publication Number Publication Date
CN107527625A true CN107527625A (en) 2017-12-29

Family

ID=60683465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710793362.4A Pending CN107527625A (en) 2017-09-06 2017-09-06 Dolphin whistle signal aural signature extracting method based on analog cochlea in bionical auditory system

Country Status (1)

Country Link
CN (1) CN107527625A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109300481A (en) * 2018-10-19 2019-02-01 武汉轻工大学 Audio attention rate calculation method and system based on comentropy and time trend analysis
CN111048110A (en) * 2018-10-15 2020-04-21 杭州网易云音乐科技有限公司 Musical instrument identification method, medium, device and computing equipment
CN111414832A (en) * 2020-03-16 2020-07-14 中国科学院水生生物研究所 Real-time online recognition and classification system based on whale dolphin low-frequency underwater acoustic signals

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6798715B2 (en) * 2000-07-08 2004-09-28 Neptune Technologies, Inc. Biomimetic sonar system and method
CN101813771A (en) * 2009-12-08 2010-08-25 中国科学院声学研究所 Dolphin biomimetic sonar signal processing method
CN103559893A (en) * 2013-10-17 2014-02-05 西北工业大学 Gammachirp cepstrum coefficient auditory feature extraction method of underwater targets
CN105575387A (en) * 2015-12-25 2016-05-11 重庆邮电大学 Sound source localization method based on acoustic bionic cochlea basal membrane


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Yongqi: "Analysis and application of speech signals based on an auditory-model inversion method", China Excellent Doctoral and Master's Dissertations Full-text Database (Master's), Information Science and Technology Series *
Wang Lei et al.: "Application of an auditory periphery computational model to underwater target classification and recognition", Acta Electronica Sinica *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111048110A (en) * 2018-10-15 2020-04-21 杭州网易云音乐科技有限公司 Musical instrument identification method, medium, device and computing equipment
CN109300481A (en) * 2018-10-19 2019-02-01 武汉轻工大学 Audio attention rate calculation method and system based on comentropy and time trend analysis
CN109300481B (en) * 2018-10-19 2022-01-11 武汉轻工大学 Audio attention calculation method and system based on information entropy and time trend analysis
CN111414832A (en) * 2020-03-16 2020-07-14 中国科学院水生生物研究所 Real-time online recognition and classification system based on whale dolphin low-frequency underwater acoustic signals
CN111414832B (en) * 2020-03-16 2021-06-25 中国科学院水生生物研究所 Real-time online recognition and classification system based on whale dolphin low-frequency underwater acoustic signals

Similar Documents

Publication Publication Date Title
CN104485114B (en) A kind of method of the voice quality objective evaluation based on auditory perception property
CN105845127B (en) Audio recognition method and its system
CN103236260B (en) Speech recognition system
CN106952649A (en) Method for distinguishing speek person based on convolutional neural networks and spectrogram
CN109890043B (en) Wireless signal noise reduction method based on generative countermeasure network
CN110610719A (en) Sound processing apparatus
CN110428842A (en) Speech model training method, device, equipment and computer readable storage medium
CN106782565A (en) A kind of vocal print feature recognition methods and system
Pianese et al. Deepfake audio detection by speaker verification
CN110503967B (en) Voice enhancement method, device, medium and equipment
Gabor Communication theory and cybernetics
CN108630209A (en) A kind of marine organisms recognition methods of feature based fusion and depth confidence network
CN102664010B (en) Robust speaker distinguishing method based on multifactor frequency displacement invariant feature
CN107527625A (en) Dolphin whistle signal aural signature extracting method based on analog cochlea in bionical auditory system
CN110456332A (en) A kind of underwater sound signal Enhancement Method based on autocoder
CN107767859A The speaker's property understood detection method of artificial cochlea's signal under noise circumstance
CN103903632A (en) Voice separating method based on auditory center system under multi-sound-source environment
Nossier et al. Mapping and masking targets comparison using different deep learning based speech enhancement architectures
CN104778948A (en) Noise-resistant voice recognition method based on warped cepstrum feature
CN103559893B (en) One is target gammachirp cepstrum coefficient aural signature extracting method under water
CN115841821A (en) Voice interference noise design method based on human voice structure
CN103557925B (en) Underwater target gammatone discrete wavelet coefficient auditory feature extraction method
CN108520757A (en) Music based on auditory properties is applicable in scene automatic classification method
Wang et al. Low pass filtering and bandwidth extension for robust anti-spoofing countermeasure against codec variabilities
CN110211569A (en) Real-time gender identification method based on voice map and deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171229