CN107170462A - Acoustic concealment method based on MVDR - Google Patents

Acoustic concealment method based on MVDR

Info

Publication number
CN107170462A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710163190.2A
Other languages
Chinese (zh)
Inventor
陈景东
梁菲菲
王雪瀚
黄海
聂玮奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Linjing Acoustics Technology Jiangsu Co Ltd
Original Assignee
Linjing Acoustics Technology Jiangsu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Linjing Acoustics Technology Jiangsu Co Ltd filed Critical Linjing Acoustics Technology Jiangsu Co Ltd
Priority to CN201710163190.2A priority Critical patent/CN107170462A/en
Publication of CN107170462A publication Critical patent/CN107170462A/en
Priority to CN201810221809.5A priority patent/CN108597532A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L21/0224 Processing in the time domain
    • G10L2021/02161 Number of inputs available containing the signal or the noise to be suppressed
    • G10L2021/02166 Microphone arrays; Beamforming

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)

Abstract

The invention discloses an acoustic concealment method based on MVDR. The method first uses the short-time Fourier transform (STFT) to decompose the time-domain signals received by a sensor array into frequency-domain subband signals, then constructs an MVDR-based concealment filter on each subband so that the signal of a sound source within the concealment radius passes through the filter without attenuation, and finally obtains the estimated signal through the inverse STFT. The present invention achieves a good acoustic concealment effect.

Description

Acoustic concealment method based on MVDR
Technical field
The present invention relates to microphone-array acoustic concealment technology, and in particular to an acoustic concealment method based on MVDR.
Background technology
Research on acoustic concealment has a long history. On the sensing side, two kinds of acoustic sensors have mainly been explored: bone-conduction microphones and ultrasonic microphones; on the signal-processing side, signal separation techniques and differential microphone arrays have been explored.
A bone-conduction microphone collects the speech signal from the slight vibration of the skull bones produced when a person speaks and converts it into an electrical signal. Because it does not pick up sound transmitted through the air like a conventional microphone, it can still deliver a clear speech signal in very noisy environments. Bone conduction has seen practical use for more than a century, but the speech quality of early bone-conduction microphones was poor, especially at high frequencies, so they were only used to assist conventional microphones, for example to perform voice activity detection and thereby improve the performance of single-channel noise reduction. In the past one or two decades bone-conduction microphones have begun to attract real attention and their performance has improved greatly; many communication headsets based on bone-conduction microphones are now on the market. In May 2013, Beijing MeiErSiTong Science Development Co., Ltd. developed a bone-conduction microphone, an independently developed domestic first, and the product has since been put on the market. Recently, other acoustic sensors similar in principle to bone conduction have also attracted attention, such as the DAIKIN-D Talk Mic headset, which uses a highly sensitive microphone to pick up the speaker's laryngeal vibration signal and convert it into an electrical signal; this pickup mode has much in common with bone-conduction microphones. Although bone-conduction microphone technology has made breakthrough progress, its communication headsets are still not ideal to promote, mainly because of several problems: 1) poor dynamic performance; 2) high cost; 3) poor packaging; 4) low tonal quality.
More than ten years ago, a group of scientists at AT&T Labs in the United States designed an ultrasonic microphone consisting of a small ultrasonic transmitter and a large wideband conventional microphone. During operation, the ultrasonic transmitter emits a periodic wideband pulse train with frequencies between 20 kHz and 70 kHz. After the pulses are reflected by the speaker's vocal tract, the reflected signal is received by the microphone, and the back-end digital signal processing uses the transmitted and reflected signals to estimate the shape parameters of the vocal tract and then synthesize the speech uttered by the speaker. The key technical feature of this microphone is that it operates in the ultrasonic range and is therefore not disturbed by audio signals within the audible frequency range, which makes it useful for speech communication in cocktail-party-like environments. The AT&T scientists built a working prototype; when the vowels it received and synthesized were fed into a speech recognition system, a recognition rate of 95% was obtained, and preliminary listening tests also confirmed that the quality of the synthesized vowels essentially reached that of a conventional microphone. Of course, many problems remain to be solved before this microphone becomes truly practical; the biggest technical problem is that for phonemes whose vocal-tract features are not prominent, such as nasals, the quality and intelligibility of the synthesized speech are not high.
Acoustic concealment can theoretically be regarded as a subproblem of signal separation or enhancement. In a complex acoustic environment, a signal picked up by a microphone from a given sound source is, with few exceptions, contaminated. According to the mechanism by which the contamination is produced, the noise encountered in speech signal processing is divided into four classes: ambient noise, echo, reverberation and interference. In order to separate the sound source from the noise, each class of noise is handled with a specific method:
Ambient noise (Noise): Ambient noise is unavoidable and ubiquitous, and its presence seriously degrades the speech quality and intelligibility of the speech signal and the human ear's perception of spatial information. Ambient noise is usually fairly stationary, that is, the statistical properties of the noise at the current time can be replaced by the noise statistics collected over a past interval. Based on the statistical properties of the noisy signal and of the noise, a filter can be designed to process the observed signal, thereby enhancing the speech and suppressing the ambient noise; this technique is called noise reduction. Noise reduction can use a single-channel or a multichannel pickup system, corresponding respectively to single-channel and multichannel noise reduction. Single-channel noise reduction introduces speech distortion while suppressing the noise; by comparison, multichannel noise reduction can reduce the speech distortion while achieving the same output signal-to-noise ratio.
Echo (Echo): Acoustic echo is produced by the acoustic coupling between a microphone and a loudspeaker, and its presence seriously affects full-duplex multi-party interaction. The most important characteristic of echo is that the source signal is known: as long as the acoustic propagation channel from the loudspeaker to the microphone can be estimated, the echo component in the signal observed by the microphone can be estimated, and subtracting this estimate from the picked-up signal cancels the echo. This technique is called acoustic echo cancellation.
Reverberation (Reverberation): Reverberation is caused by reflections from the surfaces of a room (the multipath effect). Reflections are divided into early and late reflections. Early reflections (usually within 40 ms) generally carry useful information; for example, by analysing the structure of the early reflections the size of the room can be estimated, and early reflections can also strengthen the harmonic components of music and improve the listening experience. Late reflections, however, cause spectral distortion, which degrades speech quality and intelligibility and blurs the positional information of the sound source. In voice communication systems the late reflections produce reverberation that seriously degrades communication quality, so dereverberation techniques are needed. One dereverberation technique is to blindly estimate the channel first and then apply equalization; another technique for suppressing reverberation is superdirective array beamforming, whose basic principle is to extract the source signal from the desired direction while suppressing signals from other directions. Since reverberation arrives from all directions, a superdirective array can suppress it to a certain degree.
Interference (Interference) from other sound sources: An interference signal is produced by a point-like source; it is spatial noise arriving from a particular direction. In voice communication there are often many people, and other sound sources, around each communication terminal, so the multi-source situation is unavoidable and the signals from different sources interfere with each other. The typical technique for interference suppression is beamforming, whose basic idea is to form a spatial filter whose response is maximized in the direction of the desired source; the degree of suppression depends on the magnitude of the array response in the interference direction. The goals of speech noise reduction, source separation and beamforming are all to separate the desired signal from the other interfering signals, so these techniques can in principle also be used for acoustic concealment. However, the separation performance of current separation techniques is still very limited and cannot meet the requirements of acoustic concealment applications.
The aforementioned signal separation techniques require a microphone array. Research on microphone arrays has a history of more than forty years, during which many array designs and processing methods have been developed. According to how the array responds to the sound field, these arrays fall into two major classes: additive microphone arrays (AMA) and differential microphone arrays (DMA). Additive arrays are usually large, each microphone measures the sound pressure of the field, and the beamforming of the whole array is a response to the pressure field; most of the work in the current literature concerns processing methods for additive arrays. In contrast, differential arrays respond to spatial derivatives of the pressure field and have the advantages of small size, good frequency invariance of the beam pattern, and maximal directivity for a given number of array elements.
Summary of the invention
The technical problem to be solved by the present invention is to provide an acoustic concealment method based on MVDR that has a good concealment effect.
In order to solve the above technical problem, the present invention adopts the following technical solution: an acoustic concealment method based on MVDR, in which the time-domain signals received by the sensor array are first decomposed into subband signals with the short-time Fourier transform, an MVDR-based concealment filter is constructed on each subband so that the signal of a sound source within the concealment radius passes through the filter without attenuation, and the estimated signal is finally obtained through the inverse STFT.
Further, the method comprises the following steps (a code sketch of this pipeline is given after the list):
S1: According to the array geometry, the number of array elements, the position of the sound source and other parameters, construct the steering vector
$$\mathbf{d}(\omega, r_s, \theta) = \left[\frac{e^{-jkr_{s,1}}}{r_{s,1}}\ \ \frac{e^{-jkr_{s,2}}}{r_{s,2}}\ \ \cdots\ \ \frac{e^{-jkr_{s,M}}}{r_{s,M}}\right]^T.$$
Divide the signal received by each sensor of the microphone array, y_m(k) = x_m(k) + v_m(k), m = 1, 2, …, M, into short time frames with a certain overlap ratio; the frame length can range from a few milliseconds to tens of milliseconds. Then apply the short-time Fourier transform to every frame of each of the M channels to obtain Y_m(ω, i), where i denotes the i-th frame, and construct
$$\mathbf{y}(\omega, i) = [Y_1(\omega,i)\ \ Y_2(\omega,i)\ \ \cdots\ \ Y_M(\omega,i)]^T.$$
S2: With the short-time Fourier transform, the time-domain signals received by the sensor array are decomposed into subband signals.
S3: On the subband of frequency ω, construct the MVDR-based concealment filter
$$\mathbf{h}_{\mathrm{MVDR}}(\omega) = \frac{\boldsymbol{\Gamma}_v^{-1}(\omega)\,\mathbf{d}(\omega, r_s, \theta_s)}{\mathbf{d}^H(\omega, r_s, \theta_s)\,\boldsymbol{\Gamma}_v^{-1}(\omega)\,\mathbf{d}(\omega, r_s, \theta_s)}.$$
S4: On each subband, process the signal of the i-th frame with the concealment filter h(ω):
$$Z(\omega, i) = \mathbf{h}^H(\omega)\,\mathbf{y}(\omega, i) = \sum_{m=1}^{M} H_m^*(\omega)\,Y_m(\omega, i).$$
S5: Using the inverse short-time Fourier transform and the overlap-add method, transform Z(ω, i) back to the time domain to obtain the beamformed time-domain signal Z(k).
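The steps above amount to a per-subband filter-and-sum beamformer. As a rough illustration only (not the patented implementation), the following Python/NumPy sketch shows how the STFT analysis, per-band filtering and inverse STFT could be chained, assuming an M-channel recording `y` of shape (M, samples) and a per-bin filter matrix `h` of shape (bins, M); the function name `conceal`, the 512-sample frame and the 50% overlap are assumptions, not values from the patent.

```python
import numpy as np
from scipy.signal import stft, istft

def conceal(y, fs, h, nperseg=512):
    """Apply per-subband concealment filters h[bin] (shape: bins x M)
    to the M-channel time-domain signal y (shape: M x samples)."""
    # S1/S2: short-time Fourier transform of every channel (framing + FFT)
    f, t, Y = stft(y, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)   # Y: (M, bins, frames)
    # S4: filter-and-sum on each subband, Z(w, i) = h(w)^H y(w, i)
    Z = np.einsum('bm,mbf->bf', h.conj(), Y)
    # S5: inverse STFT with overlap-add gives the beamformed signal Z(k)
    _, z = istft(Z, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    return z
```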
Further, decomposing the time-domain signals received by the sensor array into subband signals with the short-time Fourier transform proceeds as follows:
Assume that the spacing between two adjacent microphones is δ. Since acoustic concealment is used to pick up a nearby sound source, assume that an ideal near-field point source and interference act on the microphone array in the acoustic environment, and that the distances from the source to the individual microphones are r_{s,1}, r_{s,2}, …, r_{s,M}. Taking the centre of the array as the reference point, let r_s be the distance of the source from the reference point and θ_s its incidence angle; the distance from the source to the m-th microphone can then be expressed as
$$ r_{s,m} = \begin{cases} \sqrt{r_s^2 - 2 r_s d_m \cos\theta + d_m^2}, & m \le \dfrac{M+1}{2},\\[6pt] \sqrt{r_s^2 + 2 r_s d_m \cos\theta + d_m^2}, & m > \dfrac{M+1}{2}, \end{cases} $$
where d_m denotes the distance of the m-th microphone from the centre of the array.
At discrete time k, let the signal emitted by the source be x(k). Ignoring absorption losses during propagation, the signal picked up by the m-th microphone undergoes not only a phase delay relative to the source signal but also an amplitude attenuation inversely proportional to the distance, and can be written as
$$ y_m(k) = x_m(k) + v_m(k) = \frac{1}{r_{s,m}}\,x(k-\tau_m) + v_m(k), \qquad m = 1,2,\ldots,M, \tag{1} $$
where x_m(k) is the source component picked up by the m-th microphone, v_m(k) is the noise it picks up, and τ_m is the propagation delay from the source to the m-th microphone. Because the wavefront of an ideal near-field source is spherical, τ_m can be expressed as
$$ \tau_m = \frac{r_{s,m}}{c}, \qquad m = 1,2,\ldots,M, $$
where c is the speed of sound in air. Transforming formula (1) into the frequency domain gives
$$ Y_m(\omega) = \frac{e^{-jkr_{s,m}}}{r_{s,m}}\,X(\omega) + V_m(\omega), \qquad m = 1,2,\ldots,M, $$
where k = ω/c is the wavenumber, ω = 2πf the angular frequency, f the temporal frequency and j the imaginary unit; Y_m(ω), X_m(ω) and V_m(ω) are the Fourier transforms of y_m(k), x_m(k) and v_m(k) respectively.
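A minimal NumPy sketch of this near-field propagation model for a uniform linear array centred at the origin (illustrative only; the helper names, the assumed speed of sound and the use of signed element positions are not from the patent):

```python
import numpy as np

C = 343.0  # assumed speed of sound in air (m/s)

def mic_distances(M, delta, r_s, theta_s):
    """Distances r_{s,m} from a near-field source at (r_s, theta_s) to the
    M elements of a uniform linear array with spacing delta."""
    pos = (np.arange(M) - (M - 1) / 2.0) * delta        # signed element positions
    return np.sqrt(r_s**2 - 2.0 * r_s * pos * np.cos(theta_s) + pos**2)

def steering_vector(omega, r_s, theta_s, M, delta):
    """Near-field steering vector d(omega, r_s, theta_s), length M."""
    r = mic_distances(M, delta, r_s, theta_s)
    k = omega / C                                       # wavenumber
    return np.exp(-1j * k * r) / r                      # spherical amplitude and phase
```

Using signed element positions folds the two branches of the piecewise distance formula into a single expression.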
Further, according to the array geometry, the number of array elements, the source position and other parameters, construct the steering vector of length M:
$$ \mathbf{d}(\omega, r, \theta) = \left[\frac{e^{-jkr_{s,1}}}{r_{s,1}}\ \ \frac{e^{-jkr_{s,2}}}{r_{s,2}}\ \ \cdots\ \ \frac{e^{-jkr_{s,M}}}{r_{s,M}}\right]^T, $$
where the superscript T denotes transposition. Setting r = r_s and θ = θ_s gives d_s(ω, r_s, θ_s), and the signals picked up by the M microphones can be written in vector form as
$$ \mathbf{y}(\omega) = [Y_1(\omega)\ \ Y_2(\omega)\ \ \cdots\ \ Y_M(\omega)]^T = \mathbf{d}_s(\omega, r_s, \theta_s)\,X(\omega) + \mathbf{v}(\omega). $$
After array processing, the output signal obtained from y(ω) is
$$ Z(\omega) = \sum_{m=1}^{M} H_m^*(\omega)\,Y_m(\omega) = \mathbf{h}^H(\omega)\,\mathbf{d}_s(\omega, r_s, \theta_s)\,X(\omega) + \mathbf{h}^H(\omega)\,\mathbf{v}(\omega), $$
where Z(ω) is the estimate of the source signal X(ω), the superscript H denotes the conjugate transpose, and h(ω) = [H_1(ω) H_2(ω) … H_M(ω)]^T is the weighting vector applied by the microphone array to the input signals, i.e. the filter coefficients of the array.
Further, on the subband of frequency ω in S3, the MVDR-based concealment filter is constructed as follows: minimizing the variance of the residual noise at the array output while leaving the nearby source signal unattenuated yields the MVDR concealment filter, whose mathematical expression is
$$ \mathbf{h}_{\mathrm{MVDR}}(\omega) = \arg\min_{\mathbf{h}(\omega)} \mathbf{h}^H \mathbf{R}_{vv}\,\mathbf{h} \quad \text{subject to} \quad \mathbf{h}^H(\omega)\,\mathbf{d}(\omega, r_s, \theta_s) = 1. \tag{2} $$
Formula (2) can be solved with the method of Lagrange multipliers, giving
$$ \mathbf{h}_{\mathrm{MVDR}}(\omega) = \frac{\boldsymbol{\Gamma}_v^{-1}(\omega)\,\mathbf{d}(\omega, r_s, \theta_s)}{\mathbf{d}^H(\omega, r_s, \theta_s)\,\boldsymbol{\Gamma}_v^{-1}(\omega)\,\mathbf{d}(\omega, r_s, \theta_s)}. $$
Further, assume the noise is isotropic (diffuse) noise; the (m, n)-th element of its normalized correlation matrix can be written as
$$ [\boldsymbol{\Gamma}_{dn}(\omega)]_{m,n} = \mathrm{sinc}\!\big(\omega\,|n-m|\,\tau_0\big) = \frac{\sin(\omega\,|n-m|\,\tau_0)}{\omega\,|n-m|\,\tau_0}, \qquad \tau_0 = \frac{\delta}{c}. $$
There are two extreme cases: 1) if ωτ_0 is very large, i.e. at high frequencies or large spacings, the noise signals received by two sensors are nearly uncorrelated and the isotropic noise approaches spatially white noise; 2) if ωτ_0 is very small, i.e. at low frequencies or small spacings, the noise signals received by two sensors are nearly coherent and the isotropic noise approaches point-source noise. The MVDR-based concealment filter involves a matrix inversion; when the number of microphones is large the matrix becomes ill-conditioned and the filter becomes extremely unstable. To avoid such instability in the inversion, the conventional remedy of diagonal loading is used: a diagonal matrix is added to the above matrix, so that the diagonally loaded matrix can be written as [Γ_dn(ω) + εI], where I is the identity matrix of size M and ε is the loading factor. The coefficients of the MVDR-based concealment filter then become
$$ \mathbf{h}_{\mathrm{MVDR}}(\omega) = \frac{[\boldsymbol{\Gamma}_{dn}(\omega) + \varepsilon\mathbf{I}]^{-1}\,\mathbf{d}(\omega, r_s, \theta_s)}{\mathbf{d}^H(\omega, r_s, \theta_s)\,[\boldsymbol{\Gamma}_{dn}(\omega) + \varepsilon\mathbf{I}]^{-1}\,\mathbf{d}(\omega, r_s, \theta_s)}. $$
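A minimal sketch of the diagonally loaded MVDR concealment filter for one frequency bin, following the closed form above (the loading value and the use of a linear solver instead of an explicit inverse are assumptions):

```python
import numpy as np

def mvdr_filter(d, gamma, eps=1e-2):
    """Diagonally loaded MVDR filter h = [G + eps*I]^{-1} d / (d^H [G + eps*I]^{-1} d).

    d     : steering vector of the protected near-field source, shape (M,)
    gamma : normalized noise correlation matrix Gamma(omega), shape (M, M)
    eps   : diagonal loading factor stabilizing the inversion
    """
    M = d.shape[0]
    g = gamma + eps * np.eye(M)          # diagonal loading
    num = np.linalg.solve(g, d)          # [Gamma + eps*I]^{-1} d
    return num / (d.conj() @ num)        # enforce the distortionless constraint h^H d = 1
```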
Beneficial effects of the present invention: the present invention takes a miniature microphone array as the platform. Its core is to first decompose the time-domain signals received by the sensor array into subband signals with the short-time Fourier transform and then construct an appropriate concealment filter on each subband, so that the signal of a sound source within the concealment radius passes through the filter without attenuation. The method of the invention achieves a very good acoustic concealment effect.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a diagram of the acoustic concealment model.
Fig. 2 is a schematic diagram of a system in which a miniature microphone array processes speech.
Fig. 3 illustrates the pickup and processing of a near-field acoustic signal by a uniform linear array.
Fig. 4 shows the concealment attenuation curves of the 8-element MVDR-based concealment method.
Fig. 5 shows the beam patterns of the 8-element MVDR-based concealment method at frequencies of 1300 Hz and 3300 Hz (blue, red, purple and black correspond to r = 5 cm, 10 cm, 30 cm and 50 cm respectively).
Fig. 6 shows the signal-to-interference ratios of the 8-element MVDR-based concealment method for a sound source in the endfire (0°) direction and interference from the endfire direction.
Embodiment
The technical solution of the present invention will be described clearly and completely below through the embodiments.
In the acoustic concealment method based on MVDR of the present invention, the time-domain signals received by the sensor array are first decomposed into subband signals with the short-time Fourier transform, an appropriate concealment filter is constructed on each subband so that the signal of a sound source within the concealment radius passes through the filter without attenuation, and the estimated signal is finally obtained through the inverse STFT.
The method of the present invention comprises the following steps:
S1: According to the array geometry, the number of array elements, the position of the sound source and other parameters, construct the steering vector
$$ \mathbf{d}(\omega, r_s, \theta) = \left[\frac{e^{-jkr_{s,1}}}{r_{s,1}}\ \ \frac{e^{-jkr_{s,2}}}{r_{s,2}}\ \ \cdots\ \ \frac{e^{-jkr_{s,M}}}{r_{s,M}}\right]^T. $$
S2: Divide the signal received by each sensor of the microphone array, y_m(k) = x_m(k) + v_m(k), m = 1, 2, …, M, into short time frames with a certain overlap ratio; the frame length can range from a few milliseconds to tens of milliseconds. Then apply the short-time Fourier transform to every frame of each of the M channels to obtain Y_m(ω, i), where i denotes the i-th frame, and construct
$$ \mathbf{y}(\omega, i) = [Y_1(\omega,i)\ \ Y_2(\omega,i)\ \ \cdots\ \ Y_M(\omega,i)]^T. $$
S3: With the short-time Fourier transform, the time-domain signals received by the sensor array are decomposed into subband signals. Assume that the spacing between two adjacent microphones is δ. Since acoustic concealment is used to pick up a nearby sound source, assume that an ideal near-field point source and interference act on the microphone array, and that the distances from the source to the individual microphones are r_{s,1}, r_{s,2}, …, r_{s,M}. Taking the centre of the array as the reference point, let r_s be the distance of the source from the reference point and θ_s its incidence angle; the distance from the source to the m-th microphone can then be expressed as
$$ r_{s,m} = \begin{cases} \sqrt{r_s^2 - 2 r_s d_m \cos\theta + d_m^2}, & m \le \dfrac{M+1}{2},\\[6pt] \sqrt{r_s^2 + 2 r_s d_m \cos\theta + d_m^2}, & m > \dfrac{M+1}{2}, \end{cases} $$
where d_m denotes the distance of the m-th microphone from the centre of the array.
At discrete time k, let the signal emitted by the source be x(k). Ignoring absorption losses during propagation, the signal picked up by the m-th microphone undergoes not only a phase delay relative to the source signal but also an amplitude attenuation inversely proportional to the distance, and can be written as
$$ y_m(k) = x_m(k) + v_m(k) = \frac{1}{r_{s,m}}\,x(k-\tau_m) + v_m(k), \qquad m = 1,2,\ldots,M, \tag{1} $$
where x_m(k) is the source component picked up by the m-th microphone, v_m(k) is the noise it picks up, and τ_m is the propagation delay from the source to the m-th microphone.
Because the wavefront of an ideal near-field source is spherical, τ_m can be expressed as
$$ \tau_m = \frac{r_{s,m}}{c}, \qquad m = 1,2,\ldots,M, $$
where c is the speed of sound in air. Transforming formula (1) into the frequency domain gives
$$ Y_m(\omega) = \frac{e^{-jkr_{s,m}}}{r_{s,m}}\,X(\omega) + V_m(\omega), \qquad m = 1,2,\ldots,M, $$
where k = ω/c is the wavenumber, ω = 2πf the angular frequency, f the temporal frequency and j the imaginary unit; Y_m(ω), X_m(ω) and V_m(ω) are the Fourier transforms of y_m(k), x_m(k) and v_m(k) respectively.
According to the array geometry, the number of array elements, the source position and other parameters, construct the steering vector of length M:
$$ \mathbf{d}(\omega, r, \theta) = \left[\frac{e^{-jkr_{s,1}}}{r_{s,1}}\ \ \frac{e^{-jkr_{s,2}}}{r_{s,2}}\ \ \cdots\ \ \frac{e^{-jkr_{s,M}}}{r_{s,M}}\right]^T, $$
where the superscript T denotes transposition. Setting r = r_s and θ = θ_s gives d_s(ω, r_s, θ_s), and the signals picked up by the M microphones can be written in vector form as
$$ \mathbf{y}(\omega) = [Y_1(\omega)\ \ Y_2(\omega)\ \ \cdots\ \ Y_M(\omega)]^T = \mathbf{d}_s(\omega, r_s, \theta_s)\,X(\omega) + \mathbf{v}(\omega). $$
After array processing, the output signal obtained from y(ω) is
$$ Z(\omega) = \sum_{m=1}^{M} H_m^*(\omega)\,Y_m(\omega) = \mathbf{h}^H(\omega)\,\mathbf{d}_s(\omega, r_s, \theta_s)\,X(\omega) + \mathbf{h}^H(\omega)\,\mathbf{v}(\omega), $$
where Z(ω) is the estimate of the source signal X(ω), the superscript H denotes the conjugate transpose, and h(ω) = [H_1(ω) H_2(ω) … H_M(ω)]^T is the weighting vector applied by the microphone array to the input signals, i.e. the filter coefficients of the array.
Before deriving the concealment filter, several important metrics used to evaluate the concealment performance are first introduced.
The component of the array output related to the source signal is h^H(ω) d_s(ω, r_s, θ_s) X(ω). The response of the array to the source signal is therefore
$$ H(\omega, r_s, \theta_s) = \mathbf{h}^H(\omega)\,\mathbf{d}_s(\omega, r_s, \theta_s). $$
This response has three variables: ω, r_s and θ_s. Fixing two of them and letting the third vary yields three metrics: 1) for fixed ω and θ_s, the response as a function of r_s is called the concealment attenuation function; 2) for fixed r_s and θ_s, the response as a function of ω is called the concealment frequency response function; 3) for fixed ω and r_s, the response as a function of θ_s is called the beam pattern.
Concealment attenuation function
The concealment attenuation function describes the gain of the array for a single-frequency source signal at different distances; its mathematical definition is
$$ H(r_s) = \mathbf{h}^H(\omega)\,\mathbf{d}_s(r_s). $$
Concealment frequency response function
The concealment frequency response function describes the gain of the array for a broadband signal; its mathematical definition is
$$ H(\omega) = \mathbf{h}^H(\omega)\,\mathbf{d}_s(\omega). $$
Beam pattern
The beam pattern describes the sensitivity of the array to signals arriving from different directions; its mathematical definition is
$$ H(\theta_s) = \mathbf{h}^H(\omega)\,\mathbf{d}_s(\theta_s). $$
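For a given filter h(ω), these three metrics can be evaluated directly from the array response H = h^H d. A short sketch, reusing the hypothetical `steering_vector` helper above (the function names and the dB scaling are illustrative, not from the patent):

```python
import numpy as np

def array_response(h, d):
    """Array response H = h^H d for one frequency bin (np.vdot conjugates h)."""
    return np.vdot(h, d)

def attenuation_curve_db(h, omega, theta, distances, M, delta):
    """Concealment attenuation: |H| in dB versus source distance (fixed omega, theta)."""
    return np.array([20 * np.log10(np.abs(array_response(
        h, steering_vector(omega, r, theta, M, delta)))) for r in distances])

def beam_pattern_db(h, omega, r, angles, M, delta):
    """Beam pattern: |H| in dB versus incidence angle (fixed omega, r)."""
    return np.array([20 * np.log10(np.abs(array_response(
        h, steering_vector(omega, r, a, M, delta)))) for a in angles])
```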
Signal-to-interference ratio and SIR gain
The signal-to-noise ratio measures the relative level of the source signal and the noise signal; by comparing the input and output signal-to-noise ratios, the performance of a beamformer can be evaluated. The present invention is concerned with interference noise. The input signal-to-interference ratio of the array is defined as
$$ \mathrm{iSIR}(\omega) = \frac{\phi_{X_0}(\omega)}{\phi_{V_0}(\omega)}, $$
where X_0(\omega) = \frac{e^{-jkr_s}}{r_s} X(\omega) is the source signal picked up at the reference position, V_0(\omega) = \frac{e^{-jkr_n}}{r_n} V(\omega) is the interference signal picked up at the reference position, r_n is the distance from the interference source to the array centre, and φ_{X_0}(ω), φ_{V_0}(ω) are the powers of X_0(ω) and V_0(ω).
The output signal-to-interference ratio of the array is expressed as
$$ \mathrm{oSIR}(\omega) = \frac{\phi_X(\omega)\,\big|\mathbf{h}^H(\omega)\,\mathbf{d}_s(\omega, r_s, \theta_s)\big|^2}{\phi_V(\omega)\,\big|\mathbf{h}^H(\omega)\,\mathbf{d}(\omega, r_n, \theta_n)\big|^2}, $$
where θ_n is the direction of the interference and φ_X(ω), φ_V(ω) are the powers of X(ω) and V(ω).
The signal-to-interference ratio gain is therefore
$$ G_{\mathrm{SIR}}(\omega) = \frac{\mathrm{oSIR}(\omega)}{\mathrm{iSIR}(\omega)}. $$
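A sketch of how the SIR gain of a given filter could be evaluated under this model (illustrative; it assumes unit source and interference powers, i.e. φ_X = φ_V = 1, and an input SIR taken at the array centre, which are assumptions on top of the definitions above; it reuses the hypothetical `steering_vector` helper):

```python
import numpy as np

def sir_gain_db(h, omega, r_s, theta_s, r_n, theta_n, M, delta):
    """Output/input SIR gain (dB) for a source at (r_s, theta_s) and an
    interferer at (r_n, theta_n), assuming unit-power signals."""
    d_s = steering_vector(omega, r_s, theta_s, M, delta)
    d_n = steering_vector(omega, r_n, theta_n, M, delta)
    isir = (r_n / r_s) ** 2                                   # 1/r_s^2 over 1/r_n^2
    osir = np.abs(np.vdot(h, d_s))**2 / np.abs(np.vdot(h, d_n))**2
    return 10 * np.log10(osir / isir)
```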
S4: On the subband of frequency ω, construct the MVDR-based concealment filter. Minimizing the variance of the residual noise at the array output while leaving the nearby source signal unattenuated yields the MVDR concealment filter, whose mathematical expression is
$$ \mathbf{h}_{\mathrm{MVDR}}(\omega) = \arg\min_{\mathbf{h}(\omega)} \mathbf{h}^H \mathbf{R}_{vv}\,\mathbf{h} \quad \text{subject to} \quad \mathbf{h}^H(\omega)\,\mathbf{d}(\omega, r_s, \theta_s) = 1. \tag{2} $$
Formula (2) can be solved with the method of Lagrange multipliers, giving
$$ \mathbf{h}_{\mathrm{MVDR}}(\omega) = \frac{\boldsymbol{\Gamma}_v^{-1}(\omega)\,\mathbf{d}(\omega, r_s, \theta_s)}{\mathbf{d}^H(\omega, r_s, \theta_s)\,\boldsymbol{\Gamma}_v^{-1}(\omega)\,\mathbf{d}(\omega, r_s, \theta_s)}. $$
Assume the noise is isotropic (diffuse) noise; the (m, n)-th element of its normalized correlation matrix can be written as
$$ [\boldsymbol{\Gamma}_{dn}(\omega)]_{m,n} = \mathrm{sinc}\!\big(\omega\,|n-m|\,\tau_0\big), \qquad \tau_0 = \frac{\delta}{c}. $$
There are two extreme cases: 1) if ωτ_0 is very large, i.e. at high frequencies or large spacings, the noise signals received by two sensors are nearly uncorrelated and the isotropic noise approaches spatially white noise; 2) if ωτ_0 is very small, i.e. at low frequencies or small spacings, the noise signals received by two sensors are nearly coherent and the isotropic noise approaches point-source noise.
The MVDR-based concealment filter involves a matrix inversion; when the number of microphones is large the matrix becomes ill-conditioned and the filter becomes extremely unstable. To avoid such instability, the conventional remedy of diagonal loading is used: a diagonal matrix is added to the above matrix, so that the diagonally loaded matrix can be written as [Γ_dn(ω) + εI], where I is the identity matrix of size M and ε is the loading factor. The coefficients of the MVDR-based concealment filter then become
$$ \mathbf{h}_{\mathrm{MVDR}}(\omega) = \frac{[\boldsymbol{\Gamma}_{dn}(\omega) + \varepsilon\mathbf{I}]^{-1}\,\mathbf{d}(\omega, r_s, \theta_s)}{\mathbf{d}^H(\omega, r_s, \theta_s)\,[\boldsymbol{\Gamma}_{dn}(\omega) + \varepsilon\mathbf{I}]^{-1}\,\mathbf{d}(\omega, r_s, \theta_s)}. $$
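A minimal sketch of the isotropic (diffuse) noise pseudo-coherence matrix for a uniform linear array under the sinc model stated above (note that `np.sinc` is the normalized sinc, sin(pi x)/(pi x), hence the division by pi):

```python
import numpy as np

C = 343.0  # assumed speed of sound (m/s)

def diffuse_coherence(omega, M, delta):
    """Normalized correlation matrix Gamma_dn(omega) of a diffuse noise
    field on a uniform linear array with element spacing delta."""
    m = np.arange(M)
    dist = np.abs(m[:, None] - m[None, :]) * delta      # |n - m| * delta
    return np.sinc(omega * dist / (C * np.pi))          # sin(x)/x with x = omega*dist/c
```

Together with the hypothetical `steering_vector` and `mvdr_filter` helpers above, this is enough to build the diagonally loaded filter for every subband.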
S5: On each subband, process the signal of the i-th frame with the concealment filter h(ω):
$$ Z(\omega, i) = \mathbf{h}^H(\omega)\,\mathbf{y}(\omega, i) = \sum_{m=1}^{M} H_m^*(\omega)\,Y_m(\omega, i). $$
S6: Using the inverse short-time Fourier transform and the overlap-add method, transform Z(ω, i) back to the time domain to obtain the beamformed time-domain signal Z(k).
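For S6, an explicit weighted overlap-add synthesis can stand in for a library inverse STFT. A hand-rolled sketch (the Hann window, the 50% hop and the rfft frame layout are assumptions, not values from the patent):

```python
import numpy as np

def overlap_add_synthesis(Z_frames, frame_len=512, hop=256):
    """Rebuild the time signal Z(k) from beamformed frame spectra Z(omega, i).

    Z_frames : shape (num_frames, frame_len // 2 + 1), one rfft per frame,
               assumed to come from Hann-windowed analysis frames.
    """
    win = np.hanning(frame_len)
    n_frames = Z_frames.shape[0]
    z = np.zeros(hop * (n_frames - 1) + frame_len)
    wsum = np.zeros_like(z)
    for i in range(n_frames):
        frame = np.fft.irfft(Z_frames[i], n=frame_len)
        z[i * hop:i * hop + frame_len] += win * frame        # synthesis window + overlap-add
        wsum[i * hop:i * hop + frame_len] += win ** 2        # accumulated window energy
    return z / np.maximum(wsum, 1e-12)                       # compensate the window overlap
```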
In order to demonstrate the effect of the present invention, the following concrete example is provided to verify the correctness of the proposed algorithm. The concealment attenuation function, concealment frequency response function and beam pattern are MATLAB simulation results, and the signal-to-interference ratios are experimental results measured in the fully anechoic chamber of the Center of Intelligent Acoustics and Immersive Communications at Northwestern Polytechnical University. The experimental conditions are set as follows:
The microphone array is a uniform linear array with M = 8 microphones and an element spacing of δ = 1.1 cm.
Sound source position: θ_s = 0°, r_s = 5 cm.
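Under these experimental settings, the hypothetical helpers sketched earlier could be combined as follows to build the per-subband filters (the 8 kHz sampling rate, the 512-point FFT and the loading factor are assumptions, not values from the patent):

```python
import numpy as np

M, delta = 8, 0.011            # 8-element uniform linear array, 1.1 cm spacing
r_s, theta_s = 0.05, 0.0       # protected source: 5 cm, endfire (0 rad)
fs, nfft = 8000, 512           # assumed sampling rate and FFT size

h = np.zeros((nfft // 2 + 1, M), dtype=complex)
for b, f in enumerate(np.fft.rfftfreq(nfft, d=1.0 / fs)):
    if f == 0.0:
        continue                                   # skip the DC bin
    omega = 2 * np.pi * f
    d = steering_vector(omega, r_s, theta_s, M, delta)
    gamma = diffuse_coherence(omega, M, delta)
    h[b] = mvdr_filter(d, gamma, eps=1e-2)
# h[b] is then applied per subband as Z(omega, i) = h(omega)^H y(omega, i)
```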
Fig. 4(a) shows the concealment attenuation curves of the 8-element MVDR-based concealment method. It can be seen that a point-source signal decays rapidly within 20 cm, the closer the distance the faster the decay, and beyond 20 cm the attenuation is almost inversely proportional to distance. This shows that when the source is within 20 cm, the 8-element MVDR-based concealment method can conceal distant interference; in other words, the concealment radius is r_0 ≈ 20 cm, and when the source distance satisfies r_s < r_0, the method conceals interference outside r_0.
Fig. 4(b) shows the concealment frequency response curves of the 8-element MVDR-based concealment method. It can be seen that the response of a 2000 Hz point-source signal at 5 cm, 10 cm, 30 cm and 50 cm is attenuated by approximately 0 dB, 19 dB, 33 dB and 38 dB respectively. This shows that when the source distance is 5 cm, the 8-element MVDR-based concealment method can conceal interference at 10 cm, 30 cm and 50 cm in the endfire direction.
Fig. 5 shows the beam patterns of the 8-element MVDR-based concealment method at 1300 Hz and 3300 Hz. It can be seen that, at different frequencies, when the source distance is 5 cm the method can conceal interference at 10 cm, 30 cm and 50 cm from any direction, and the concealment effect is even better for interference away from the endfire direction.
Fig. 6 shows the signal-to-interference ratios of the 8-element MVDR-based concealment method for a source in the endfire (0°) direction and interference from the endfire direction. It can be seen that when the interference is at 50 cm, the SIR improvement is about 10 dB for speech interference in Fig. 6(a), about 6 dB for white Gaussian noise interference in Fig. 6(b), about 7 dB for a chirp interference in Fig. 6(c), and about 7 dB for a single-tone interference in Fig. 6(d). This shows that the 8-element MVDR-based concealment method has a good concealment effect.
The embodiments described above merely describe the preferred modes of the present invention and do not limit its concept and scope. Without departing from the design concept of the present invention, all variations and modifications made to the technical solution of the present invention by ordinary technicians in this field shall fall within the protection scope of the present invention; the technical content claimed by the present invention is fully recorded in the claims.

Claims (10)

1. An acoustic concealment method based on MVDR, characterised in that: the method first uses the short-time Fourier transform to decompose the time-domain signals received by a sensor array into subband signals, constructs an MVDR-based concealment filter on each subband so that the signal of a sound source within the concealment radius passes through the filter without attenuation, and finally obtains the estimated signal through the inverse STFT.
2. The acoustic concealment method based on MVDR according to claim 1, characterised in that the method comprises the following steps:
S1: According to the array geometry, the number of array elements, the position of the sound source and other parameters, construct the steering vector
$$ \mathbf{d}(\omega, r_s, \theta) = \left[\frac{e^{-jkr_{s,1}}}{r_{s,1}}\ \ \frac{e^{-jkr_{s,2}}}{r_{s,2}}\ \ \cdots\ \ \frac{e^{-jkr_{s,M}}}{r_{s,M}}\right]^T; $$
S2: Divide the signal received by each sensor of the microphone array, y_m(k) = x_m(k) + v_m(k), m = 1, 2, …, M, into short time frames with a certain overlap ratio, the frame length ranging from a few milliseconds to tens of milliseconds; then apply the short-time Fourier transform to every frame of each of the M channels to obtain Y_m(ω, i), where i denotes the i-th frame, and construct
$$ \mathbf{y}(\omega, i) = [Y_1(\omega,i)\ \ Y_2(\omega,i)\ \ \cdots\ \ Y_M(\omega,i)]^T; $$
S3: With the short-time Fourier transform, the time-domain signals received by the sensor array are decomposed into subband signals;
S4: On the subband of frequency ω, construct the MVDR-based concealment filter
$$ \mathbf{h}_{\mathrm{MVDR}}(\omega) = \frac{\boldsymbol{\Gamma}_v^{-1}(\omega)\,\mathbf{d}(\omega, r_s, \theta_s)}{\mathbf{d}^H(\omega, r_s, \theta_s)\,\boldsymbol{\Gamma}_v^{-1}(\omega)\,\mathbf{d}(\omega, r_s, \theta_s)}; $$
S5: On each subband, process the signal of the i-th frame with the concealment filter h(ω):
$$ Z(\omega, i) = \mathbf{h}^H(\omega)\,\mathbf{y}(\omega, i) = \sum_{m=1}^{M} H_m^*(\omega)\,Y_m(\omega, i); $$
S6: Using the inverse short-time Fourier transform and the overlap-add method, transform Z(ω, i) back to the time domain to obtain the beamformed time-domain signal Z(k).
3. The acoustic concealment method based on MVDR according to claim 1, characterised in that decomposing the time-domain signals received by the sensor array into subband signals with the short-time Fourier transform proceeds as follows:
Assume that the spacing between two adjacent microphones is δ. Since acoustic concealment is used to pick up a nearby sound source, assume that an ideal near-field point source and interference act on the microphone array in the acoustic environment, and that the distances from the source to the individual microphones are r_{s,1}, r_{s,2}, …, r_{s,M}. Taking the centre of the array as the reference point, let r_s be the distance of the source from the reference point and θ_s its incidence angle; the distance from the source to the m-th microphone can then be expressed as
$$ r_{s,m} = \begin{cases} \sqrt{r_s^2 - 2 r_s d_m \cos\theta + d_m^2}, & m \le \dfrac{M+1}{2},\\[6pt] \sqrt{r_s^2 + 2 r_s d_m \cos\theta + d_m^2}, & m > \dfrac{M+1}{2}, \end{cases} $$
where d_m denotes the distance of the m-th microphone from the centre of the array;
At discrete time k, let the signal emitted by the source be x(k). Ignoring absorption losses during propagation, the signal picked up by the m-th microphone undergoes not only a phase delay relative to the source signal but also an amplitude attenuation inversely proportional to the distance, and can be written as
$$ y_m(k) = x_m(k) + v_m(k) = \frac{1}{r_{s,m}}\,x(k-\tau_m) + v_m(k), \quad m = 1,2,\ldots,M, \tag{1} $$
where x_m(k) is the source component picked up by the m-th microphone, v_m(k) is the noise it picks up, and τ_m is the propagation delay from the source to the m-th microphone;
Because the wavefront of an ideal near-field source is spherical, τ_m can be expressed as
$$ \tau_m = \frac{r_{s,m}}{c}, \quad m = 1,2,\ldots,M, $$
where c is the speed of sound in air;
Transforming formula (1) into the frequency domain gives
$$ Y_m(\omega) = \frac{e^{-jkr_{s,m}}}{r_{s,m}}\,X(\omega) + V_m(\omega), \quad m = 1,2,\ldots,M, $$
where k = ω/c is the wavenumber, ω = 2πf the angular frequency, f the temporal frequency and j the imaginary unit; Y_m(ω), X_m(ω) and V_m(ω) are the Fourier transforms of y_m(k), x_m(k) and v_m(k) respectively.
4. The acoustic concealment method based on MVDR according to claim 3, characterised in that: according to the array geometry, the number of array elements, the source position and other parameters, the steering vector of length M is constructed as
$$ \mathbf{d}(\omega, r, \theta) = \left[\frac{e^{-jkr_{s,1}}}{r_{s,1}}\ \ \frac{e^{-jkr_{s,2}}}{r_{s,2}}\ \ \cdots\ \ \frac{e^{-jkr_{s,M}}}{r_{s,M}}\right]^T, $$
where the superscript T denotes transposition; setting r = r_s and θ = θ_s gives d_s(ω, r_s, θ_s), and the signals picked up by the M microphones in vector form are
$$ \mathbf{y}(\omega) = [Y_1(\omega)\ \ Y_2(\omega)\ \ \cdots\ \ Y_M(\omega)]^T = \mathbf{d}_s(\omega, r_s, \theta_s)\,X(\omega) + \mathbf{v}(\omega); $$
after array processing, the output signal obtained from y(ω) is
$$ Z(\omega) = \sum_{m=1}^{M} H_m^*(\omega)\,Y_m(\omega) = \mathbf{h}^H(\omega)\,\mathbf{d}_s(\omega, r_s, \theta_s)\,X(\omega) + \mathbf{h}^H(\omega)\,\mathbf{v}(\omega), $$
where Z(ω) is the estimate of the source signal X(ω), the superscript H denotes the conjugate transpose, and h(ω) = [H_1(ω) H_2(ω) … H_M(ω)]^T is the weighting vector applied by the microphone array to the input signals, i.e. the filter coefficients of the array.
5. The acoustic concealment method based on MVDR according to claim 1, characterised in that on the subband of frequency ω in S3, the MVDR-based concealment filter is constructed as follows: minimizing the variance of the residual noise at the array output while leaving the nearby source signal unattenuated yields the MVDR concealment filter, whose mathematical expression is
$$ \mathbf{h}_{\mathrm{MVDR}}(\omega) = \arg\min_{\mathbf{h}(\omega)} \mathbf{h}^H \mathbf{R}_{vv}\,\mathbf{h} \quad \text{subject to} \quad \mathbf{h}^H(\omega)\,\mathbf{d}(\omega, r_s, \theta_s) = 1; \tag{2} $$
formula (2) can be solved with the method of Lagrange multipliers, giving
$$ \mathbf{h}_{\mathrm{MVDR}}(\omega) = \frac{\boldsymbol{\Gamma}_v^{-1}(\omega)\,\mathbf{d}(\omega, r_s, \theta_s)}{\mathbf{d}^H(\omega, r_s, \theta_s)\,\boldsymbol{\Gamma}_v^{-1}(\omega)\,\mathbf{d}(\omega, r_s, \theta_s)}. $$
6. The acoustic concealment method based on MVDR according to claim 5, characterised in that: assuming the noise is isotropic noise, the (m, n)-th element of its normalized correlation matrix can be written as
$$ [\boldsymbol{\Gamma}_{dn}(\omega)]_{m,n} = \mathrm{sinc}\!\big(\omega\,|n-m|\,\tau_0\big), \qquad \tau_0 = \frac{\delta}{c}; $$
there are two extreme cases: 1) if ωτ_0 is very large, i.e. at high frequencies or large spacings, the noise signals received by two sensors are nearly uncorrelated and the isotropic noise approaches spatially white noise; 2) if ωτ_0 is very small, i.e. at low frequencies or small spacings, the noise signals received by two sensors are nearly coherent and the isotropic noise approaches point-source noise;
the MVDR-based concealment filter involves a matrix inversion; when the number of microphones is large the matrix becomes ill-conditioned and the filter becomes extremely unstable; to avoid instability in the inversion, the conventional remedy of diagonal loading is used: a diagonal matrix is added to the above matrix, the diagonally loaded matrix can be written as [Γ_dn(ω) + εI], where I is the identity matrix of size M and ε is the loading factor, and the coefficients of the MVDR-based concealment filter are then
$$ \mathbf{h}_{\mathrm{MVDR}}(\omega) = \frac{[\boldsymbol{\Gamma}_{dn}(\omega) + \varepsilon\mathbf{I}]^{-1}\,\mathbf{d}(\omega, r_s, \theta_s)}{\mathbf{d}^H(\omega, r_s, \theta_s)\,[\boldsymbol{\Gamma}_{dn}(\omega) + \varepsilon\mathbf{I}]^{-1}\,\mathbf{d}(\omega, r_s, \theta_s)}. $$
7. The acoustic concealment method based on MVDR according to claim 1, characterised in that the microphone array includes, but is not limited to, a uniform linear microphone array, a non-uniform linear microphone array and a circular microphone array.
8. The acoustic concealment method based on MVDR according to claim 1, characterised in that the microphone array is one of a miniature microphone array and a large microphone array.
9. The acoustic concealment method based on MVDR according to claim 1, characterised in that the signal processing method is used to process narrowband signals or broadband signals.
10. The acoustic concealment method based on MVDR according to claim 1, characterised in that the concealment radius of the microphone array increases with the number of microphones and with the element spacing.
CN201710163190.2A 2017-03-19 2017-03-19 Acoustic concealment method based on MVDR Withdrawn CN107170462A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710163190.2A CN107170462A (en) 2017-03-19 2017-03-19 Acoustic concealment method based on MVDR
CN201810221809.5A CN108597532A (en) 2017-03-19 2018-03-18 Acoustic concealment method based on MVDR

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710163190.2A CN107170462A (en) 2017-03-19 2017-03-19 Acoustic concealment method based on MVDR

Publications (1)

Publication Number Publication Date
CN107170462A true CN107170462A (en) 2017-09-15

Family

ID=59848862

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710163190.2A Withdrawn CN107170462A (en) 2017-03-19 2017-03-19 Acoustic concealment method based on MVDR
CN201810221809.5A Pending CN108597532A (en) 2017-03-19 2018-03-18 Acoustic concealment method based on MVDR

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201810221809.5A Pending CN108597532A (en) 2017-03-19 2018-03-18 Acoustic concealment method based on MVDR

Country Status (1)

Country Link
CN (2) CN107170462A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110018465A (en) * 2018-01-09 2019-07-16 中国科学院声学研究所 One kind being based on the pretreated MVDR Beamforming Method of all phase
WO2019205797A1 (en) * 2018-04-27 2019-10-31 深圳市沃特沃德股份有限公司 Noise processing method, apparatus and device
CN112420068A (en) * 2020-10-23 2021-02-26 四川长虹电器股份有限公司 Quick self-adaptive beam forming method based on Mel frequency scale frequency division
CN116013239A (en) * 2022-12-07 2023-04-25 广州声博士声学技术有限公司 Active noise reduction algorithm and device for air duct

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148420A (en) * 2019-06-30 2019-08-20 桂林电子科技大学 A kind of audio recognition method suitable under noise circumstance

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103856866B (en) * 2012-12-04 2019-11-05 西北工业大学 Low noise differential microphone array
CN104464739B (en) * 2013-09-18 2017-08-11 华为技术有限公司 Acoustic signal processing method and device, Difference Beam forming method and device
CN103491397B (en) * 2013-09-25 2017-04-26 歌尔股份有限公司 Method and system for achieving self-adaptive surround sound
EP2916320A1 (en) * 2014-03-07 2015-09-09 Oticon A/s Multi-microphone method for estimation of target and noise spectral variances
DK2916321T3 (en) * 2014-03-07 2018-01-15 Oticon As Processing a noisy audio signal to estimate target and noise spectral variations
EP2928210A1 (en) * 2014-04-03 2015-10-07 Oticon A/s A binaural hearing assistance system comprising binaural noise reduction

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110018465A (en) * 2018-01-09 2019-07-16 中国科学院声学研究所 One kind being based on the pretreated MVDR Beamforming Method of all phase
CN110018465B (en) * 2018-01-09 2020-11-06 中国科学院声学研究所 MVDR beam forming method based on full-phase preprocessing
WO2019205797A1 (en) * 2018-04-27 2019-10-31 深圳市沃特沃德股份有限公司 Noise processing method, apparatus and device
CN112420068A (en) * 2020-10-23 2021-02-26 四川长虹电器股份有限公司 Quick self-adaptive beam forming method based on Mel frequency scale frequency division
CN112420068B (en) * 2020-10-23 2022-05-03 四川长虹电器股份有限公司 Quick self-adaptive beam forming method based on Mel frequency scale frequency division
CN116013239A (en) * 2022-12-07 2023-04-25 广州声博士声学技术有限公司 Active noise reduction algorithm and device for air duct
CN116013239B (en) * 2022-12-07 2023-11-17 广州声博士声学技术有限公司 Active noise reduction algorithm and device for air duct

Also Published As

Publication number Publication date
CN108597532A (en) 2018-09-28

Similar Documents

Publication Publication Date Title
CN107170462A (en) Acoustic concealment method based on MVDR
KR101340215B1 (en) Systems, methods, apparatus, and computer-readable media for dereverberation of multichannel signal
CN107248413A (en) Acoustic concealment method based on differential beamforming
CN106782590B (en) Microphone array beam forming method based on reverberation environment
Khalil et al. Microphone array for sound pickup in teleconference systems
Zhao et al. Design of robust differential microphone arrays
US8351554B2 (en) Signal extraction
CN105869651A (en) Two-channel beam forming speech enhancement method based on noise mixed coherence
Peled et al. Linearly-constrained minimum-variance method for spherical microphone arrays based on plane-wave decomposition of the sound field
Ryan et al. Application of near-field optimum microphone arrays to hands-free mobile telephony
Jeub et al. Binaural dereverberation based on a dual-channel wiener filter with optimized noise field coherence
Shabtai Optimization of the directivity in binaural sound reproduction beamforming
Jarrett et al. Dereverberation performance of rigid and open spherical microphone arrays: Theory & simulation
Geng et al. A speech enhancement method based on the combination of microphone array and parabolic reflector
Kallinger et al. A spatial filtering approach for directional audio coding
Kowalczyk Raking early reflection signals for late reverberation and noise reduction
Li et al. A two-microphone noise reduction method in highly non-stationary multiple-noise-source environments
Leng et al. On speech enhancement using microphone arrays in the presence of co-directional interference
CN112017684B (en) Closed space reverberation elimination method based on microphone array
Ma et al. A time-domain nearfield frequency-invariant beamforming method
Habets Towards multi-microphone speech dereverberation using spectral enhancement and statistical reverberation models
Kwan et al. Speech separation algorithms for multiple speaker environments
Lotter et al. A stereo input-output superdirective beamformer for dual channel noise reduction.
Kowalczyk Multichannel Wiener filter with early reflection raking for automatic speech recognition in presence of reverberation
Yermeche Soft-Constrained Subband Beamforming for Speech Enhancement

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20170915

WW01 Invention patent application withdrawn after publication