CN107248413A - Acoustic concealment method based on differential beamforming - Google Patents

Acoustic concealment method based on differential beamforming

Info

Publication number
CN107248413A
Authority
CN
China
Prior art keywords
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710164086.5A
Other languages
Chinese (zh)
Inventor
陈景东
梁菲菲
王雪瀚
黄海
聂玮奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Linjing Acoustics Technology Jiangsu Co Ltd
Original Assignee
Linjing Acoustics Technology Jiangsu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Linjing Acoustics Technology Jiangsu Co Ltd
Priority to CN201710164086.5A
Publication of CN107248413A
Priority to CN201810221808.0A (published as CN108337605A)
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering
    • G10L21/0216 - Noise filtering characterised by the method used for estimating noise
    • G10L2021/02161 - Number of inputs available containing the signal or the noise to be suppressed
    • G10L2021/02166 - Microphone arrays; Beamforming

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention discloses an acoustic concealment (sound-hiding) method based on differential beamforming. The method first uses the short-time Fourier transform (STFT) to decompose the time-domain signals received by a sensor array into frequency-domain subband signals, and constructs on each subband a concealment filter based on Nth-order differential beamforming, so that the acoustic signal from a sound source within the concealment radius passes through the filter without attenuation; the estimated signal is finally obtained through the inverse STFT. The present invention achieves a good concealment effect.

Description

Acoustic concealment method based on differential beamforming
Technical field
The present invention relates to acoustic concealment (sound-hiding) technology based on microphone arrays, and in particular to an acoustic concealment method based on differential beamforming.
Background technology
Research on acoustic concealment has a long history. In the field of acoustic signal sensing, two kinds of devices have mainly been explored: bone-conduction microphones and ultrasonic microphones; in the field of acoustic signal processing, signal separation techniques and differential microphone arrays have been explored.
A bone-conduction microphone collects the speech signal by converting into an electrical signal the slight vibration of the skull that is produced when a person speaks. Because it does not pick up sound transmitted through the air in the way a conventional microphone does, it can still deliver the voice clearly in very noisy environments. Bone conduction already had many practical applications a century ago, but the speech quality of early bone-conduction microphones was not good, especially at high frequencies, so they were used only to assist conventional microphones, for example for voice activity detection to improve the performance of single-channel speech noise reduction. In the last ten to twenty years, bone-conduction microphones have begun to receive real attention and their performance has improved greatly; many communication headsets based on bone-conduction microphones are now on the market. In May 2013, Beijing MeiErSiTong Science Development Co., Ltd. developed a bone-conduction microphone, a domestic breakthrough achieved through independent research, and the product has since been put on the market. Recently, other acoustic sensors with principles similar to bone conduction have also attracted attention, such as the DAIKIN-D Talk Mic headset, whose operating principle is to pick up the speaker's laryngeal vibration signal with a highly sensitive microphone and convert it into an electrical signal; this pickup mode has much in common with bone-conduction microphones. Although bone-conduction microphone technology has made breakthrough progress, its communication headsets have so far not been promoted very successfully, mainly because of several problems: 1) poor dynamic performance; 2) high cost; 3) poor packaging; 4) poor sound quality.
More than a decade ago, a group of scientists at AT&T Labs in the United States designed an ultrasonic microphone. It consists of a small ultrasonic transmitter and a large wideband conventional microphone. During operation, the ultrasonic transmitter sends a periodic wideband pulse sequence whose frequencies lie between 20 kHz and 70 kHz. After reflection from the speaker's vocal tract, the reflected signal is received by the microphone, and the back-end digital signal processing stage uses the transmitted and reflected signals to estimate the shape parameters of the vocal tract and then synthesize the speech uttered by the speaker. The greatest technical feature of this microphone is that it operates in the ultrasonic range and is therefore not disturbed by audio signals within the audible frequency range, so it can be used to address the voice communication problem in cocktail-party-like environments. The AT&T scientists built a working prototype system; when the vowels it received and synthesized were fed into a speech recognition system, a recognition rate of 95% was obtained, and preliminary listening tests also confirmed that the quality of the synthesized vowels essentially reached that of a conventional microphone. Of course, many problems remain to be solved before this microphone becomes truly practical; the biggest technical problem is that for phonemes whose vocal-tract features are not prominent, such as nasals, the quality and intelligibility of the synthesized speech are not high.
Acoustic concealment can theoretically be regarded as a subproblem of signal separation or enhancement. In a complex acoustic environment, when a microphone is used to pick up the signal from a particular sound source, the signal is almost always contaminated. According to the mechanism by which the contamination is produced, the noise encountered in speech signal processing is divided into four classes: ambient noise, echo, reverberation and interference. In order to separate the sound source from the noise, each class of noise is handled with a specific method:
Ambient noise (noise): Ambient noise is unavoidable and ubiquitous, and its presence severely degrades the quality and intelligibility of the speech signal and the ear's perception of spatial information. Ambient noise is generally fairly stationary, that is, the statistics of the noise at the current instant can be replaced by the noise statistics gathered over past instants. Based on the statistics of the noisy signal and of the noise, a filter can be designed to process the observed signal, thereby enhancing the speech and suppressing the ambient noise; this technique is called noise reduction. Noise reduction can use either a single-channel or a multichannel pickup system, corresponding respectively to single-channel and multichannel noise reduction. Single-channel noise reduction introduces speech distortion while suppressing the noise; in comparison, multichannel noise reduction can reduce the speech distortion while achieving the same output signal-to-noise ratio.
Echo: Acoustic echo is produced by the acoustic coupling between the microphone and the loudspeaker, and its presence severely affects multi-party full-duplex interaction. The key characteristic of echo is that the source signal is known: as long as the acoustic propagation channel from the loudspeaker to the microphone can be estimated, the echo component in the signal picked up by the microphone can be estimated, and subtracting this estimate from the picked-up signal achieves echo cancellation; this technique is called echo cancellation.
Reverberation: Reverberation is caused by reflections from the interfaces of a room (the multipath effect). Reflections are divided into early and late reflections. Early reflections (usually within 40 ms) generally carry useful information; for example, by analysing the structure of the early reflections the size of the room can be estimated, and early reflections can also enhance the harmonic components of music and improve the listening experience. Late reflections, however, cause spectral distortion, which degrades speech quality and intelligibility and blurs the positional information of the sound source. In voice communication systems, late reflections produce reverberation that severely degrades the quality of communication, so dereverberation techniques are needed. One dereverberation technique first performs blind estimation of the channel and then uses equalization to remove the reverberation; another technique for suppressing reverberation is superdirective array beamforming, whose basic principle is to extract the source signal from the desired direction while suppressing signals from other directions. Since reverberation arrives from all directions, a superdirective array can suppress it to a certain extent.
Interference from other sound sources: An interference signal is produced by a point-like noise source, i.e., noise coming from a particular spatial direction. In voice communication there are often many people and other sound sources around each communication end, so the presence of multiple sound sources is unavoidable and the signals from different sources interfere with each other. The typical technique for interference suppression is beamforming, whose basic idea is to form a spatial filter and steer its direction of maximum response toward the desired source; the achievable interference suppression then depends on the magnitude of the array response in the interference direction. The goal of speech denoising, source separation and beamforming is to separate the desired signal from the other interfering signals, so these techniques can also be used for acoustic concealment. However, the separation performance that current separation techniques can achieve is still very limited and cannot meet the requirements of acoustic concealment applications.
The aforementioned signal separation techniques require a microphone array. Research on microphone arrays has a history of more than forty years, during which many array designs and processing methods have been developed. According to how the array responds to the sound field, these arrays fall into two major classes: additive microphone arrays (AMA) and differential microphone arrays (DMA). Additive arrays are generally large; each microphone measures the sound pressure of the field, and the beamforming of the whole array is likewise a response to the pressure field. A large part of the work on arrays in the current literature concerns processing methods for additive arrays. In contrast, a differential array responds to the spatial derivatives of the pressure field, and it has the advantages of small size, good frequency invariance of the beam pattern, and maximal directivity for a given number of elements.
Summary of the invention
The technical problem to be solved by the present invention is to provide an acoustic concealment method based on differential beamforming that has a good concealment effect.
In order to solve the above technical problem, the present invention adopts the following technical scheme. In the acoustic concealment method based on differential beamforming, the short-time Fourier transform is first used to decompose the time-domain signals received by the sensor array into subband signals; on each subband a concealment filter based on Nth-order differential beamforming is constructed so that the acoustic signal from a sound source within the concealment radius passes through the filter without attenuation; and the estimated signal is finally obtained through the inverse STFT.
Further, the method comprises the following steps (an illustrative sketch of the analysis/synthesis framing follows the list):
S1: According to parameters such as the array geometry, the number of elements and the position of the sound source, construct the steering vector
d(ω, r_s, θ) = [ e^{-jk r_{s,1}}/r_{s,1}   e^{-jk r_{s,2}}/r_{s,2}   …   e^{-jk r_{s,M}}/r_{s,M} ]^T.
S2: Divide the signal y_m(k) = x_m(k) + v_m(k), m = 1, 2, ..., M, received by each sensor of the microphone array into short time frames with a certain overlap ratio; the frame length may range from a few milliseconds to tens of milliseconds. Then apply the short-time Fourier transform to each frame of each of the M channels to obtain Y_m(ω, i), where i denotes the i-th frame, and construct
y(ω, i) = [Y_1(ω, i)  Y_2(ω, i)  …  Y_M(ω, i)]^T.
S3: Using the short-time Fourier transform, decompose the time-domain signals received by the sensor array into frequency-domain subband signals.
S4: On the subband with frequency ω, construct the concealment filter based on Nth-order differential beamforming,
h_LC(ω) = D^{-1}(ω, θ)β.
S5: On each subband, filter the signal of the i-th frame with the concealment filter h(ω):
Z(ω, i) = h^H(ω) y(ω, i) = Σ_{m=1}^{M} H_m^*(ω) Y_m(ω, i).
S6: Apply the inverse short-time Fourier transform and the overlap-add method to Z(ω, i) to obtain the beamformed time-domain signal Z(k).
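For illustration only, the following is a minimal sketch (in Python/NumPy) of the short-time analysis and overlap-add synthesis framing used in steps S2, S3 and S6; the frame length, hop size and Hann window below are assumptions of this sketch rather than values prescribed by the method.

```python
import numpy as np

def stft(x, frame_len=256, hop=128):
    """Split one channel x(k) into overlapping windowed frames and return Y_m(omega, i)."""
    win = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * win for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)                 # shape: (n_frames, n_bins)

def istft(Z, frame_len=256, hop=128):
    """Overlap-add synthesis of the beamformed subband signal Z(omega, i) back to Z(k)."""
    win = np.hanning(frame_len)
    frames = np.fft.irfft(Z, n=frame_len, axis=1) * win
    out = np.zeros((Z.shape[0] - 1) * hop + frame_len)
    norm = np.zeros_like(out)
    for i, frame in enumerate(frames):
        out[i * hop:i * hop + frame_len] += frame
        norm[i * hop:i * hop + frame_len] += win ** 2
    return out / np.maximum(norm, 1e-12)               # windowed overlap-add normalisation
```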
Further, decomposing the time-domain signals received by the sensor array into subband signals with the short-time Fourier transform proceeds as follows:
Assume that the spacing between two adjacent microphones is δ. Since the concealment technique is used to pick up a nearby source, assume that an ideal near-field source and an interference act on the microphone array in the acoustic environment, and that the distances from the source to the individual microphones are r_{s,1}, r_{s,2}, ..., r_{s,M}. Define the centre of the array as the reference point, let r_s be the distance from the source to the reference point and θ_s its angle of incidence; the distance from the source to the m-th microphone can then be expressed as
r_{s,m} = sqrt( r_s^2 - 2 r_s d_m cosθ + d_m^2 ) for m ≤ (M+1)/2, and r_{s,m} = sqrt( r_s^2 + 2 r_s d_m cosθ + d_m^2 ) for m > (M+1)/2,
where d_m denotes the distance from the m-th microphone to the array centre.
At discrete time k, let the signal emitted by the source be x(k). If the absorption loss along the propagation path is ignored, the signal picked up by the m-th microphone exhibits, relative to the source signal, not only a phase delay but also an amplitude attenuation inversely proportional to the distance, and can be expressed as
y_m(k) = x_m(k) + v_m(k) = (1/r_{s,m}) x(k - τ_m) + v_m(k), m = 1, 2, ..., M,   (1)
where x_m(k) denotes the source signal picked up by the m-th microphone, v_m(k) denotes the noise signal picked up by the m-th microphone, and τ_m denotes the time delay of the m-th microphone relative to the source.
Because the wavefront of an ideal near-field source is spherical, τ_m can be expressed as
τ_m = r_{s,m}/c, m = 1, 2, ..., M,
where c denotes the speed of sound in air.
Transforming formula (1) into the frequency domain gives
Y_m(ω) = ( e^{-jk r_{s,m}} / r_{s,m} ) X(ω) + V_m(ω), m = 1, 2, ..., M,
where k = ω/c denotes the wavenumber, ω = 2πf denotes the angular frequency, f denotes the temporal frequency, j denotes the imaginary unit, and Y_m(ω), X_m(ω), V_m(ω) denote the Fourier transforms of y_m(k), x_m(k), v_m(k), respectively.
Further, according to parameters such as the array geometry, the number of elements and the position of the source, the steering vector of length M is constructed as
d(ω, r, θ) = [ e^{-jk r_1}/r_1   e^{-jk r_2}/r_2   …   e^{-jk r_M}/r_M ]^T,
where the superscript T denotes transposition and r_m is the distance from the point at (r, θ) to the m-th microphone. Setting r = r_s and θ = θ_s gives d_s(ω, r_s, θ_s), and the signals picked up by the M microphones are written in vector form as
y(ω) = [Y_1(ω)  Y_2(ω)  …  Y_M(ω)]^T = d_s(ω, r_s, θ_s) X(ω) + v(ω).
After array processing, the output signal obtained from y(ω) is
Z(ω) = Σ_{m=1}^{M} H_m^*(ω) Y_m(ω) = h^H(ω) d_s(ω, r_s, θ_s) X(ω) + h^H(ω) v(ω),
where Z(ω) is the estimate of the source signal X(ω), the superscript H denotes conjugate transposition, and h(ω) = [H_1(ω)  H_2(ω)  …  H_M(ω)]^T denotes the weight vector applied by the microphone array to the input signals, i.e. the filter coefficients of the microphone array.
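For illustration only, the following minimal sketch evaluates the near-field steering vector d(ω, r, θ) above for a uniform linear array; the helper name, the signed-coordinate convention for the element positions (which folds the two branches of the distance formula r_{s,m} into a single expression) and the sound speed of 343 m/s are assumptions of this sketch.

```python
import numpy as np

def nearfield_steering_vector(omega, r, theta, M=8, delta=0.011, c=343.0):
    """Length-M vector with elements exp(-j*k*r_m) / r_m for a spherical wavefront."""
    # signed distance of microphone m from the array centre (positive for m <= (M+1)/2)
    d_m = ((M + 1) / 2.0 - np.arange(1, M + 1)) * delta
    # distance from the point at (r, theta) to microphone m
    r_m = np.sqrt(r ** 2 - 2.0 * r * d_m * np.cos(theta) + d_m ** 2)
    k = omega / c                                      # wavenumber k = omega / c
    return np.exp(-1j * k * r_m) / r_m
```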
Further, constructing the concealment filter based on Nth-order differential beamforming on the subband with frequency ω (step S4) proceeds as follows:
From the desired beam pattern, obtain N + 1 constraints at the source distance r_s, forming the system of linear equations
[ d^H(ω, r_s, θ_s); d^H(ω, r_s, θ_{N,1}); d^H(ω, r_s, θ_{N,2}); …; d^H(ω, r_s, θ_{N,N}) ] h(ω) = [ 1  β_{N,1}  β_{N,2}  …  β_{N,N} ]^T,   (2)
where N = M - 1, the angles θ_{N,m} are mutually distinct, and 0 ≤ β_{N,m} ≤ 1, m = 1, 2, ..., N.
For a uniform linear differential microphone array the optimal array response direction is the endfire direction. Assuming the source is at the endfire direction of the array, i.e. θ_s = 0°, let
θ = [0°  θ_{N,1}  …  θ_{N,N}]^T,
β = [1  β_{N,1}  …  β_{N,N}]^T.
The system of linear equations (2) can then be written as
D(ω, θ) h(ω) = β,
and the solution of this system, i.e. the concealment filter based on differential beamforming, is
h_LC(ω) = D^{-1}(ω, θ)β.
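For illustration only, the following minimal sketch solves D(ω, θ)h(ω) = β for h_LC(ω) on one subband; it reuses the steering-vector helper from the sketch above, and the constraint angles and β values in the usage comment are placeholders for a third-order design, not values taken from this document.

```python
import numpy as np

def concealment_filter(omega, r_s, thetas, betas, M, delta=0.011, c=343.0):
    """Solve D(omega, theta) h(omega) = beta, where row n of D is d^H(omega, r_s, theta_n).

    thetas: the N+1 = M constraint directions (thetas[0] = theta_s, the source direction)
    betas:  the desired responses at those directions (betas[0] = 1)
    """
    D = np.stack([np.conj(nearfield_steering_vector(omega, r_s, th, M, delta, c))
                  for th in thetas])
    return np.linalg.solve(D, np.asarray(betas, dtype=complex))

# Placeholder usage for a third-order (N = 3, M = 4) design:
# thetas = np.deg2rad([0.0, 90.0, 135.0, 180.0])
# betas = [1.0, 0.5, 0.25, 0.0]
# h = concealment_filter(2 * np.pi * 1000.0, r_s=0.05, thetas=thetas, betas=betas, M=4)
```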
Beneficial effects of the present invention: The invention takes a miniature microphone array as its platform. Its core is to first use the short-time Fourier transform to decompose the time-domain signals received by the sensor array into subband signals, and to construct a suitable concealment filter on each subband so that the acoustic signal from a source within the concealment radius passes through the filter without attenuation. The method of the invention achieves a very good concealment effect.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be derived from them without creative effort.
Fig. 1 is the acoustic concealment model.
Fig. 2 is a schematic diagram of a system in which a miniature microphone array processes speech.
Fig. 3 illustrates the pickup and processing of a near-field acoustic signal by a uniform linear array.
Fig. 4 shows the concealment attenuation curves of the concealment method based on third-order cardioid differential beamforming.
Fig. 5 shows the concealment frequency response curves of the concealment method based on third-order cardioid differential beamforming.
Fig. 6 shows the beam patterns of the concealment method based on third-order cardioid differential beamforming (blue, red, purple and black denote r = 5 cm, 10 cm, 30 cm and 50 cm respectively).
Fig. 7 shows the signal-to-interference ratio of the concealment method based on third-order cardioid differential beamforming for the source at 0 cm in the endfire direction and interference in the endfire direction [the abscissa is the distance (cm) of the interference and the ordinate is the signal-to-interference ratio (dB)].
Embodiments
The technical solutions of the present invention are clearly and completely described below through the embodiments.
In the acoustic concealment method based on differential beamforming of the present invention, the short-time Fourier transform is first used to decompose the time-domain signals received by the sensor array into subband signals; a suitable concealment filter is constructed on each subband so that the acoustic signal from a source within the concealment radius passes through the filter without attenuation; and the estimated signal is finally obtained through the inverse STFT.
The method of the present invention comprises the following steps:
S1: According to parameters such as the array geometry, the number of elements and the position of the sound source, construct the steering vector d(ω, r_s, θ).
S2: Divide the signal y_m(k) = x_m(k) + v_m(k), m = 1, 2, ..., M, received by each sensor of the microphone array into short time frames with a certain overlap ratio; the frame length may range from a few milliseconds to tens of milliseconds. Apply the short-time Fourier transform to each frame of each of the M channels to obtain Y_m(ω, i), where i denotes the i-th frame, and construct
y(ω, i) = [Y_1(ω, i)  Y_2(ω, i)  …  Y_M(ω, i)]^T.
S3: Using the short-time Fourier transform, decompose the time-domain signals received by the sensor array into subband signals. Assume that the spacing between two adjacent microphones is δ. Since the concealment technique is used to pick up a nearby source, assume that an ideal near-field source and an interference act on the microphone array, and that the distances from the source to the individual microphones are r_{s,1}, r_{s,2}, ..., r_{s,M}. Define the centre of the array as the reference point, let r_s be the distance from the source to the reference point and θ_s its angle of incidence; the distance from the source to the m-th microphone can then be expressed as
r_{s,m} = sqrt( r_s^2 - 2 r_s d_m cosθ + d_m^2 ) for m ≤ (M+1)/2, and r_{s,m} = sqrt( r_s^2 + 2 r_s d_m cosθ + d_m^2 ) for m > (M+1)/2,
where d_m denotes the distance from the m-th microphone to the array centre.
At discrete time k, let the signal emitted by the source be x(k). If the absorption loss along the propagation path is ignored, the signal picked up by the m-th microphone exhibits, relative to the source signal, not only a phase delay but also an amplitude attenuation inversely proportional to the distance, and can be expressed as
y_m(k) = x_m(k) + v_m(k) = (1/r_{s,m}) x(k - τ_m) + v_m(k), m = 1, 2, ..., M,   (1)
where x_m(k) denotes the source signal picked up by the m-th microphone, v_m(k) denotes the noise signal picked up by the m-th microphone, and τ_m denotes the time delay of the m-th microphone relative to the source.
Because the wavefront of an ideal near-field source is spherical, τ_m can be expressed as τ_m = r_{s,m}/c, m = 1, 2, ..., M, where c denotes the speed of sound in air.
Transforming formula (1) into the frequency domain gives
Y_m(ω) = ( e^{-jk r_{s,m}} / r_{s,m} ) X(ω) + V_m(ω), m = 1, 2, ..., M,
where k = ω/c denotes the wavenumber, ω = 2πf denotes the angular frequency, f denotes the temporal frequency, j denotes the imaginary unit, and Y_m(ω), X_m(ω), V_m(ω) denote the Fourier transforms of y_m(k), x_m(k), v_m(k), respectively.
According to parameters such as the array geometry, the number of elements and the position of the source, the steering vector of length M is constructed as
d(ω, r, θ) = [ e^{-jk r_1}/r_1   e^{-jk r_2}/r_2   …   e^{-jk r_M}/r_M ]^T,
where the superscript T denotes transposition and r_m is the distance from the point at (r, θ) to the m-th microphone. Setting r = r_s and θ = θ_s gives d_s(ω, r_s, θ_s), and the signals picked up by the M microphones are written in vector form as
y(ω) = [Y_1(ω)  Y_2(ω)  …  Y_M(ω)]^T = d_s(ω, r_s, θ_s) X(ω) + v(ω).
After array processing, the output signal obtained from y(ω) is
Z(ω) = Σ_{m=1}^{M} H_m^*(ω) Y_m(ω) = h^H(ω) d_s(ω, r_s, θ_s) X(ω) + h^H(ω) v(ω),
where Z(ω) is the estimate of the source signal X(ω), the superscript H denotes conjugate transposition, and h(ω) = [H_1(ω)  H_2(ω)  …  H_M(ω)]^T denotes the weight vector applied by the microphone array to the input signals, i.e. the filter coefficients of the microphone array.
Before the concealment filter is derived, several important indices for measuring concealment performance are introduced.
The component of the array output related to the source signal is h^H(ω) d_s(ω, r_s, θ_s) X(ω). Therefore, the response of the array to the source signal is
H(ω, r_s, θ_s) = h^H(ω) d_s(ω, r_s, θ_s).
This response has three variables: ω, r_s and θ_s. Fixing two of them and letting the response vary with the third yields three indices: 1) with ω and θ_s fixed, the response as a function of r_s is called the concealment attenuation function; 2) with r_s and θ_s fixed, the response as a function of ω is called the concealment frequency response function; 3) with ω and r_s fixed, the response as a function of θ_s is called the beam pattern.
Concealment attenuation function
The concealment attenuation function describes the gain of the array for a single-frequency source signal at different distances; it is defined as
H(r_s) = h^H(ω) d_s(r_s).
Concealment frequency response function
The concealment frequency response function describes the gain of the array for a wideband signal; it is defined as
H(ω) = h^H(ω) d_s(ω).
Beam pattern
The beam pattern describes the sensitivity of the array to signals arriving from different directions; it is defined as
H(θ_s) = h^H(ω) d_s(θ_s).
Signal-to-interference ratio and signal-to-interference ratio gain
The signal-to-noise ratio measures the relative level of the source signal and the noise signal; by comparing the input and output signal-to-noise ratios, the performance of a beamformer can be evaluated. The present invention is concerned with interference noise. The input signal-to-interference ratio of the array is defined as the ratio of the powers of the source and interference signals at the reference position,
iSIR(ω) = φ_{X_0}(ω) / φ_{V_0}(ω),
where X_0(ω) = ( e^{-jk r_s}/r_s ) X(ω) is the source signal picked up at the reference position, V_0(ω) = ( e^{-jk r_n}/r_n ) V(ω) is the interference signal picked up at the reference position, r_n denotes the distance from the interference to the array centre, and φ_{X_0}(ω), φ_{V_0}(ω) denote the powers of X_0(ω) and V_0(ω) respectively.
The output signal-to-interference ratio of the array is expressed as
oSIR(ω) = |h^H(ω) d_s(ω, r_s, θ_s)|^2 φ_X(ω) / ( |h^H(ω) d(ω, r_n, θ_n)|^2 φ_V(ω) ),
where θ_n denotes the direction of the interference and φ_X(ω), φ_V(ω) denote the powers of X(ω) and V(ω) respectively.
Therefore, the signal-to-interference ratio gain is
G(ω) = oSIR(ω) / iSIR(ω).
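For illustration only, the following minimal sketch evaluates the indices defined above; it reuses the steering-vector helper from the earlier sketch, and the assumption that the source and the interference have equal power in the SIR gain is an assumption of this sketch.

```python
import numpy as np

def array_response(h, omega, r, theta, M, delta=0.011, c=343.0):
    """H(omega, r, theta) = h^H(omega) d(omega, r, theta)."""
    return np.vdot(h, nearfield_steering_vector(omega, r, theta, M, delta, c))

# Concealment attenuation function: fix omega and theta_s, sweep the distance r.
# Concealment frequency response:   fix r_s and theta_s, sweep omega (recomputing h per band).
# Beam pattern:                     fix omega and r_s, sweep theta_s.

def sir_gain_db(h, omega, r_s, theta_s, r_n, theta_n, M, delta=0.011, c=343.0):
    """oSIR / iSIR in dB, assuming equal source and interference powers."""
    Hs2 = abs(array_response(h, omega, r_s, theta_s, M, delta, c)) ** 2
    Hn2 = abs(array_response(h, omega, r_n, theta_n, M, delta, c)) ** 2
    input_sir = (1.0 / r_s ** 2) / (1.0 / r_n ** 2)    # |X0|^2 / |V0|^2 at the reference point
    return 10.0 * np.log10((Hs2 / Hn2) / input_sir)
```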
S4: On the subband with frequency ω, construct the concealment filter based on Nth-order differential beamforming. From the desired beam pattern, obtain N + 1 constraints at the source distance r_s, forming the system of linear equations
[ d^H(ω, r_s, θ_s); d^H(ω, r_s, θ_{N,1}); d^H(ω, r_s, θ_{N,2}); …; d^H(ω, r_s, θ_{N,N}) ] h(ω) = [ 1  β_{N,1}  β_{N,2}  …  β_{N,N} ]^T,   (2)
where N = M - 1, the angles θ_{N,m} are mutually distinct, and 0 ≤ β_{N,m} ≤ 1, m = 1, 2, ..., N.
For a uniform linear differential microphone array the optimal array response direction is the endfire direction, so assume the source is at the endfire direction of the array, i.e. θ_s = 0°, and let
θ = [0°  θ_{N,1}  …  θ_{N,N}]^T,
β = [1  β_{N,1}  …  β_{N,N}]^T.
The system of linear equations (2) can then be written as
D(ω, θ) h(ω) = β,
and its solution, i.e. the concealment filter based on differential beamforming, is
h_LC(ω) = D^{-1}(ω, θ)β.
S5: On each subband, filter the signal of the i-th frame with the concealment filter h(ω):
Z(ω, i) = h^H(ω) y(ω, i) = Σ_{m=1}^{M} H_m^*(ω) Y_m(ω, i).
S6: Apply the inverse short-time Fourier transform and the overlap-add method to Z(ω, i) to obtain the beamformed time-domain signal Z(k), as sketched below.
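For illustration only, the following minimal end-to-end sketch ties steps S1 to S6 together; it reuses the stft/istft, steering-vector and filter helpers from the earlier sketches, and the sampling rate, frame parameters and geometry defaults are assumptions of this sketch.

```python
import numpy as np

def conceal(y, thetas, betas, fs=16000, frame_len=256, hop=128,
            r_s=0.05, M=8, delta=0.011, c=343.0):
    """y: (M, K) multichannel time-domain recording; returns the beamformed estimate Z(k).

    thetas, betas: the M = N+1 constraint directions and target responses of the design.
    """
    Y = np.stack([stft(ch, frame_len, hop) for ch in y])   # (M, n_frames, n_bins)
    Z = np.zeros(Y.shape[1:], dtype=complex)
    for b in range(1, Y.shape[2]):                         # skip the DC bin (k = 0)
        omega = 2.0 * np.pi * b * fs / frame_len
        h = concealment_filter(omega, r_s, thetas, betas, M, delta, c)
        # Z(omega, i) = h^H(omega) y(omega, i) for every frame i
        Z[:, b] = np.conj(h) @ Y[:, :, b]
    return istft(Z, frame_len, hop)
```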
To demonstrate the effect of the present invention, the following specific example is given to verify the correctness of the proposed algorithm. The concealment attenuation function, concealment frequency response function and beam pattern are MATLAB simulation results, while the signal-to-interference ratio results were measured in the fully anechoic chamber of the Center of Intelligent Acoustics and Immersive Communications at Northwestern Polytechnical University. The experimental conditions are set as follows:
Microphone array: uniform linear array, number of microphones M = 8, element spacing δ = 1.1 cm.
Source position: θ_s = 0°, r_s = 5 cm.
Concealment performance based on third-order differential beamforming
The θ and β in the concealment filter based on Nth-order differential beamforming determine the shape of the beam pattern; typical beam patterns include the dipole, cardioid, hypercardioid and supercardioid. The present invention demonstrates the concealment performance obtained with third-order cardioid differential beamforming.
Fig. 4 shows the concealment attenuation curves of the concealment method based on third-order cardioid differential beamforming for element spacings of 1.1 cm and 2.2 cm. It can be seen that a point-source signal is attenuated rapidly within 10 cm, and the closer the distance the faster the attenuation; beyond 10 cm the attenuation is almost inversely proportional to the distance. This shows that the concealment radius is r_0 ≈ 10 cm: when the source distance satisfies r_s < r_0, the concealment method based on third-order cardioid differential beamforming can conceal interference outside r_0. Comparing Fig. 4(a) and (b), the concealment effect with the 2.2 cm spacing is better.
Fig. 5 shows the concealment frequency response curves of the concealment method based on third-order cardioid differential beamforming for element spacings of 1.1 cm and 2.2 cm. In Fig. 5(a) the responses to a 2000 Hz point-source signal at 5 cm, 10 cm, 30 cm and 50 cm are about 0 dB, 10 dB, 20 dB and 25 dB respectively, while in Fig. 5(b) they are about 0 dB, 12 dB, 25 dB and 31 dB. This shows that when the source distance is 5 cm, both the 1.1 cm-spacing method based on the third-order cardioid ideal beam pattern and the 2.2 cm-spacing method based on third-order cardioid differential beamforming can conceal interference at 10 cm, 30 cm and 50 cm in the endfire direction; comparing Fig. 5(a) and 5(b), the concealment effect with the 2.2 cm spacing is better.
Fig. 6 shows the beam patterns of the concealment method based on third-order cardioid differential beamforming for spacings of 1.1 cm and 2.2 cm at frequencies of 1300 Hz and 3300 Hz. It can be seen that at the different frequencies, when the source distance is 5 cm, both the 1.1 cm-spacing and the 2.2 cm-spacing methods can conceal interference at 10 cm, 30 cm and 50 cm in any direction, and the concealment of interference away from the endfire direction is even better.
Fig. 7 shows the signal-to-interference ratio of the concealment method based on third-order cardioid differential beamforming for the endfire source and endfire interference. When the interference is at 50 cm, the signal-to-interference ratio is improved by about 11 dB and 12 dB for speech interference, by about 13 dB and 14 dB for white Gaussian noise, by about 11 dB and 11 dB for a chirp waveform, and by about 10 dB and 11 dB for single-tone interference, as shown in the four panels of Fig. 7. This shows that the concealment method based on third-order cardioid differential beamforming has a good concealment effect.
The embodiments described above merely describe the preferred modes of the present invention and do not limit its concept or scope. Without departing from the design concept of the present invention, all variations and modifications made by persons of ordinary skill in the art to the technical solution of the present invention shall fall within the protection scope of the present invention; the technical content claimed by the present invention is fully set out in the claims.

Claims (9)

1. An acoustic concealment method based on differential beamforming, characterized in that: the method first uses the short-time Fourier transform to decompose the time-domain signals received by a sensor array into subband signals, constructs on each subband a concealment filter based on Nth-order differential beamforming so that the acoustic signal from a sound source within the concealment radius passes through the concealment filter without attenuation, and finally obtains the estimated signal through the inverse STFT.
2. The acoustic concealment method based on differential beamforming according to claim 1, characterized in that the method comprises the following steps:
S1: According to parameters such as the array geometry, the number of elements and the position of the sound source, construct the steering vector
d(ω, r_s, θ) = [ e^{-jk r_{s,1}}/r_{s,1}   e^{-jk r_{s,2}}/r_{s,2}   …   e^{-jk r_{s,M}}/r_{s,M} ]^T;
S2: Divide the signal y_m(k) = x_m(k) + v_m(k), m = 1, 2, ..., M, received by each sensor of the microphone array into short time frames with a certain overlap ratio, the frame length ranging from a few milliseconds to tens of milliseconds; then apply the short-time Fourier transform to each frame of each of the M channels to obtain Y_m(ω, i), where i denotes the i-th frame, and construct
y(ω, i) = [Y_1(ω, i)  Y_2(ω, i)  …  Y_M(ω, i)]^T;
S3: Using the short-time Fourier transform, decompose the time-domain signals received by the sensor array into frequency-domain subband signals;
S4: On the subband with frequency ω, construct the concealment filter based on Nth-order differential beamforming,
h_LC(ω) = D^{-1}(ω, θ)β;
S5: On each subband, filter the signal of the i-th frame with the concealment filter h(ω):
Z(ω, i) = h^H(ω) y(ω, i) = Σ_{m=1}^{M} H_m^*(ω) Y_m(ω, i);
S6: Apply the inverse short-time Fourier transform and the overlap-add method to Z(ω, i) to obtain the beamformed time-domain signal Z(k).
3. The acoustic concealment method based on differential beamforming according to claim 1, characterized in that decomposing the time-domain signals received by the sensor array into subband signals with the short-time Fourier transform comprises the following steps:
Assume that the spacing between two adjacent microphones is δ; since the concealment technique is used to pick up a nearby source, assume that an ideal near-field source and an interference act on the microphone array in the acoustic environment, with the distances from the source to the individual microphones being r_{s,1}, r_{s,2}, ..., r_{s,M}; define the centre of the array as the reference point, let r_s be the distance from the source to the reference point and θ_s its angle of incidence; the distance from the source to the m-th microphone can then be expressed as
r_{s,m} = sqrt( r_s^2 - 2 r_s d_m cosθ + d_m^2 ) for m ≤ (M+1)/2, and r_{s,m} = sqrt( r_s^2 + 2 r_s d_m cosθ + d_m^2 ) for m > (M+1)/2,
where d_m denotes the distance from the m-th microphone to the array centre;
at discrete time k, let the signal emitted by the source be x(k); if the absorption loss along the propagation path is ignored, the signal picked up by the m-th microphone exhibits, relative to the source signal, not only a phase delay but also an amplitude attenuation inversely proportional to the distance, and can be expressed as
y_m(k) = x_m(k) + v_m(k) = (1/r_{s,m}) x(k - τ_m) + v_m(k), m = 1, 2, ..., M,   (1)
where x_m(k) denotes the source signal picked up by the m-th microphone, v_m(k) denotes the noise signal picked up by the m-th microphone, and τ_m denotes the time delay of the m-th microphone relative to the source;
because the wavefront of an ideal near-field source is spherical, τ_m can be expressed as
τ_m = r_{s,m}/c, m = 1, 2, ..., M,
where c denotes the speed of sound in air;
transforming formula (1) into the frequency domain gives
Y_m(ω) = ( e^{-jk r_{s,m}} / r_{s,m} ) X(ω) + V_m(ω), m = 1, 2, ..., M,
where k = ω/c denotes the wavenumber, ω = 2πf denotes the angular frequency, f denotes the temporal frequency, j denotes the imaginary unit, and Y_m(ω), X_m(ω), V_m(ω) denote the Fourier transforms of y_m(k), x_m(k), v_m(k), respectively.
4. The acoustic concealment method based on differential beamforming according to claim 3, characterized in that: according to parameters such as the array geometry, the number of elements and the position of the source, the steering vector of length M is constructed as
d(ω, r, θ) = [ e^{-jk r_1}/r_1   e^{-jk r_2}/r_2   …   e^{-jk r_M}/r_M ]^T,
where the superscript T denotes transposition and r_m is the distance from the point at (r, θ) to the m-th microphone; setting r = r_s and θ = θ_s gives d_s(ω, r_s, θ_s), and the signals picked up by the M microphones are written in vector form as
y(ω) = [Y_1(ω)  Y_2(ω)  …  Y_M(ω)]^T = d_s(ω, r_s, θ_s) X(ω) + v(ω);
after array processing, the output signal obtained from y(ω) is
Z(ω) = Σ_{m=1}^{M} H_m^*(ω) Y_m(ω) = h^H(ω) d_s(ω, r_s, θ_s) X(ω) + h^H(ω) v(ω),
where Z(ω) is the estimate of the source signal X(ω), the superscript H denotes conjugate transposition, and h(ω) = [H_1(ω)  H_2(ω)  …  H_M(ω)]^T denotes the weight vector applied by the microphone array to the input signals, i.e. the filter coefficients of the microphone array.
5. The acoustic concealment method based on differential beamforming according to claim 1, characterized in that constructing the concealment filter based on Nth-order differential beamforming on the subband with frequency ω (step S4) comprises the following steps:
From the desired beam pattern, obtain N + 1 constraints at the source distance r_s, forming the system of linear equations
[ d^H(ω, r_s, θ_s); d^H(ω, r_s, θ_{N,1}); d^H(ω, r_s, θ_{N,2}); …; d^H(ω, r_s, θ_{N,N}) ] h(ω) = [ 1  β_{N,1}  β_{N,2}  …  β_{N,N} ]^T,   (2)
where N = M - 1, the angles θ_{N,m} are mutually distinct, and 0 ≤ β_{N,m} ≤ 1, m = 1, 2, ..., N;
for a uniform linear differential microphone array the optimal array response direction is the endfire direction; assuming the source is at the endfire direction of the array, i.e. θ_s = 0°, let
θ = [0°  θ_{N,1}  …  θ_{N,N}]^T,
β = [1  β_{N,1}  …  β_{N,N}]^T;
the system of linear equations (2) can then be written as
D(ω, θ) h(ω) = β,
and the solution of this system, i.e. the concealment filter based on differential beamforming, is
h_LC(ω) = D^{-1}(ω, θ)β.
6. The acoustic concealment method based on differential beamforming according to claim 1, characterized in that the microphone array includes, but is not limited to, a uniform linear microphone array, a non-uniform linear microphone array and a circular microphone array.
7. The acoustic concealment method based on differential beamforming according to claim 1, characterized in that the microphone array is one of a miniature microphone array and a large-scale microphone array.
8. The acoustic concealment method based on differential beamforming according to claim 1, characterized in that the speech signal processing method is used to process narrowband signals or wideband signals.
9. The acoustic concealment method based on differential beamforming according to claim 1, characterized in that the concealment radius of the microphone array increases as the number of microphones and the element spacing increase.
CN201710164086.5A 2017-03-19 2017-03-19 Hidden method for acoustic based on Difference Beam formation Withdrawn CN107248413A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710164086.5A CN107248413A (en) 2017-03-19 2017-03-19 Hidden method for acoustic based on Difference Beam formation
CN201810221808.0A CN108337605A (en) 2017-03-19 2018-03-18 The hidden method for acoustic formed based on Difference Beam

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710164086.5A CN107248413A (en) 2017-03-19 2017-03-19 Hidden method for acoustic based on Difference Beam formation

Publications (1)

Publication Number Publication Date
CN107248413A true CN107248413A (en) 2017-10-13

Family

ID=60016884

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710164086.5A Withdrawn CN107248413A (en) 2017-03-19 2017-03-19 Hidden method for acoustic based on Difference Beam formation
CN201810221808.0A Pending CN108337605A (en) 2017-03-19 2018-03-18 The hidden method for acoustic formed based on Difference Beam

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201810221808.0A Pending CN108337605A (en) 2017-03-19 2018-03-18 The hidden method for acoustic formed based on Difference Beam

Country Status (1)

Country Link
CN (2) CN107248413A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108694957A (en) * 2018-04-08 2018-10-23 湖北工业大学 The echo cancelltion design method formed based on circular microphone array beams
CN108736917A (en) * 2018-05-09 2018-11-02 哈尔滨工业大学 A kind of the spread spectrum diversity receiving method and realization device of time-frequency collaboration
CN111755021A (en) * 2019-04-01 2020-10-09 北京京东尚科信息技术有限公司 Speech enhancement method and device based on binary microphone array
CN114822579A (en) * 2022-06-28 2022-07-29 天津大学 Signal estimation method based on first-order differential microphone array

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110383378B (en) * 2019-06-14 2023-05-19 深圳市汇顶科技股份有限公司 Differential beam forming method and module, signal processing method and device and chip

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100959983B1 (en) * 2005-08-11 2010-05-27 아사히 가세이 가부시키가이샤 Sound source separating device, speech recognizing device, portable telephone, and sound source separating method, and program
CN103856866B (en) * 2012-12-04 2019-11-05 西北工业大学 Low noise differential microphone array
CN104464739B (en) * 2013-09-18 2017-08-11 华为技术有限公司 Acoustic signal processing method and device, Difference Beam forming method and device
EP2928210A1 (en) * 2014-04-03 2015-10-07 Oticon A/s A binaural hearing assistance system comprising binaural noise reduction

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108694957A (en) * 2018-04-08 2018-10-23 湖北工业大学 The echo cancelltion design method formed based on circular microphone array beams
CN108694957B (en) * 2018-04-08 2021-08-31 湖北工业大学 Echo cancellation design method based on circular microphone array beam forming
CN108736917A (en) * 2018-05-09 2018-11-02 哈尔滨工业大学 A kind of the spread spectrum diversity receiving method and realization device of time-frequency collaboration
CN108736917B (en) * 2018-05-09 2020-08-07 哈尔滨工业大学 Time-frequency cooperative spread spectrum diversity receiving method and realizing device
CN111755021A (en) * 2019-04-01 2020-10-09 北京京东尚科信息技术有限公司 Speech enhancement method and device based on binary microphone array
CN111755021B (en) * 2019-04-01 2023-09-01 北京京东尚科信息技术有限公司 Voice enhancement method and device based on binary microphone array
CN114822579A (en) * 2022-06-28 2022-07-29 天津大学 Signal estimation method based on first-order differential microphone array

Also Published As

Publication number Publication date
CN108337605A (en) 2018-07-27

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20171013)