US20040252852A1 - Hearing system beamformer - Google Patents
Hearing system beamformer
- Publication number
- US20040252852A1 (application US10/812,718)
- Authority
- US
- United States
- Prior art keywords
- signal
- signals
- sound
- sound signals
- weighting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/40—Arrangements for obtaining a desired directivity characteristic
- H04R25/407—Circuits for combining signals of a plurality of transducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/20—Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/55—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
- H04R25/552—Binaural
Definitions
- the present invention relates to sound signal enhancement.
- Beamforming is a method whereby a narrow (or at least narrower) polar directional pattern can be developed by combining multiple signals from spatially separated sensors to create a monaural, or simple, output signal representing the signal from the narrower beam.
- Another name for this general category of processing is “array processing,” used, for example, in broadside antenna array systems, underwater sonar systems and medical ultrasound imaging systems.
- Signal processing usually includes the steps of adjusting the phase (or delay) of the individual input signals and then adding (or summing) them together. Sometimes predetermined, fixed amplitude weightings are applied to the individual signals prior to summation, for example to reduce sidelobe amplitudes.
- Reverberant signals operate to create what appears to be additional interfering signals with many different angles of arrival and times of arrival—i.e., a reverberant signal acts like many simultaneous interferers. Also, the algorithm works best when an interfering signal is long-lasting—it does not work well for transient interference.
- the prior-art beamforming method suffers from serious drawbacks. First, it takes too long to acquire the signal and null it out (adaptation takes too long). Long adaptation time creates a problem with wearer head movements (which change the angle of arrival of the interfering signal) and with transient interfering signals. Second, it does not beneficially reduce the noise in real life situations with numerous interfering signals and/or moderate-to-high reverberation.
- A simpler alternative to the adaptive method is known from classical beamforming.
- classical beamforming simply sums the two signals together. Since it is assumed that the target speech is from straight ahead (i.e., that the hearing aid wearer is looking at the talker), the speech signal in the binaural pair of raw signals is highly correlated, and therefore the sum increases the level of this signal, while the noise sources, assumed to be off-axis, create highly uncorrelated noise signals at each ear. Therefore, there is an enhancement of the desired speech signal over that of the noise signal in the beamformer output. This enhancement is analogous to the increased sensitivity of a broadside array to signals coming from in front as compared to those coming from the side.
- the present invention, generally speaking, picks up a voice or other sound signal of interest and creates a higher voice-to-background-noise ratio in the output signal so that a user enjoys higher intelligibility of the voice signal.
- beamforming techniques are used to provide optimized signals to the user for further increasing the understanding of speech in noisy environments and for reducing user listening fatigue.
- signal-to-noise performance is optimized even if some of the binaural cues are sacrificed.
- an optimum mix ratio or weighting ratio is determined in accordance with the ratio of noise power in the binaural signals.
- Enhancement circuitry is easily implemented in either analog or digital form and is compatible with existing sound processing methods, e.g., noise reduction algorithms and compression/expansion processing.
- the sound enhancement approach is compatible with, and additive to, any microphone directionality or noise cancelling technology.
- FIG. 1 is a graph illustrating how the optimum mix ratio for two sound signals varies in accordance with the noise ratio of the two sound signals;
- FIG. 2 is a block diagram illustrating a beamforming technique in accordance with one embodiment of the invention;
- FIG. 3 is a graph illustrating one suitable control function for the power ratio block of FIG. 2;
- FIG. 4 is a graph illustrating another control function for the power ratio block of FIG. 2;
- FIG. 5 is a graph illustrating relative noise improvement using the present beamforming technique as compared to using a 50/50 signal mix;
- FIG. 6 is a graph illustrating relative noise improvement using the present beamforming technique as compared to using the quieter signal only;
- FIG. 7 is a block diagram of a multiband beamformer;
- FIG. 8 is a block diagram of a binaural beamformer;
- FIG. 9 is a block diagram of one embodiment of a DSP-based beamformer;
- FIG. 10 is a block diagram of an alternative equivalent realization of the beamformer of FIG. 9;
- FIG. 11 is a block diagram of another embodiment of a DSP-based beamformer;
- FIG. 12 is a plot of the polar response patterns and DI values in a beamforming system using first-order directional microphones;
- FIG. 13 is a plot of the polar response patterns and DI values of a conventional first-order microphone without beamforming;
- FIG. 14 is a plot of the polar response patterns and DI values using second-order directional microphones;
- FIG. 15 is a table showing interaural amplitude difference (IAD) as a function of azimuth angle;
- FIG. 16 is a graph corresponding to the table of FIG. 15;
- FIG. 17 is a table corresponding to that of FIG. 15, showing propagation phase difference (“electrical” phase difference) as a function of azimuth angle;
- FIG. 18 is a table showing correction factors based on the data of FIG. 16 and FIG. 17;
- FIG. 19 is a table representing a control surface on which the correction factors of FIG. 18 are located;
- FIG. 20 is a depiction of the control surface of FIG. 19;
- FIG. 21 is a graph of correction factor versus frequency;
- FIG. 22 is a block diagram of a monaural beamforming system with IAD correction;
- FIG. 23 is a block diagram of a binaural beamforming system with IAD correction; and
- FIG. 24 is a plot of the polar response patterns and DI values in a beamforming system using first-order directional microphones and IAD correction.
- Underlying the present invention is the recognition that, for any ratio of noise power in the binaural signals, there is an optimum mix ratio or weighting ratio that optimizes the SNR of the output signal. For example, if the noise power is equal in each signal, such as in a crowded restaurant with people all around, moving chairs, clattering plates, etc., then the optimum weighting is 50%/50%. In other environments, e.g., on the side of a road, the noise power in the two signals will be quite unequal. If there is more noise in one signal by, for example, 10 dB, the optimum mix is not 50/50 but moves toward including a greater amount of the quieter signal.
- FIG. 1 shows a comparison plot of voice power (target) and noise power in the output signal as a function of mix ratio. Note that whereas the voice power stays constant with mix ratio, the noise power does not. Rather, as the ratio of noise power in the two signals increases (i.e., there is a greater imbalance in the noise “picked up” at each ear), the optimum mix ratio moves to weight the quieter signal heavier than the noisier signal before summing the two signals to form the output signal. The optimum mix ratio occurs where the noise in the output is minimum.
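- The behavior plotted in FIG. 1 can be made concrete with a short sketch. This is an illustration, not code from the patent; it assumes a target that is identical in both ear signals and noise that is uncorrelated between the ears, so that for weights (w, 1 - w) the voice power stays constant while the noise power is w^2 * P_NL + (1 - w)^2 * P_NR:

```python
# Illustrative sketch of the optimum mix rule (an assumption-laden model, not
# the patent's circuitry): with weights summing to 1, the correlated voice
# contribution is unchanged by the mix, while the uncorrelated noise power is
# w^2 * P_NL + (1 - w)^2 * P_NR. Minimizing over w gives
# w_opt = P_NR / (P_NL + P_NR), i.e. the quieter side is weighted more.

def output_noise_power(w, p_nl, p_nr):
    """Noise power in the summed output for left weight w (right weight 1 - w)."""
    return w ** 2 * p_nl + (1 - w) ** 2 * p_nr

def optimum_left_weight(p_nl, p_nr):
    """Left-signal weight that minimizes output noise power."""
    return p_nr / (p_nl + p_nr)

# Equal noise power in both ears gives the classical 50/50 mix.
assert optimum_left_weight(1.0, 1.0) == 0.5

# Left ear 10 dB noisier (10x the power): the mix shifts toward the right ear.
w = optimum_left_weight(10.0, 1.0)
print(round(w, 3), round(1 - w, 3))  # roughly 0.091 / 0.909
```

As FIG. 1 suggests, the minimum is broad, so small deviations from the optimum weight cost very little noise performance.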
- Referring to FIG. 2, a block diagram is shown of a beamforming apparatus in accordance with one embodiment of the present invention.
- the left ear signal is input in parallel to an attenuator and to a noise power determination block.
- the noise power determination block measures the noise power of the signal and outputs a noise level signal P NL .
- the right ear signal is input in parallel to an attenuator and to a noise power determination block which outputs a signal P NR .
- Noise level signals from the noise power determination blocks are input to a power ratio block, which determines, based on the relative noise levels of the two input signals, an appropriate weighting ratio, e.g., 50/50, 40/60, 60/40, etc.
- Corresponding control signals are applied to the respective attenuators to cause the input signals to be attenuated in proportion to the input signal's weighting ratio. For example, for a 60/40 weight, the left input signal is attenuated to 60% of its input value while the right input signal is attenuated to 40% of its input value. Attenuated versions of the input signals, attenuated by the optimum amount, are then applied to a summing block, which sums the attenuated signals to produce an output signal that is then applied to both ears.
- Noise measurement may be performed as described in U.S. application Ser. No. 09/247,621, filed Feb. 10, 1999 (Attys. Dkt. No. 022577-530), incorporated herein by reference. Generally speaking, a noise measurement is obtained by squaring the instantaneous signal and filtering the result using a low-pass filter or valley detector (the opposite of a peak detector).
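- The square-and-valley-detect measurement just described can be sketched as follows. The attack and decay constants are illustrative assumptions, not values from the referenced application:

```python
# Sketch of a noise-floor estimator: square the instantaneous signal, then
# track its valleys (the "opposite of a peak detector"). The rise/fall
# constants below are assumed values for illustration only.

def valley_detector(samples, rise=0.001, fall=0.5):
    """Estimate the noise floor of a signal from its squared samples.

    The estimate climbs slowly (rise) during loud passages and drops quickly
    (fall) during quiet ones, so it settles near the valleys of the squared
    signal, i.e. the noise floor between bursts of speech.
    """
    est = samples[0] ** 2
    for x in samples:
        p = x * x  # instantaneous power
        coeff = rise if p > est else fall
        est += coeff * (p - est)
    return est

# Steady noise at amplitude 0.1 with a brief loud "speech" burst:
sig = [0.1] * 200 + [1.0] * 5 + [0.1] * 200
print(valley_detector(sig))  # stays near 0.01, the steady noise power
```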
- One suitable control function for the power ratio block is shown in FIG. 3. As the noise power in one ear's signal exceeds the noise power in the other ear's signal, the optimum percentage of the noisier signal's contribution to the output signal decreases. In FIG. 3, the comparison of noise powers is made using the decibel scale. If instead the comparison of noise powers is made using simple proportions, then the control function becomes linear as shown in FIG. 4.
- The resulting SNR improvement over classical 50/50 beamforming achieved using the foregoing control strategy is shown in FIG. 5.
- Realistic noise ratio values give relative SNR improvements that are dramatic.
- FIG. 6 shows the resulting SNR improvement over using the quieter signal only.
- the foregoing approach to beamforming is not limited to simultaneous operation on the signals over their entire bandwidths. Rather, the same approach can be implemented on a frequency-band-by-frequency-band basis.
- Existing sound processing algorithms of the assignee divide the audio frequency bandwidth into multiple separate, narrower bands. By applying the current method separately to each band, the optimum SNR can be achieved on a band-by-band basis to further optimize the voice-to-noise ratio in the overall output.
- FIG. 7 shows a multiband beamformer in accordance with one embodiment of the invention.
- For each of the right ear and the left ear, a microphone produces an input signal which is amplified and applied to a band-splitting filter (BSF).
- the BSF produces a number of narrower-band signals.
- Multiple beamformers (BF), one per band, are provided such as the beamformer of FIG. 2.
- Each beamformer receives narrower-band signals of a particular band and produces an enhanced output signal for that band.
- the resulting enhanced band signals are then summed to form a final output signal that is output to both the right ear and the left ear.
- the multiband beamformer has the advantage of optimally reducing background noises from multiple sources with different spectral characteristics, for example a fan with mostly low-frequency rumble on one side and a squeaky toy on the other. As long as the interferers occupy different frequency bands, this multiband approach improves upon the single band method discussed above.
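- The per-band combination step can be sketched as below. The band splitting itself (the BSF) and the per-band noise power estimation are assumed to happen elsewhere, so this is a simplification of the block diagram, not the patent's exact structure:

```python
# Hedged sketch of the multiband combination (band splitting and noise power
# estimation are assumed done elsewhere): each band is mixed with the optimum
# weights for that band's noise balance, then the enhanced bands are summed.

def multiband_beamform(left_bands, right_bands, p_nl, p_nr):
    """Mix each band optimally and sum the bands; all arguments are lists
    indexed by band, with one noise power estimate per band per ear."""
    out = [0.0] * len(left_bands[0])
    for lb, rb, pl, pr in zip(left_bands, right_bands, p_nl, p_nr):
        s = pl + pr
        w_l = pr / s if s else 0.5  # quieter side gets the larger weight
        w_r = 1.0 - w_l
        for i in range(len(out)):
            out[i] += w_l * lb[i] + w_r * rb[i]
    return out
```

With a low-frequency interferer on the left only, the low band leans toward the right signal while a balanced band stays at 50/50, mirroring the fan-and-squeaky-toy example above.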
- some binaural cues can be left in the final output by biasing the weightings slightly away from the optimum mix.
- the right ear output signal might be weighted N % (say, 5-10%) away from the optimum toward the right ear signal
- the left ear output signal might be weighted N % away from the optimum toward the left ear signal.
- This arrangement helps to make a more comfortable sound and “externalizes the image,” i.e., causes the user to perceive an external aural environment containing discernible sound sources. Furthermore, this arrangement entails some but very little compromise of SNR. Referring again to FIG. 1, the shape of the curves is such that the minima are broad and shallow. Appreciable deviation from the minimum can therefore be tolerated with very little discernible decrease in noise reduction.
- N may be regarded as a “binaurality coefficient” that controls the amount of binaural information retained in the output.
- This binaurality parameter can be tailored for the individual. As this parameter is varied, there is little loss of directionality until after the binaural cues are significantly restored, so the directionality and noise reduction benefits of the beamformer's signal processing can still be realized even with a usable level of binaural cue retention.
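- One way to read the biasing rule above is as a linear pull of each ear's weights toward that ear's own input. The following sketch is an interpretation for illustration; the patent realizes the control with the subtract/multiply operations shown later in FIG. 11:

```python
# Sketch of the "binaurality coefficient" n in [0, 1]: each ear's output mix
# is pulled fraction n of the way from the optimum weights toward that ear's
# own input signal. n = 0 is full beamforming; n = 1 is full stereo. This is
# one plausible reading of the biasing, not the patent's exact arithmetic.

def biased_weights(w_opt_left, n):
    """Return (left-ear, right-ear) output weight pairs.

    Each pair is (weight on left input, weight on right input), and each
    pair still sums to 1, so the voice level is unchanged.
    """
    w_opt_right = 1.0 - w_opt_left
    left_out = (w_opt_left + n * w_opt_right, w_opt_right - n * w_opt_right)
    right_out = (w_opt_left - n * w_opt_left, w_opt_right + n * w_opt_left)
    return left_out, right_out
```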
- human binaural processing tends to be lost in proportion to hearing deficit. So those individuals most needing the benefits that can be provided by the beamforming algorithm tend to be those who have already lost the ability to beneficially utilize their natural binaural processing for extracting a voice from noise or babble.
- the algorithm can provide the greatest directionality benefit for those needing it the most, but can be adjusted, although with a loss of directionality, for those with better binaural processing who need it less.
- FIG. 8 shows a block diagram of a binaural sound enhancement system. Elements within the dashed-line block correspond to elements of the beamforming system of FIG. 2. Now, however, instead of a single summing block, two summing blocks are provided, one to form the output signal for the right ear and one to form the output signal for the left ear. Output signals from variable attenuators are applied to both of the summing blocks. In addition, fixed (or infrequently updated) attenuators are provided, one for each of the right ear signal and the left ear signal. The function of these attenuators is to provide an additional amount of an input signal to a corresponding one of the summing blocks.
- a right fixed attenuator provides an additional amount of the right input signal to the right summing block, which produces a right output signal
- a left fixed attenuator provides an additional amount of the left input signal to the left summing block, which produces a left output signal
- the adaptive method can take seconds to adapt
- the present method can react nearly instantaneously to changes in noise or other varying environmental conditions such as the user's head position, since there is no adaptation requirement.
- the present method can thus remove impulse noise such as the sound of a fork dropped on a plate at a restaurant or the sound of a car door being closed.
- noise power detectors are already provided in some binaural hearing aid sets for use in noise-reduction algorithms.
- the simple addition of two multipliers (attenuators) and an additional processing step enables dramatically improved results to be achieved.
- An important observation is that the improvement in voice-to-background noise that the invention provides is in addition to that of the noise-reduction created by pre-existing noise-reduction algorithms—further improving the SNR.
- In a DSP (digital signal processor) implementation, all of the blocks are realized in the form of DSP code.
- Most of the required software functions are simply multiplications (e.g., attenuators) or additions (summing blocks).
- FFT methods may be employed. Outputs from FFT processes are easily analyzed as power spectra for implementing the noise power detectors.
- One such implementation divides the sound spectrum into 64 FFT bins and processes all 64 bins simultaneously every 3.5 ms.
- the beamformer is able to adjust for various noise conditions in 64 separate frequency bands at approximately 300 times each second.
- Referring to FIG. 9, a block diagram is shown of a DSP-based monaural beamformer in accordance with one embodiment of the invention.
- the DSP approach uses well-known “overlap-add” techniques, various well-known details of which are omitted for simplicity.
- a signal from a left-ear microphone Lin 901 is transformed using an FFT (Fast Fourier Transform) 903 or similar transform.
- the resulting transformed signal feeds two separate operations, a squaring operation 905 and a multiplication operation 907 .
- the multiplication operation may be considered as realizing an attenuator where the attenuation factor is set by a circuit 909 .
- a signal from a right ear microphone Rin 911 follows a corresponding path. Outputs of the multiplication operations for the left ear and the right ear are summed ( 921 ), inverse-transformed ( 923 ) and output to transducers of both the left ear and the right ear ( 925 , 927 ).
- the circuit 909 calculates attenuation ratios for the left and right ears by forming the sum S of the squares of the signals and by forming 1) the ratio L/S of the square of the left ear signal to the sum, and 2) the ratio R/S of the square of the right ear signal to the sum.
- the operations for forming these ratios are represented as an addition ( 931 ) and two divisions ( 933 , 935 ).
- the resulting attenuation factors are coupled in cross-over fashion to the multipliers; that is, the signal L/S is used to control the multiplier for the right ear, and the signal R/S is used to control the multiplier for the left ear.
- the circuitry may be simplified to conserve compute power by, instead of performing two divisions, performing a single division and a subtraction as illustrated in FIG. 10. That is, once one of the ratios has been determined, the other ratio can be determined by subtracting the known ratio from 1, since the ratios must add to 1.
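- Per frequency bin, the simplified computation amounts to one addition, one division, and one subtraction. A minimal sketch, assuming squared-magnitude bin powers as inputs:

```python
# Sketch of the FIG. 10 simplification (per frequency bin): form S = L + R,
# compute one ratio by division, and obtain the other by subtracting from 1.
# The factors are coupled in cross-over fashion, so the quieter ear's
# signal receives the larger multiplier.

def attenuation_factors(p_left, p_right):
    """Return (left multiplier, right multiplier) from per-bin powers.

    Assumes nonzero total power in the bin.
    """
    s = p_left + p_right             # addition
    ratio_left = p_left / s          # the single division (L/S)
    ratio_right = 1.0 - ratio_left   # subtraction replaces the second division
    # cross-over coupling: L/S drives the right multiplier and vice versa
    return ratio_right, ratio_left
```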
- An embodiment of a corresponding binaural DSP-based beamformer is shown in FIG. 11. Note that the operations within the block 1101 may be performed on a frequency-bin-by-frequency-bin basis. Hence, additional instances of this block are indicated in dashed lines. Instead of the left input signal contributing only to the left output signal and the right input signal contributing only to the right output signal as in the previous embodiment, in this embodiment, the operations are arranged such that both input signals may contribute, in different amounts, to both output signals. That is, referring in particular to the control block 1109, a binaurality control X is provided that “biases” the output signal for a particular ear toward the input signal for that ear.
- the binaurality control may be realized by a subtraction operation 1103 and a multiplication operation 1105 , and by an additional operation 1107 and another multiplication operation 1111 .
- the binaurality control might be set within a range of 5 to 15%. However, the binaurality control may also be set to one extreme or the other or anywhere in between. If the binaurality control is set to 0%, then operation becomes the same as in the case of the monaural beamformer of FIG. 9. If the binaurality control is set to 100%, then full-stereo operation ensues and any beamforming action is lost.
- The remainder of the arrangement of FIG. 11 may be appreciated by noting that the output processing block 1021 of FIG. 10 occurs twice, once for the left ear (1121a) and once for the right ear (1121b), since the output signals to the two ears may be different.
- two different nodes Y and Z correspond generally to the node W of FIG. 10, reflecting the “biasing apart” of the two channels. (It is assumed in FIG. 11, however, that the attenuation factors applied to the multipliers 1131 and 1133 are bounded within the range from 0 to 1.)
- the arrangement of the two DSP-based embodiments is similar.
- the right (quieter) input signal is still weighted more heavily, but in the left output signal the left input signal is weighted more heavily than it would be for optimum noise reduction, and in the right output signal the right input signal is weighted more heavily than it would be for optimum noise reduction.
- beamforming can be performed selectively within one or more frequency ranges.
- an enhancement to the beamformer would be to pass the frequencies below, say, 1000 Hz directly to their respective ears, while beamforming only those frequency bins above that frequency in order to achieve better SNR in the higher frequency band where directionality cues are not needed.
- the beamforming algorithm is simply applied only to the higher frequencies as stated.
- a look-up table having a series of “binaurality” coefficients, one for each frequency bin, to control the amount of binaural cues retained at each frequency.
- the use of such a “binaurality coefficient” to control the beamformer smoothly between full binaural (no beamforming) to full beamforming (no binaural) has been previously described.
- the coefficients for each low frequency bin may be biased far toward, or even at, full binaural processing, while the coefficients for each high frequency bin may be biased toward, or completely at, full beamforming, thus achieving the desired action.
- While the coefficients could change abruptly at some frequency, such as 1000 Hz, more preferably the transition occurs gradually over, say, 800 Hz to 1200 Hz, where the coefficients “fade” smoothly from full binaural to full beamforming.
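- A per-bin coefficient table with such a fade might look like the following sketch. The 800-1200 Hz edges come from the text above, while the bin spacing is an assumed layout:

```python
# Sketch of a frequency-dependent binaurality table: full binaural (1.0)
# below 800 Hz, full beamforming (0.0) above 1200 Hz, and a smooth linear
# fade in between. The 125 Hz bin spacing is an assumption for illustration.

def binaurality_coefficient(freq_hz, lo=800.0, hi=1200.0):
    """1.0 = full binaural (no beamforming); 0.0 = full beamforming."""
    if freq_hz <= lo:
        return 1.0
    if freq_hz >= hi:
        return 0.0
    return (hi - freq_hz) / (hi - lo)

# One possible 64-entry look-up table at 125 Hz per bin:
table = [binaurality_coefficient(i * 125.0) for i in range(64)]
```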
- a beamformer as described herein can be used in products other than hearing aids, i.e., anywhere that a more “focused” sound pickup is desired.
- the foregoing beamforming methods demonstrate very high directionality, and enable the user of a binaural hearing aid product to be provided with a “super directionality” mode of operation for those noisy situations where conversation is otherwise extremely difficult.
- Second-order microphone technology may be used to further enhance directionality.
- the described beamformer was modeled in the dSPACE/MATLAB environment, and the MLSSA method of directionality measurement was implemented in the same environment.
- The MLSSA method, which uses signal autocorrelation, is quite immune to ambient noises and gives very clean results. Only data for the usual 500, 1000, 2000 and 4000 Hz frequencies was recorded.
- Two BZ5 first-order directional microphones were placed in-situ on a KEMAR mannequin, and the 0° axis was taken to be a line straight in front of the mannequin as is standard practice. Measurements were taken at 3.75° increments between ±30° and at 15° increments elsewhere. Care was taken to ensure that the system was working well above the noise floor and below saturation or clipping.
- FIG. 12 shows the polar response characteristics and the calculated Directionality Index (DI) of the beamforming system for each of the four recorded frequencies. Beamforming inherently affects only the horizontal characteristics of the directional pattern and does not affect symmetry about the front-to-back axis. A narrowed horizontal pattern with left-right symmetry is therefore expected and is demonstrated in FIG. 12.
- DI: Directionality Index
- Directionality can be improved further still using second-order microphones. Since the second-order microphones have superior directionality, as compared to first-order designs, especially with respect to their front-to-back ratio, this property of the second-order microphone complements the beamformer's processing algorithm, which is limited to side-to-side enhancement. Thus, the combined result is a very narrow, forward-only beam pattern as shown in FIG. 14.
- HRTFs (Head Related Transfer Functions), as studied by E.A.G. Shaw, describe the effects of the head upon signal reception at the ears, and include what is called “head shadowing.”
- the present method uses the head shadowing effect to optimize SNR.
- phase adjustment may be used to provide a more natural sound quality and in fact to further improve the directionality of the beamformer.
- peaks and nulls occur at different positions for different frequencies.
- the cause of these peaks and nulls in the beam pattern is the relative signal phase between right ear and left ear signals (as distinguished from head shadowing, which relates to the amplitude difference, the Interaural Amplitude Difference or IAD, caused by the head).
- the relative signal phase between the right ear and left ear signals is due to the path length difference for off-axis signals—i.e. the signal from a source located, say, 45 degrees to the right will arrive at the right ear before it arrives at the left ear.
- the path length difference translates directly into a delay time, because of the essentially constant speed of sound in air.
- a constant delay translates directly into a phase shift which is directly proportional to frequency.
- the basic beamformer algorithm has the attribute of matching (in amplitude) the contribution from each ear's signal to the output. Accordingly, an odd multiple of 180 degrees of phase shift will create a deep null, i.e. nearly perfect cancellation, and a multiple of 360 degrees will create a +6-dB peak. This is one reason why the beamformer polar pattern shows such distinct peaks and nulls. If the amplitudes weren't well matched, the peaks and nulls would be much less distinct, although there would still be just as many, at the same angular locations.
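- The phase argument can be checked numerically with a simple free-field model. The effective ear spacing d below is an assumed value, and the model ignores head diffraction, so it only approximates the measured behavior:

```python
# Rough check of the null positions: for ear spacing d, a source at azimuth
# theta produces a path difference d * sin(theta), hence an interaural phase
# of 360 * f * d * sin(theta) / c electrical degrees. The first null falls
# where that phase reaches 180 degrees. d = 0.175 m and the free-field
# assumption are illustrative choices, not values from the patent.
import math

def first_null_azimuth_deg(freq_hz, d=0.175, c=343.0):
    """Azimuth (degrees) where the interaural phase first reaches 180 deg."""
    return math.degrees(math.asin(c / (2.0 * freq_hz * d)))

print(round(first_null_azimuth_deg(4000.0), 1))  # about 14 degrees
print(round(first_null_azimuth_deg(2000.0), 1))  # about 29 degrees
```

Consistent with the pattern later noted for FIG. 17, halving the frequency roughly doubles the first-null azimuth while the angles remain small.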
- the most desirable response pattern in FIG. 12 is the response pattern for 1000 Hz.
- the following description explains how the response patterns for other frequencies can be made very similar to that pattern, resulting in a more natural sound and greater directionality.
- Referring to FIG. 15, a table is shown presenting known data regarding IAD as a function of azimuthal angle. This data may be represented graphically as shown in FIG. 16. As seen in FIG. 16, depending on frequency, IAD is quite linear from 0 degrees azimuthal angle to between 40 and 70 degrees azimuthal angle.
- FIG. 17 shows a partial table of the azimuthal dependence of electrical phase difference in the embodiment of the beamformer previously described. Agreement between FIG. 17 and FIG. 12 may be readily observed. A clear pattern emerges from FIG. 17, i.e., each time the frequency is halved (from 4 kHz to 2 kHz, 2 kHz to 1 kHz, etc.), as would be expected, the azimuthal angle for a particular null or peak doubles. For example, at 4 kHz, the first null occurs at 15 degrees. At 2 kHz, the first null occurs at 30 degrees.
- to align the other frequencies with the 1 kHz pattern, the following actions are required: at 500 Hz, double the (azimuthal-angle-dependent) phase rate; at 1 kHz, do nothing; at 2 kHz, halve the phase rate; and at 4 kHz, quarter the phase rate.
- Since IAD already forms the basis of the beamformer as previously described, it is desirable to, for each frequency, obtain a phase correction factor in terms of IAD (measured in dB) to be applied to the signal at that frequency to bring that signal substantially into phase with the 1 kHz signal.
- phase correction factors may be obtained in the manner shown in FIG. 18.
- An IAD slope (in dB/ADeg.) is obtained from FIG. 16, and a phase slope (EDeg./ADeg.) is obtained from FIG. 17. Dividing the latter by the former results in the phase rate (EDeg./dB). Given the phase rate for a particular frequency, the action to be taken at that frequency determines the appropriate correction factor.
- At 500 Hz, for example, the phase rate is to be doubled. Since the phase rate is 6.563 EDeg./dB, the correction to be applied is also 6.563 EDeg./dB. At 2 kHz, the phase rate (36 EDeg./dB) is to be halved, resulting in a correction of −18 EDeg./dB.
- The correction factors of FIG. 18 lie on the control surface represented in tabular form in FIG. 19. A graph of the control surface is shown in FIG. 20.
- the information of FIG. 19 and FIG. 20 may be represented more compactly in the form of a correction slope graph, shown in FIG. 21. If a look-up table approach to phase equalization is used, then the representation of FIG. 19 and FIG. 20 is preferred. If a mathematical approach to phase equalization is used, then the representation of FIG. 21 is preferred.
- In FIG. 22, a phase controller 2201 is responsive to the signal W to produce frequency-dependent phase corrections to be applied to different frequency components.
- the phase controller may take the form of a lookup table or a mathematical calculation.
- a phase shifter block 2203 receives the phase corrections from the phase controller and applies the phase corrections to the different frequency components. Similar components 2201 ′ and 2203 ′ appear in dashed lines in the right ear signal path. Whether elements 2201 and 2203 are used or elements 2201 ′ and 2203 ′ are used, the result is the same.
- FIG. 23 shows an embodiment of a corresponding binaural beamformer, including phase controllers 2301 and 2301 ′ and phase shifter blocks 2303 and 2303 ′.
- the present invention has been described primarily in a hearing health care context, the principles of the invention can be applied in any situation in which an obstacle to energy propagation is present between sensors or is provided to create a shadowing effect like the head shadowing effect in hearing health care applications.
- the energy may be acoustic, electromagnetic, or even optical.
- the invention should therefore be understood to be applicable to sonar applications, medical imaging applications, etc.
Abstract
The present invention, generally speaking, picks up a voice or other sound signal of interest and creates a higher voice-to-background-noise ratio in the output signal so that a user enjoys higher intelligibility of the voice signal. In particular, beamforming techniques are used to provide optimized signals to the user for further increasing the understanding of speech in noisy environments and for reducing user listening fatigue. In one embodiment, signal-to-noise performance is optimized even if some of the binaural cues are sacrificed. In this embodiment, an optimum mix ratio or weighting ratio is determined in accordance with the ratio of noise power in the binaural signals. Enhancement circuitry is easily implemented in either analog or digital form and is compatible with existing sound processing methods, e.g., noise reduction algorithms and compression/expansion processing. The sound enhancement approach is compatible with, and additive to, any microphone directionality or noise cancelling technology.
Description
- 1. Field of the Invention
- The present invention relates to sound signal enhancement.
- 2. State of the Art
- Clearly hearing speech is very difficult for hearing aid wearers, especially in noisy locations. Discrimination of the speech signal is confused because directional cues are not well received or processed by the hearing impaired, and the normal directional cues are poorly preserved by standard hearing aid microphone technologies. For this reason, electronic directionality has been shown to be very beneficial, and directional microphones are becoming common in hearing aids. However, there are limitations to the amount of directionality achievable in microphones alone. Therefore, further benefits are being sought by the use of beamforming techniques, utilizing the multiple microphone signals available for example from a binaural pair of hearing aids.
- Beamforming is a method whereby a narrow (or at least narrower) polar directional pattern can be developed by combining multiple signals from spatially separated sensors to create a monaural, or simple, output signal representing the signal from the narrower beam. Another name for this general category of processing is “array processing,” used, for example, in broadside antenna array systems, underwater sonar systems and medical ultrasound imaging systems. Signal processing usually includes the steps of adjusting the phase (or delay) of the individual input signals and then adding (or summing) them together. Sometimes predetermined, fixed amplitude weightings are applied to the individual signals prior to summation, for example to reduce sidelobe amplitudes.
- With two sensors, it is possible to create a direction of maximum sensitivity and a null, or direction of minimum sensitivity.
- One known beamforming algorithm is described in U.S. Pat. No. 4,956,867, incorporated herein by reference. This algorithm operates to direct a null at the strongest noise source. Since it is assumed that the desired talker signal is from straight ahead, a small region of angles around zero degrees is excluded so that the null is never steered to straight ahead, where it would remove the desired signal. Because the algorithm is adaptive, time is required to find and null out the interfering signal. The algorithm works best when there is a single strong interferer with little reverberation. (Reverberant signals operate to create what appears to be additional interfering signals with many different angles of arrival and times of arrival—i.e., a reverberant signal acts like many simultaneous interferers.) Also, the algorithm works best when an interfering signal is long-lasting—it does not work well for transient interference.
- The prior-art beamforming method suffers from serious drawbacks. First, it takes too long to acquire the signal and null it out (adaptation takes too long). Long adaptation time creates a problem with wearer head movements (which change the angle of arrival of the interfering signal) and with transient interfering signals. Second, it does not beneficially reduce the noise in real life situations with numerous interfering signals and/or moderate-to-high reverberation.
- A simpler beamforming approach is known from classical beamforming. With only two signals (e.g., in the case of binaural hearing health care, one from the microphone at each ear) classical beamforming simply sums the two signals together. Since it is assumed that the target speech is from straight ahead (i.e., that the hearing aid wearer is looking at the talker), the speech signal in the binaural pair of raw signals is highly correlated, and therefore the sum increases the level of this signal, while the noise sources, assumed to be off-axis, create highly uncorrelated noise signals at each ear. Therefore, there is an enhancement of the desired speech signal over that of the noise signal in the beamformer output. This enhancement is analogous to the increased sensitivity of a broadside array to signals coming from in front as compared to those coming from the side.
- This classical beamforming approach still does not optimize the signal-to-noise (voice-to-background) ratio, however, producing only a maximum 3 dB improvement. It is also fixed, and therefore cannot adjust to varying noise conditions.
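The 3 dB figure for classical summing can be checked with a short numeric sketch (not from the patent; signal parameters are illustrative): a "voice" that is identical at both ears adds coherently (+6 dB in power), while independent noise adds only in power (+3 dB), for a net 3 dB SNR gain.

```python
# Illustrative check of the ~3 dB gain of classical two-signal beamforming.
import math
import random

random.seed(1)
n = 100_000
# Voice: identical (fully correlated) at both ears.
voice = [math.sin(2 * math.pi * 440 * t / 16_000) for t in range(n)]
# Noise: independent (uncorrelated) at each ear.
noise_l = [random.gauss(0, 0.5) for _ in range(n)]
noise_r = [random.gauss(0, 0.5) for _ in range(n)]

def power(x):
    return sum(s * s for s in x) / len(x)

# SNR of one ear's raw signal.
snr_in = 10 * math.log10(power(voice) / power(noise_l))

# Classical beamformer: sum the two ear signals.
out_voice = [2 * v for v in voice]                     # voice adds coherently
out_noise = [a + b for a, b in zip(noise_l, noise_r)]  # noise adds in power
snr_out = 10 * math.log10(power(out_voice) / power(out_noise))

gain = snr_out - snr_in
print(round(gain, 1))  # ≈ 3 dB
```

The gain is fixed at about 3 dB regardless of how the noise is distributed between the ears, which is exactly the limitation the weighted mix described below addresses.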
- The present invention, generally speaking, picks up a voice or other sound signal of interest and creates a higher voice-to-background-noise ratio in the output signal so that a user enjoys higher intelligibility of the voice signal. In particular, beamforming techniques are used to provide optimized signals to the user for further increasing the understanding of speech in noisy environments and for reducing user listening fatigue. In one embodiment, signal-to-noise performance is optimized even if some of the binaural cues are sacrificed. In this embodiment, an optimum mix ratio or weighting ratio is determined in accordance with the ratio of noise power in the binaural signals. Enhancement circuitry is easily implemented in either analog or digital form and is compatible with existing sound processing methods, e.g., noise reduction algorithms and compression/expansion processing. The sound enhancement approach is compatible with, and additive to, any microphone directionality or noise cancelling technology.
- The present invention may be further understood from the following description in conjunction with the appended drawing. In the drawing:
- FIG. 1 is a graph illustrating how the optimum mix ratio for two sound signals varies in accordance with the noise ratio of the two sound signals;
- FIG. 2 is a block diagram illustrating a beamforming technique in accordance with one embodiment of the invention;
- FIG. 3 is a graph illustrating one suitable control function for the power ratio block of FIG. 2;
- FIG. 4 is a graph illustrating another control function for the power ratio block of FIG. 2;
- FIG. 5 is a graph illustrating relative noise improvement using the present beamforming technique as compared to using a 50/50 signal mix;
- FIG. 6 is a graph illustrating relative noise improvement using the present beamforming technique as compared to using the quieter signal only;
- FIG. 7 is a block diagram of a multiband beamformer;
- FIG. 8 is a block diagram of a binaural beamformer;
- FIG. 9 is a block diagram of one embodiment of a DSP-based beamformer;
- FIG. 10 is a block diagram of an alternative equivalent realization of the beamformer of FIG. 9;
- FIG. 11 is a block diagram of another embodiment of a DSP-based beamformer;
- FIG. 12 is a plot of the polar response patterns and DI values in a beamforming system using first-order directional microphones;
- FIG. 13 is a plot of the polar response patterns and DI values of a conventional first-order microphone without beamforming;
- FIG. 14 is a plot of the polar response patterns and DI values using second-order directional microphones;
- FIG. 15 is a table showing interaural difference as a function of azimuth angle;
- FIG. 16 is a graph corresponding to the table of FIG. 15;
- FIG. 17 is a table corresponding to that of FIG. 15, showing propagation phase difference (“electrical” phase difference) as a function of azimuth angle;
- FIG. 18 is a table showing correction factors based on the data of FIG. 16 and FIG. 17;
- FIG. 19 is a table representing a control surface on which the correction factors of FIG. 18 are located;
- FIG. 20 is a depiction of the control surface of FIG. 19;
- FIG. 21 is a graph of correction factor versus frequency;
- FIG. 22 is a block diagram of a monaural beamforming system with IAD correction;
- FIG. 23 is a block diagram of a binaural beamforming system with IAD correction; and
- FIG. 24 is a plot of the polar response patterns and DI values in a beamforming system using first-order directional microphones and IAD correction.
- Underlying the present invention is the recognition that, for any ratio of noise power in the binaural signals, for example, there is an optimum mix ratio or weighting ratio that optimizes the SNR of the output signal. For example, if the noise power is equal in each signal, such as in a crowded restaurant with people all around, moving chairs, clattering plates, etc., then the optimum weighting is 50%/50%. In other environments, the noise power in the two signals will be quite unequal, e.g., on the side of a road. If there is more noise in one signal by, for example 10 dB, the optimum mix is not 50/50, but moves toward including a greater amount of the quieter signal. In the case of a 10 dB noise differential, the optimum noise mix is 92% quieter signal and 8% noisier signal. Such a result is counterintuitive, where intuition would suggest simply using the quieter signal. Simply using the quieter signal would be optimal only if the noise and voice both had the same amount of correlation. However, in nearly all real-world situations, the voice signals are highly correlated, while the noise signals are not. This disparity biases the optimum point.
- FIG. 1 shows a comparison plot of voice power (target) and noise power in the output signal as a function of mix ratio. Note that whereas the voice power stays constant with mix ratio, the noise power does not. Rather, as the ratio of noise power in the two signals increases (i.e., there is a greater imbalance in the noise “picked up” at each ear), the optimum mix ratio moves to weight the quieter signal heavier than the noisier signal before summing the two signals to form the output signal. The optimum mix ratio occurs where the noise in the output is minimum.
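The behavior plotted in FIG. 1 can be reproduced numerically (an illustrative sketch, not patent text): with weights summing to 1, the voice, being equal in both ears, passes at constant level, while the output noise power is w²·Nl + (1−w)²·Nr. For a 10 dB noise differential the minimum falls near a 91%/9% mix in favor of the quieter signal, in line with the 92%/8% figure quoted above.

```python
# Brute-force search for the mix ratio that minimizes output noise power.
Nl, Nr = 10.0, 1.0   # left ear noise is 10 dB (10x in power) noisier than right

def output_noise(w_left):
    # Voice power is constant with mix ratio; only the noise term varies.
    return w_left**2 * Nl + (1 - w_left)**2 * Nr

best_w = min((w / 1000 for w in range(1001)), key=output_noise)
print(best_w)  # ≈ 0.091, i.e. ~91% of the quieter (right) signal

# Closed form for the minimum: weight each signal by the OTHER signal's noise.
assert abs(best_w - Nr / (Nl + Nr)) < 1e-3
```

Simply switching to the quieter signal alone (w_left = 0) gives a noise power of 1.0, whereas the optimum mix gives about 0.91, which is why the all-or-nothing intuition is suboptimal.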
- Referring now to FIG. 2, a block diagram is shown of a beamforming apparatus in accordance with one embodiment of the present invention. Assume a system having two input signals, i.e., a right ear signal and a left ear signal. The left ear signal is input in parallel to an attenuator and to a noise power determination block. The noise power determination block measures the noise power of the signal and outputs a noise level signal PNL. Similarly, the right ear signal is input in parallel to an attenuator and to a noise power determination block which outputs a signal PNR. Noise level signals from the noise power determination blocks are input to a power ratio block, which determines, based on the relative noise levels of the two input signals, an appropriate weighting ratio, e.g., 50/50, 40/60, 60/40, etc. The weighting ratio may be determined using the following formulas, each signal being weighted by the opposite signal's share of the noise:
- WL = PNR/(PNL + PNR)
- WR = PNL/(PNL + PNR)
- Corresponding control signals are applied to the respective attenuators to cause the input signals to be attenuated in proportion to the input signal's weighting ratio. For example, for a 60/40 weight, the left input signal is attenuated to 60% of its input value while the right input signal is attenuated to 40% of its input value. Attenuated versions of the input signals, attenuated by the optimum amount, are then applied to a summing block, which sums the attenuated signals to produce an output signal that is then applied to both ears.
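A time-domain sketch of the FIG. 2 arrangement follows (illustrative; function and variable names are ours, not the patent's): the noise levels drive a power-ratio computation whose cross-coupled weights attenuate each input before summation.

```python
def beamform_frame(left, right, noise_left, noise_right):
    """Weight each ear's samples by the OTHER ear's share of the noise, then sum."""
    total = noise_left + noise_right
    w_left = noise_right / total    # quieter left noise -> larger left weight
    w_right = noise_left / total
    return [w_left * l + w_right * r for l, r in zip(left, right)]

# 60/40 example from the text: left noise power 2, right noise power 3,
# so the (quieter) left signal gets the 60% weight.
out = beamform_frame([1.0, 1.0], [1.0, 1.0], noise_left=2.0, noise_right=3.0)
print(out)  # equal "voice" samples pass through at unit level: [1.0, 1.0]
```

Because the weights always sum to 1, a signal that is equal in both ears (the assumed straight-ahead talker) is passed unchanged whatever the mix.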
- Noise measurement may be performed as described in U.S. application Ser. No. 09/247,621 filed Feb. 10, 1999 (Attys. Dkt. No. 022577-530), incorporated herein by reference. Generally speaking, a noise measurement is obtained by squaring the instantaneous signal and filtering the result using a low-pass filter or valley detector (opposite of peak detector).
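A minimal sketch of such a valley detector follows (our own illustration; the time constants are assumptions, not values from the cited application): the instantaneous power is tracked with a fast fall toward valleys and a slow rise toward peaks, so the estimate follows the noise floor rather than speech bursts.

```python
def noise_power(samples, fall=0.001, rise=0.999):
    """Valley-detect the instantaneous power: drop quickly, recover slowly."""
    est = samples[0] ** 2
    for s in samples:
        p = s * s
        if p < est:
            est = p + fall * (est - p)   # fast attack toward valleys
        else:
            est = p + rise * (est - p)   # slow release toward peaks
    return est

# Steady low-level noise with occasional impulses (fork-on-plate clatter).
bursty = [0.1] * 1000
for i in range(0, 1000, 100):
    bursty[i] = 1.0
print(noise_power(bursty))  # ≈ 0.01: tracks the floor, ignores the bursts
```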
- One suitable control function for the power ratio block is shown in FIG. 3. As the noise power in one ear's signal exceeds the noise power in the other ear's signal, the optimum percentage of the noisier signal's contribution to the output signal decreases. In FIG. 3, the comparison of noise powers is made using the decibel scale. If instead the comparison of noise powers is made using simple proportions, then the control function becomes linear as shown in FIG. 4.
- The resulting SNR improvement over classical 50/50 beamforming achieved using the foregoing control strategy is shown in FIG. 5. Realistic noise ratio values give relative SNR improvements that are dramatic. FIG. 6 shows the resulting SNR improvement over using the quieter signal only.
- Assuming that the signal of interest to the listener is straight ahead, then the signal of interest will be equal in both ears. Signals from other directions, which because of head shadowing are not equal in both ears, may therefore be considered to be noise. If a signal is equal in both ears, then beamforming has no effect on it. Therefore, although noise power detectors may be used as shown in FIG. 2, a simpler approach is to use simple signal power detectors as shown and described hereafter in relation to FIG. 9 and FIG. 10. Interestingly, one result of such a beamforming strategy is that the power in the signals from the two ears is equalized prior to combining the signals.
- As a further improvement, the foregoing approach to beamforming is not limited to simultaneous operation on the signals over their entire bandwidths. Rather, the same approach can be implemented on a frequency-band-by-frequency-band basis. Existing sound processing algorithms of the assignee divide the audio frequency bandwidth into multiple separate, narrower bands. By applying the current method separately to each band, the optimum SNR can be achieved on a band-by-band basis to further optimize the voice-to-noise ratio in the overall output.
- Referring more particularly to FIG. 7, there is shown a multiband beamformer in accordance with one embodiment of the invention. For each of the right ear and the left ear, a microphone produces an input signal which is amplified and applied to a band-splitting filter (BSF). The BSF produces a number of narrower-band signals. Multiple beamformers (BF), one per band, are provided such as the beamformer of FIG. 2. Each beamformer receives narrower-band signals of a particular band and produces an enhanced output signal for that band. The resulting enhanced band signals are then summed to form a final output signal that is output to both the right ear and the left ear.
- The multiband beamformer has the advantage of optimally reducing background noises from multiple sources with different spectral characteristics, for example a fan with mostly low-frequency rumble on one side and a squeaky toy on the other. As long as the interferers occupy different frequency bands, this multiband approach improves upon the single band method discussed above.
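A frequency-domain sketch of the multiband idea follows (our own simplification of FIG. 7; a real implementation would use overlap-add framing and a proper band-splitting filter): each bin gets its own cross-coupled weights, so a low-frequency interferer on one side and a high-frequency interferer on the other are each suppressed independently.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def multiband_beamform(left, right):
    L, R = dft(left), dft(right)
    out = []
    for l, r in zip(L, R):
        pl, pr = abs(l) ** 2, abs(r) ** 2
        total = pl + pr
        # per-bin cross-coupled weights (50/50 when the bin is silent/balanced)
        wl = pr / total if total else 0.5
        out.append(wl * l + (1 - wl) * r)
    return idft(out)

# Identical ear signals pass through unchanged (every bin weighted 50/50).
y = multiband_beamform([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
```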
- As a further enhancement, some binaural cues can be left in the final output by biasing the weightings slightly away from the optimum mix. For example, the right ear output signal might be weighted N % (say, 5-10%) away from the optimum toward the right ear signal, and the left ear output signal might be weighted N % away from the optimum toward the left ear signal. To take a concrete example, if the optimum mix was 60% left and 40% right, then the right ear would get 55% L+45% R and the left ear would get 65% L+35% R (with N=5%). This arrangement helps to make a more comfortable sound and "externalizes the image," i.e., causes the user to perceive an external aural environment containing discernible sound sources. Furthermore, this arrangement entails some but very little compromise of SNR. Referring again to FIG. 1, the shape of the curves is such that the minima are broad and shallow. Appreciable deviation from the minimum can therefore be tolerated with very little discernible decrease in noise reduction.
- More generally, N may be regarded as a "binaurality coefficient" that controls the amount of binaural information retained in the output. Such a binaurality coefficient may be used to control the beamformer smoothly between full binaural (N=100%; no beamforming) and full beamforming (N=0%; no binaural). This binaurality parameter can be tailored for the individual. As this parameter is varied, there is little loss of directionality until after the binaural cues are significantly restored, so the directionality and noise reduction benefits of the beamformer's signal processing can still be realized even with a usable level of binaural cue retention.
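The 60/40 worked example above can be sketched as follows (illustrative; the function name and percentage representation are ours):

```python
def biased_weights(opt_left, n):
    """Shift each ear's output weighting n% from the optimum toward its own ear.

    Returns (left-ear output weights, right-ear output weights) as (L%, R%).
    """
    left_out = (opt_left + n, 100 - opt_left - n)   # bias toward the left input
    right_out = (opt_left - n, 100 - opt_left + n)  # bias toward the right input
    return left_out, right_out

left_out, right_out = biased_weights(opt_left=60, n=5)
print(left_out, right_out)  # (65, 35) (55, 45), matching the text
```

With n=0 both ears receive the single optimum mix (full beamforming); with larger n the outputs drift apart, restoring interaural level differences at a small SNR cost.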
- Furthermore, human binaural processing tends to be lost in proportion to hearing deficit. So those individuals most needing the benefits that can be provided by the beamforming algorithm tend to be those who have already lost the ability to beneficially utilize their natural binaural processing for extracting a voice from noise or babble. Thus, the algorithm can provide the greatest directionality benefit for those needing it the most, but can be adjusted, although with a loss of directionality, for those with better binaural processing who need it less.
- FIG. 8 shows a block diagram of a binaural sound enhancement system. Elements within the dashed-line block correspond to elements of the beamforming system of FIG. 2. Now, however, instead of a single summing block, two summing blocks are provided, one to form the output signal for the right ear and one to form the output signal for the left ear. Output signals from variable attenuators are applied to both of the summing blocks. In addition, fixed (or infrequently updated) attenuators are provided, one for each of the right ear signal and the left ear signal. The function of these attenuators is to provide an additional amount of an input signal to a corresponding one of the summing blocks. That is, a right fixed attenuator provides an additional amount of the right input signal to the right summing block, which produces a right output signal, and a left fixed attenuator provides an additional amount of the left input signal to the left summing block, which produces a left output signal.
- The foregoing approach to beamforming is simple and therefore easy to implement. Whereas the adaptive method can take seconds to adapt, the present method can react nearly instantaneously to changes in noise or other varying environmental conditions such as the user's head position, since there is no adaptation requirement. The present method, thus, can remove impulse noise such as the sound of a fork dropped on a plate at a restaurant or the sound of a car door being closed. Furthermore, noise power detectors are already provided in some binaural hearing aid sets for use in noise-reduction algorithms. The simple addition of two multipliers (attenuators) and an additional processing step enables dramatically improved results to be achieved. An important observation is that the improvement in voice-to-background noise that the invention provides is in addition to that of the noise-reduction created by pre-existing noise-reduction algorithms—further improving the SNR.
- Moreover, the foregoing methods all lend themselves to easy implementation in digital form, especially using a digital signal processor (DSP). In a DSP implementation, all of the blocks are realized in the form of DSP code. Most of the required software functions are simply multiplications (e.g., attenuators) or additions (summing blocks). To do frequency band implementations, FFT methods may be employed. Outputs from FFT processes are easily analyzed as power spectra for implementing the noise power detectors. One such implementation divides the sound spectrum into 64 FFT bins and processes all 64 bins simultaneously every 3.5 ms. Thus, the beamformer is able to adjust for various noise conditions in 64 separate frequency bands at approximately 300 times each second.
- Referring to FIG. 9, a block diagram is shown of a DSP-based monaural beamformer in accordance with one embodiment of the invention. The DSP approach uses well-known "overlap-add" techniques, various well-known details of which are omitted for simplicity. In the arrangement of FIG. 9, a signal from a left-ear microphone Lin 901 is transformed using an FFT (Fast Fourier Transform) 903 or similar transform. The resulting transformed signal feeds two separate operations, a squaring operation 905 and a multiplication operation 907. The multiplication operation may be considered as realizing an attenuator where the attenuation factor is set by a circuit 909. A signal from a right ear microphone Rin 911 follows a corresponding path. Outputs of the multiplication operations for the left ear and the right ear are summed (921), inverse-transformed (923) and output to transducers of both the left ear and the right ear (925, 927).
- The circuit 909 calculates attenuation ratios for the left and right ears by forming the sum S of the squares of the signals and by forming 1) the ratio L/S of the square of the left ear signal to the sum; and 2) the ratio R/S of the square of the right ear signal to the sum. The operations for forming these ratios are represented as an addition (931) and two divisions (933, 935). The resulting attenuation factors are coupled in cross-over fashion to the multipliers; that is, the signal L/S is used to control the multiplier for the right ear, and the signal R/S is used to control the multiplier for the left ear. Hence, as a noise source increases the signal level in one ear, the signal of the other ear is emphasized and the signal of the ear most influenced by the noise source is de-emphasized.
- The circuitry may be simplified to conserve compute power by, instead of performing two divisions, performing a single division and a subtraction as illustrated in FIG. 10. That is, once one of the ratios has been determined, the other ratio can be determined by subtracting the known ratio from 1, since the ratios must add to 1.
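The per-bin weight computation just described can be sketched as follows (illustrative; names are ours): square the two bin magnitudes, form one ratio with a single division, obtain the other by subtraction from 1, and couple the weights in cross-over fashion.

```python
def crossover_weights(left_bin, right_bin):
    """Return (weight for left bin, weight for right bin)."""
    l2, r2 = abs(left_bin) ** 2, abs(right_bin) ** 2
    s = l2 + r2
    if s == 0:
        return 0.5, 0.5
    ratio_l = l2 / s   # the single division of FIG. 10
    # Cross-over: the left ratio drives the RIGHT multiplier and vice versa,
    # and the other weight is just 1 - ratio_l (the FIG. 10 subtraction).
    return 1 - ratio_l, ratio_l

wl, wr = crossover_weights(1.0, 3.0)   # the right bin is 'noisier' here
print(wl, wr)  # 0.9 0.1: the quieter left signal is emphasized
```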
- An embodiment of a corresponding binaural DSP-based beamformer is shown in FIG. 11. Note that the operations within the block 1101 may be performed on a frequency-bin-by-frequency-bin basis. Hence, additional instances of this block are indicated in dashed lines. Instead of the left input signal contributing only to the left output signal and the right input signal contributing only to the right output signal as in the previous embodiment, in this embodiment, the operations are arranged such that both input signals may contribute, in different amounts, to both output signals. That is, referring in particular to the control block 1109, a binaurality control X is provided that "biases" the output signal for a particular ear toward the input signal for that ear. The binaurality control may be realized by a subtraction operation 1103 and a multiplication operation 1105, and by an addition operation 1107 and another multiplication operation 1111. In order to retain beamforming operation while preserving binaural cues to some degree, the binaurality control might be set within a range of 5 to 15%. However, the binaurality control may also be set to one extreme or the other or anywhere in between. If the binaurality control is set to 0%, then operation becomes the same as in the case of the monaural beamformer of FIG. 9. If the binaurality control is set to 100%, then full-stereo operation ensues and any beamforming action is lost.
- The remainder of the arrangement of FIG. 11 may be appreciated by noting that the output processing block 1021 of FIG. 10 occurs twice, once for the left ear (1121 a) and once for the right ear (1121 b), since the output signals to the two ears may be different. Note also that in the arrangement of FIG. 11, two different nodes Y and Z correspond generally to the node W of FIG. 10, reflecting the "biasing apart" of the two channels. (It is assumed in FIG. 11, however, that the attenuation factors applied to the multipliers are derived as in FIG. 10.)
- To take a particular example of the operation of the arrangement of FIG. 11, assume that the binaurality control is set to 10%. First assume a "no noise" situation in which the ratio L/S is 0.5. To obtain the signal at node Y, L/S is decreased by 10% to 0.45. At the same time, to obtain the signal at node Z, L/S is increased by 10% to 0.55. In the output processing stage, to form the left output signal, the left input signal is multiplied by a factor 1−0.45=0.55, and the right input signal is multiplied by 0.45. To form the right output signal, the left input signal is multiplied by a factor 1−0.55=0.45, and the right input signal is multiplied by 0.55.
- Now assume a noisy situation in which the ratio L/S is 0.6. To obtain the signal at node Y, L/S is decreased by 10% to 0.54. At the same time, to obtain the signal at node Z, L/S is increased by 10% to 0.66. In the output processing stage, to form the left output signal, the left input signal is multiplied by a factor 1−0.54=0.46, and the right input signal is multiplied by 0.54. To form the right output signal, the left input signal is multiplied by a factor 1−0.66=0.34, and the right input signal is multiplied by 0.66. In both output signals, the right (quieter) input signal is weighted more heavily, but in the left output signal, the left input signal is weighted more heavily than it would otherwise be, and in the right output signal, the right input signal is weighted more heavily than it would otherwise be for optimum noise reduction.
- In accordance with a further aspect of the invention, beamforming can be performed selectively within one or more frequency ranges. In particular, since most binaural directionality cues are carried by the lower frequencies (typically below 1000 Hz), an enhancement to the beamformer would be to pass the frequencies below, say, 1000 Hz directly to their respective ears, while beamforming only those frequency bins above that frequency in order to achieve better SNR in the higher frequency band where directionality cues are not needed.
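The noisy-case worked example for FIG. 11 can be reproduced with a short sketch (illustrative; the node naming follows the text, the function name is ours):

```python
def binaural_weights(l_over_s, x):
    """Weights for binaurality control x, given the ratio L/S of the left bin.

    Returns ((left-input, right-input) weights for the left output,
             (left-input, right-input) weights for the right output).
    """
    node_y = l_over_s * (1 - x)        # L/S decreased by x: biases left output
    node_z = l_over_s * (1 + x)        # L/S increased by x: biases right output
    left_out = (1 - node_y, node_y)
    right_out = (1 - node_z, node_z)
    return left_out, right_out

left_out, right_out = binaural_weights(0.6, 0.10)  # noisy case, X = 10%
print(left_out)   # ≈ (0.46, 0.54): quieter right input still dominates
print(right_out)  # ≈ (0.34, 0.66)
```

Setting x=0 collapses both outputs to the same (0.4, 0.6) optimum mix, recovering the monaural beamformer of FIG. 9.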
- In one implementation, the beamforming algorithm is simply applied only to the higher frequencies as stated.
- In another implementation, a look-up table is provided having a series of “binaurality” coefficients, one for each frequency bin, to control the amount of binaural cues retained at each frequency. The use of such a “binaurality coefficient” to control the beamformer smoothly between full binaural (no beamforming) to full beamforming (no binaural) has been previously described. By extending this concept to provide for per-bin binaurality coefficients, the coefficients for each low frequency bin may be biased far toward, or even at, full binaural processing, while the coefficients for each high frequency bin may be biased toward, or completely at, full beamforming, thus achieving the desired action. Although the coefficients could abruptly change at some frequency, such as 1000 Hz, more preferably, the transition occurs gradually over, say, 800 Hz to 1200 Hz, where the coefficients “fade” smoothly from full binaural to full beamforming.
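The per-bin look-up table with a smooth fade can be sketched as follows (illustrative; the linear fade shape is our assumption, matching the 800 Hz to 1200 Hz transition suggested above):

```python
def binaurality_table(bin_freqs, lo=800.0, hi=1200.0):
    """Per-bin binaurality coefficients: 1.0 = full binaural, 0.0 = full beamforming."""
    coeffs = []
    for f in bin_freqs:
        if f <= lo:
            coeffs.append(1.0)                    # low bins: pass binaural cues
        elif f >= hi:
            coeffs.append(0.0)                    # high bins: full beamforming
        else:
            coeffs.append((hi - f) / (hi - lo))   # linear fade between the two
    return coeffs

print(binaurality_table([500, 800, 1000, 1200, 4000]))  # [1.0, 1.0, 0.5, 0.0, 0.0]
```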
- Note that other beamforming methods, although inferior to those disclosed, may also be used to enhance sound signals. In addition, a beamformer as described herein can be used in products other than hearing aids, i.e., anywhere that a more “focused” sound pickup is desired.
- The foregoing beamforming methods demonstrate very high directionality, and enable the user of a binaural hearing aid product to be provided with a “super directionality” mode of operation for those noisy situations where conversation is otherwise extremely difficult. Second-order microphone technology may be used to further enhance directionality.
- The described beamformer was modeled in the dSpace/MatLab environment, and the MLSSA method of directionality measurement was implemented in the same environment. The MLSSA method, which uses signal autocorrelation, is quite immune to ambient noises and gives very clean results. Only data for the usual 500, 1000, 2000 and 4000 Hz frequencies was recorded. Two BZ5 first-order directional microphones were placed in-situ on a KEMAR mannequin, and the 0° axis was taken to be a line straight in front of the mannequin as is standard practice. Measurements were taken at 3.75° increments between ±30° and at 15° increments elsewhere. Care was taken to ensure that the system was working well above the noise floor and below saturation or clipping.
- FIG. 12 shows the polar response characteristics and the calculated Directionality Index (DI) of the beamforming system for each of the four recorded frequencies. Beamforming inherently affects only the horizontal characteristics of the directional pattern and does not affect symmetry about the front-to-back axis. A narrowed horizontal pattern with left-right symmetry is therefore expected and is demonstrated in FIG. 12.
- As compared to DI values for a single microphone, shown in FIG. 13, the calculated in-situ DI values of FIG. 12 demonstrate a remarkable improvement, averaging upwards of 9 dB over the four tested frequencies as compared to a value of less than about 5 dB for typical first-order microphones. The benefits of the described beamformer are therefore clearly evident: higher directionality can be achieved than with any single hearing aid or binaural pair of hearing aids acting independently.
- Directionality can be improved further still using second-order microphones. Since the second-order microphones have superior directionality, as compared to first-order designs, especially with respect to their front-to-back ratio, this property of the second-order microphone complements the beamformer's processing algorithm, which is limited to side-to-side enhancement. Thus, the combined result is a very narrow, forward-only beam pattern as shown in FIG. 14.
- Unlike prior art beamformers, the present beamforming technique is based upon Head Related Transfer Functions (HRTFs) documented in the paper by E.A.G. Shaw. HRTFs describe the effects of the head upon signal reception at the ears, and include what is called “head shadowing.” In particular, the present method uses the head shadowing effect to optimize SNR.
- Furthermore, whereas prior art beamforming systems usually include delay or phase shift of signals in addition to amplitude-based operations, the foregoing embodiments of the present beamformer do not. Only amplitudes are adjusted or modified—thereby making the present beamformer simpler and less costly to implement.
- In other embodiments, however, phase adjustment may be used to provide a more natural sound quality and in fact to further improve the directionality of the beamformer. Note that in the pattern of FIG. 12, for example, peaks and nulls occur at different positions for different frequencies. The cause of these peaks and nulls in the beam pattern is the relative signal phase between right ear and left ear signals (as distinguished from head shadowing, which relates to the amplitude difference, the Interaural Difference or IAD, caused by the head). The relative signal phase between the right ear and left ear signals is due to the path length difference for off-axis signals—i.e. the signal from a source located, say, 45 degrees to the right will arrive at the right ear before it arrives at the left ear. The path length difference translates directly into a delay time, because of the essentially constant speed of sound in air. In turn, a constant delay translates directly into a phase shift which is directly proportional to frequency.
- As previously described, the basic beamformer algorithm has the attribute of matching (in amplitude) the contribution from each ear's signal to the output. Accordingly, an N×180 degree phase shift will create a deep null, i.e. nearly perfect cancellation, and an N×360 degree phase shift will create a +6-dB peak. This is one reason why the beamformer polar pattern shows such distinct peaks and nulls. If the amplitudes weren't well matched, the peaks and nulls would be much less distinct, although there would still be as many and at the same angle locations.
- Due to the relatively large spacing between the two ear microphones (sensors), a large path length difference for the two signals exists. In turn, this creates a large phase shift for relatively small off-axis (azimuthal) angles, and thus, enough phase shift to reach 180, 360, 540, 720, etc. electrical degrees for arrival angles between 0 and 90 azimuthal degrees, especially at the higher frequencies. This is the second reason that the beamformer pattern shows numerous peaks and nulls. A closer spacing (a pin head, for example) would move the peaks and nulls azimuthally toward 90 degrees, so that fewer would show up. If the spacing were small enough, no peaks or nulls would show up at all, except at very high frequencies.
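The null geometry can be illustrated numerically (our own sketch; the effective ear-to-ear spacing d is an assumed value, not a figure from the patent): with matched amplitudes, the first off-axis null appears where the path-length difference d·sin(θ) equals half a wavelength, i.e. a 180-degree electrical shift.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, room-temperature air
d = 0.166                # assumed effective ear-to-ear spacing in meters

def first_null_angle(freq_hz):
    """Azimuthal angle (degrees) where d*sin(theta) = half a wavelength."""
    wavelength = SPEED_OF_SOUND / freq_hz
    return math.degrees(math.asin(wavelength / (2 * d)))

print(round(first_null_angle(4000), 1))  # ≈ 15 degrees
print(round(first_null_angle(2000), 1))  # ≈ 31 degrees (roughly double)
```

Halving d pushes the first null toward 90 degrees (sin(θ) doubles), matching the observation that a smaller sensor spacing would show fewer peaks and nulls.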
- The most desirable response pattern in FIG. 12 is the response pattern for 1000 Hz. The following description will describe how the response patterns for other frequencies can be made to have a very similar response pattern, resulting in a more natural sound and greater directionality.
- Referring to FIG. 15, a table is shown presenting known data regarding IAD as a function of azimuthal angle. This data may be represented graphically as shown in FIG. 16. As seen in FIG. 16, depending on frequency, IAD is quite linear from 0 degrees azimuthal angle to between 40 and 70 degrees azimuthal angle.
- FIG. 17 shows a partial table of the azimuthal dependence of electrical phase difference in the embodiment of the beamformer previously described. Agreement between FIG. 17 and FIG. 12 may be readily observed. A clear pattern emerges from FIG. 17, i.e., each time the frequency is halved (from 4 kHz to 2 kHz, 2 kHz to 1 kHz, etc.), as would be expected, the azimuthal angle for a particular null or peak doubles. For example, at 4 kHz, the first null occurs at 15 degrees. At 2 kHz, the first null occurs at 30 degrees. In order to “equalize” the phases of the various signals to match the phase of the 1 kHz signal, the following actions are required: at 500 Hz, double the (azimuthal-angle-dependent) phase rate; at 1 kHz, do nothing; at 2 kHz, halve the phase rate; and at 4 kHz, quarter the phase rate.
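The octave pattern in the listed actions (double at 500 Hz, do nothing at 1 kHz, halve at 2 kHz, quarter at 4 kHz) amounts to scaling each band's phase rate by 1000/f. Generalizing the four listed cases into this formula is an assumption, though it reproduces all of them:

```python
def phase_rate_scale(freq_hz):
    """Factor by which to scale the azimuthal phase rate to match 1 kHz."""
    return 1000.0 / freq_hz

for f in (500, 1000, 2000, 4000):
    print(f, phase_rate_scale(f))  # 2.0, 1.0, 0.5, 0.25
```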
- Since IAD already forms the basis of the beamformer as previously described, it is desirable to obtain, for each frequency, a phase correction factor in terms of IAD (measured in dB) to be applied to the signal at that frequency to bring that signal substantially into phase with the 1 kHz signal. These correction factors may be obtained in the manner shown in FIG. 18. An IAD slope (in dB/ADeg.) is obtained from FIG. 16, and a phase slope (in EDeg./ADeg.) is obtained from FIG. 17. Dividing the latter by the former results in the phase rate (EDeg./dB). Given the phase rate for a particular frequency, the action to be taken at that frequency determines the appropriate correction factor. For example, at 500 Hz, the phase rate is to be doubled. Since the phase rate is 6.563 EDeg./dB, the correction to be applied is also 6.563 EDeg./dB. At 2 kHz, the phase rate (36 EDeg./dB) is to be halved, resulting in a correction of −18 EDeg./dB.
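The two worked examples can be reproduced with a short calculation. The slope values are the ones quoted in the text; expressing the correction as phase rate times (1000/f − 1) is an assumption that is consistent with both quoted results:

```python
def phase_rate(phase_slope_edeg_per_adeg, iad_slope_db_per_adeg):
    """Phase slope (EDeg./ADeg.) divided by IAD slope (dB/ADeg.) gives EDeg./dB."""
    return phase_slope_edeg_per_adeg / iad_slope_db_per_adeg

def correction_edeg_per_db(rate, freq_hz):
    """Correction that rescales the band's phase rate toward the 1 kHz pattern."""
    return rate * (1000.0 / freq_hz - 1.0)

print(correction_edeg_per_db(6.563, 500))   # +6.563 EDeg./dB (rate doubled)
print(correction_edeg_per_db(36.0, 2000))   # -18.0 EDeg./dB (rate halved)
```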
- Using the correction values of FIG. 18, a table representing a control surface for performing phase “equalization” may be obtained as shown in FIG. 19. A graph of the control surface is shown in FIG. 20. The information of FIG. 19 and FIG. 20 may be represented more compactly in the form of a correction slope graph, shown in FIG. 21. If a look-up table approach to phase equalization is used, then the representation of FIG. 19 and FIG. 20 is preferred. If a mathematical approach to phase equalization is used, then the representation of FIG. 21 is preferred.
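The two representations can be contrasted in a short sketch. Only the correction slopes quoted in the text are used (500 Hz, 1 kHz, 2 kHz); the interface and the IAD grid are assumptions:

```python
# Correction slope per frequency band, in EDeg./dB (FIG. 21 style).
SLOPES_EDEG_PER_DB = {500: 6.563, 1000: 0.0, 2000: -18.0}

def correction_from_slope(freq_hz, iad_db):
    """Mathematical approach: correction (EDeg.) = slope (EDeg./dB) * IAD (dB)."""
    return SLOPES_EDEG_PER_DB[freq_hz] * iad_db

# Lookup-table approach (FIG. 19/20 style): precompute the same control
# surface on a grid of IAD values.
IAD_GRID_DB = [0, 2, 4, 6, 8, 10]
CONTROL_SURFACE = {
    f: {iad: correction_from_slope(f, iad) for iad in IAD_GRID_DB}
    for f in SLOPES_EDEG_PER_DB
}

print(correction_from_slope(2000, 4))  # -72.0 EDeg.
print(CONTROL_SURFACE[500][10])        # 65.63 EDeg.
```

The table trades memory for a cheap per-sample lookup, while the slope formula is more compact and interpolates continuously; both produce the same surface, which is why either representation may be preferred depending on the implementation.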
- Referring to FIG. 22, a block diagram is shown of a monaural beamformer like that of FIG. 10, modified to perform phase equalization as described. A phase controller 2201 is responsive to the signal W to produce frequency-dependent phase corrections to be applied to different frequency components. The phase controller may take the form of a lookup table or a mathematical calculation. A phase shifter block 2203 receives the phase corrections from the phase controller and applies them to the different frequency components. Similar components 2201′ and 2203′ appear in dashed lines in the right ear signal path. Whether elements 2201 and 2203 or elements 2201′ and 2203′ are used, the result is the same. Alternatively, both elements 2201 and 2203 and elements 2201′ and 2203′ may be used, in which case the phase corrections would be halved such that half of the shift is applied in each of the left ear path and the right ear path. FIG. 23 shows an embodiment of a corresponding binaural beamformer, including phase controllers and phase shifter blocks in both signal paths.
- The expected results of phase correction are shown in FIG. 24. In the case of the frontal lobe, the response pattern is very similar regardless of frequency. Furthermore, in comparison with FIG. 12, the DI values of FIG. 24 show substantial improvement.
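One plausible realization of a phase shifter block is to apply the per-band corrections in the frequency domain. This is an assumed implementation sketch, not the patent's circuit; the FFT-based structure and function names are illustrative:

```python
import numpy as np

def apply_phase_corrections(signal, corrections_deg):
    """Apply one phase correction (in degrees) per rFFT bin to a real signal."""
    spectrum = np.fft.rfft(signal)
    spectrum *= np.exp(1j * np.deg2rad(corrections_deg))
    return np.fft.irfft(spectrum, n=len(signal))

# Sanity check: a uniform -90 degree correction turns a cosine into a sine.
n = 256
t = np.arange(n)
x = np.cos(2 * np.pi * 8 * t / n)
y = apply_phase_corrections(x, np.full(n // 2 + 1, -90.0))
```

In practice the corrections array would come from the phase controller (lookup table or formula), driven by the weighting signal W, and would vary by band rather than being uniform as in this check.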
- Although the present invention has been described primarily in a hearing health care context, the principles of the invention can be applied in any situation in which an obstacle to energy propagation is present between sensors or is provided to create a shadowing effect like the head shadowing effect in hearing health care applications. The energy may be acoustic, electromagnetic, or even optical. The invention should therefore be understood to be applicable to sonar applications, medical imaging applications, etc.
- It will be appreciated by those of ordinary skill in the art that the invention can be embodied in other specific forms without departing from the spirit or essential character thereof. The presently disclosed embodiments are therefore considered in all respects to be illustrative and not restrictive. The scope of the invention is indicated by the appended claims rather than the foregoing description, and all changes which come within the meaning and range of equivalents thereof are intended to be embraced therein.
Claims (28)
1. A method of combining multiple sound signals to provide an enhanced sound output, comprising:
determining respective power levels of all or part of each of said multiple sound signals;
weighting the sound signals by applying a lesser weight to a sound signal having a higher power level and a greater weight to a sound signal having a lower power level to obtain weighted sound signals; and
combining the weighted sound signals to produce an output signal.
2. The method of claim 1, further comprising weighting a sound signal in accordance with a ratio of signal power for that sound signal divided by a sum of signal powers for the multiple sound signals.
3. The method of claim 1, further comprising weighting a sound signal in accordance with a ratio of noise power for that sound signal divided by a sum of noise powers for the multiple sound signals.
4. The method of claim 1, further comprising:
splitting the sound signals into multiple bands; and
for each of multiple bands, combining multiple sound signals for that band by:
determining respective power levels of all or part of each of said multiple sound signals;
weighting the sound signals by applying a lesser weight to a sound signal having a higher power level and a greater weight to a sound signal having a lower power level to obtain weighted sound signals; and
combining the weighted sound signals to produce an output signal.
5. The method of claim 1, further comprising:
producing multiple output signals in accordance with multiple weightings of the sound signals.
6. The method of claim 5, wherein the multiple sound signals include a right sound signal and a left sound signal; the multiple output signals include a right output signal and a left output signal; and, in the right output signal, the right sound signal is weighted differently than indicated by relative powers of the right and left sound signals in accordance with a binaurality coefficient and, in the left output signal, the left sound signal is weighted differently than indicated by relative powers in accordance with a binaurality coefficient.
7. The method of claim 6, further comprising providing separate binaurality coefficients for each of multiple frequency bands, and applying the binaurality coefficients to the sound signals on a band-by-band basis.
8. The method of claim 1, wherein said determining, weighting and combining are performed in DSP code.
9. The method of claim 1, wherein said determining, weighting and combining are performed in analog or switched capacitor filter circuitry.
10. The method of claim 1, further comprising applying a noise-reduction algorithm to at least one of the multiple sound signals and the output signal.
11. A sound processing apparatus for processing multiple sound signals, comprising:
determination means for determining respective power levels of all or part of each of said multiple sound signals;
weighting means for determining a weighting of the multiple sound signals in accordance with the power within the multiple sound signals such that a lesser weight is assigned to a sound signal having a higher power level and a greater weight is assigned to a sound signal having a lower power level, and for applying the weighting to the multiple sound signals to obtain weighted sound signals; and
means for combining the weighted sound signals to obtain an output signal.
12. The apparatus of claim 11, wherein said weighting means determines a weighting for a sound signal in accordance with a ratio of signal power for that sound signal divided by a sum of signal powers for the multiple sound signals.
13. The apparatus of claim 11, wherein said weighting means determines a weighting for a sound signal in accordance with a ratio of noise power for that sound signal divided by a sum of noise powers for the multiple sound signals.
14. The apparatus of claim 11, further comprising:
means for splitting the sound signals into multiple bands; and
for each of multiple bands, means for combining multiple sound signals for that band, comprising:
determination means for determining respective power levels of all or part of each of said multiple sound signals;
weighting means for determining a weighting of the multiple sound signals in accordance with the noise power within the multiple sound signals such that a lesser weight is assigned to a noisier sound signal and a greater weight is assigned to a quieter sound signal, and for applying the weighting to the multiple sound signals to obtain weighted sound signals; and
means for combining the weighted sound signals to obtain an output signal.
15. The apparatus of claim 14, wherein the weighting means determines multiple weightings of the sound signals, and the combining means produces multiple output signals in accordance with the multiple weightings.
16. The apparatus of claim 15, wherein the multiple sound signals include a right sound signal and a left sound signal; the multiple output signals include a right output signal and a left output signal; and, in the right output signal, the right sound signal is weighted differently than indicated by relative powers of the right and left sound signals in accordance with a binaurality coefficient and, in the left output signal, the left sound signal is weighted differently than indicated by relative powers in accordance with a binaurality coefficient.
17. A method of achieving directional pickup of a radiated energy signal using a shadowing effect created by an energy propagation barrier, the method comprising:
locating a first sensor on one side of the barrier and a second sensor on an opposite side of the barrier;
adjusting amplitudes of signals produced by the first and second sensors to produce adjusted signals; and
summing together the adjusted signals to produce a directional signal.
18. The method of claim 17, wherein the adjusted signals are of approximately equal magnitude.
19. The method of claim 17, comprising summing together the adjusted signals to produce multiple directional signals.
20. The method of claim 19, wherein the multiple directional signals form a binaural signal pair including a first directional signal in which energy from the first sensor is greater than energy from the second sensor, and a second directional signal in which energy from the second sensor is greater than energy from the first sensor.
21. The method of claim 17, further comprising:
for each of multiple frequency bands, deriving a phase correction value and applying the phase correction value within that frequency band.
22. The method of claim 21 , wherein deriving a phase correction value comprises determining within that frequency band a measure of a magnitude difference between a signal produced by the first sensor and a signal produced by the second sensor.
23. Apparatus for achieving directional pickup of a radiated energy signal using a shadowing effect created by an energy propagation barrier, the apparatus comprising:
a first sensor located on one side of the barrier and a second sensor located on an opposite side of the barrier;
means for adjusting amplitudes of signals produced by the first and second sensors to produce adjusted signals; and
means for summing together the adjusted signals to produce a directional signal.
24. The apparatus of claim 23, wherein the adjusted signals are of approximately equal magnitude.
25. The apparatus of claim 23, comprising means for summing together the adjusted signals to produce multiple directional signals.
26. The apparatus of claim 25, wherein the multiple directional signals form a binaural signal pair including a first directional signal in which energy from the first sensor is greater than energy from the second sensor, and a second directional signal in which energy from the second sensor is greater than energy from the first sensor.
27. The apparatus of claim 23, further comprising:
means for, for each of multiple frequency bands, deriving a phase correction value and applying the phase correction value within that frequency band.
28. The apparatus of claim 27, wherein said means for deriving a phase correction value comprises means for determining within that frequency band a measure of a magnitude difference between a signal produced by the first sensor and a signal produced by the second sensor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/812,718 US20040252852A1 (en) | 2000-07-14 | 2004-03-29 | Hearing system beamformer |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/617,108 US7206421B1 (en) | 2000-07-14 | 2000-07-14 | Hearing system beamformer |
US10/812,718 US20040252852A1 (en) | 2000-07-14 | 2004-03-29 | Hearing system beamformer |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/617,108 Division US7206421B1 (en) | 2000-07-14 | 2000-07-14 | Hearing system beamformer |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040252852A1 (en) | 2004-12-16 |
Family
ID=33511895
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/617,108 Expired - Fee Related US7206421B1 (en) | 2000-07-14 | 2000-07-14 | Hearing system beamformer |
US10/812,718 Abandoned US20040252852A1 (en) | 2000-07-14 | 2004-03-29 | Hearing system beamformer |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/617,108 Expired - Fee Related US7206421B1 (en) | 2000-07-14 | 2000-07-14 | Hearing system beamformer |
Country Status (1)
Country | Link |
---|---|
US (2) | US7206421B1 (en) |
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060227976A1 (en) * | 2005-04-07 | 2006-10-12 | Gennum Corporation | Binaural hearing instrument systems and methods |
US20060245596A1 (en) * | 2005-05-02 | 2006-11-02 | Siemens Audiologische Technik Gmbh | Hearing aid system |
US20060291679A1 (en) * | 2005-02-25 | 2006-12-28 | Burns Thomas H | Microphone placement in hearing assistance devices to provide controlled directivity |
US20070047743A1 (en) * | 2005-08-26 | 2007-03-01 | Step Communications Corporation, A Nevada Corporation | Method and apparatus for improving noise discrimination using enhanced phase difference value |
US20070047742A1 (en) * | 2005-08-26 | 2007-03-01 | Step Communications Corporation, A Nevada Corporation | Method and system for enhancing regional sensitivity noise discrimination |
US20070050441A1 (en) * | 2005-08-26 | 2007-03-01 | Step Communications Corporation,A Nevada Corporati | Method and apparatus for improving noise discrimination using attenuation factor |
EP1887831A2 (en) * | 2006-08-09 | 2008-02-13 | Fujitsu Limited | Method, apparatus and program for estimating the direction of a sound source |
EP1917533A2 (en) * | 2005-08-26 | 2008-05-07 | Step Communications Corporation | Method and apparatus for accommodating device and/or signal mismatch in a sensor array |
US20080152167A1 (en) * | 2006-12-22 | 2008-06-26 | Step Communications Corporation | Near-field vector signal enhancement |
WO2008089784A1 (en) | 2007-01-22 | 2008-07-31 | Phonak Ag | System and method for providing hearing assistance to a user |
US20080205659A1 (en) * | 2007-02-22 | 2008-08-28 | Siemens Audiologische Technik Gmbh | Method for improving spatial perception and corresponding hearing apparatus |
EP2009955A2 (en) | 2007-06-29 | 2008-12-31 | Siemens Medical Instruments Pte. Ltd. | Hearing device with passive, incoming volume-dependant sound reduction |
US20090136057A1 (en) * | 2007-08-22 | 2009-05-28 | Step Labs Inc. | Automated Sensor Signal Matching |
US20090190769A1 (en) * | 2008-01-29 | 2009-07-30 | Qualcomm Incorporated | Sound quality by intelligently selecting between signals from a plurality of microphones |
US20090264961A1 (en) * | 2008-04-22 | 2009-10-22 | Med-El Elektromedizinische Geraete Gmbh | Tonotopic Implant Stimulation |
US20090304203A1 (en) * | 2005-09-09 | 2009-12-10 | Simon Haykin | Method and device for binaural signal enhancement |
US20100109951A1 (en) * | 2005-08-26 | 2010-05-06 | Dolby Laboratories, Inc. | Beam former using phase difference enhancement |
US20100119077A1 (en) * | 2006-12-18 | 2010-05-13 | Phonak Ag | Active hearing protection system |
US20100312308A1 (en) * | 2007-03-22 | 2010-12-09 | Cochlear Limited | Bilateral input for auditory prosthesis |
US20110013791A1 (en) * | 2007-03-26 | 2011-01-20 | Kyriaky Griffin | Noise reduction in auditory prostheses |
US20110029288A1 (en) * | 2005-08-26 | 2011-02-03 | Dolby Laboratories Licensing Corporation | Method And Apparatus For Improving Noise Discrimination In Multiple Sensor Pairs |
WO2011032024A1 (en) * | 2009-09-11 | 2011-03-17 | Advanced Bionics, Llc | Dynamic noise reduction in auditory prosthesis systems |
EP2360943A1 (en) * | 2009-12-29 | 2011-08-24 | GN Resound A/S | Beamforming in hearing aids |
WO2011110239A1 (en) * | 2010-03-10 | 2011-09-15 | Siemens Medical Instruments Pte. Ltd. | Reverberation reduction for signals in a binaural hearing apparatus |
EP2375781A1 (en) * | 2010-04-07 | 2011-10-12 | Oticon A/S | Method for controlling a binaural hearing aid system and binaural hearing aid system |
US20110261965A1 (en) * | 2009-06-01 | 2011-10-27 | Red Tail Hawk Corporation | Talk-Through Listening Device Channel Switching |
EP2392938A1 (en) * | 2010-06-01 | 2011-12-07 | Sony Corporation | Sound Signal Processing Apparatus and Sound Signal Processing Method |
US20120010737A1 (en) * | 2009-03-16 | 2012-01-12 | Pioneer Corporation | Audio adjusting device |
US20120231732A1 (en) * | 2011-03-08 | 2012-09-13 | Nxp B.V. | Hearing device and methods of operating a hearing device |
US20150334493A1 (en) * | 2008-12-31 | 2015-11-19 | Thomas Howard Burns | Systems and methods of telecommunication for bilateral hearing instruments |
US9352154B2 (en) | 2007-03-22 | 2016-05-31 | Cochlear Limited | Input selection for an auditory prosthesis |
US9398379B2 (en) | 2012-04-25 | 2016-07-19 | Sivantos Pte. Ltd. | Method of controlling a directional characteristic, and hearing system |
US20160295322A1 (en) * | 2015-03-30 | 2016-10-06 | Bose Corporation | Adaptive Mixing of Sub-Band Signals |
EP3148217A1 (en) * | 2015-09-24 | 2017-03-29 | Sivantos Pte. Ltd. | Method for operating a binaural hearing system |
CN107682529A (en) * | 2017-09-07 | 2018-02-09 | 维沃移动通信有限公司 | A kind of acoustic signal processing method and mobile terminal |
US10003893B2 (en) * | 2016-06-03 | 2018-06-19 | Sivantos Pte. Ltd. | Method for operating a binaural hearing system and binaural hearing system |
CN110996238A (en) * | 2019-12-17 | 2020-04-10 | 杨伟锋 | Binaural synchronous signal processing hearing aid system and method |
WO2020105746A1 (en) * | 2018-11-20 | 2020-05-28 | Samsung Electronics Co., Ltd. | Method, device and system for data compression and decompression |
EP2541971B1 (en) * | 2010-02-24 | 2020-08-12 | Panasonic Intellectual Property Management Co., Ltd. | Sound processing device and sound processing method |
EP4084501A1 (en) * | 2021-04-29 | 2022-11-02 | GN Hearing A/S | Hearing device with omnidirectional sensitivity |
US11651759B2 (en) * | 2019-05-28 | 2023-05-16 | Bose Corporation | Gain adjustment in ANR system with multiple feedforward microphones |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1320281B1 (en) * | 2003-03-07 | 2013-08-07 | Phonak Ag | Binaural hearing device and method for controlling such a hearing device |
US8027495B2 (en) | 2003-03-07 | 2011-09-27 | Phonak Ag | Binaural hearing device and method for controlling a hearing device system |
JP4096801B2 (en) * | 2003-04-28 | 2008-06-04 | ヤマハ株式会社 | Simple stereo sound realization method, stereo sound generation system and musical sound generation control system |
US8543390B2 (en) * | 2004-10-26 | 2013-09-24 | Qnx Software Systems Limited | Multi-channel periodic signal enhancement system |
US7646876B2 (en) * | 2005-03-30 | 2010-01-12 | Polycom, Inc. | System and method for stereo operation of microphones for video conferencing system |
US20070043608A1 (en) * | 2005-08-22 | 2007-02-22 | Recordant, Inc. | Recorded customer interactions and training system, method and computer program product |
US8130977B2 (en) * | 2005-12-27 | 2012-03-06 | Polycom, Inc. | Cluster of first-order microphones and method of operation for stereo input of videoconferencing system |
WO2008106649A1 (en) * | 2007-03-01 | 2008-09-04 | Recordant, Inc. | Calibration of word spots system, method, and computer program product |
US8005238B2 (en) * | 2007-03-22 | 2011-08-23 | Microsoft Corporation | Robust adaptive beamforming with enhanced noise suppression |
US8005237B2 (en) * | 2007-05-17 | 2011-08-23 | Microsoft Corp. | Sensor array beamformer post-processor |
US8934640B2 (en) * | 2007-05-17 | 2015-01-13 | Creative Technology Ltd | Microphone array processor based on spatial analysis |
EP2243303A1 (en) * | 2008-02-20 | 2010-10-27 | Koninklijke Philips Electronics N.V. | Audio device and method of operation therefor |
KR100951321B1 (en) * | 2008-02-27 | 2010-04-05 | 아주대학교산학협력단 | Method of object tracking in 3D space based on particle filter using acoustic sensors |
CN102077607B (en) * | 2008-05-02 | 2014-12-10 | Gn奈康有限公司 | A method of combining at least two audio signals and a microphone system comprising at least two microphones |
CN102265643B (en) | 2008-12-23 | 2014-11-19 | 皇家飞利浦电子股份有限公司 | Speech reproducer, method and system |
US8433076B2 (en) | 2010-07-26 | 2013-04-30 | Motorola Mobility Llc | Electronic apparatus for generating beamformed audio signals with steerable nulls |
US9253566B1 (en) | 2011-02-10 | 2016-02-02 | Dolby Laboratories Licensing Corporation | Vector noise cancellation |
JP5744236B2 (en) | 2011-02-10 | 2015-07-08 | ドルビー ラボラトリーズ ライセンシング コーポレイション | System and method for wind detection and suppression |
US9100735B1 (en) | 2011-02-10 | 2015-08-04 | Dolby Laboratories Licensing Corporation | Vector noise cancellation |
US9037458B2 (en) * | 2011-02-23 | 2015-05-19 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for spatially selective audio augmentation |
US9794678B2 (en) * | 2011-05-13 | 2017-10-17 | Plantronics, Inc. | Psycho-acoustic noise suppression |
US20150172807A1 (en) | 2013-12-13 | 2015-06-18 | Gn Netcom A/S | Apparatus And A Method For Audio Signal Processing |
US9949041B2 (en) | 2014-08-12 | 2018-04-17 | Starkey Laboratories, Inc. | Hearing assistance device with beamformer optimized using a priori spatial information |
US10924872B2 (en) | 2016-02-23 | 2021-02-16 | Dolby Laboratories Licensing Corporation | Auxiliary signal for detecting microphone impairment |
US9843861B1 (en) | 2016-11-09 | 2017-12-12 | Bose Corporation | Controlling wind noise in a bilateral microphone array |
WO2019064181A1 (en) * | 2017-09-26 | 2019-04-04 | Cochlear Limited | Acoustic spot identification |
US10425745B1 (en) | 2018-05-17 | 2019-09-24 | Starkey Laboratories, Inc. | Adaptive binaural beamforming with preservation of spatial cues in hearing assistance devices |
US10812929B1 (en) * | 2019-08-28 | 2020-10-20 | Facebook Technologies, Llc | Inferring pinnae information via beam forming to produce individualized spatial audio |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3992584A (en) * | 1975-05-09 | 1976-11-16 | Dugan Daniel W | Automatic microphone mixer |
US4388494A (en) * | 1980-01-12 | 1983-06-14 | Schoene Peter | Process and apparatus for improved dummy head stereophonic reproduction |
US4956867A (en) * | 1989-04-20 | 1990-09-11 | Massachusetts Institute Of Technology | Adaptive beamforming for noise reduction |
US5228093A (en) * | 1991-10-24 | 1993-07-13 | Agnello Anthony M | Method for mixing source audio signals and an audio signal mixing system |
US5414776A (en) * | 1993-05-13 | 1995-05-09 | Lectrosonics, Inc. | Adaptive proportional gain audio mixing system |
US5764778A (en) * | 1995-06-07 | 1998-06-09 | Sensimetrics Corporation | Hearing aid headset having an array of microphones |
US6240192B1 (en) * | 1997-04-16 | 2001-05-29 | Dspfactory Ltd. | Apparatus for and method of filtering in an digital hearing aid, including an application specific integrated circuit and a programmable digital signal processor |
US6697494B1 (en) * | 1999-12-15 | 2004-02-24 | Phonak Ag | Method to generate a predetermined or predeterminable receiving characteristic of a digital hearing aid, and a digital hearing aid |
US6778674B1 (en) * | 1999-12-28 | 2004-08-17 | Texas Instruments Incorporated | Hearing assist device with directional detection and sound modification |
US6987856B1 (en) * | 1996-06-19 | 2006-01-17 | Board Of Trustees Of The University Of Illinois | Binaural signal processing techniques |
- 2000-07-14 US US09/617,108 patent/US7206421B1/en not_active Expired - Fee Related
- 2004-03-29 US US10/812,718 patent/US20040252852A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
Wikipedia, Discrete Fourier Transform, 22 pages, September 3, 2013. * |
Cited By (86)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7542580B2 (en) | 2005-02-25 | 2009-06-02 | Starkey Laboratories, Inc. | Microphone placement in hearing assistance devices to provide controlled directivity |
US20060291679A1 (en) * | 2005-02-25 | 2006-12-28 | Burns Thomas H | Microphone placement in hearing assistance devices to provide controlled directivity |
US7809149B2 (en) | 2005-02-25 | 2010-10-05 | Starkey Laboratories, Inc. | Microphone placement in hearing assistance devices to provide controlled directivity |
US20090323992A1 (en) * | 2005-02-25 | 2009-12-31 | Starkey Laboratories, Inc. | Microphone placement in hearing assistance devices to provide controlled directivity |
US20060227976A1 (en) * | 2005-04-07 | 2006-10-12 | Gennum Corporation | Binaural hearing instrument systems and methods |
US20060245596A1 (en) * | 2005-05-02 | 2006-11-02 | Siemens Audiologische Technik Gmbh | Hearing aid system |
EP1720376A2 (en) * | 2005-05-02 | 2006-11-08 | Siemens Audiologische Technik GmbH | Hearing-aid system with generation of monophonic signal and corresponding method |
US7783064B2 (en) * | 2005-05-02 | 2010-08-24 | Siemens Audiologische Technik Gmbh | Hearing aid system |
EP1720376A3 (en) * | 2005-05-02 | 2007-07-25 | Siemens Audiologische Technik GmbH | Hearing-aid system with generation of monophonic signal and corresponding method |
US8155926B2 (en) | 2005-08-26 | 2012-04-10 | Dolby Laboratories Licensing Corporation | Method and apparatus for accommodating device and/or signal mismatch in a sensor array |
US20070050441A1 (en) * | 2005-08-26 | 2007-03-01 | Step Communications Corporation,A Nevada Corporati | Method and apparatus for improving noise discrimination using attenuation factor |
EP1917837A2 (en) * | 2005-08-26 | 2008-05-07 | Step Communications Corporation | Method and apparatus for improving noise discrimination using attenuation factor |
EP1917838A4 (en) * | 2005-08-26 | 2011-03-23 | Dolby Lab Licensing Corp | Method and apparatus for improving noise discrimination using enhanced phase difference value |
US20110029288A1 (en) * | 2005-08-26 | 2011-02-03 | Dolby Laboratories Licensing Corporation | Method And Apparatus For Improving Noise Discrimination In Multiple Sensor Pairs |
US20070047743A1 (en) * | 2005-08-26 | 2007-03-01 | Step Communications Corporation, A Nevada Corporation | Method and apparatus for improving noise discrimination using enhanced phase difference value |
USRE47535E1 (en) | 2005-08-26 | 2019-07-23 | Dolby Laboratories Licensing Corporation | Method and apparatus for accommodating device and/or signal mismatch in a sensor array |
EP1917533A4 (en) * | 2005-08-26 | 2012-08-29 | Dolby Lab Licensing Corp | Method and apparatus for accommodating device and/or signal mismatch in a sensor array |
US8155927B2 (en) | 2005-08-26 | 2012-04-10 | Dolby Laboratories Licensing Corporation | Method and apparatus for improving noise discrimination in multiple sensor pairs |
EP1917837A4 (en) * | 2005-08-26 | 2011-03-02 | Dolby Lab Licensing Corp | Method and apparatus for improving noise discrimination using attenuation factor |
EP3193512A1 (en) * | 2005-08-26 | 2017-07-19 | Dolby Laboratories Licensing Corp. | Method and system for accommodating mismatch of a sensor array |
EP1917838A2 (en) * | 2005-08-26 | 2008-05-07 | Step Communications Corporation | Method and apparatus for improving noise discrimination using enhanced phase difference value |
US8111192B2 (en) | 2005-08-26 | 2012-02-07 | Dolby Laboratories Licensing Corporation | Beam former using phase difference enhancement |
US20090234618A1 (en) * | 2005-08-26 | 2009-09-17 | Step Labs, Inc. | Method & Apparatus For Accommodating Device And/Or Signal Mismatch In A Sensor Array |
US20070047742A1 (en) * | 2005-08-26 | 2007-03-01 | Step Communications Corporation, A Nevada Corporation | Method and system for enhancing regional sensitivity noise discrimination |
EP1917533A2 (en) * | 2005-08-26 | 2008-05-07 | Step Communications Corporation | Method and apparatus for accommodating device and/or signal mismatch in a sensor array |
US20100109951A1 (en) * | 2005-08-26 | 2010-05-06 | Dolby Laboratories, Inc. | Beam former using phase difference enhancement |
US8139787B2 (en) | 2005-09-09 | 2012-03-20 | Simon Haykin | Method and device for binaural signal enhancement |
US20090304203A1 (en) * | 2005-09-09 | 2009-12-10 | Simon Haykin | Method and device for binaural signal enhancement |
EP1887831A3 (en) * | 2006-08-09 | 2011-12-21 | Fujitsu Limited | Method, apparatus and program for estimating the direction of a sound source |
EP1887831A2 (en) * | 2006-08-09 | 2008-02-13 | Fujitsu Limited | Method, apparatus and program for estimating the direction of a sound source |
US20100119077A1 (en) * | 2006-12-18 | 2010-05-13 | Phonak Ag | Active hearing protection system |
EP2115565A1 (en) * | 2006-12-22 | 2009-11-11 | STEP Labs, Inc. | Near-field vector signal enhancement |
JP2010513987A (en) * | 2006-12-22 | 2010-04-30 | ステップ・ラブス・インク | Near-field vector signal amplification |
WO2008079327A1 (en) | 2006-12-22 | 2008-07-03 | Step Labs, Inc. | Near-field vector signal enhancement |
EP2115565A4 (en) * | 2006-12-22 | 2011-02-09 | Dolby Lab Licensing Corp | Near-field vector signal enhancement |
US20080152167A1 (en) * | 2006-12-22 | 2008-06-26 | Step Communications Corporation | Near-field vector signal enhancement |
US20100128907A1 (en) * | 2007-01-22 | 2010-05-27 | Phonak Ag | System and method for providing hearing assistance to a user |
US8526648B2 (en) | 2007-01-22 | 2013-09-03 | Phonak Ag | System and method for providing hearing assistance to a user |
WO2008089784A1 (en) | 2007-01-22 | 2008-07-31 | Phonak Ag | System and method for providing hearing assistance to a user |
US20080205659A1 (en) * | 2007-02-22 | 2008-08-28 | Siemens Audiologische Technik Gmbh | Method for improving spatial perception and corresponding hearing apparatus |
EP1962556A3 (en) * | 2007-02-22 | 2009-05-06 | Siemens Audiologische Technik GmbH | Method for improving spatial awareness and corresponding hearing device |
US20100312308A1 (en) * | 2007-03-22 | 2010-12-09 | Cochlear Limited | Bilateral input for auditory prosthesis |
US9352154B2 (en) | 2007-03-22 | 2016-05-31 | Cochlear Limited | Input selection for an auditory prosthesis |
US20110013791A1 (en) * | 2007-03-26 | 2011-01-20 | Kyriaky Griffin | Noise reduction in auditory prostheses |
US9049524B2 (en) | 2007-03-26 | 2015-06-02 | Cochlear Limited | Noise reduction in auditory prostheses |
EP2009955A3 (en) * | 2007-06-29 | 2011-02-23 | Siemens Medical Instruments Pte. Ltd. | Hearing device with passive, incoming volume-dependent noise reduction |
US8433086B2 (en) | 2007-06-29 | 2013-04-30 | Siemens Medical Instruments Pte. Ltd. | Hearing apparatus with passive input level-dependent noise reduction |
EP2009955A2 (en) | 2007-06-29 | 2008-12-31 | Siemens Medical Instruments Pte. Ltd. | Hearing device with passive, incoming volume-dependent sound reduction |
US20090003627A1 (en) * | 2007-06-29 | 2009-01-01 | Heike Heuermann | Hearing apparatus with passive input level-dependent noise reduction |
JP2010537586A (en) * | 2007-08-22 | 2010-12-02 | ドルビー・ラボラトリーズ・ライセンシング・コーポレーション | Automatic sensor signal matching |
US20090136057A1 (en) * | 2007-08-22 | 2009-05-28 | Step Labs Inc. | Automated Sensor Signal Matching |
US8855330B2 (en) * | 2007-08-22 | 2014-10-07 | Dolby Laboratories Licensing Corporation | Automated sensor signal matching |
US20090190769A1 (en) * | 2008-01-29 | 2009-07-30 | Qualcomm Incorporated | Sound quality by intelligently selecting between signals from a plurality of microphones |
US8411880B2 (en) * | 2008-01-29 | 2013-04-02 | Qualcomm Incorporated | Sound quality by intelligently selecting between signals from a plurality of microphones |
US9180295B2 (en) * | 2008-04-22 | 2015-11-10 | Med-El Elektromedizinische Geraete Gmbh | Tonotopic implant stimulation |
US20090264961A1 (en) * | 2008-04-22 | 2009-10-22 | Med-El Elektromedizinische Geraete Gmbh | Tonotopic Implant Stimulation |
US20150334493A1 (en) * | 2008-12-31 | 2015-11-19 | Thomas Howard Burns | Systems and methods of telecommunication for bilateral hearing instruments |
US9473859B2 (en) * | 2008-12-31 | 2016-10-18 | Starkey Laboratories, Inc. | Systems and methods of telecommunication for bilateral hearing instruments |
US20120010737A1 (en) * | 2009-03-16 | 2012-01-12 | Pioneer Corporation | Audio adjusting device |
US8379872B2 (en) * | 2009-06-01 | 2013-02-19 | Red Tail Hawk Corporation | Talk-through listening device channel switching |
US20110261965A1 (en) * | 2009-06-01 | 2011-10-27 | Red Tail Hawk Corporation | Talk-Through Listening Device Channel Switching |
WO2011032024A1 (en) * | 2009-09-11 | 2011-03-17 | Advanced Bionics, Llc | Dynamic noise reduction in auditory prosthesis systems |
EP2360943A1 (en) * | 2009-12-29 | 2011-08-24 | GN Resound A/S | Beamforming in hearing aids |
US9282411B2 (en) | 2009-12-29 | 2016-03-08 | Gn Resound A/S | Beamforming in hearing aids |
US8630431B2 (en) | 2009-12-29 | 2014-01-14 | Gn Resound A/S | Beamforming in hearing aids |
EP2629551A1 (en) * | 2009-12-29 | 2013-08-21 | GN Resound A/S | Binaural hearing aid |
EP2541971B1 (en) * | 2010-02-24 | 2020-08-12 | Panasonic Intellectual Property Management Co., Ltd. | Sound processing device and sound processing method |
WO2011110239A1 (en) * | 2010-03-10 | 2011-09-15 | Siemens Medical Instruments Pte. Ltd. | Reverberation reduction for signals in a binaural hearing apparatus |
EP2545717A1 (en) * | 2010-03-10 | 2013-01-16 | Siemens Medical Instruments Pte. Ltd. | Reverberation reduction for signals in a binaural hearing apparatus |
US9014406B2 (en) | 2010-04-07 | 2015-04-21 | Oticon A/S | Method for controlling a binaural hearing aid system and binaural hearing aid system |
EP2375781A1 (en) * | 2010-04-07 | 2011-10-12 | Oticon A/S | Method for controlling a binaural hearing aid system and binaural hearing aid system |
EP2392938A1 (en) * | 2010-06-01 | 2011-12-07 | Sony Corporation | Sound Signal Processing Apparatus and Sound Signal Processing Method |
US8976978B2 (en) | 2010-06-01 | 2015-03-10 | Sony Corporation | Sound signal processing apparatus and sound signal processing method |
US20120231732A1 (en) * | 2011-03-08 | 2012-09-13 | Nxp B.V. | Hearing device and methods of operating a hearing device |
US9398379B2 (en) | 2012-04-25 | 2016-07-19 | Sivantos Pte. Ltd. | Method of controlling a directional characteristic, and hearing system |
US20160295322A1 (en) * | 2015-03-30 | 2016-10-06 | Bose Corporation | Adaptive Mixing of Sub-Band Signals |
US9838782B2 (en) * | 2015-03-30 | 2017-12-05 | Bose Corporation | Adaptive mixing of sub-band signals |
EP3148217A1 (en) * | 2015-09-24 | 2017-03-29 | Sivantos Pte. Ltd. | Method for operating a binaural hearing system |
US10003893B2 (en) * | 2016-06-03 | 2018-06-19 | Sivantos Pte. Ltd. | Method for operating a binaural hearing system and binaural hearing system |
CN107682529A (en) * | 2017-09-07 | 2018-02-09 | 维沃移动通信有限公司 | A kind of acoustic signal processing method and mobile terminal |
WO2020105746A1 (en) * | 2018-11-20 | 2020-05-28 | Samsung Electronics Co., Ltd. | Method, device and system for data compression and decompression |
US11489542B2 (en) | 2018-11-20 | 2022-11-01 | Samsung Electronics Co., Ltd. | Method, device and system for data compression and decompression |
US11651759B2 (en) * | 2019-05-28 | 2023-05-16 | Bose Corporation | Gain adjustment in ANR system with multiple feedforward microphones |
CN110996238A (en) * | 2019-12-17 | 2020-04-10 | 杨伟锋 | Binaural synchronous signal processing hearing aid system and method |
EP4084501A1 (en) * | 2021-04-29 | 2022-11-02 | GN Hearing A/S | Hearing device with omnidirectional sensitivity |
US11617037B2 (en) | 2021-04-29 | 2023-03-28 | Gn Hearing A/S | Hearing device with omnidirectional sensitivity |
Also Published As
Publication number | Publication date |
---|---|
US7206421B1 (en) | 2007-04-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7206421B1 (en) | Hearing system beamformer | |
US8036404B2 (en) | Binaural signal enhancement system | |
Marquardt et al. | Theoretical analysis of linearly constrained multi-channel Wiener filtering algorithms for combined noise reduction and binaural cue preservation in binaural hearing aids | |
Lotter et al. | Dual-channel speech enhancement by superdirective beamforming | |
US9282411B2 (en) | Beamforming in hearing aids | |
US9761243B2 (en) | Vector noise cancellation | |
US8213623B2 (en) | Method to generate an output audio signal from two or more input audio signals | |
JP5617133B2 (en) | Directional output signal generation system and method | |
EP2716069B1 (en) | A method of processing a signal in a hearing instrument, and hearing instrument | |
US6704422B1 (en) | Method for controlling the directionality of the sound receiving characteristic of a hearing aid and a hearing aid for carrying out the method | |
AU2004202688B2 (en) | Method For Operation Of A Hearing Aid, As Well As A Hearing Aid Having A Microphone System In Which Different Directional Characteristics Can Be Set | |
US9601133B2 (en) | Vector noise cancellation | |
CA2479675C (en) | Directional controller for a hearing aid | |
Lotter et al. | A stereo input-output superdirective beamformer for dual channel noise reduction. | |
Marquardt et al. | Incorporating relative transfer function preservation into the binaural multi-channel wiener filter for hearing aids | |
Puder | Acoustic noise control: An overview of several methods based on applications in hearing aids | |
Koutrouli | Low Complexity Beamformer structures for application in Hearing Aids | |
CN114550745A (en) | Method and device for binaural speech enhancement based on parametric unconstrained beam forming | |
Goetze et al. | Objective perceptual quality assessment for self-steering binaural hearing aid microphone arrays |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |