EP1133899B1 - Techniques de traitement d'un signal binaural - Google Patents


Info

Publication number
EP1133899B1
Authority
EP
European Patent Office
Prior art keywords
signal
signals
sources
frequency
source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP99958975A
Other languages
German (de)
English (en)
Other versions
EP1133899A1 (fr)
EP1133899A4 (fr)
Inventor
Albert S. Feng
Chen Liu
Robert C. Bilger
Douglas L. Jones
Charissa R. Lansing
William D. O'Brien, Jr.
Bruce C. Wheeler
Current Assignee
University of Illinois
Original Assignee
University of Illinois
Priority date
Filing date
Publication date
Priority claimed from US09/193,058 (priority patent US6987856B1)
Application filed by University of Illinois
Publication of EP1133899A1
Publication of EP1133899A4
Application granted
Publication of EP1133899B1
Anticipated expiration


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04R — LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 — Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40 — Arrangements for obtaining a desired directivity characteristic
    • H04R25/407 — Circuits for combining signals of a plurality of transducers
    • H04R25/55 — Deaf-aid sets using an external connection, either wireless or wired
    • H04R25/552 — Binaural

Definitions

  • the present invention is directed to the processing of acoustic signals, and more particularly, but not exclusively, relates to the localization and extraction of acoustic signals emanating from different sources.
  • the difficulty of extracting a desired signal in the presence of interfering signals is a long-standing problem confronted by acoustic engineers.
  • This problem impacts the design and construction of many kinds of devices such as systems for voice recognition and intelligence gathering.
  • hearing aid devices do not permit selective amplification of a desired sound when contaminated by noise from a nearby source - - particularly when the noise is more intense.
  • This problem is even more severe when the desired sound is a speech signal and the nearby noise is also a speech signal produced by multiple talkers (e.g. babble).
  • noise refers to random or nondeterministic signals and alternatively or additionally refers to any undesired signals and/or any signals interfering with the perception of a desired signal.
  • Still another approach has been the application of two microphones displaced from one another to provide two signals to emulate certain aspects of the binaural hearing system common to humans and many types of animals.
  • Although certain aspects of biologic binaural hearing are not fully understood, it is believed that the ability to localize sound sources is based on evaluation by the auditory system of binaural time delays and sound levels across different frequency bands associated with each of the two sound signals. The localization of sound sources with systems based on these interaural time and intensity differences is discussed in W. Lindemann, Extension of a Binaural Cross-Correlation Model by Contralateral Inhibition - I. Simulation of Lateralization for Stationary Signals, 80 Journal of the Acoustical Society of America 1608 (December 1986).
  • the localization of multiple acoustic sources based on input from two microphones presents several significant challenges, as does the separation of a desired signal once the sound sources are localized.
  • the system set forth in Markus Bodden, Modeling Human Sound-Source Localization and the Cocktail-Party-Effect, 1 Acta Acustica 43 employs a Wiener filter including a windowing process in an attempt to derive a desired signal from binaural input signals once the location of the desired signal has been established.
  • this approach results in significant deterioration of desired speech fidelity.
  • the system has only been demonstrated to suppress noise of equal intensity to the desired signal at an azimuthal separation of at least 30 degrees.
  • the present invention relates to the processing of acoustic signals.
  • Various aspects of the invention are novel, nonobvious, and provide various advantages. While the actual nature of the invention covered herein can only be determined with reference to the claims appended hereto, selected forms and features of the preferred embodiments as disclosed herein are described briefly as follows.
  • One form of the present invention includes a method having the features of claim 1, which follows.
  • a further form of the present invention is a system having the features of claim 9, which follows.
  • An additional object is to provide a system for the localization and extraction of acoustic signals by detecting a combination of these signals with two differently located sensors.
  • Fig. 1 illustrates an acoustic signal processing system 10 of one embodiment of the present invention.
  • System 10 is configured to extract a desired acoustic signal from source 12 despite interference or noise emanating from nearby source 14.
  • System 10 includes a pair of acoustic sensors 22, 24 configured to detect acoustic excitation that includes signals from sources 12, 14.
  • Sensors 22, 24 are operatively coupled to processor 30 to process signals received therefrom.
  • processor 30 is operatively coupled to output device 90 to provide a signal representative of a desired signal from source 12 with reduced interference from source 14 as compared to composite acoustic signals presented to sensors 22, 24 from sources 12, 14.
  • Sensors 22, 24 are spaced apart from one another by distance D along lateral axis T.
  • Midpoint M represents the halfway point along distance D from sensor 22 to sensor 24.
  • Reference axis R1 is aligned with source 12 and intersects axis T perpendicularly through midpoint M.
  • Axis N is aligned with source 14 and also intersects midpoint M.
  • Axis N is positioned to form angle A with reference axis R1.
  • Fig. 1 depicts an angle A of about 20 degrees.
  • reference axis R1 may be selected to define a reference azimuthal position of zero degrees in an azimuthal plane intersecting sources 12, 14; sensors 22, 24; and containing axes T, N, R1.
  • source 12 is "on-axis" and source 14, as aligned with axis N, is "off-axis."
  • Source 14 is illustrated at about a 20 degree azimuth relative to source 12.
  • sensors 22, 24 are fixed relative to each other and configured to move in tandem to selectively position reference axis R1 relative to a desired acoustic signal source. It is also preferred that sensors 22, 24 be microphones of a conventional variety, such as omnidirectional dynamic microphones. In other embodiments, a different sensor type may be utilized as would occur to one skilled in the art.
  • a signal flow diagram illustrates various processing stages for the embodiment shown in FIG. 1 .
  • Sensors 22, 24 provide analog signals Lp(t) and Rp(t) corresponding to the left sensor 22, and right sensor 24, respectively.
  • Signals Lp(t) and Rp(t) are initially input to processor 30 in separate processing channels L and R.
  • signals Lp(t) and Rp(t) are respectively conditioned and filtered in stages 32a, 32b to reduce aliasing.
  • the conditioned signals Lp(t), Rp(t) are input to corresponding Analog to Digital (A/D) converters 34a, 34b to provide discrete signals Lp(k), Rp(k), where k indexes discrete sampling events.
  • A/D stages 34a, 34b sample signals Lp(t) and Rp(t) at a rate of at least twice the frequency of the upper end of the audio frequency range to assure a high fidelity representation of the input signals.
  • Discrete signals Lp(k) and Rp(k) are transformed from the time domain to the frequency domain by a short-term Discrete Fourier Transform (DFT) algorithm in stages 36a, 36b to provide complex-valued signals XLp(m) and XRp(m).
  • the M frequencies encompass the audible frequency range, and the number of samples employed in the short-term analysis is selected to strike an optimum balance between processing speed limitations and the desired resolution of resulting output signals.
  • an audio range of 0.1 to 6 kHz is sampled in A/D stages 34a, 34b at a rate of at least 12.5 kHz with 512 samples per short-term spectral analysis time frame.
  • the frequency domain analysis may be provided by an analog filter bank employed before A/D stages 34a, 34b. It should be understood that the spectral signals XLp(m) and XRp(m) may be represented as arrays each having a 1xM dimension corresponding to the different frequencies f m .
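As a concrete illustration of this front end (conditioning, A/D stages 34a, 34b, and DFT stages 36a, 36b), the sketch below frames two sampled channels and applies a short-term DFT to each frame. The 12.5 kHz rate and 512-sample frames follow the example above; the Hann window and the test tone are illustrative assumptions, not taken from the patent.

```python
import numpy as np

FS = 12_500        # sampling rate (Hz): at least twice the 6 kHz upper band edge
FRAME = 512        # samples per short-term spectral analysis frame

def short_term_dft(x, frame=FRAME):
    """Split a discrete channel signal x(k) into frames and DFT each one.

    Returns an array of shape (num_frames, frame) of complex spectra,
    one row per short-term analysis interval p."""
    num_frames = len(x) // frame
    frames = x[:num_frames * frame].reshape(num_frames, frame)
    window = np.hanning(frame)          # tapering window (an assumed choice)
    return np.fft.fft(frames * window, axis=1)

# Two-channel example: left/right signals as seen by sensors 22, 24
t = np.arange(2 * FRAME) / FS
xl = np.sin(2 * np.pi * 1000 * t)             # 1 kHz tone on the left channel
xr = np.sin(2 * np.pi * 1000 * (t - 2e-4))    # same tone, delayed 0.2 ms on the right
XL, XR = short_term_dft(xl), short_term_dft(xr)
print(XL.shape)   # (2, 512): one 512-point spectrum per frame
```

Each row of XL and XR corresponds to one spectral signal XLp(m), XRp(m) with a 1xM layout as described above.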
  • Spectral signals XLp(m) and XRp(m) are input to dual delay line 40 as further detailed in FIG. 3.
  • FIG. 3 depicts two delay lines 42, 44 each having N number of delay stages. Each delay line 42, 44 is sequentially configured with delay stages D 1 through D N .
  • Delay lines 42, 44 are configured to delay corresponding input signals in opposing directions from one delay stage to the next, and generally correspond to the dual hearing channels associated with a natural binaural hearing process.
  • Delay stages D1, D2, D3, ..., DN-2, DN-1, and DN each delay an input signal by corresponding time delay increments τ1, τ2, τ3, ..., τN-2, τN-1, and τN (collectively designated τi), where index i goes from left to right.
  • XLp(m) is alternatively designated XLp 1 (m).
  • XLp1(m) is sequentially delayed by time delay increments τ1, τ2, τ3, ..., τN-2, τN-1, and τN to produce delayed outputs at the taps of delay line 42 which are respectively designated XLp2(m), XLp3(m), XLp4(m), ..., XLpN-1(m), XLpN(m), and XLpN+1(m) (collectively designated XLpi(m)).
  • XRp(m) is alternatively designated XRp N+1 (m).
  • XRpN+1(m) is sequentially delayed by time delay increments τ1, τ2, τ3, ..., τN-2, τN-1, and τN to produce delayed outputs at the taps of delay line 44 which are respectively designated XRpN(m), XRpN-1(m), XRpN-2(m), ..., XRp3(m), XRp2(m), and XRp1(m) (collectively designated XRpi(m)).
  • the input spectral signals and the signals from delay line 42, 44 taps are arranged as input pairs to operation array 46. A pair of taps from delay lines 42, 44 is illustrated as input pair P in FIG. 3 .
  • Operation array 46 has operation units (OP) numbered from 1 to N+1, depicted as OP1, OP2, OP3, OP4,..., OPN-2, OPN-1, OPN, OPN+1 and collectively designated operations OPi.
  • Input pairs from delay lines 42, 44 correspond to the operations of array 46 as follows: OP1[XLp1(m), XRp1(m)], OP2[XLp2(m), XRp2(m)], OP3[XLp3(m), XRp3(m)], OP4[XLp4(m), XRp4(m)], ..., OPN-2[XLpN-2(m), XRpN-2(m)], OPN-1[XLpN-1(m), XRpN-1(m)], OPN[XLpN(m), XRpN(m)], and OPN+1[XLpN+1(m), XRpN+1(m)].
  • the outputs of operation array 46 are Xp 1 (m), Xp 2 (m), Xp 3 (m), Xp 4 (m), ..., Xp (N-2) (m), Xp (N-1) (m), Xp N (m), and Xp (N+1) (m) (collectively designated Xp i (m)).
  • each OPi of operation array 46 is defined to be representative of a different azimuthal position relative to reference axis R.
  • This arrangement is analogous to the different interaural time differences associated with a natural binaural hearing system. In these natural systems, there is a relative position in each sound passageway within the ear that corresponds to a maximum "in phase" peak for a given sound source.
  • each operation of array 46 represents a position corresponding to a potential azimuthal or angular position range for a sound source, with the center operation representing a source at the zero azimuth -- a source aligned with reference axis R.
  • dual delay line 40 provides a two dimensional matrix of outputs with N+1 columns corresponding to Xp i (m), and M rows corresponding to each discrete frequency f m of Xp i (m). This (N+1)xM matrix is determined for each short-term spectral analysis interval p. Furthermore, by subtracting XRp i (m) from XLp i (m), the denominator of each expression CE1, CE2 is arranged to provide a minimum value of Xp i (m) when the signal pair is "in-phase" at the given frequency f m . Localization stage 70 uses this aspect of expressions CE1, CE2 to evaluate the location of source 14 relative to source 12.
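The (N+1)xM dual-delay-line matrix can be sketched as follows. This is a simplified model: it assumes equal delay increments and uses the plain magnitude difference |XLpi(m) − XRpi(m)| as the "in phase" measure in place of the full CE1, CE2 expressions; the function name and parameters are illustrative.

```python
import numpy as np

def dual_delay_line(XL, XR, freqs, itd_max, N=20):
    """Form the (N+1) x M dual-delay-line difference matrix.

    XL, XR : complex spectra (length M) for the left/right channels.
    freqs  : the M analysis frequencies f_m in Hz.
    itd_max: maximum intermicrophone time difference D/c in seconds.
    Column direction mirrors the two delay lines: the left delay grows
    with i while the right delay shrinks, so each row i corresponds to
    one candidate azimuth; the entry is minimal where the pair is
    "in phase" at that frequency."""
    taus = np.linspace(0.0, itd_max, N + 1)          # cumulative delays per tap
    out = np.empty((N + 1, len(freqs)))
    for i, tau in enumerate(taus):
        XLi = XL * np.exp(-2j * np.pi * freqs * tau)
        XRi = XR * np.exp(-2j * np.pi * freqs * (itd_max - tau))
        out[i] = np.abs(XLi - XRi)                   # small where in phase
    return out

# On-axis demo: identical spectra cancel at the center column
freqs = np.linspace(100.0, 6000.0, 32)
M_out = dual_delay_line(np.ones(32, complex), np.ones(32, complex), freqs, 4.4e-4)
```

With zero inter-sensor delay the minimum falls at the middle row, matching the zero-azimuth center operation described above.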
  • Localization stage 70 accumulates P number of these matrices to determine the Xpi(m) representative of the position of source 14. For each column i, localization stage 70 performs a summation of the amplitudes of Xpi(m) across the M frequencies to provide a composite value Xi.
  • the X i are analyzed to determine the minimum value, min(X i ).
  • the index i of min(X i ), designated "I,” estimates the column representing the azimuthal location of source 14 relative to source 12.
  • the spectral content of a desired signal from source 12 when approximately aligned with reference axis R1, can be estimated from Xp I (m).
  • the spectral signal output by array 46 which most closely corresponds to the relative location of the "off-axis" source 14 contemporaneously provides a spectral representation of a signal emanating from source 12.
  • the signal processing of dual delay line 40 not only facilitates localization of source 14, but also provides a spectral estimate of the desired signal with only minimal post-localization processing to produce a representative output.
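The localization and post-localization steps (stage 70, conceptual switch 80, and inverse-DFT stage 82) might be sketched as below; the data layout and names are assumptions for illustration.

```python
import numpy as np

def localize_and_extract(matrices, spectra_columns):
    """Sketch of localization stage 70 and switch 80.

    matrices       : list of P arrays, each (N+1) x M of |Xp_i(m)| values
                     from the dual delay line, one per analysis frame p.
    spectra_columns: (N+1) x M complex array whose row i is the output
                     spectrum associated with delay-line column i.
    Returns (I, s_k): the column index locating the off-axis noise
    source, and a time-domain estimate of the desired signal."""
    X = sum(matrices)                 # accumulate the P short-term matrices
    per_column = X.sum(axis=1)        # X_i: amplitude summed over frequency
    I = int(np.argmin(per_column))    # min(X_i) marks the noise azimuth
    s_k = np.fft.ifft(spectra_columns[I]).real   # stage 82: inverse DFT
    return I, s_k
```

The selected column simultaneously localizes the noise source and supplies the spectral estimate of the desired signal, reflecting the minimal post-localization processing noted above.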
  • Post-localization processing includes provision of a designation signal by localization stage 70 to conceptual "switch" 80 to select the output column Xp I (m) of the dual delay line 40.
  • the Xp I (m) is routed by switch 80 to an inverse Discrete Fourier Transform algorithm (Inverse DFT) in stage 82 for conversion from a frequency domain signal representation to a discrete time domain signal representation denoted as s(k).
  • the signal estimate s(k) is then converted by Digital to Analog (D/A) converter 84 to provide an output signal to output device 90.
  • Output device 90 amplifies the output signal from processor 30 with amplifier 92 and supplies the amplified signal to speaker 94 to provide the extracted signal from source 12.
  • the present invention provides for the extraction of desired signals even when the interfering or noise signal is of equal or greater relative intensity.
  • the localization algorithm is configured to dynamically respond to relative positioning as well as relative strength, using automated learning techniques.
  • the present invention is adapted for use with highly directional microphones, more than two sensors to simultaneously extract multiple signals, and various adaptive amplification and filtering techniques known to those skilled in the art.
  • the present invention greatly improves computational efficiency compared to conventional systems by determining a spectral signal representative of the desired signal as part of the localization processing.
  • an output signal characteristic of a desired signal from source 12 is determined as a function of the signal pair XLp I (m), XRp I (m) corresponding to the separation of source 14 from source 12.
  • the exponents in the denominator of CE1, CE2 correspond to the phase differences at frequencies fm resulting from the separation of source 12 from source 14.
  • It is preferred that τi be selected to provide generally equal azimuthal positions relative to reference axis R. In one embodiment, this arrangement corresponds to the values of τi changing about 20% from the smallest to the largest value. In other embodiments, τi are all generally equal to one another, simplifying the operations of array 46. Notably, the pair of time increments in the numerator of CE1, CE2 corresponding to the separation of the sources 12 and 14 become approximately equal when all values τi are generally the same.
  • Processor 30 may be comprised of one or more components or pieces of equipment.
  • the processor may include digital circuits, analog circuits, or a combination of these circuit types.
  • Processor 30 may be programmable, an integrated state machine, or utilize a combination of these techniques.
  • processor 30 is a solid state integrated digital signal processor circuit customized to perform the process of the present invention with a minimum of external components and connections.
  • the extraction process of the present invention may be performed on variously arranged processing equipment configured to provide the corresponding functionality with one or more hardware modules, firmware modules, software modules, or a combination thereof.
  • signal includes, but is not limited to, software, firmware, hardware, programming variable, communication channel, and memory location representations.
  • System 110 includes eyeglasses G with microphones 122 and 124 fixed to glasses G and displaced from one another.
  • Microphones 122, 124 are operatively coupled to hearing aid processor 130.
  • Processor 130 is operatively coupled to output device 190.
  • Output device 190 is positioned in ear E to provide an audio signal to the wearer.
  • Microphones 122, 124 are utilized in a manner similar to sensors 22, 24 of the embodiment depicted by FIGS. 1-3.
  • processor 130 is configured with the signal extraction process depicted in FIGS. 1-3.
  • Processor 130 provides the extracted signal to output device 190 to provide an audio output to the wearer.
  • the wearer of system 110 may position glasses G to align with a desired sound source, such as a speech signal, to reduce interference from a nearby noise source off axis from the midpoint between microphones 122, 124.
  • the wearer may select a different signal by realigning with another desired sound source to reduce interference from a noisy environment.
  • Processor 130 and output device 190 may be separate units (as depicted) or included in a common unit worn in the ear.
  • the coupling between processor 130 and output device 190 may be an electrical cable or a wireless transmission.
  • sensors 122, 124 and processor 130 are remotely located and are configured to broadcast to one or more output devices 190 situated in the ear E via a radio frequency transmission or other conventional telecommunication method.
  • FIG. 4B shows a voice recognition system 210 employing the present invention as a front end speech enhancement device.
  • System 210 includes personal computer C with two microphones 222, 224 spaced apart from each other in a predetermined relationship. Microphones 222, 224 are operatively coupled to a processor 230 within computer C. Processor 230 provides an output signal for internal use or responsive reply via speakers 294a, 294b or visual display 296. An operator aligns in a predetermined relationship with microphones 222, 224 of computer C to deliver voice commands. Computer C is configured to receive these voice commands, extracting the desired voice command from a noisy environment in accordance with the process system of FIGS. 1-3 .
  • FIG. 10 depicts left "L" and right "R" input channels for signal processor 330 of system 310.
  • Channels L, R each include an acoustic sensor 22, 24 that provides an input signal x Ln (t), x Rn (t), respectively.
  • Input signals x Ln (t) and x Rn (t) correspond to composites of sounds from multiple acoustic sources located within the detection range of sensors 22, 24.
  • It is preferred that sensors 22, 24 be standard microphones spaced apart from each other at a predetermined distance D. In other embodiments a different sensor type or arrangement may be employed as would occur to those skilled in the art.
  • Sensors 22, 24 are operatively coupled to processor 330 of system 310 to provide input signals x Ln (t) and x Rn (t) to A/D converters 34a, 34b.
  • A/D converters 34a, 34b of processor 330 convert input signals xLn(t) and xRn(t) from an analog form to a discrete form represented as xLn(k) and xRn(k), respectively; where "t" is the familiar continuous time domain variable and "k" is the familiar discrete sample index variable.
  • a corresponding pair of preconditioning filters may also be included in processor 330 as described in connection with system 10.
  • Delay operator 340 receives spectral signals X Ln (m) and X Rn (m) from stages 36a, 36b, respectively.
  • delay operator 340 may be described as a single dual delay line that simultaneously operates on M frequencies like dual delay line 40 of system 10.
  • the pair of frequency components from DFT stages 36a, 36b corresponding to a given value of m are inputs into a corresponding one of dual delay lines 342.
  • Each dual delay line 342 includes a left channel delay line 342a receiving a corresponding frequency component input from DFT stage 36a and right channel delay line 342b receiving a corresponding frequency component input from DFT stage 36b.
  • the I number of delayed signal pairs are provided on outputs 345 of delay stages 344 and are correspondingly sent to complex multipliers 346.
  • Multipliers 346 provide equalization weighting for the corresponding outputs of delay stages 344.
  • Each delayed signal pair from corresponding outputs 345 has one member from a delay stage 344 of left delay line 342a and the other member from a delay stage 344 of right delay line 342b.
  • Complex multipliers 346 of each dual delay line 342 output corresponding products of the I number of delayed signal pairs along taps 347.
  • the I number of signal pairs from taps 347 for each dual delay line 342 of operator 340 are input to signal operator 350.
  • the I number of pairs of multiplier taps 347 are each input to a different Operation Array (OA) 352 of operator 350.
  • Each pair of taps 347 is provided to a different operation stage 354 within a corresponding operation array 352.
  • In Fig. 11, only a portion of delay stages 344, multipliers 346, and operation stages 354 is shown, corresponding to the two stages at either end of delay lines 342a, 342b and the middle stages of delay lines 342a, 342b.
  • the intervening stages follow the pattern of the illustrated stages and are represented by ellipses to preserve clarity.
  • ITDmax = D/c is the maximum Intermicrophone Time Difference, where:
  • D is the distance between sensors 22, 24
  • c is the speed of sound.
  • the dual delay-line structure is similar to the embodiment of system 10, except that a different dual delay line is represented for each value of m, and multipliers 346 have been included to multiply each corresponding delay stage 344 by an appropriate one of the equalization factors αi(m); where i is the delay stage index previously described.
  • elements αi(m) are selected to compensate for differences in the noise intensity at sensors 22, 24 as a function of both azimuth and frequency.
  • One preferred embodiment for determining equalization factors αi(m) assumes amplitude compensation is independent of frequency, regarding any departure from this model as negligible.
  • Fig. 12 depicts sensors 22, 24 and a representative acoustic source S1 within the range of reception to provide input signals xLn(t) and xRn(t), according to the geometry illustrated in Fig. 12.
  • Equation (A7): αi = K·√(l² − lD·sin θi + D²/4), where K is in units of inverse length and is chosen to provide a convenient amplitude level.
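Equation (A7) can be evaluated directly. In the sketch below the source distance l, sensor spacing D, and scale K are illustrative values, not values from the patent.

```python
import math

def amplitude_factor(l, D, theta_i, K=1.0):
    """Equation (A7) sketch: alpha_i = K * sqrt(l**2 - l*D*sin(theta_i) + D**2/4).

    l      : distance from the sensor midpoint to the source (meters)
    D      : intermicrophone distance (meters)
    theta_i: candidate azimuth (radians)
    K      : scale factor in units of inverse length (illustrative)"""
    return K * math.sqrt(l * l - l * D * math.sin(theta_i) + D * D / 4.0)

# On-axis (theta_i = 0) the factor reduces to K*sqrt(l**2 + D**2/4)
```

The factor shrinks as the source swings toward the nearer sensor (sin θi > 0), which is what the azimuth-dependent equalization compensates for.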
  • equations (7) and (8) further define certain terms of equations (5) and (6) as follows:
  • Equation (7): XLn(i)(m) = XLn(m)·exp(−j2πfmτi)
  • Equation (8): XRn(i)(m) = XRn(m)·exp(−j2πfmτI−i+1)
  • Each signal pair αi(m)XLn(i)(m) and αI−i+1(m)XRn(i)(m) is input to a corresponding operation stage 354 of a corresponding one of operation arrays 352 for all m; where each operation array 352 corresponds to a different value of m, as in the case of dual delay lines 342.
  • gi(m) = [(αi/αg)·exp(jωm(τg − τi)) − (αI−i+1/αI−g+1)·exp(−jωm(τg − τi))] / [(αi/αs)·exp(jωm(τs − τi)) − (αI−i+1/αI−s+1)·exp(−jωm(τs − τi))], for i ≠ s.
  • T denotes transposition.
  • ‖xi‖₂², for i = 1, ..., I.
  • Equation (14) is a double summation over time and frequency that approximates a double integration in a continuous time domain representation.
  • s = (S1(1), S1(2), ..., S1(M), S2(1), ..., S2(M), ..., SN(1), ..., SN(M))T
  • gi = (G1(1)Δs,gi(1), G1(2)Δs,gi(2), ..., G1(M)Δs,gi(M), G2(1)Δs,gi(1), ..., G2(M)Δs,gi(M), ..., GN(1)Δs,gi(1), ..., GN(M)Δs,gi(M))T
  • the localization procedure includes finding the position inoise along the operation array 352 for each of the delay lines 342 that produces the minimum value of ‖xi‖₂².
  • the azimuth position of the noise source may be determined with equation (3).
  • the estimated noise location i noise may be utilized for noise cancellation or extraction of the desired signal as further described hereinafter.
  • Localization operator 360 embodies the localization technique of system 310.
  • summation operators 362 and 364 perform the operation corresponding to equation (14) to generate ‖xi‖₂² for each value of i.
  • extraction operator 380 preferably includes a multiplexer or matrix switch that has IxM complex inputs and M complex outputs; where a different set of M inputs is routed to the outputs for each different value of the index I in response to the output from stage 366 of localization operator 360.
  • Stage 82 converts the M spectral components received from extraction unit 380, transforming the spectral approximation of the desired signal, Ŝn(m), from the frequency domain to the time domain as represented by signal Ŝn(k).
  • Stage 82 is operatively coupled to digital-to-analog (D/A) converter 84.
  • D/A converter 84 receives signal Ŝn(k) for conversion from a discrete form to an analog form represented by Ŝn(t).
  • Signal Ŝn(t) is input to output device 90 to provide an auditory representation of the desired signal or other indicia as would occur to those skilled in the art.
  • Stage 82, converter 84, and device 90 are further described in connection with system 10.
  • the terms w Ln and w Rn are equivalent to beamforming weights for the left and right channels, respectively.
  • the operation of equation (9) may be equivalently modeled as a beamforming procedure that places a null at the location corresponding to the predominant noise source while steering toward the desired output signal Ŝn(t).
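This null-placing interpretation can be illustrated with a generic two-channel delay-and-subtract null former (a simplified stand-in, not the patent's exact weights wLn, wRn): aligning the right channel to the noise direction's inter-sensor delay and subtracting cancels that direction exactly, per frequency bin.

```python
import numpy as np

def null_steer(XL, XR, freqs, tau_noise):
    """Place a spatial null at the direction whose inter-sensor delay is
    tau_noise.  A source from that direction reaches the right sensor
    tau_noise later, i.e. XR = XL * exp(-j*2*pi*f*tau_noise); advancing
    the right channel by tau_noise and subtracting cancels it, while
    other directions pass with a direction-dependent gain."""
    return XL - XR * np.exp(2j * np.pi * freqs * tau_noise)
```

A quick check: feeding in a pure noise source with exactly that inter-sensor delay yields an output that is numerically zero across all bins.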
  • Fig. 14 depicts system 410 of still another embodiment of the present invention.
  • System 410 is depicted with several reference numerals that are the same as those used in connection with systems 10 and 310 and are intended to designate like features.
  • a number of acoustic sources 412, 414, 416, 418 are depicted in Fig. 14 within the reception range of acoustic sensors 22, 24 of system 410.
  • the positions of sources 412, 414, 416, 418 are also represented by the azimuth angles relative to axis AZ that are designated with reference numerals 412a, 414a, 416a, 418a.
  • angles 412a, 414a, 416a, 418a correspond to about 0°, +20°, +75°, and -75°, respectively.
  • Sensors 22, 24 are operatively coupled to signal processor 430 with axis AZ extending about midway therebetween.
  • Processor 430 receives input signals x Ln (t), x Rn (t) from sensors 22, 24 corresponding to left channel L and right channel R as described in connection with system 310.
  • Processor 430 processes signals x Ln (t), x Rn (t) and provides corresponding output signals to output devices 90, 490 operatively coupled thereto.
  • System 410 includes A/D converters 34a, 34b and DFT stages 36a, 36b to provide the same left and right channel processing as described in connection with system 310.
  • localization operator 460 of system 410 directly receives the output signals of delay operator 340 instead of the output signals of signal operator 350, unlike system 310.
  • the localization technique embodied in operator 460 begins by establishing two-dimensional (2-D) plots of coincidence loci in terms of frequency versus azimuth position.
  • the coincidence points of each locus represent a minimum difference between the left and right channels for each frequency as indexed by m. This minimum difference may be expressed as the minimum magnitude difference δXn(i)(m) between the frequency domain representations XLp(i)(m) and XRp(i)(m), at each discrete frequency m, yielding M/2 potentially different loci. If the acoustic sources are spatially coherent, then these loci will be the same across all frequencies.
  • Because the phase difference technique detects the minimum angle between two complex vectors, there is also no need to compensate for the inter-sensor intensity difference.
  • Fig. 17 illustrates a 2-D coincidence plot 500 in terms of frequency in Hertz (Hz) along the vertical axis and azimuth position in degrees along the horizontal axis.
  • Plot 500 indicates two sources corresponding to the generally vertically aligned locus 512a at about -20 degrees and the vertically aligned locus 512b at about +40 degrees.
  • Plot 500 also includes misidentified or phantom source points 514a, 514b, 514c, 514d, 514e at other azimuth positions that correspond to frequencies where both sources have significant energy. For more than two differently located competing acoustic sources, an even more complex plot generally results.
  • localization operator 460 integrates over time and frequency.
  • If the signals are not correlated at each frequency, the mutual interference between the signals can be gradually attenuated by the temporal integration.
  • This approach averages the locations of the coincidences, not the value of the function used to determine the minima, which is equivalent to applying a Kronecker delta function, δ(i − in(m)), to δXn(i)(m) and averaging the δ(i − in(m)) over time.
  • the coincidence loci corresponding to the true position of the sources are enhanced.
  • an empirically determined threshold is applied in this comparison. While this approach assumes the inter-sensor delays are independent of frequency, it has been found that departures from this assumption may generally be considered negligible.
  • the peaks in HN(θd) represent the source azimuth positions. If there are Q sources, Q peaks in HN(θd) may generally be expected. When compared with the patterns δ(i − in(m)) at each frequency, not only is the accuracy of localization enhanced when more than one sound source is present, but almost immediate localization of multiple sources for the current frame is also possible. Furthermore, although a dominant source usually has a higher peak in HN(θd) than do weaker sources, the height of a peak in HN(θd) only indirectly reflects the energy of the sound source.
  • the height is influenced by several factors, such as the energy of the signal component corresponding to θd relative to the energy of the other signal components for each frequency band, the number of frequency bands, and the duration over which the signal is dominant.
  • Because each frequency is weighted equally in equation (28), masking of weaker sources by a dominant source is reduced.
  • In contrast, existing time-domain cross-correlation methods incorporate the signal intensity, more heavily biasing sensitivity toward the dominant source.
  • The interaural time difference is ambiguous for high-frequency sounds whose acoustic wavelengths are less than the separation distance D between sensors 22, 24.
  • This ambiguity arises from the occurrence of phase multiples above this inter-sensor-distance-related frequency, such that a particular phase difference φ cannot be distinguished from φ + 2π.
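This wrapping can be illustrated numerically. The sketch below is an assumption-laden toy: free-field plane waves, a nominal speed of sound of 343 m/s, and the 144-mm spacing used in the tests reported later; the function name is illustrative.

```python
import numpy as np

C = 343.0   # assumed speed of sound, m/s
D = 0.144   # assumed inter-sensor spacing, m

def candidate_itds(measured_phase, freq):
    """All inter-sensor time delays consistent with a wrapped phase difference.

    The physically admissible range is |ITD| <= D/C. Above f = C/D (about
    2.4 kHz here) the wavelength is shorter than the spacing, so several
    phase multiples phi + 2*pi*k fall inside that range and the delay is
    ambiguous.
    """
    itd_max = D / C
    ks = np.arange(-10, 11)  # candidate phase multiples k
    itds = (measured_phase + 2 * np.pi * ks) / (2 * np.pi * freq)
    return itds[np.abs(itds) <= itd_max]

low = candidate_itds(np.pi / 4, 800.0)    # below C/D: a single admissible delay
high = candidate_itds(np.pi / 4, 5000.0)  # above C/D: several admissible delays
```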
  • The graph 600 of Fig. 18 illustrates a number of representative coincidence patterns 612, 614, 616, 618 determined in accordance with equations (31) and (32); the vertical axis represents frequency in Hz and the horizontal axis represents azimuth position in degrees. Pattern 612 corresponds to an azimuth position of 0°. Pattern 612 has a primary relationship corresponding to the generally straight, solid vertical line 612a and a number of secondary relationships corresponding to the curved solid line segments 612b.
  • Patterns 614, 616, 618 correspond to azimuth positions of -75°, 20°, and 75°, respectively, and have primary relationships shown as straight vertical lines 614a, 616a, 618a and secondary relationships shown as curved line segments 614b, 616b, 618b, in correspondingly different broken-line formats.
  • The vertical lines are designated primary contours and the curved line segments are designated secondary contours.
  • Coincidence patterns for other azimuth positions may be determined with equations (31) and (32) as would occur to those skilled in the art.
  • Each stencil is a predictive pattern of the coincidence points attributable to an acoustic source at the azimuth position of the primary contour, including phantom loci corresponding to other azimuth positions as a function of frequency.
  • The stencil pattern may be used to filter the data at different values of m.
  • In equation (33), a(θ_d) denotes the number of points involved in the summation.
  • Equation (33) is used in place of equation (30) when the second technique of integration over frequency is desired.
  • Both variables τ_i and θ_i are equivalent and represent the position in the dual delay-line.
  • For each θ_d, the range of valid γ_m,d is given by equation (35) as follows: −(ITD_max/2 + τ_d)·f_m ≤ γ_m,d ≤ (ITD_max/2 − τ_d)·f_m, where γ_m,d is an integer. Changing the value of τ_d only shifts the coincidence pattern (or stencil pattern) along the τ_i-axis without changing its shape.
  • Equations (34) and (35) may be utilized as an alternative to separate patterns for each azimuth position of interest; however, because the scaling of the delay units τ_i is uniform along the dual delay-line, azimuthal partitioning by the dual delay-line is not uniform, with the regions close to the median plane having higher azimuthal resolution. On the other hand, to obtain an equivalent resolution in azimuth, using a uniform τ_i would require a much larger number I of delay units than using a uniform θ_i.
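The non-uniform azimuthal partitioning just described can be made concrete with the usual far-field relation ITD = (D/C)·sin(azimuth), which is an assumption of this sketch, as are the constants, the grid size, and the function name:

```python
import numpy as np

C, D = 343.0, 0.144  # assumed speed of sound (m/s) and sensor spacing (m)

def azimuths_for_uniform_delays(I=25):
    """Azimuth positions sampled by a uniformly spaced dual delay-line.

    Uniform delay steps tau_i map through theta = arcsin(ITD * C / D) to a
    non-uniform azimuth grid that is densest near the median plane (0 deg).
    """
    itd_max = D / C
    taus = np.linspace(-itd_max, itd_max, I)       # uniform tau_i
    return np.degrees(np.arcsin(taus / itd_max))   # non-uniform theta_i

theta = azimuths_for_uniform_delays()
steps = np.diff(theta)
# The azimuth step near 0 deg is several times finer than near +/-90 deg,
# which is why a uniform tau_i grid needs far more delay units to match a
# uniform theta_i grid at the endfire directions.
```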
  • The signal flow diagram of Fig. 16 further illustrates selected details concerning localization operator 460.
  • With the equalization factors α_i(m) set to unity, the delayed signal pairs from delay stages 344 are sent to coincidence detection operators 462 for each frequency indexed to m to determine the coincidence points.
  • Detection operators 462 determine the minima in accordance with equation (22) or (26).
  • Each coincidence detection operator 462 sends the results i n ( m ) to a corresponding pattern generator 464 for the given m.
  • Generators 464 build a 2-D coincidence plot for each frequency indexed to m and pass the results to a corresponding summation operator 466 to perform the operation expressed in equation (28) for that given frequency.
  • Summation operators 466 approximate integration over time.
  • Summation operators 466 pass results to summation operator 468 to approximate integration over frequency.
  • Operator 468 may be configured in accordance with equation (30) if artifacts resulting from the secondary relationships at high frequencies are absent or may be ignored.
  • Alternatively, stencil filtering with predictive coincidence patterns that include the secondary relationships may be performed by applying equation (33) with summation operator 468.
  • Operator 468 outputs H_N(θ_d) to output device 490 to map corresponding acoustic source positional information.
  • Device 490 preferably includes a display or printer capable of providing a map representative of the spatial arrangement of the acoustic sources relative to the predetermined azimuth positions.
  • The acoustic sources may be localized and tracked dynamically as they move in space. Movement trajectories may be estimated from the sets of locations δ(i − i_n(m)) computed at each sample window n.
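One plausible way to estimate such trajectories (a hypothetical sketch, not the patent's method; the data and function name are made up) is to peak-pick each sample window's coincidence histogram:

```python
import numpy as np

def track_azimuth(H_frames, azimuth_grid):
    """Per-frame azimuth trajectory by peak-picking coincidence histograms.

    H_frames: array of shape (frames, positions) holding each sample
    window's coincidence counts; azimuth_grid: the azimuth (degrees)
    associated with each delay-line position.
    """
    return azimuth_grid[np.argmax(H_frames, axis=1)]

# A source drifting from 0 deg toward 30 deg over three sample windows:
grid = np.linspace(-90.0, 90.0, 7)                 # -90, -60, ..., 90
frames = np.array([[0, 0, 1, 5, 1, 0, 0],
                   [0, 0, 1, 1, 5, 0, 0],
                   [0, 0, 0, 1, 5, 1, 0]])
trajectory = track_azimuth(frames, grid)
```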
  • In certain other embodiments, output device 490 is preferably not included.
  • The localization techniques of localization operator 460 are particularly suited to localizing more than two acoustic sources of comparable sound pressure levels and frequency ranges, and need not specify an on-axis desired source. As such, the localization techniques of system 410 provide independent capabilities to localize and map more than two acoustic sources relative to a number of positions defined with respect to sensors 22, 24. However, in other embodiments, the localization capability of localization operator 460 may also be utilized in conjunction with a designated reference source to perform extraction and noise suppression. Indeed, extraction operator 480 of the illustrated embodiment incorporates such features, as more fully described hereinafter.
  • These signals include a component of the desired signal at frequency m as well as components from sources other than the one to be canceled.
  • The equalization factors α_i(m) need not be set to unity once localization has taken place.
  • These signals are denoted X_n^(i_noise2)(m) through X_n^(i_noiseQ)(m) for noise sources 2 through Q.
  • In each case, the original signal α_s(m)·X_Ln^(s)(m) is included.
  • The resulting beam pattern may at times amplify other, less intense noise sources.
  • If the amount of noise amplification would be larger than the amount of cancellation of the most intense noise source, further conditions may be included in operator 480 to prevent changing the input signal for that frequency at that moment.
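This idea can be illustrated with a bare-bones two-sensor frequency-domain null (a simplified stand-in, not operator 480 itself; the signal values and delay are made up):

```python
import numpy as np

def cancel_strongest_noise(X1, X2, freqs, tau_noise):
    """Steer a per-frequency null toward one interferer, with a guard condition.

    For a plane wave arriving with inter-sensor delay tau_noise,
    X2 = X1 * exp(-j*w*tau_noise), so X1 - X2 * exp(+j*w*tau_noise) cancels
    that direction. Where the null output carries more energy than the
    input (other sources amplified more than the interferer was cancelled),
    the input is left unchanged for that frequency.
    """
    w = 2 * np.pi * np.asarray(freqs)
    Y = X1 - X2 * np.exp(1j * w * tau_noise)
    keep_input = np.abs(Y) > np.abs(X1)   # guard against net amplification
    return np.where(keep_input, X1, Y)

# A frequency bin containing only the interferer is cancelled essentially exactly:
f = np.array([1000.0])
tau = 2e-4
X1 = np.array([1.0 + 0.5j])
X2 = X1 * np.exp(-1j * 2 * np.pi * f * tau)  # pure interferer at delay tau
Y = cancel_strongest_noise(X1, X2, f, tau)
```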
  • Processors 30, 330, 430 include one or more components that embody the corresponding algorithms, stages, operators, converters, generators, arrays, procedures, processes, and techniques described in the respective equations and signal flow diagrams in software, hardware, or both utilizing techniques known to those skilled in the art.
  • Processors 30, 330, 430 may be of any type as would occur to those skilled in the art; however, it is preferred that processors 30, 330, 430 each be based on a solid-state, integrated digital signal processor with dedicated hardware to perform the necessary operations with a minimum of other components.
  • Systems 310, 410 may be sized and adapted for application as a hearing aid of the type described in connection with Fig. 4A .
  • For such an application, sensors 22, 24 are sized and shaped to fit in the pinnae of a listener, and the processor algorithms are adjusted to account for shadowing caused by the head and torso. This adjustment may be provided by deriving a Head-Related Transfer Function (HRTF) specific to the listener, or from a population average, using techniques known to those skilled in the art. This function is then used to provide appropriate weightings of the dual delay stage output signals that compensate for shadowing.
  • In other embodiments, systems 310, 410 are adapted to voice recognition systems of the type described in connection with Fig. 4B .
  • systems 310, 410 may be utilized in sound source mapping applications, or as would otherwise occur to those skilled in the art.
  • A signal processing system includes a first sensor configured to provide a first signal corresponding to an acoustic excitation, where this excitation includes a first acoustic signal from a first source and a second acoustic signal from a second source displaced from the first source.
  • The system also includes a second sensor displaced from the first sensor that is configured to provide a second signal corresponding to the excitation.
  • The system further includes a processor responsive to the first and second sensor signals that has means for generating a desired signal with a spectrum representative of the first acoustic signal.
  • This means includes a first delay line having a number of first taps to provide a number of delayed first signals and a second delay line having a number of second taps to provide a number of delayed second signals.
  • The system also includes output means for generating a sensory output representative of the desired signal.
  • A method of signal processing includes detecting an acoustic excitation both at a first location to provide a corresponding first signal and at a second location to provide a corresponding second signal.
  • The excitation is a composite of a desired acoustic signal from a first source and an interfering acoustic signal from a second source that is spaced apart from the first source.
  • This method also includes spatially localizing the second source relative to the first source as a function of the first and second signals, and generating a characteristic signal representative of the desired acoustic signal during performance of this localization.
  • A Sun Sparc-20 workstation was programmed to emulate the signal extraction process of the present invention.
  • One loudspeaker (L1) was used to emit a speech signal and another loudspeaker (L2) was used to emit babble noise in a semi-anechoic room.
  • Two microphones of a conventional type were positioned in the room and operatively coupled to the workstation. The microphones had an inter-microphone distance of about 15 centimeters and were positioned about 3 feet from L1.
  • L1 was aligned with the midpoint between the microphones to define a zero-degree azimuth.
  • L2 was placed at different azimuths relative to L1, approximately equidistant from the midpoint between the microphones.
  • In FIG. 5 , clean speech of a sentence about two seconds long is depicted, emanating from L1 without interference from L2.
  • FIG. 6 depicts a composite signal from L1 and L2.
  • The composite signal includes babble noise from L2 combined with the speech signal depicted in FIG. 5 .
  • The babble noise and speech signal are of generally equal intensity (0 dB), with L2 placed at a 60-degree azimuth relative to L1.
  • FIG. 7 depicts the signal recovered from the composite signal of FIG. 6 . This signal is nearly the same as the signal of FIG. 5 .
  • FIG. 8 depicts another composite signal where the babble noise is 30dB more intense than the desired signal of FIG. 5 . Furthermore, L2 is placed at only a 2 degree azimuth relative to L1.
  • FIG. 9 depicts the signal recovered from the composite signal of FIG. 8 , providing a clearly intelligible representation of the signal of FIG. 5 despite the greater intensity of the babble noise from L2 and the nearby location.
  • The experimental set-up for the tests utilized two microphones for sensors 22, 24 with an inter-microphone distance of about 144 mm. No diffraction or shadowing effect existed between the two microphones, and the inter-microphone intensity difference was set to zero for the tests.
  • The signals were low-pass filtered at 6 kHz and sampled at a 12.8-kHz rate with 16-bit quantization.
  • A Wintel-based computer was programmed to receive the quantized signals for processing in accordance with the present invention and to output the test results described hereinafter.
  • A 20-ms segment of signal was weighted by a Hamming window and then padded with zeros to 2048 points for the DFT; the frequency resolution was thus about 6 Hz.
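The analysis front end just described can be sketched directly (constants taken from this paragraph; the function name is illustrative):

```python
import numpy as np

FS = 12800             # sampling rate, Hz
SEG = int(0.020 * FS)  # 20-ms segment -> 256 samples
NFFT = 2048            # zero-padded DFT length

def analysis_spectrum(segment):
    """Hamming-window a 20-ms segment and zero-pad it to a 2048-point DFT."""
    assert len(segment) == SEG
    windowed = segment * np.hamming(SEG)
    return np.fft.rfft(windowed, n=NFFT)  # rfft zero-pads to NFFT internally

resolution = FS / NFFT  # 6.25-Hz bin spacing, i.e. the stated "about 6 Hz"
```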
  • The dual delay-line used in the tests was azimuth-uniform.
  • The coincidence detection method was based on minimum magnitude differences.
  • Each of the five tests consisted of four subtests in which a different talker was taken as the desired source.
  • The speech materials consisted of four equally intense spondaic words.
  • The speech material was presented in free field.
  • The localization of the talkers was done using both the equation (30) and equation (33) techniques.
  • The experimental results are presented in Tables I, II, III, and IV of FIGs. 19-22 , respectively.
  • The five tests described in Table I of FIG. 19 approximate integration over frequency by utilizing equation (30) and include two male speakers, M1 and M2, and two female speakers, F1 and F2.
  • The five tests described in Table II of FIG. 20 are the same as in Table I, except that integration over frequency was approximated by equation (33).
  • The five tests described in Table III of FIG. 21 approximate integration over frequency by utilizing equation (30) and include two different male speakers, M3 and M4, and two different female speakers, F3 and F4.
  • The five tests described in Table IV of FIG. 22 are the same as in Table III, except that integration over frequency was approximated by equation (33).
  • The data was arranged in a matrix, with the numbers on the diagonal representing the degree of noise cancellation in dB of the desired source (ideally 0 dB) and the numbers elsewhere representing the degree of cancellation for each noise source.
  • The next-to-last column shows the degree of cancellation with all the noise sources lumped together, while the last column gives the net intelligibility-weighted improvement (which considers both noise cancellation and loss in the desired signal).
  • The results generally show cancellation in the intelligibility-weighted measure in a range of about 3-11 dB, while degradation of the desired source was generally less than about 0.1 dB.
  • The total noise cancellation was in the range of about 8-12 dB.
  • Comparison of the various tables suggests very little dependence on the talker or the speech materials used in the tests. Similar results were obtained from six-talker experiments. Generally, a 7-10 dB enhancement in the intelligibility-weighted signal-to-noise ratio resulted when there were six equally loud, temporally aligned speech sounds originating from six different loudspeakers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Stereophonic System (AREA)
  • Stereophonic Arrangements (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Claims (14)

  1. A method, comprising:
    positioning a first acoustic sensor (22) and a second acoustic sensor (24) to detect a plurality of differently located acoustic sources (10, 12, 412, 416, 418);
    generating a first signal corresponding to said sources with said first sensor and a second signal corresponding to said sources with said second sensor;
    providing a number of delayed signal pairs from the first and second signals, the delayed signal pairs each corresponding to one of a number of positions relative to the first and second sensors; and
    localizing the sources as a function of the delayed signal pairs and of a number of coincidence patterns (612, 614, 616, 618) generated from the delayed signal pairs, each of the patterns corresponding to one of said number of positions and establishing an expected variation, with frequency, of acoustic source position information attributable to a source at one of said number of positions,
    characterized in that:
    the coincidence patterns (612, 614, 616, 618) each correspond to a number of relationships characterizing a variation of phantom acoustic source position (514a, 514b, 514c, 514d, 514e) with frequency, and the relationships each correspond to a different ambiguous phase multiple.
  2. The method of claim 1, further comprising determining the relationships for each of the coincidence patterns (612, 614, 616, 618) as a function of a distance separating the first and second sensors (22, 24).
  3. The method of claim 1, wherein the relationships each correspond to a secondary contour that curves relative to a primary contour, the primary contour representing frequency-invariant acoustic source position information determined from the delayed signal pair corresponding to one of the positions.
  4. The method of any one of claims 1-3, wherein said localizing includes filtering with the coincidence patterns (612, 614, 616, 618) to enhance the true position information with the phantom position information.
  5. The method of claim 4, wherein said localizing includes integration over time and integration over frequency.
  6. The method of any one of claims 1-3, wherein the first sensor (22) and the second sensor (24) are part of a hearing aid device (110), and further comprising adjusting the delayed signal pairs with a head-related transfer function.
  7. The method of any one of claims 1-3, further comprising:
    extracting a desired signal after said localizing; and
    reducing a different set of frequency components for each of a selected number of the sources, to reduce noise.
  8. The method of any one of claims 1-3, wherein the positions each correspond to an azimuth established relative to the first and second sensors (22, 24), and further comprising generating a map presenting the relative position of each of the sources (10, 12, 412, 414, 416, 418).
  9. A system, comprising:
    two spaced-apart acoustic sensors (22, 24) each configured to generate a corresponding one of two input signals, the signals being representative of a number of differently located acoustic sources (10, 12, 412, 414, 416, 418);
    a delay operator (340) responsive to said input signals to generate a number of delayed signals each corresponding to one of a number of positions relative to said sensors;
    a number of coincidence patterns (612, 614, 616, 618) generated from said delayed signal pairs, wherein each pattern corresponds to one of said number of positions;
    a localization operator (360, 460) responsive to said delayed signals to determine a number of sound source localization signals from said delayed signals and said number of coincidence patterns (612, 614, 616, 618); and
    an output device (90, 190, 490) responsive to said localization signals to provide an output corresponding to at least one of said sources,
    characterized in that:
    said patterns, each corresponding to one of said positions, relate frequency-varying source position information caused by ambiguous phase multiples to said one of said positions, in order to improve sound source localization.
  10. The system of claim 9, further comprising:
    an analog-to-digital converter (36a, 36b) responsive to said input signals to convert each of said input signals from an analog form to a digital form; and
    a first transformation stage (36a, 36b) responsive to said digital form of said input signals to transform said input signals from a time-domain form to a frequency-domain form in terms of a plurality of discrete frequencies, said delay operator including a dual delay line for each of the frequencies.
  11. The system of claim 10, further comprising:
    an extraction operator (380, 480) responsive to said localization signals to extract a desired signal;
    a second transformation stage (82) responsive to said desired signal to transform said desired signal from a digital frequency-domain form to a digital time-domain form; and
    a digital-to-analog converter (84) responsive to said digital time-domain form to convert said desired signal into an analog output form for said output device.
  12. The system of any one of claims 9-11, wherein said output device (90, 190, 490) is configured to provide a map of acoustic source locations.
  13. The system of any one of claims 9-11, wherein said delay operator (340) and said localization operator (360, 460) are defined by an integrated solid-state signal processor.
  14. The system of any one of claims 9-11, wherein said localization operator (360, 460) is responsive to said delayed signals to determine the closest of said positions for one of said sources (10, 12, 412, 414, 416, 418) as a function of at least one of said delayed signals corresponding to said closest of said positions and of at least two others of said delayed signals corresponding to another of said positions, said at least two other delayed signals being determined with a corresponding one of said coincidence patterns (612, 614, 616, 618).
EP99958975A 1998-11-16 1999-11-16 Techniques de traitement d'un signal binaural Expired - Lifetime EP1133899B1 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US193058 1998-11-16
US09/193,058 US6987856B1 (en) 1996-06-19 1998-11-16 Binaural signal processing techniques
PCT/US1999/026965 WO2000030404A1 (fr) 1998-11-16 1999-11-16 Techniques de traitement d'un signal binaural

Publications (3)

Publication Number Publication Date
EP1133899A1 EP1133899A1 (fr) 2001-09-19
EP1133899A4 EP1133899A4 (fr) 2003-09-03
EP1133899B1 true EP1133899B1 (fr) 2008-08-06

Family

ID=22712122

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99958975A Expired - Lifetime EP1133899B1 (fr) 1998-11-16 1999-11-16 Techniques de traitement d'un signal binaural

Country Status (9)

Country Link
EP (1) EP1133899B1 (fr)
JP (1) JP3745227B2 (fr)
CN (1) CN1333994A (fr)
AT (1) ATE404028T1 (fr)
AU (1) AU748113B2 (fr)
CA (1) CA2348894C (fr)
DE (1) DE69939272D1 (fr)
DK (1) DK1133899T3 (fr)
WO (1) WO2000030404A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9049531B2 (en) 2009-11-12 2015-06-02 Institut Fur Rundfunktechnik Gmbh Method for dubbing microphone signals of a sound recording having a plurality of microphones

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7206423B1 (en) 2000-05-10 2007-04-17 Board Of Trustees Of University Of Illinois Intrabody communication for a hearing aid
ITMI20020566A1 (it) * 2002-03-18 2003-09-18 Daniele Ramenzoni Dispositivo per captare movimenti anche piccoli nell'aria e nei fluidi adatto per applicazioni cibernetiche e di laboratorio come trasduttor
US7433821B2 (en) * 2003-12-18 2008-10-07 Honeywell International, Inc. Methods and systems for intelligibility measurement of audio announcement systems
JP4580210B2 (ja) * 2004-10-19 2010-11-10 ソニー株式会社 音声信号処理装置および音声信号処理方法
DE602007011807D1 (de) * 2006-11-09 2011-02-17 Panasonic Corp Schallquellenpositionsdetektor
JP4854533B2 (ja) * 2007-01-30 2012-01-18 富士通株式会社 音響判定方法、音響判定装置及びコンピュータプログラム
CA2721702C (fr) * 2008-05-09 2016-09-27 Nokia Corporation Appareil et procedes pour reproduction de codage audio
US20100074460A1 (en) * 2008-09-25 2010-03-25 Lucent Technologies Inc. Self-steering directional hearing aid and method of operation thereof
WO2010051606A1 (fr) * 2008-11-05 2010-05-14 Hear Ip Pty Ltd Système et procédé de production d'un signal de sortie directionnel
US20110096941A1 (en) * 2009-10-28 2011-04-28 Alcatel-Lucent Usa, Incorporated Self-steering directional loudspeakers and a method of operation thereof
CN102111697B (zh) * 2009-12-28 2015-03-25 歌尔声学股份有限公司 一种麦克风阵列降噪控制方法及装置
WO2011101045A1 (fr) * 2010-02-19 2011-08-25 Siemens Medical Instruments Pte. Ltd. Dispositif et procédé pour diminuer le bruit spatial en fonction de la direction
EP2709101B1 (fr) * 2012-09-13 2015-03-18 Nxp B.V. Système et procédé de traitement audio numérique
JP6107151B2 (ja) 2013-01-15 2017-04-05 富士通株式会社 雑音抑圧装置、方法、及びプログラム
CN105307095B (zh) * 2015-09-15 2019-09-10 中国电子科技集团公司第四十一研究所 一种基于fft的高分辨率音频频率测量方法
CN108727363B (zh) * 2017-04-19 2020-06-19 劲方医药科技(上海)有限公司 一种新型细胞周期蛋白依赖性激酶cdk9抑制剂
CN109493877B (zh) * 2017-09-12 2022-01-28 清华大学 一种助听装置的语音增强方法和装置
KR20240033108A (ko) * 2017-12-07 2024-03-12 헤드 테크놀로지 에스아에르엘 음성인식 오디오 시스템 및 방법
CN111435598B (zh) * 2019-01-15 2023-08-18 北京地平线机器人技术研发有限公司 语音信号处理方法、装置、计算机可读介质及电子设备
CN112190259B (zh) * 2020-09-10 2024-06-28 北京济声科技有限公司 用于测试声源定位能力的方法、测试者终端、受试者终端
CN114624652B (zh) * 2022-03-16 2022-09-30 浙江浙能技术研究院有限公司 一种强多径干扰条件下的声源定位方法
CN117031397B (zh) * 2023-10-07 2023-12-12 成都流体动力创新中心 一种运动物体噪声源定位和评估的快速计算方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6223300A (ja) * 1985-07-23 1987-01-31 Victor Co Of Japan Ltd 指向性マイクロホン装置
US5029216A (en) * 1989-06-09 1991-07-02 The United States Of America As Represented By The Administrator Of The National Aeronautics & Space Administration Visual aid for the hearing impaired
US5400409A (en) * 1992-12-23 1995-03-21 Daimler-Benz Ag Noise-reduction method for noise-affected voice channels
US5473701A (en) * 1993-11-05 1995-12-05 At&T Corp. Adaptive microphone array
US6130949A (en) * 1996-09-18 2000-10-10 Nippon Telegraph And Telephone Corporation Method and apparatus for separation of source, program recorded medium therefor, method and apparatus for detection of sound source zone, and program recorded medium therefor
EP0976197B1 (fr) * 1997-04-14 2003-06-25 Andrea Electronics Corporation Systeme et procede de suppression d'interferences par double traitement

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9049531B2 (en) 2009-11-12 2015-06-02 Institut Fur Rundfunktechnik Gmbh Method for dubbing microphone signals of a sound recording having a plurality of microphones

Also Published As

Publication number Publication date
DE69939272D1 (de) 2008-09-18
CA2348894A1 (fr) 2000-05-22
AU748113B2 (en) 2002-05-30
ATE404028T1 (de) 2008-08-15
CN1333994A (zh) 2002-01-30
DK1133899T3 (da) 2009-01-12
JP3745227B2 (ja) 2006-02-15
AU1624000A (en) 2000-06-05
EP1133899A1 (fr) 2001-09-19
JP2002530966A (ja) 2002-09-17
CA2348894C (fr) 2007-09-25
WO2000030404A1 (fr) 2000-05-25
EP1133899A4 (fr) 2003-09-03

Similar Documents

Publication Publication Date Title
US6978159B2 (en) Binaural signal processing using multiple acoustic sensors and digital filtering
US6987856B1 (en) Binaural signal processing techniques
EP1133899B1 (fr) Techniques de traitement d'un signal binaural
US6222927B1 (en) Binaural signal processing system and method
US7076072B2 (en) Systems and methods for interference-suppression with directional sensing patterns
EP2537353B1 (fr) Dispositif et procédé pour diminuer le bruit spatial en fonction de la direction
Liu et al. Localization of multiple sound sources with two microphones
EP3509325B1 (fr) Prothèse auditive comprenant une unité de filtrage à formateur de faisceau comprenant une unité de lissage
CA2407855C (fr) Techniques de suppression d'interferences
EP1329134B1 (fr) Communication intracorporelle destinee a une prothese auditive
JP3521914B2 (ja) 超指向性マイクロホンアレイ
Lotter et al. Dual-channel speech enhancement by superdirective beamforming
EP2716069B1 (fr) Procédé de traitement d'un signal dans un instrument auditif, et instrument auditif
Marquardt et al. Theoretical analysis of linearly constrained multi-channel Wiener filtering algorithms for combined noise reduction and binaural cue preservation in binaural hearing aids
Lockwood et al. Performance of time-and frequency-domain binaural beamformers based on recorded signals from real rooms
EP1691576A2 (fr) Procédé pour régler un système auditif,procédé pour actionner le système auditif et système auditif
Marquardt et al. Interaural coherence preservation for binaural noise reduction using partial noise estimation and spectral postfiltering
Saunders et al. Speech intelligibility enhancement using hearing-aid array processing
EP1579728B1 (fr) Systeme de microphone a reponse directionnelle
Lobato et al. Worst-case-optimization robust-MVDR beamformer for stereo noise reduction in hearing aids
Cornelis et al. Reduced-bandwidth multi-channel Wiener filter based binaural noise reduction and localization cue preservation in binaural hearing aids
Maj et al. SVD-based optimal filtering for noise reduction in dual microphone hearing aids: a real time implementation and perceptual evaluation
Moore et al. Improving robustness of adaptive beamforming for hearing devices
EP3148217B1 (fr) Procédé de fonctionnement d'un système auditif binauriculaire
Shimoyama et al. Multiple acoustic source localization using ambiguous phase differences under reverberative conditions

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20010529

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

A4 Supplementary search report drawn up and despatched

Effective date: 20030721

RIC1 Information provided on ipc code assigned before grant

Ipc: 7H 04R 3/00 B

Ipc: 7H 04B 15/00 B

Ipc: 7H 03B 29/00 B

Ipc: 7A 61F 11/06 B

Ipc: 7H 04R 1/10 B

Ipc: 7H 04R 25/00 A

17Q First examination report despatched

Effective date: 20040903

17Q First examination report despatched

Effective date: 20040903

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 69939272

Country of ref document: DE

Date of ref document: 20080918

Kind code of ref document: P

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative's name: SERVOPATENT GMBH

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080806

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081117

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DK

Payment date: 20081212

Year of fee payment: 10

Ref country code: CH

Payment date: 20081216

Year of fee payment: 10

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080806

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080806

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080806

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090106

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081130

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20081210

Year of fee payment: 10

26N No opposition filed

Effective date: 20090507

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080806

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20090731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081116

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081106

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081116

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: DK

Ref legal event code: EBP

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20091116

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080806

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091130

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081107

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091116

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091130

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20101110

Year of fee payment: 12

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081130

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 69939272

Country of ref document: DE

Effective date: 20120601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120601