EP1305975B1 - Adaptive microphone array system with preserving binaural cues

Info

Publication number
EP1305975B1
EP1305975B1 (application EP20010942048 / EP01942048A)
Authority
EP
EUROPEAN PATENT OFFICE
Prior art keywords
noise
data
adaptive
left
right
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP20010942048
Other languages
German (de)
French (fr)
Other versions
EP1305975A2 (en)
EP1305975A4 (en)
Inventor
Fa-Long Luo
Jun Yang
Brent Edwards
Nick Michael
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GN Hearing AS
Original Assignee
GN Hearing AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US593728
Priority to US59372800A
Application filed by GN Hearing AS
Priority to PCT/US2001/018416 (WO2002003749A2)
Publication of EP1305975A2
Publication of EP1305975A4
Application granted
Publication of EP1305975B1
Application status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 25/00 Deaf-aid sets providing an auditory perception; Electric tinnitus maskers providing an auditory perception
    • H04R 25/55 Deaf-aid sets providing an auditory perception; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R 25/552 Binaural
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00 Circuits for transducers, loudspeakers or microphones
    • H04R 3/005 Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 25/00 Deaf-aid sets providing an auditory perception; Electric tinnitus maskers providing an auditory perception
    • H04R 25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R 25/407 Circuits for combining signals of a plurality of transducers

Abstract

A microphone system using left and right microphones provides noise reduction while preserving binaural cues by using two adaptive filters (56, 58). A noise-enhanced reference signal (46) is sent to the two adaptive filters (56, 58). The output of each adaptive filter is combined with the left or right microphone signal, respectively, to produce left and right output signals. These left and right output signals are used as the error signals to modify the coefficients of the adaptive filters (56, 58). Because two adaptive filters are used, each filter can operate independently, which helps preserve the binaural cues.

Description

    Background of the Invention
  • The present invention relates to adaptive microphone array systems.
  • The combination of multiple-microphone-based spatial processing with binaural listening is receiving increasing attention in many application fields, such as hearing aids, because this combination should be able to provide both the spatial filtering benefits of a microphone array and the natural benefits of binaural listening for sound-localization ability and speech intelligibility. Typically, a microphone is positioned on each side of a user's head. Ideally, the system is designed such that the two microphones can be used to improve the signal-to-noise ratio as well as to maintain the binaural cues.
  • One proposed system is described in the reference, Microphone-Array Hearing Aids With Binaural Output - Part I: Fixed-Processing Systems, and Part II: A Two-Microphone Adaptive System, IEEE Transactions on Speech and Audio Processing, Vol. 5, No. 6 (Nov. 1997), pp. 529-551, and shown in Fig. 1. In this example, XR(n) is the microphone signal received at the right ear, and XL(n) is the microphone signal received at the left ear. A low-pass filter and a high-pass filter with cut-off frequency fc are used in each channel. The outputs fR(n) and fL(n) of the two high-pass filters are sent to an adaptive processor 22 whose output is Y(n). The outputs of the two low-pass filters 24 and 26 are delayed in delays 28 and 30 and combined with the output Y(n) of the adaptive processor 22 to provide binaural outputs ZR(n) and ZL(n). In this prior-art scheme, the combination of adaptive array processing with binaural listening is accomplished by dividing the frequency spectrum: the low-pass portion of the spectrum is devoted to binaural processing, and the high-pass portion is devoted to adaptive array processing. The binaural cues of signal components above the cut-off frequency are lost in this system. Likewise, the benefits of adaptive array processing for frequencies below the cut-off frequency are also lost. Furthermore, two low-pass filters and two high-pass filters are required. More importantly, appropriate equalization processing between the adaptive array output and the outputs of the low-pass filters is required to avoid unexpected artifacts that result from a simple cutoff of the spectrum and simple summation. These problems make this prior system complicated and prevent it from being a practical solution for achieving maximum array-processing and binaural-listening benefits.
  • It is desired to have an improved adaptive array microphone system that maintains binaural cues and provides spatial filtering benefits.
  • Summary of the Present Invention
  • The present invention comprises a microphone system using two adaptive filters, each receiving the same reference signal derived from two ear microphones but having different primary and error (filter adjustment) signals. The primary signals are preferably from the right and left microphone output signals, respectively.
  • One embodiment of the present invention comprises an apparatus including a noise-enhanced data producing unit receiving left- and right-ear microphone data and producing noise-enhanced data. A right adaptive unit receives the right microphone data and the noise-enhanced data, and produces a reduced-noise right data output. The right adaptive unit includes a first adaptive filter receiving the noise-enhanced data. A left adaptive unit receives the left microphone data and the noise-enhanced data. The left adaptive unit produces reduced-noise left data output. The left adaptive unit includes a second adaptive filter receiving the noise-enhanced data.
  • Another embodiment of the present invention is a method including the steps of: calculating the noise-enhanced data from the left and right microphone data; adaptively filtering the noise-enhanced data to produce first filtered noise-enhanced data; combining the first filtered noise-enhanced data with right microphone data to produce a reduced-noise right data output; adaptively filtering the noise-enhanced data to produce second filtered noise-enhanced data; and combining the second filtered noise-enhanced data with left microphone data to produce a reduced-noise left data output.
  • Brief Description of the Drawings
    • Fig. 1 is a diagram of a prior-art microphone system.
    • Fig. 2 is a diagram of a microphone system of one embodiment of the present invention.
    • Fig. 3 is a diagram of one embodiment of the present invention implementing the system of Fig. 2.
    • Fig. 4 is a flow chart illustrating the operation of one embodiment of the present invention.
    Detailed Description of the Preferred Embodiment
  • Fig. 2 is a diagram illustrating the functional portions of the microphone system of the present invention. XR(n) 42 comprises data from the right microphone. XL(n) 44 comprises the data from the left microphone. Data r(n) is derived from these two data signals. In a preferred embodiment, a noise-enhanced signal unit 48 produces an output, r(n), with enhanced noise. In one embodiment, the noise-enhanced signal unit 48 includes a summing unit 50 which subtracts one of the microphone signals from the other. This produces a noise-enhanced signal since sound coming from the front, including presumably the desired speech signal, will reach the left and right microphones at nearly the same time, thus forming a null in the output response at the front.
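  • As a minimal illustration of this delay-and-subtract reference (a sketch in Python, assuming time-aligned, equally scaled sample buffers from the two microphones; the function and variable names are illustrative, not taken from the patent):

        import numpy as np

        def noise_reference(x_right, x_left):
            """Noise-enhanced reference r(n) = XR(n) - XL(n).

            A source directly in front reaches both microphones at nearly
            the same time, so it largely cancels in the difference; the
            remainder is dominated by off-axis noise.
            """
            return np.asarray(x_right, dtype=float) - np.asarray(x_left, dtype=float)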
  • The noise-enhanced signal is applied to the right adaptive unit 52 and the left adaptive unit 54. Preferably, the right adaptive unit includes a first adaptive filter 56 receiving the noise-enhanced signal r(n) and producing an output aR(n). The right adaptive unit 52 preferably receives XR(n) as the primary signal used to adjust the adaptive filter 56. In one embodiment, the unit 52 also includes a summing unit 59, which subtracts the output of the adaptive filter 56 from the right microphone signal XR(n) to produce the output ZR(n). The left adaptive unit also includes a second adaptive filter 58. The second adaptive filter also receives the reference signal r(n) and produces an output aL(n). The left adaptive unit 54 receives the left microphone output XL(n) as the primary signal used to modify the coefficients of the second adaptive filter.
  • Note that although the first and second adaptive filters receive the same reference signal, the coefficients of the adaptive filters are different since the primary and error signals used to modify the coefficients are different for the first and second adaptive filters.
  • As described below, the adaptive filters can be of a variety of different types which are known in the art. In one preferred embodiment, the adaptive filters use the same algorithm but, as described above, have different primary signals.
  • One embodiment of the system of the present invention is described in the following mathematical description. The received signals at the right ear microphone and the left ear microphone are XR(n) and XL(n), which consist of the target speech parts SR(n), SL(n) and the noise parts NR(n), NL(n), respectively, that is

    XR(n) = SR(n) + NR(n)

    and

    XL(n) = SL(n) + NL(n)

  • The reference signal r(n) = XR(n) - XL(n) is sent to two adaptive filters with weights WR(n) = [WR1(n), WR2(n), ..., WRN(n)]^T and WL(n) = [WL1(n), WL2(n), ..., WLN(n)]^T, and these two adaptive filters provide the outputs aR(n) and aL(n) as follows, respectively:

    aR(n) = Σ_{m=1}^{N} WRm(n) r(n - m + 1) = WR(n)^T R(n)

    and

    aL(n) = Σ_{m=1}^{N} WLm(n) r(n - m + 1) = WL(n)^T R(n)

    where R(n) = [r(n), r(n-1), ..., r(n-N+1)]^T and N is the length of the two adaptive filters. Note that the lengths of the two filters are selected here to be the same for simplicity, but they could be different. The primary signals at the two adaptive filters are XR(n) and XL(n), and the two outputs ZR(n) and ZL(n), for the right ear and the left ear respectively, are obtained by

    ZR(n) = XR(n) - aR(n)

    and

    ZL(n) = XL(n) - aL(n)

  • The weights of these two adaptive filters are adjusted so as to minimize the average power of the two outputs, that is,

    min_{WR(n)} E[ZR(n)^2] = min_{WR(n)} E[(XR(n) - aR(n))^2]

    and

    min_{WL(n)} E[ZL(n)^2] = min_{WL(n)} E[(XL(n) - aL(n))^2]

    In the ideal case, r(n) contains only the noise part, and the two adaptive filters can provide the two outputs aR(n) and aL(n) by minimizing the two foregoing equations. Since the reference signal r(n) does not include the signal portion, the adaptive filters will adjust aR(n) and aL(n) to remove the noise portion from the primary signals, that is, aR(n) ≅ NR(n) and aL(n) ≅ NL(n). As a result, the two system outputs ZR(n) and ZL(n) will approximate the signal parts SR(n) and SL(n), respectively. This means that the above processing not only realizes maximum noise reduction by the two adaptive filters but also preserves the binaural cues contained in the target signal parts SR(n) and SL(n).
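  • As a compact sketch of this signal path (Python; it assumes the filter weights WR(n) and WL(n) are already available and simply applies them sample by sample, so the adaptation itself is omitted here; all names are illustrative):

        import numpy as np

        def filter_outputs(x_r, x_l, w_r, w_l):
            """Apply the two adaptive filters to the shared reference r(n).

            x_r, x_l : right/left microphone samples, shape (T,)
            w_r, w_l : current filter weights WR(n), WL(n), shape (N,)
            Returns the binaural outputs ZR(n), ZL(n).
            """
            x_r = np.asarray(x_r, dtype=float)
            x_l = np.asarray(x_l, dtype=float)
            w_r = np.asarray(w_r, dtype=float)
            w_l = np.asarray(w_l, dtype=float)
            r = x_r - x_l                       # shared noise-enhanced reference
            z_r = np.zeros_like(x_r)
            z_l = np.zeros_like(x_l)
            buf = np.zeros(len(w_r))            # R(n) = [r(n), ..., r(n-N+1)]
            for n in range(len(r)):
                buf = np.roll(buf, 1)
                buf[0] = r[n]
                z_r[n] = x_r[n] - w_r @ buf     # ZR(n) = XR(n) - aR(n)
                z_l[n] = x_l[n] - w_l @ buf     # ZL(n) = XL(n) - aL(n)
            return z_r, z_l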
  • The learning algorithm can be an adaptive algorithm such as the LS, RLS, TLS and LMS algorithms. The LMS version of the coefficient update for the two adaptive filters is

    WR(n+1) = WR(n) + λ R(n) ZR(n)

    and

    WL(n+1) = WL(n) + λ R(n) ZL(n)

    where λ is a step parameter, a positive constant less than 2/P, and P is the power of the input r(n) of these two adaptive filters. For better performance and faster convergence, λ can also be made time varying, as in the normalized LMS algorithm, that is,

    WR(n+1) = WR(n) + (μ / ||R(n)||^2) R(n) ZR(n)

    and

    WL(n+1) = WL(n) + (μ / ||R(n)||^2) R(n) ZL(n)

    where μ is a positive constant less than 2.
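  • The normalized LMS update above can be written as a small routine shared by both filters (Python; the regularization term eps, added to avoid division by zero on silent input, is a common implementation choice and not part of the patent text):

        import numpy as np

        def nlms_update(w, ref_vec, error, mu=0.5, eps=1e-8):
            """One normalized LMS step: W(n+1) = W(n) + (mu / ||R(n)||^2) R(n) Z(n).

            w       : current weight vector W(n)
            ref_vec : reference vector R(n) = [r(n), ..., r(n-N+1)]
            error   : current output sample Z(n), used as the error signal
            mu      : positive step-size constant less than 2
            """
            w = np.asarray(w, dtype=float)
            ref_vec = np.asarray(ref_vec, dtype=float)
            norm = ref_vec @ ref_vec + eps
            return w + (mu / norm) * error * ref_vec

        # The same routine serves both filters, with different error signals:
        #   w_r = nlms_update(w_r, ref_vec, z_r_sample)
        #   w_l = nlms_update(w_l, ref_vec, z_l_sample)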
  • Based on the frame-by-frame processing configuration, a further modified algorithm can be obtained as follows (denoted Equation 1 and Equation 2):

    WRk(n+1) = WRk(n) + (μ / ||R(n)||^2) R(n) ZRk(n)     (Equation 1)

    and

    WLk(n+1) = WLk(n) + (μ / ||R(n)||^2) R(n) ZLk(n)     (Equation 2)

    where k represents the k'th repetition within the same frame.
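  • One reading of this frame-by-frame variant is sketched below (Python; the number of repetitions K per frame, the step size mu, and the regularization eps are illustrative choices, not values given in the patent):

        import numpy as np

        def adapt_frame(x_r_frame, x_l_frame, w_r, w_l, mu=0.5, K=3, eps=1e-8):
            """Run K repeated NLMS passes over one frame (Equations 1 and 2),
            refining the weights of both filters on the same frame of data."""
            x_r = np.asarray(x_r_frame, dtype=float)
            x_l = np.asarray(x_l_frame, dtype=float)
            w_r = np.asarray(w_r, dtype=float).copy()
            w_l = np.asarray(w_l, dtype=float).copy()
            r = x_r - x_l                               # shared reference r(n)
            for _ in range(K):                          # k'th repetition in the frame
                buf = np.zeros(len(w_r))                # R(n)
                for n in range(len(r)):
                    buf = np.roll(buf, 1)
                    buf[0] = r[n]
                    z_r = x_r[n] - w_r @ buf            # ZRk(n)
                    z_l = x_l[n] - w_l @ buf            # ZLk(n)
                    norm = buf @ buf + eps
                    w_r = w_r + (mu / norm) * z_r * buf # Equation 1
                    w_l = w_l + (mu / norm) * z_l * buf # Equation 2
            return w_r, w_l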
  • In comparison with the prior-art scheme of Fig. 1, the advantages of the present invention include the following. First, no binaural cue of the target speech is lost, because the two system outputs ZR(n) and ZL(n) approximate the signal parts SR(n) and SL(n), respectively; in the prior scheme, the binaural cues of signal components above the cut-off frequency are lost. Second, the array processing benefit for components below the cut-off frequency is preserved because the frequency spectrum is not divided; in the prior scheme this benefit is also lost. Third, no low-pass or high-pass filters are required in the present invention, while the prior scheme requires two low-pass filters and two high-pass filters; consequently, the related equalization processing is not required either. The only added cost of implementing the present invention is one additional adaptive filter. The two adaptive filters can use the same structure and adaptive algorithm, which greatly simplifies hardware implementation because the related assembly code and machine code of the two adaptive filters can be shared.
  • Fig. 3 is a diagram that illustrates an embodiment of the system of the present invention. The system 70 includes a right ear microphone 72 and a left ear microphone 74. The converters 76 and 78 convert the analog signals into digital signals which are sent to the processor 80. In a preferred embodiment, the processor 80 is a digital signal processor. The processor 80 loads the adaptive microphone array program 82 from memory 84. The adaptive microphone array program can implement the functional blocks as shown in Fig. 2. Other programs 86 such as hearing aid algorithms can also be stored in the memory 84 for loading into the processor 80. The output signals of the processor 80 can be sent to speakers if a hearing-aid system is used.
  • Fig. 4 is a flow chart that illustrates the operation of one embodiment of the present invention. In step 90, an enhanced noise signal is calculated using left and right microphone samples. This enhanced noise signal can be the reference signal constructed by subtracting one of the microphone samples from the other microphone sample. In step 92, the noise-enhanced signal and the right-ear signal are used to produce a noise-reduced right-ear signal. This is preferably done by adaptively filtering the noise-enhanced signal and combining the filtered signal with the right-ear microphone signal to produce an output; this output is used as the error signal for the adaptive filter. In step 94, the noise-enhanced signal and the left-ear signal are used to produce a noise-reduced left-ear signal. The output of a second adaptive filter is combined with the left-ear signal, producing the noise-reduced left-ear signal. The noise-reduced left-ear signal is then used as an error signal to adjust the coefficients of the second adaptive filter. Note that the order of steps 92 and 94 is not important. In step 96, the left and right noise-reduced signals are used to update the coefficients of the adaptive filters used in the producing steps 92 and 94.
  • Any kind of adaptive algorithm, such as LMS-based, LS-based, TLS-based, RLS-based and related algorithms, can be used in this scheme. The weights can also be obtained by directly solving the estimated Wiener-Hopf equations. Moreover, repeated adaptive algorithms like Equations 1 and 2 and adaptive lattice filters can be used in this scheme as well. In one embodiment, the lengths of the two adaptive filters are adjustable and can take different values. Also, the step parameters in the related adaptive algorithms for the two adaptive filters can take different values. Trade-offs between performance and cost (complexity, etc.) in practical applications determine which algorithm is used.
  • The two adaptive filters in Fig. 2 can be nonlinear filters and can be implemented by some neural networks such as multi-layer perceptron networks, radial basis function networks, high-order neural networks, etc. The corresponding learning algorithms in neural networks such as the back-propagation algorithm can be used for the adaptive filters.
  • A matching filter could be added in either the left-ear channel or the right-ear channel before obtaining the difference signal r(n), so as to compensate for the magnitude mismatch between the two ear microphones. The matching filter can be either a finite impulse response (FIR) filter or an infinite impulse response (IIR) filter, and it can be either fixed or adaptive.
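  • As a hedged example of such a matching stage (Python; a fixed FIR filter applied to one channel before the difference is formed; the coefficients shown are placeholders that would in practice come from a calibration measurement, which the patent does not specify):

        import numpy as np

        def apply_matching_filter(x_channel, h_match):
            """Apply a fixed FIR matching filter to one ear channel before
            forming r(n), to compensate for a magnitude mismatch between
            the two ear microphones."""
            x = np.asarray(x_channel, dtype=float)
            h = np.asarray(h_match, dtype=float)
            return np.convolve(x, h)[: len(x)]       # causal FIR, original length

        # Hypothetical placeholder coefficients (e.g., a mild gain correction):
        h_match = np.array([0.95, 0.03, 0.01])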
  • In one embodiment, to reduce the target-signal cancellation problem that exists in most adaptive array processing algorithms, a speech pause detection system is used and the weights of the two adaptive filters are updated only during pauses in the speech.
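  • A sketch of this gating follows (Python; the flag speech_active is assumed to come from a separate speech pause detector, which the patent does not detail; the update itself is the NLMS step shown earlier):

        import numpy as np

        def gated_nlms_update(w, ref_vec, error, speech_active, mu=0.5, eps=1e-8):
            """Update the filter weights only during speech pauses, freezing
            them while target speech is active to limit target-signal
            cancellation."""
            w = np.asarray(w, dtype=float)
            if speech_active:
                return w                             # freeze adaptation during speech
            ref_vec = np.asarray(ref_vec, dtype=float)
            norm = ref_vec @ ref_vec + eps
            return w + (mu / norm) * error * ref_vec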
  • Either directional or omnidirectional microphones could be used in this invention. Pre-processing methods could also be used to improve the signal-to-noise ratio of the two primary signals XR(n) and XL(n). For example, more than one microphone can be used at each ear, and these microphone signals can be combined to produce the signals XR(n) and XL(n). This pre-processing can be either fixed or adaptive, and can operate in either the spatial domain or the temporal domain.

Claims (18)

  1. An apparatus comprising:
    a noise-enhanced data producing unit receiving left and right ear microphone data and producing noise-enhanced data;
    a right adaptive unit (52) receiving the right microphone data (42) and the noise-enhanced data, the right adaptive unit (52) producing reduced-noise right data output, the right adaptive unit (52) including a first adaptive filter (56) receiving the noise-enhanced data; and
    a left adaptive unit (54) receiving the left microphone data (44) and the noise-enhanced data, the left adaptive unit (54) producing reduced-noise left data output, the left adaptive unit (54) including a second adaptive filter (58) receiving the noise-enhanced data.
  2. The apparatus of Claim 1 wherein the first adaptive filter (56) uses the reduced-noise right data output as an error signal to modify the coefficients of the first adaptive filter (56).
  3. The apparatus of Claim 1 wherein the second adaptive filter (58) uses the reduced-noise left data output as an error signal to modify the coefficients of the second adaptive filter (58).
  4. The apparatus of Claim 1 wherein the apparatus is implemented as a software program on a processor-based system.
  5. The apparatus of Claim 1 wherein the noise-enhanced-data-producing unit comprises a summer (50) to subtract one of the ear's microphone data from the other ear's microphone data.
  6. The apparatus of Claim 1 wherein the right adaptive unit (52) includes a summer unit to subtract the output of the first adaptive filter (56) from the right microphone data (42) to produce the reduced-noise right-data output.
  7. The apparatus of Claim 1 wherein the left adaptive unit (54) includes a summer unit to subtract the output of the second adaptive filter (58) from the left microphone data (44) to produce the reduced-noise left-data output.
  8. A method comprising:
    calculating noise-enhanced data from left and right microphone data;
    adaptive filtering the noise-enhanced data to produce first filtered noise-enhanced data;
    combining the first filtered noise-enhanced data with right microphone data (42) to produce a reduced noise right data output;
    adaptive filtering the noise-enhanced data to produce second filtered noise-enhanced data;
    and combining the second filtered noise-enhanced data with left microphone data (44) to produce a reduced noise left data output.
  9. The method of Claim 8 wherein the method is implemented on a processor.
  10. The method of Claim 8 wherein the noise-enhanced data-calculating step comprises subtracting one of the microphone data from the other microphone data.
  11. The method of Claim 8 wherein the first adaptive filtering step uses the reduced-noise right data output as an error signal to modify the coefficients of the first adaptive filter (56).
  12. The method of Claim 8 wherein the second adaptive filtering step uses the reduced-noise left data output as an error signal to modify the coefficients of the second adaptive filter (58).
  13. The method of Claim 8 wherein the combining step comprises subtracting the filtered noise-enhanced data from the microphone data.
  14. A computer-readable medium containing a program which executes the following procedure:
    calculating noise-enhanced data from left and right microphone data;
    adaptive filtering the noise-enhanced data to produce first filtered noise-enhanced data;
    combining the first filtered noise-enhanced data with right microphone data (42) to produce a reduced noise right data output;
    adaptive filtering the noise-enhanced data to produce second filtered noise-enhanced data; and
    combining the second filtered noise-enhanced data with left microphone data (44) to produce a reduced noise left data output.
  15. The computer-readable medium of Claim 14 wherein the procedure is implemented on a processor.
  16. The computer-readable medium of Claim 14 wherein the noise-enhanced data-calculating step comprises subtracting one of the microphone data from the other microphone data.
  17. The computer-readable medium of Claim 14 wherein the adaptive filtering step uses the reduced-noise data output as an error signal to modify the coefficients of the adaptive filter.
  18. The computer-readable medium of Claim 14 wherein the combining step comprises subtracting the filtered noise-enhanced data from the microphone data.
EP20010942048 2000-06-13 2001-06-05 Adaptive microphone array system with preserving binaural cues Active EP1305975B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US593728 1990-10-05
US59372800A true 2000-06-13 2000-06-13
PCT/US2001/018416 WO2002003749A2 (en) 2000-06-13 2001-06-05 Adaptive microphone array system with preserving binaural cues

Publications (3)

Publication Number Publication Date
EP1305975A2 EP1305975A2 (en) 2003-05-02
EP1305975A4 EP1305975A4 (en) 2007-09-19
EP1305975B1 true EP1305975B1 (en) 2011-11-23

Family

ID=24375900

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20010942048 Active EP1305975B1 (en) 2000-06-13 2001-06-05 Adaptive microphone array system with preserving binaural cues

Country Status (4)

Country Link
EP (1) EP1305975B1 (en)
AT (1) AT535103T (en)
DK (1) DK1305975T3 (en)
WO (1) WO2002003749A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9560451B2 (en) 2014-02-10 2017-01-31 Bose Corporation Conversation assistance system

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100669820B1 (en) * 2005-05-16 2007-01-16 이시동 Directional sound receiving device
US8139787B2 (en) 2005-09-09 2012-03-20 Simon Haykin Method and device for binaural signal enhancement
GB0609248D0 (en) * 2006-05-10 2006-06-21 Leuven K U Res & Dev Binaural noise reduction preserving interaural transfer functions
EP2611215B1 (en) * 2011-12-30 2016-04-20 GN Resound A/S A hearing aid with signal enhancement
US8891777B2 (en) * 2011-12-30 2014-11-18 Gn Resound A/S Hearing aid with signal enhancement
CN107005778A (en) * 2014-12-04 2017-08-01 高迪音频实验室公司 Audio signal processing apparatus and method for binaural rendering

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE68921890T2 (en) * 1988-07-08 1995-07-20 Adaptive Audio Ltd Sound reproduction.
US5633935A (en) * 1993-04-13 1997-05-27 Matsushita Electric Industrial Co., Ltd. Stereo ultradirectional microphone apparatus
US5675659A (en) * 1995-12-12 1997-10-07 Motorola Methods and apparatus for blind separation of delayed and filtered sources

Also Published As

Publication number Publication date
AT535103T (en) 2011-12-15
EP1305975A2 (en) 2003-05-02
WO2002003749A3 (en) 2002-06-20
DK1305975T3 (en) 2012-02-13
EP1305975A4 (en) 2007-09-19
WO2002003749A2 (en) 2002-01-10

Similar Documents

Publication Publication Date Title
US7464029B2 (en) Robust separation of speech signals in a noisy environment
US6072885A (en) Hearing aid device incorporating signal processing techniques
US5383164A (en) Adaptive system for broadband multisignal discrimination in a channel with reverberation
US7035415B2 (en) Method and device for acoustic echo cancellation combined with adaptive beamforming
US20080201138A1 (en) Headset for Separation of Speech Signals in a Noisy Environment
Hamacher et al. Signal processing in high-end hearing aids: state of the art, challenges, and future trends
US20030063759A1 (en) Directional audio signal processing using an oversampled filterbank
Klasen et al. Binaural noise reduction algorithms for hearing aids that preserve interaural time delay cues
US6434246B1 (en) Apparatus and methods for combining audio compression and feedback cancellation in a hearing aid
US5661813A (en) Method and apparatus for multi-channel acoustic echo cancellation
US20050111683A1 (en) Hearing compensation system incorporating signal processing techniques
US20040196994A1 (en) Binaural signal enhancement system
US8442251B2 (en) Adaptive feedback cancellation based on inserted and/or intrinsic characteristics and matched retrieval
US20140056435A1 (en) Noise estimation for use with noise reduction and echo cancellation in personal communication
US6917688B2 (en) Adaptive noise cancelling microphone system
US5848171A (en) Hearing aid device incorporating signal processing techniques
US20030169891A1 (en) Low-noise directional microphone system
US20080019548A1 (en) System and method for utilizing omni-directional microphones for speech enhancement
US5774562A (en) Method and apparatus for dereverberation
US7020291B2 (en) Noise reduction method with self-controlling interference frequency
US6983055B2 (en) Method and apparatus for an adaptive binaural beamforming system
US20040125962A1 (en) Method and apparatus for dynamic sound optimization
US20030053646A1 (en) Listening device
US6498858B2 (en) Feedback cancellation improvements
US20030223597A1 (en) Adapative noise compensation for dynamic signal enhancement

Legal Events

Date Code Title Description
17P Request for examination filed

Effective date: 20030113

AK Designated contracting states:

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent to

Extension state: AL LT LV MK RO SI

RIN1 Inventor (correction)

Inventor name: MICHAEL, NICK

Inventor name: LUO, FA-LONG

Inventor name: YANG, JUN

Inventor name: EDWARDS, BRENT

RIC1 Classification (correction)

Ipc: H04R 25/00 20060101ALI20070813BHEP

Ipc: G01S 3/80 20060101ALI20070813BHEP

Ipc: H04R 3/00 20060101AFI20020704BHEP

A4 Despatch of supplementary search report

Effective date: 20070820

17Q First examination report

Effective date: 20071115

RAP1 Transfer of rights of an ep published application

Owner name: GN RESOUND A/S

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

AK Designated contracting states:

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative's name: PETER RUTZ

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 60145704

Country of ref document: DE

Effective date: 20120119

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20111123

PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120323

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20111123

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20111123

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120224

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20111123

PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20111123

PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20111123

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 535103

Country of ref document: AT

Kind code of ref document: T

Effective date: 20111123

26N No opposition filed

Effective date: 20120824

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 60145704

Country of ref document: DE

Effective date: 20120824

PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20111123

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120630

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120305

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120605

PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20111123

PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20111123

PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120605

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 16

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 17

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 18

PGFP Postgrant: annual fees paid to national office

Ref country code: CH

Payment date: 20180618

Year of fee payment: 18

Ref country code: DE

Payment date: 20180619

Year of fee payment: 18

REG Reference to a national code

Ref country code: CH

Ref legal event code: PCAR

Free format text: NEW ADDRESS: ALPENSTRASSE 14 POSTFACH 7627, 6302 ZUG (CH)

PGFP Postgrant: annual fees paid to national office

Ref country code: FR

Payment date: 20180615

Year of fee payment: 18

PGFP Postgrant: annual fees paid to national office

Ref country code: GB

Payment date: 20180618

Year of fee payment: 18

PGFP Postgrant: annual fees paid to national office

Ref country code: DK

Payment date: 20180618

Year of fee payment: 18