EP2120484B1 - Method for operating a hearing device and hearing device - Google Patents

Method for operating a hearing device and hearing device Download PDF

Info

Publication number
EP2120484B1
Authority
EP
European Patent Office
Prior art keywords
hearing aid
prescribable
association
acoustic
acoustic signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP09155859.3A
Other languages
German (de)
French (fr)
Other versions
EP2120484A3 (en)
EP2120484A2 (en)
Inventor
Ulrich Kornagel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sivantos Pte Ltd
Original Assignee
Sivantos Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sivantos Pte Ltd filed Critical Sivantos Pte Ltd
Publication of EP2120484A2 publication Critical patent/EP2120484A2/en
Publication of EP2120484A3 publication Critical patent/EP2120484A3/en
Application granted granted Critical
Publication of EP2120484B1 publication Critical patent/EP2120484B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40Arrangements for obtaining a desired directivity characteristic
    • H04R25/407Circuits for combining signals of a plurality of transducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/43Signal processing in hearing aids to enhance the speech intelligibility
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40Arrangements for obtaining a desired directivity characteristic
    • H04R25/405Arrangements for obtaining a desired directivity characteristic by combining a plurality of transducers

Definitions

  • The invention relates to a method for operating a hearing aid as specified in claim 1 and to a hearing aid as specified in claim 6.
  • In a conversation between people, noise and unwanted acoustic signals are omnipresent. These interfere with the human voice of a person or with a desired acoustic signal.
  • Hearing aid wearers are particularly susceptible to noise and unwanted acoustic signals. Conversations in the background, acoustic interference from electronic devices such as mobile phones, and ambient noise or sounds can make it difficult for a person with a hearing aid to understand a desired speaker. Reducing the noise level in an acoustic signal, coupled with an automatic focus on a desired acoustic signal component, can significantly improve the performance of a digital speech processor as used in modern hearing aids.
  • Hearing aids with digital signal processing contain one or more microphones, A/D converters, digital signal processors and loudspeakers.
  • As a rule, digital signal processors divide the incoming signals into a plurality of frequency bands. Within each band, amplification and processing can be adjusted individually to the requirements of the particular hearing aid wearer.
  • Algorithms for feedback suppression and noise reduction are also available in digital signal processing, but they have disadvantages.
  • A disadvantage of the currently available noise-reduction algorithms is, for example, that they provide only a limited improvement when speech and background noise lie in the same frequency region, because they are then unable to distinguish between spoken language and background noise.
  • Blind source separation (BSS) uses statistical signal processing of at least two microphone signals to separate source signals without prior knowledge of their geometric arrangement; with n microphones, up to n sources can in principle be separated.
  • Directional microphone control in the sense of BSS is subject to ambiguity as soon as several competing useful sources, e.g. speakers, are present simultaneously.
  • In principle, BSS permits the separation of the different sources, provided they are spatially separated.
  • The ambiguity, however, reduces the potential benefit of a directional microphone, although it is precisely in such scenarios that a directional microphone can be very useful for improving speech intelligibility.
  • The hearing aid, or the mathematical BSS algorithms, must in principle decide which of the signals generated by the BSS should most advantageously be passed on to the hearing aid wearer. This is a task the hearing aid cannot solve in principle, because the choice of the desired acoustic source depends directly on the momentary intention of the hearing aid wearer and therefore cannot be available as an input variable to a selection algorithm. The selection made by such an algorithm must thus rely on assumptions about the listener's probable intention.
  • From EP 1 912 472 A1, a method for operating a hearing aid and a hearing aid are known. Electrical acoustic signals are generated from a recorded ambient sound, from which signal processing identifies and selects an electrical voice signal with a high probability of speech. The electrical speech signal is selectively taken into account in an output sound of the hearing aid so that it stands out acoustically for the hearing aid wearer at least in comparison with another acoustic source and is thus perceived better by the wearer.
  • EP 1 912 474 A1 discloses a method for operating a hearing aid and a hearing aid. Electrical acoustic signals are generated from a recorded ambient sound, from which signal processing identifies and selects an electrical speaker signal by means of a database of voice profiles of preferred speakers. The electrical speaker signal is selectively taken into account in an output sound of the hearing aid so that it stands out acoustically for the hearing aid wearer at least in comparison with another acoustic source and is thus perceived better by the wearer.
  • The stated object is achieved by the method of independent claim 1 and the hearing aid of independent claim 6.
  • The invention comprises a method for operating a hearing device in which the hearing device generates electrical acoustic signals from a recorded ambient sound. These are weighted according to their degree of membership of a prescribable acoustic signal class and mixed together to form an output sound signal; the higher the degree of membership, the greater or the smaller the weight of the acoustic signal.
  • The advantage of this is that a desired signal from a multitude of ambient sound signals can be presented to the hearing aid user.
  • The degree of membership can be determined from features such as volume, frequency range, fundamental voice frequency, cepstral coefficients and/or the temporal course of the acoustic signals. This provides high flexibility.
  • The prescribable acoustic signal class may include the classes speech or human voice, in a prescribable frequency band, male voice, female voice, child's voice, voice of a prescribable person, music and ambient noise. This offers the hearing device user a wide choice.
  • The prescribable acoustic signal class may also comprise any combination of these classes.
  • The electrical acoustic signals are generated from the ambient sound by means of a blind source separation method. This results in good separation of the sound signals.
  • The degree of membership is determined by a feature analysis of the electrical acoustic signals, in which a probability of belonging to a prescribable acoustic signal class is ascertained for each electrical acoustic signal.
  • The advantage of this is the simple mathematical weighting.
  • A hearing aid is also provided, with at least one microphone for recording ambient sound and with a demixing unit for generating electrical acoustic signals from the recorded ambient sound.
  • The hearing device comprises a signal processing unit by which the acoustic signals can be weighted according to their degree of membership of a prescribable acoustic signal class and mixed together to form an output sound signal, the weight of an acoustic signal being the greater or the smaller, the higher its degree of membership. This allows "soft" switching between acoustic signal classes.
  • The demixing unit may comprise a blind source separation module.
  • The signal processing unit may comprise at least one classification module, at least one weight determination module, at least one multiplier and at least one adder.
  • The hearing device can comprise an acoustic signal class input unit with which the desired, prescribable acoustic signal class is transmitted to the hearing aid. This unit can be arranged on the hearing aid or in a remote control.
  • FIG. 1 shows a prior-art hearing aid 1 with three microphones 2 and a demixing unit 5 operating according to the blind source separation method.
  • Three signal sources generate three ambient sound signals s1, s2, s3, which are picked up by the three microphones 2 and converted into electrical microphone signals x1, x2, x3.
  • The three microphone signals x1, x2, x3 are each supplied to a signal input of the demixing unit 5.
  • In the demixing unit 5, the blind source separation process takes place, with the aid of which the ambient sound signals s1, s2, s3 can be reconstructed from the mixed electrical microphone signals x1, x2, x3.
  • A hearing device user can then choose between the three separately reproduced acoustic signals s1', s2', s3' by means of a selection switch 7 in a post-processor module 6.
  • In the example, the electrical acoustic signal s2' has been selected and passed on to a receiver 3.
  • The demixing unit 5 and the post-processor module 6 form a signal processing unit 4.
  • The receiver 3 emits an acoustic output signal s2'', which corresponds approximately to the ambient sound signal s2.
  • A hearing aid user does not always want such hard switching between different input signal sources.
  • FIG. 2 shows a hearing aid 1 with three microphones 2, a signal processing unit 4 and a receiver or loudspeaker 3.
  • Three ambient sound signals s1, s2, s3 are picked up by the microphones 2 and forwarded as microphone signals x1, x2, x3 to the signal processing unit 4.
  • The microphone signals x1, x2, x3 processed by the signal processing unit 4 are then forwarded to an input of the receiver 3 and presented to the hearing device user as an acoustic output sound signal s.
  • Within the signal processing unit 4, the microphone signals x1, x2, x3 are processed with the aid of a demixing unit 5 and passed on to the subsequent processing units as demixed electrical acoustic signals s1', s2', s3'.
  • Using an acoustic signal class input unit, a hearing device user can specify a preferred acoustic signal class. This specification is passed to the classification module 8 and processed there.
  • The preselected acoustic signal class may include, for example, a male voice, a female voice, a child's voice, a certain frequency range, human voice or speech in general, music, etc.
  • The classification module 8 calculates the probability with which each electrical acoustic signal s1', s2', s3' belongs to the specified acoustic signal class. This degree of membership is then weighted accordingly with the help of a weight determination module 9; for this purpose, the degrees of membership of the classified signals are routed from the outputs of the classification module 8 to the inputs of the weight determination module 9.
  • The weight determination module 9 determines the weights g1, g2, g3 such that the higher the ascertained degree of membership of the preselected class, the higher the weight chosen for the corresponding acoustic signal.
  • The weights g1, g2, g3 are routed to the respective inputs of the multipliers 10.
  • In the multipliers 10, the electrical acoustic signals s1', s2', s3' are multiplied by the weights g1, g2, g3.
  • The weighted electrical acoustic signals are passed to an adder 11, where they are summed and provided at the adder output. The electrical signal at the adder output is then converted in the receiver 3 into an output sound signal s (a minimal sketch of this weighting and mixing chain follows this list).
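The sketch below illustrates that weighting and mixing chain in Python; the membership values, signal data and function name are invented for illustration and are not taken from the patent.

```python
import numpy as np

def mix_by_membership(signals, membership, higher_is_louder=True):
    """Weight the demixed signals s1', s2', s3' by their degree of membership
    of the preselected acoustic signal class and add them up (adder 11).

    signals:    array of shape (n_sources, n_samples)
    membership: per-source degree of membership (e.g. class probabilities)
    """
    m = np.asarray(membership, dtype=float)
    if not higher_is_louder:           # the weight may also shrink as membership grows
        m = 1.0 - m
    g = m / (m.sum() + 1e-12)          # weights g1, g2, g3 (normalized)
    return g @ signals                 # output sound signal: sum_i g_i * s_i'

# Hypothetical example: source 2 is most likely to belong to the chosen class
signals = np.random.randn(3, 16000)
out = mix_by_membership(signals, membership=[0.1, 0.8, 0.1])
```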

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Description

The invention relates to a method for operating a hearing aid as specified in claim 1 and to a hearing aid as specified in claim 6.

In a conversation between people, noise and unwanted acoustic signals are omnipresent. These interfere with the human voice of a person or with a desired acoustic signal. Hearing aid wearers are particularly susceptible to noise and unwanted acoustic signals. Conversations in the background, acoustic interference from electronic devices such as mobile phones, and ambient noise or sounds can make it difficult for a person with a hearing aid to understand a desired speaker. Reducing the noise level in an acoustic signal, coupled with an automatic focus on a desired acoustic signal component, can significantly improve the performance of a digital speech processor as used in modern hearing aids.

Hearing aids with digital signal processing contain one or more microphones, A/D converters, digital signal processors and loudspeakers. As a rule, digital signal processors divide the incoming signals into a plurality of frequency bands. Within each band, amplification and processing can be adjusted individually to the requirements of the particular hearing aid wearer. Algorithms for feedback suppression and noise reduction are also available in digital signal processing, but they have disadvantages. A disadvantage of the currently available noise-reduction algorithms is, for example, that they provide only a limited improvement when speech and background noise lie in the same frequency region, because they are then unable to distinguish between spoken language and background noise. This is one of the most frequent tasks in acoustic signal processing: filtering one or several signals out of a set of different, overlapping acoustic signals. It is also referred to as the "cocktail party problem": the most diverse sounds, such as music and conversations, mix into an indefinable background. Nevertheless, a person without hearing impairment generally has little difficulty conversing with a partner in such a situation. It is therefore desirable for hearing aid wearers to be able to converse in such situations just as people without hearing impairment can.
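To illustrate the per-band adjustment mentioned above, here is a minimal Python sketch that splits a signal into frequency bands with an FFT and applies an individually prescribed gain to each band; the band edges, gains and sampling rate are invented example values, not something specified by the patent.

```python
import numpy as np

def apply_per_band_gains(x, fs, band_edges_hz, gains_db):
    """Split x into frequency bands via an FFT and apply one gain per band.

    The band edges and gains are illustrative fitting values, not taken from the patent.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    for (lo, hi), g_db in zip(band_edges_hz, gains_db):
        band = (freqs >= lo) & (freqs < hi)
        X[band] *= 10.0 ** (g_db / 20.0)      # dB gain -> linear factor
    return np.fft.irfft(X, n=len(x))

# Hypothetical example: three bands with individually prescribed gains
fs = 16000
x = np.random.randn(fs)                        # 1 s of dummy input
y = apply_per_band_gains(x, fs,
                         band_edges_hz=[(0, 500), (500, 2000), (2000, 8000)],
                         gains_db=[0.0, 6.0, 12.0])
```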

Acoustic signal processing offers spatial methods (e.g., directional microphones or beamforming), statistical methods (e.g., blind source separation, BSS) and mixed methods which can, among other things, use algorithms to separate one or several sources from a number of simultaneously active sound sources. BSS, for example, uses statistical signal processing of at least two microphone signals to separate source signals without prior knowledge of their geometric arrangement. When used in hearing aids, this method has advantages over conventional directional microphone solutions. In principle, a BSS method with n microphones can separate up to n sources, i.e., generate n output signals.
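The patent does not prescribe a particular BSS algorithm. As one common statistical realization, the sketch below uses independent component analysis (scikit-learn's FastICA) as a stand-in to recover up to n source estimates from n microphone signals; the sources and mixing matrix are invented, and the well-known permutation and scaling ambiguity of BSS is visible in the result.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs

# Three hypothetical, spatially separated sources (simple stand-ins for speakers/noise)
s = np.stack([np.sin(2 * np.pi * 220 * t),            # tonal source
              np.sign(np.sin(2 * np.pi * 3 * t)),     # low-rate square wave
              rng.standard_normal(t.size)], axis=1)   # noise source

A = rng.uniform(0.5, 1.5, size=(3, 3))   # unknown "room" mixing matrix
x = s @ A.T                              # three microphone signals x1, x2, x3

ica = FastICA(n_components=3, random_state=0)
s_hat = ica.fit_transform(x)             # demixed estimates s1', s2', s3'
# Note: BSS recovers sources only up to permutation and scaling, which is exactly
# the ambiguity the patent addresses by weighting instead of hard selection.
```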

Numerous methods for BSS are known from the literature; in them, sound sources are analyzed by evaluating at least two microphone signals. A good overview is given in the post-published patent specification DE 10 2006 047 982.

Directional microphone control in the sense of BSS is subject to ambiguity as soon as several competing useful sources, e.g. speakers, are present simultaneously. In principle, BSS permits the separation of the different sources, provided they are spatially separated. The ambiguity, however, reduces the potential benefit of a directional microphone, although it is precisely in such scenarios that a directional microphone can be very useful for improving speech intelligibility.

The hearing aid, or the mathematical BSS algorithms, must in principle decide which of the signals generated by the BSS should most advantageously be passed on to the hearing aid wearer. For the hearing aid this is a task that cannot be solved in principle, because the choice of the desired acoustic source depends directly on the momentary intention of the wearer and therefore cannot be available as an input variable to a selection algorithm. The selection made by such an algorithm must thus rely on assumptions about the listener's probable intention.

In the prior art, it is assumed that the hearing aid wearer prefers an acoustic signal from the 0° direction, i.e. the wearer's viewing direction. This is realistic insofar as, in an acoustically difficult situation, the wearer would look at the current conversation partner in order to obtain further cues that increase the partner's speech intelligibility (e.g., lip movements). However, this forces the wearer to look at the conversation partner for the directional microphone to yield increased speech intelligibility. This is particularly annoying when the wearer wants to talk to exactly one person, i.e. is not involved in a conversation with several speakers, and does not always want or need to look at that person.

From EP 1 912 472 A1, a method for operating a hearing aid and a hearing aid are known. Electrical acoustic signals are generated from a recorded ambient sound, from which signal processing identifies and selects an electrical voice signal with a high probability of speech. The electrical speech signal is selectively taken into account in an output sound of the hearing aid so that it stands out acoustically for the hearing aid wearer at least in comparison with another acoustic source and is thus perceived better by the wearer.

EP 1 912 474 A1 discloses a method for operating a hearing aid and a hearing aid. Electrical acoustic signals are generated from a recorded ambient sound, from which signal processing identifies and selects an electrical speaker signal by means of a database of voice profiles of preferred speakers. The electrical speaker signal is selectively taken into account in an output sound of the hearing aid so that it stands out acoustically for the hearing aid wearer at least in comparison with another acoustic source and is thus perceived better by the wearer.

It is therefore an object of the invention to provide an improved method for operating a hearing device and an improved hearing device with which it can be decided which output signals of a source separation, in particular a BSS, are supplied acoustically to the hearing aid wearer.

According to the invention, the stated object is achieved by the method of independent claim 1 and the hearing aid of independent claim 6.

The invention comprises a method for operating a hearing device in which the hearing device generates electrical acoustic signals from a recorded ambient sound. These are weighted according to their degree of membership of a prescribable acoustic signal class and mixed together to form an output sound signal; the higher the degree of membership, the greater or the smaller the weight of the acoustic signal. The advantage of this is that a desired signal from a multitude of ambient sound signals can be presented to the hearing aid user.

In a further development, the degree of membership can be determined from the features volume, frequency range, fundamental voice frequency, cepstral coefficients and/or the temporal course of the acoustic signals. This provides high flexibility.
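A minimal sketch of such a feature analysis follows, under assumptions not taken from the patent: frames of a few hundred samples at 16 kHz, and a concrete feature set of RMS level, spectral centroid, a crude autocorrelation-based fundamental frequency and a few real-cepstrum coefficients.

```python
import numpy as np

def frame_features(frame, fs):
    """Small feature vector for one frame (e.g. 512 samples at 16 kHz):
    RMS level (volume), spectral centroid (frequency range), a crude
    autocorrelation-based fundamental frequency and a few cepstral coefficients."""
    frame = frame - frame.mean()
    rms = np.sqrt(np.mean(frame ** 2) + 1e-12)

    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    centroid = float(np.sum(freqs * spec) / (np.sum(spec) + 1e-12))

    # Fundamental frequency from the autocorrelation peak in the 60..400 Hz range
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / 400), int(fs / 60)
    f0 = fs / (lo + int(np.argmax(ac[lo:hi])))

    # Real cepstrum; keep the first few coefficients
    cep = np.fft.irfft(np.log(spec + 1e-12))[:8]

    return np.concatenate([[rms, centroid, f0], cep])

features = frame_features(np.random.randn(512), fs=16000)
```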

In a further embodiment, the prescribable acoustic signal class may include the classes speech or human voice, in a prescribable frequency band, male voice, female voice, child's voice, voice of a prescribable person, music and ambient noise. This offers the hearing device user a wide choice.

The prescribable acoustic signal class may also comprise any combination of these classes.

The electrical acoustic signals are generated from the ambient sound by means of a blind source separation method. This results in good separation of the sound signals.

The degree of membership is determined by a feature analysis of the electrical acoustic signals, in which a probability of belonging to a prescribable acoustic signal class is ascertained for each electrical acoustic signal. The advantage of this is the simple mathematical weighting.
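The patent leaves the classifier itself open. As one possible, deliberately simplistic realization, the sketch below scores a feature vector against per-class naive Gaussian models and normalizes the scores into membership probabilities; the class models and example numbers are invented.

```python
import numpy as np

def class_membership(features, class_means, class_vars):
    """P(class | features) for each prescribable acoustic signal class, using
    naive per-feature Gaussian likelihoods (an assumed, deliberately simple model)."""
    f = np.asarray(features, dtype=float)
    log_lik = []
    for mean, var in zip(class_means, class_vars):
        mean, var = np.asarray(mean, float), np.asarray(var, float)
        log_lik.append(-0.5 * np.sum(np.log(2 * np.pi * var) + (f - mean) ** 2 / var))
    log_lik = np.array(log_lik)
    p = np.exp(log_lik - log_lik.max())    # softmax-style normalization, numerically safe
    return p / p.sum()

# Invented two-class example: "female voice" vs. "ambient noise"
# features: [RMS level, spectral centroid in Hz, fundamental frequency in Hz]
p = class_membership([0.2, 1800.0, 210.0],
                     class_means=[[0.2, 2000.0, 200.0], [0.05, 900.0, 120.0]],
                     class_vars=[[0.01, 4e5, 900.0], [0.01, 4e5, 900.0]])
```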

According to the invention, a hearing aid is also provided, with at least one microphone for recording ambient sound and with a demixing unit for generating electrical acoustic signals from the recorded ambient sound. The hearing device comprises a signal processing unit by which the acoustic signals can be weighted according to their degree of membership of a prescribable acoustic signal class and mixed together to form an output sound signal, the weight of an acoustic signal being the greater or the smaller, the higher its degree of membership. This allows "soft" switching between acoustic signal classes.

In a further development, the demixing unit may comprise a blind source separation module.

In a further embodiment, the signal processing unit may comprise at least one classification module, at least one weight determination module, at least one multiplier and at least one adder.

Furthermore, the hearing device can comprise an acoustic signal class input unit with which the desired, prescribable acoustic signal class is transmitted to the hearing aid. This unit can be arranged on the hearing aid or in a remote control.

Further features and advantages of the invention become apparent from the following explanations of several exemplary embodiments with reference to schematic drawings.

The figures show:

FIG. 1:
a block diagram of a hearing aid with blind source separation according to the prior art, and
FIG. 2:
a block diagram of a hearing aid according to the invention.

FIG. 1 shows a prior-art hearing aid 1 with three microphones 2 and a demixing unit 5 operating according to the blind source separation method. Three signal sources generate three ambient sound signals s1, s2, s3, which are picked up by the three microphones 2 and converted into electrical microphone signals x1, x2, x3. The three microphone signals x1, x2, x3 are each supplied to a signal input of the demixing unit 5. In the demixing unit 5, the blind source separation process takes place, with the aid of which the ambient sound signals s1, s2, s3 can be reconstructed from the mixed electrical microphone signals x1, x2, x3. Three electrical acoustic signals s1', s2', s3' are thus available at three outputs of the demixing unit 5.

In the simplest case, a hearing device user can now choose between the three separately reproduced acoustic signals s1', s2', s3' by means of a selection switch 7 in a post-processor module 6. Here, the electrical acoustic signal s2' has been selected and passed on to a receiver 3. The demixing unit 5 and the post-processor module 6 form a signal processing unit 4.

The receiver 3 emits as acoustic output signal the signal s2'', which corresponds approximately to the ambient sound signal s2. With the hearing aid 1 of FIG. 1, different acoustic input signals can thus be separated and output separately via the receiver 3 according to the preferences of a hearing device user.

A hearing aid wearer does not always want such hard switching between different input signal sources.
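The difference between this prior-art hard selection (selection switch 7) and the soft mixing of FIG. 2 can be illustrated in a few lines (invented numbers, for illustration only).

```python
import numpy as np

signals = np.random.randn(3, 16000)        # demixed signals s1', s2', s3'
membership = np.array([0.1, 0.8, 0.1])     # degree of membership of the chosen class

hard = signals[np.argmax(membership)]                # FIG. 1: switch 7 picks one signal
soft = (membership / membership.sum()) @ signals     # FIG. 2: weighted "soft" mix
```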

Nor is a demixing unit 5 always able to separate the signals so cleanly and reliably. The device according to the invention shown in FIG. 2 therefore offers an improved presentation of different ambient sound signals.

FIG. 2 shows a hearing aid 1 with three microphones 2, a signal processing unit 4 and a receiver or loudspeaker 3. Three ambient sound signals s1, s2, s3 are picked up by the microphones 2 and forwarded as microphone signals x1, x2, x3 to the signal processing unit 4. The microphone signals x1, x2, x3 processed by the signal processing unit 4 are then forwarded to an input of the receiver 3 and presented to the hearing device user as an acoustic output sound signal s.

In the signal processing unit 4, the microphone signals x1, x2, x3 are processed with the aid of a demixing unit 5 and passed on to the subsequent processing units as demixed electrical acoustic signals s1', s2', s3'. The electrical acoustic signals s1', s2', s3' are fed, on the one hand, to inputs of multipliers 10 and, on the other hand, to inputs of a classification module 8. Using an acoustic signal class input unit 12, a hearing device user can specify a preferred acoustic signal class. This specification is passed to the classification module 8 and processed there. The preselected acoustic signal class may comprise, for example, a male voice, a female voice, a child's voice, a certain frequency range, human voice or speech in general, music, etc. The classification module 8 calculates the probability with which each electrical acoustic signal s1', s2', s3' belongs to the specified acoustic signal class. This degree of membership is then weighted accordingly with the help of a weight determination module 9; for this purpose, the degrees of membership of the classified signals are routed from the outputs of the classification module 8 to the inputs of the weight determination module 9. The weight determination module 9 determines the weights g1, g2, g3 such that the higher the ascertained degree of membership of the preselected class, the higher the weight chosen for the corresponding acoustic signal. The weights g1, g2, g3 are routed to the corresponding inputs of the multipliers 10. In the multipliers 10, the electrical acoustic signals s1', s2', s3' are multiplied by the weights g1, g2, g3. From the outputs of the multipliers 10, the weighted electrical acoustic signals are passed to an adder 11, where they are summed and provided at the adder output. The electrical signal at the adder output is then converted in the receiver 3 into an output sound signal s.
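Putting the pieces together, the following sketch mirrors the FIG. 2 chain of demixing unit 5, classification module 8, weight determination module 9, multipliers 10 and adder 11. FastICA again stands in for the unspecified BSS, the classifier is a placeholder callable, and the function names and usage values are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

def hearing_aid_pipeline(mic_signals, membership_of, higher_is_louder=True):
    """mic_signals: array of shape (n_samples, n_mics) holding x1, x2, x3.
    membership_of: callable returning the degree of membership of one demixed
    signal to the user's preselected acoustic signal class (stands in for
    classification module 8)."""
    # Demixing unit 5: blind source separation (FastICA as an illustrative choice)
    demixed = FastICA(n_components=mic_signals.shape[1],
                      random_state=0).fit_transform(mic_signals).T  # (n_sources, n_samples)

    # Classification module 8: degree of membership per demixed signal
    m = np.array([membership_of(sig) for sig in demixed])

    # Weight determination module 9
    if not higher_is_louder:
        m = 1.0 - m
    g = m / (m.sum() + 1e-12)

    # Multipliers 10 and adder 11
    return g @ demixed              # output sound signal s

# Usage with an invented, trivial "classifier" that simply prefers louder sources
x = np.random.randn(16000, 3)
s_out = hearing_aid_pipeline(x, membership_of=lambda sig: float(np.sqrt(np.mean(sig ** 2))))
```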

LIST OF REFERENCE NUMBERS

1 hearing aid
2 microphone
3 receiver / loudspeaker
4 signal processing unit
5 demixing unit
6 post-processor module
7 selection switch
8 classification module
9 weight determination module
10 multiplier
11 adder
12 acoustic signal class input unit
g1, g2, g3 weighting factors / weights
s output sound signal
s1, s2, s3 ambient sound signals
s1', s2', s3' electrical acoustic signals
x1, x2, x3 microphone signals

Claims (8)

  1. Method for operating a hearing aid (1), wherein the hearing aid (1) produces electrical acoustic signals (s1', s2', s3') from a picked-up ambient sound (s1, s2, s3) by means of a blind source separation method, wherein for each of the electrical acoustic signals (s1', s2', s3') a probability of association with a prescribable acoustic signal class is ascertained as its degree of association by means of a feature analysis of the electrical acoustic signals (s1', s2', s3'), and wherein the electrical acoustic signals (s1', s2', s3') are weighted in accordance with their respective degree of association with the prescribable acoustic signal class and are mixed together to form an output sound signal (s), the respective weight (g1, g2, g3) of an acoustic signal (s1', s2', s3') being all the greater or all the smaller, the higher the respective degree of association.
  2. Method according to Claim 1,
    characterized in that the degree of association is determined by the features volume, frequency range, fundamental voice frequency, cepstral coefficients and/or time characteristic of the acoustic signals (s1', s2', s3').
  3. Method according to Claim 1 or 2,
    characterized in that the prescribable acoustic signal class comprises the following classes:
    - speech or human voice,
    - in a prescribable frequency band,
    - man's voice, woman's voice, child's voice,
    - voice of a prescribable person,
    - music and
    - ambient noise.
  4. Method according to Claim 3,
    characterized in that the prescribable acoustic signal class comprises any combination of the classes.
  5. Computer program product having a computer program that has software means for performing a method according to one of Claims 1 to 4 when the computer program is executed in a control unit.
  6. Hearing aid (1) having at least one microphone (2) for picking up an ambient sound (s1, s2, s3) and having a demixing unit (5) for producing electrical acoustic signals (s1', s2', s3') from the picked-up ambient sound (s1, s2, s3), wherein the demixing unit (5) comprises a blind source separation module, and having a signal processing unit (4) by which the acoustic signals (s1', s2', s3') can be weighted in accordance with their respective degree of association with the prescribable acoustic signal class and can be mixed together to form an output sound signal (s), the respective weight (g1, g2, g3) of an acoustic signal (s1', s2', s3') being all the greater or all the smaller, the higher the respective degree of association, which can be determined by means of a feature analysis of the electrical acoustic signals (s1', s2', s3'), wherein for the electrical acoustic signals (s1', s2', s3') a probability of association with a prescribable acoustic signal class can be ascertained as their respective degree of association.
  7. Hearing aid (1) according to Claim 6,
    characterized in that the signal processing unit (4) comprises at least one classification module (8), at least one weight ascertainment module (9), at least one multiplier (10) and at least one adder (11).
  8. Hearing aid (1) according to Claim 6 or 7,
    characterized by an acoustic signal class input unit (12) that can be used to transmit the desired, prescribable acoustic signal class to the hearing aid (1).
EP09155859.3A 2008-05-13 2009-03-23 Method for operating a hearing device and hearing device Active EP2120484B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
DE102008023370A DE102008023370B4 (en) 2008-05-13 2008-05-13 Method for operating a hearing aid and hearing aid

Publications (3)

Publication Number Publication Date
EP2120484A2 EP2120484A2 (en) 2009-11-18
EP2120484A3 EP2120484A3 (en) 2010-05-26
EP2120484B1 true EP2120484B1 (en) 2016-05-18

Family

ID=40957686

Family Applications (1)

Application Number Title Priority Date Filing Date
EP09155859.3A Active EP2120484B1 (en) 2008-05-13 2009-03-23 Method for operating a hearing device and hearing device

Country Status (4)

Country Link
US (1) US8737652B2 (en)
EP (1) EP2120484B1 (en)
DE (1) DE102008023370B4 (en)
DK (1) DK2120484T3 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2670168A1 (en) * 2012-06-01 2013-12-04 Starkey Laboratories, Inc. Adaptive hearing assistance device using plural environment detection and classification
JP6216169B2 (en) * 2012-09-26 2017-10-18 キヤノン株式会社 Information processing apparatus and information processing method
CN108353228B (en) 2015-11-19 2021-04-16 香港科技大学 Signal separation method, system and storage medium
WO2019084214A1 (en) 2017-10-24 2019-05-02 Whisper.Ai, Inc. Separating and recombining audio for intelligibility and comfort

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1912474A1 (en) * 2006-10-10 2008-04-16 Siemens Audiologische Technik GmbH Method for operating a hearing aid and hearing aid

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19948907A1 (en) * 1999-10-11 2001-02-01 Siemens Audiologische Technik Signal processing in hearing aid
DE10245567B3 (en) * 2002-09-30 2004-04-01 Siemens Audiologische Technik Gmbh Device and method for fitting a hearing aid
US7383178B2 (en) * 2002-12-11 2008-06-03 Softmax, Inc. System and method for speech processing using independent component analysis under stability constraints
EP1489882A3 (en) * 2003-06-20 2009-07-29 Siemens Audiologische Technik GmbH Method for operating a hearing aid system as well as a hearing aid system with a microphone system in which different directional characteristics are selectable.
US7957548B2 (en) * 2006-05-16 2011-06-07 Phonak Ag Hearing device with transfer function adjusted according to predetermined acoustic environments
EP1912472A1 (en) * 2006-10-10 2008-04-16 Siemens Audiologische Technik GmbH Method for operating a hearing aid and hearing aid

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1912474A1 (en) * 2006-10-10 2008-04-16 Siemens Audiologische Technik GmbH Method for operating a hearing aid and hearing aid

Also Published As

Publication number Publication date
DK2120484T3 (en) 2016-08-29
US8737652B2 (en) 2014-05-27
DE102008023370A1 (en) 2009-11-19
EP2120484A3 (en) 2010-05-26
DE102008023370B4 (en) 2013-08-01
US20090285422A1 (en) 2009-11-19
EP2120484A2 (en) 2009-11-18

Similar Documents

Publication Publication Date Title
EP1912474B1 (en) Method for operating a hearing aid and hearing aid
DE10146886B4 (en) Hearing aid with automatic switching to Hasp coil operation
EP1912472A1 (en) Method for operating a hearing aid and hearing aid
EP3451705B1 (en) Method and apparatus for the rapid detection of own voice
DE112009002617B4 (en) Optional switching between multiple microphones
EP1307072B1 (en) Method for operating a hearing aid and hearing aid
EP2077059B1 (en) Method for operating a hearing aid, and hearing aid
DE102011087984A1 (en) Hearing apparatus with speaker activity recognition and method for operating a hearing apparatus
WO2001020965A2 (en) Method for determining a current acoustic environment, use of said method and a hearing-aid
EP1489885A2 (en) Method for operating a hearing aid system as well as a hearing aid system with a microphone system in which different directional characteristics are selectable
EP3104627B1 (en) Method for improving a recording signal in a hearing system
EP2991379B1 (en) Method and device for improved perception of own voice
EP2226795B1 (en) Hearing aid and method for reducing noise in a hearing aid
DE102010041740A1 (en) Method for signal processing in a hearing aid device and hearing aid device
DE10327890A1 (en) Method for operating a hearing aid and hearing aid with a microphone system, in which different directional characteristics are adjustable
EP3337188A1 (en) Method for operating a hearing aid
EP3337187A1 (en) Method for operating a hearing aid
EP2120484B1 (en) Method for operating a hearing device and hearing device
EP3873108A1 (en) Hearing system with at least one hearing instrument worn in or on the ear of the user and method for operating such a hearing system
EP2200341B1 (en) Method for operating a hearing aid and hearing aid with a source separation device
EP1489882A2 (en) Method for operating a hearing aid system as well as a hearing aid system with a microphone system in which different directional characteristics are selectable.
WO2008043758A1 (en) Method for operating a hearing aid, and hearing aid
DE10334396B3 (en) Electrical hearing aid has individual microphones combined to provide 2 microphone units in turn combined to provide further microphone unit with same order directional characteristic
DE10114101A1 (en) Processing input signal in signal processing unit for hearing aid, involves analyzing input signal and adapting signal processing unit setting parameters depending on signal analysis results
EP1432282A2 (en) Method for adapting a hearing aid to a momentary acoustic environment situation and hearing aid system

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA RS

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA RS

17P Request for examination filed

Effective date: 20100621

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK TR

17Q First examination report despatched

Effective date: 20150421

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SIVANTOS PTE. LTD.

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20160115

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

Ref country code: CH

Ref legal event code: NV

Representative's name: E. BLUM AND CO. AG PATENT- UND MARKENANWAELTE, CH

Ref country code: AT

Ref legal event code: REF

Ref document number: 801342

Country of ref document: AT

Kind code of ref document: T

Effective date: 20160615

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 502009012584

Country of ref document: DE

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20160825

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20160518

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160818

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160919

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160819

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 502009012584

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 9

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20170221

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170323

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170323

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20170331

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 10

REG Reference to a national code

Ref country code: AT

Ref legal event code: MM01

Ref document number: 801342

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170323

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170331

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170323

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20090323

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160918

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230320

Year of fee payment: 15

Ref country code: DK

Payment date: 20230323

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: CH

Payment date: 20230402

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240321

Year of fee payment: 16

Ref country code: GB

Payment date: 20240322

Year of fee payment: 16