EP1912474B1 - Method for operating a hearing aid, and hearing aid - Google Patents

Method for operating a hearing aid, and hearing aid

Info

Publication number
EP1912474B1
EP1912474B1 (application EP07117250.6A)
Authority
EP
European Patent Office
Prior art keywords
hearing aid
speaker
signal processor
electrical
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Not-in-force
Application number
EP07117250.6A
Other languages
German (de)
English (en)
Other versions
EP1912474A1 (fr)
Inventor
Eghart Fischer
Matthias Fröhlich
Jens Hain
Henning Puder
André Steinbuß
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sivantos GmbH
Original Assignee
Sivantos GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sivantos GmbH
Publication of EP1912474A1
Application granted
Publication of EP1912474B1
Legal status: Not-in-force
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40Arrangements for obtaining a desired directivity characteristic
    • H04R25/407Circuits for combining signals of a plurality of transducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest

Definitions

  • The invention relates to a method for operating a hearing aid consisting of a single hearing device or of two hearing devices. Furthermore, the invention relates to a corresponding hearing aid.
  • Noise, i.e. unwanted acoustic signals that interfere with the voice of a conversation partner or with a wanted acoustic signal, is omnipresent. People with a hearing loss are particularly susceptible to such noise. Conversations in the background, acoustic interference from digital devices (cell phones), car noise or other environmental noise can make it very difficult for a person with hearing loss to understand a desired speaker. Reducing the noise level in an acoustic signal, coupled with an automatic focus on a desired acoustic signal component, can significantly improve the performance of an electronic speech processor as used in modern hearing aids.
  • Hearing aids with digital signal processing have been introduced in the recent past. They include one or more microphones, A/D converters, digital signal processors and receivers. Usually, the digital signal processors divide the incoming signals into a plurality of frequency bands. Within each band, signal amplification and processing can be adjusted individually in accordance with the requirements of a particular hearing aid wearer in order to improve the intelligibility of a particular component. Furthermore, algorithms for feedback suppression and noise minimization are available in digital signal processing, but they have significant disadvantages. A disadvantage of the currently existing algorithms for noise minimization is, for example, that their maximum achievable improvement in hearing aid acoustics is limited when speech and background sounds lie in the same frequency region, because they are then unable to distinguish between spoken speech and background noise. (See also EP 1 017 253 A2.)
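The per-band gain adjustment described above can be sketched in a few lines (illustrative only, not from the patent: a single FFT stands in for the running filter bank of a real hearing aid, and the band edges and gains are invented for the example):

```python
import numpy as np

def apply_band_gains(signal, sample_rate, band_edges_hz, gains_db):
    """Split a signal into frequency bands and apply a per-band gain.
    Real hearing aids use running filter banks; a one-shot FFT is used
    here only to make the per-band idea visible."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    for (lo, hi), gain_db in zip(band_edges_hz, gains_db):
        band = (freqs >= lo) & (freqs < hi)
        spectrum[band] *= 10.0 ** (gain_db / 20.0)   # dB to linear factor
    return np.fft.irfft(spectrum, n=len(signal))

# Example: boost the main speech band by 12 dB, leave the higher band flat.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 6000 * t)
y = apply_band_gains(x, sr, [(300, 3400), (3400, 8000)], [12.0, 0.0])
```

In a real device the per-band gains would come from the wearer's fitting data rather than being fixed constants.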
  • EP 1 303 166 A2 discloses a method for operating a hearing aid wherein a respective reference person direction is determined in order to improve communication between a hearing aid wearer and one or a plurality of reference persons, irrespective of their current positions relative to one another.
  • The hearing aid has, in addition to a signal processing unit, a direction identification unit by means of which the respective reference person direction can be determined.
  • The reference person direction is determined by speaker recognition, and a parameter set of a hearing aid transfer function can be changed according to a desired directional characteristic.
  • In acoustic signal processing there are spatial methods (e.g. directional microphones, beamforming), statistical methods (e.g. blind source separation) and mixed methods which, among other things, can use algorithms to separate one or a plurality of signals from several simultaneously active sound sources.
  • Blind source separation thus allows, by statistical signal processing of at least two microphone signals, a separation of the source signals without prior knowledge of their geometric arrangement.
  • This method has advantages over conventional directional microphone approaches when used in hearing aids. Due to its principle, such a BSS method (BSS: blind source separation) with n microphones can separate up to n sources, i.e. generate n output signals.
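The patent defers the BSS algorithm itself to EP 1 017 253 A2. As a self-contained illustration of the principle (recovering independent sources from microphone mixtures without any geometric knowledge), here is a minimal two-channel sketch using whitening plus a kurtosis-based rotation search; it stands in for a full ICA/BSS algorithm and is not the patent's method:

```python
import numpy as np

def separate_two(X):
    """Toy two-channel blind source separation: whiten the mixture,
    then grid-search the remaining rotation for maximal non-Gaussianity
    (excess kurtosis). Real BSS algorithms (e.g. FastICA) solve this
    adaptively; this only shows the idea."""
    Xc = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
    Z = Xc @ E @ np.diag(d ** -0.5) @ E.T             # whitening
    def kurt(u):                                       # excess kurtosis
        return np.mean(u ** 4) - 3.0
    best_R, best_score = np.eye(2), -np.inf
    for a in np.linspace(0.0, np.pi / 2, 180):         # only a rotation remains
        R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        Y = Z @ R
        score = abs(kurt(Y[:, 0])) + abs(kurt(Y[:, 1]))
        if score > best_score:
            best_R, best_score = R, score
    return Z @ best_R

# Two independent sources reach two microphones through an unknown
# mixing matrix; no geometric knowledge is used in the separation.
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 4000)
s_speech = np.sign(np.sin(2 * np.pi * 3 * t))          # stand-in "speech"
s_noise = rng.laplace(size=t.size)                     # stand-in "noise"
X = np.c_[s_speech, s_noise] @ np.array([[1.0, 0.4], [0.6, 1.0]]).T
S_hat = separate_two(X)    # recovered sources, up to order and scale
```

The recovered columns match the original sources only up to permutation, sign and scale, which is exactly the ambiguity the description discusses next.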
  • The control of directional microphones in the sense of blind source separation is subject to ambiguity as soon as several competing useful sources, e.g. speakers, are present simultaneously.
  • Blind source separation in principle allows the separation of the various sources if they are spatially separated; however, the ambiguity diminishes the potential benefit of a directional microphone, even though it is precisely in such scenarios that a directional microphone could greatly improve speech intelligibility.
  • The hearing aid, or the mathematical algorithms for blind source separation, is in principle faced with the problem of having to decide which of the signals generated by the blind source separation should most advantageously be passed on to the user of the algorithm, i.e. the hearing aid wearer.
  • The choice made by this algorithm must therefore be based on assumptions about the probable intent of the listener.
  • In the interest of hearing aid users, a more flexible acoustic signal selection method should be formulated which is not restricted by the geometric distribution of the acoustic sources. It is therefore an object of the invention to provide an improved method for operating a hearing aid, as well as an improved hearing aid.
  • In particular, it is an object of the invention to determine which output signal of a blind source separation is acoustically supplied to the hearing aid wearer, i.e. to find out which acoustic source is, with high probability, a speaker preferred by the hearing aid wearer.
  • According to the invention, a selection of the speaker acoustic source to be reproduced is made in such a way that, if present, a preferred speaker or a speaker known to the hearing aid wearer is always reproduced by the hearing aid.
  • A database with profiles of one or several such preferred speakers is created for this purpose. Acoustic profiles are then determined or evaluated for the output signals of a blind source separation and compared with the entries in the database. In the event that one of the output signals of the source separation matches a database profile, this electrical acoustic signal, i.e. this speaker, is explicitly selected and made available to the hearing aid wearer via the hearing aid. Such a decision may take precedence over other decisions that have lower priority in such a case.
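The profile comparison described above might be sketched as follows. The patent does not prescribe a feature representation or distance measure, so the feature vectors, the cosine similarity and the threshold here are purely illustrative assumptions:

```python
import numpy as np

def match_profile(acoustic_profile, database, threshold=0.9):
    """Return the name of the best-matching stored voice profile, or
    None if no profile is similar enough. Cosine similarity and the
    threshold value are illustrative choices, not from the patent."""
    best_name, best_sim = None, threshold
    for name, voice_profile in database.items():
        sim = float(np.dot(acoustic_profile, voice_profile) /
                    (np.linalg.norm(acoustic_profile) *
                     np.linalg.norm(voice_profile)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

# Hypothetical database: one feature vector (e.g. a coarse spectral
# envelope) per preferred speaker.
database = {
    "speaker_A": np.array([0.8, 0.5, 0.2, 0.1]),
    "speaker_B": np.array([0.1, 0.3, 0.7, 0.9]),
}
observed = np.array([0.75, 0.55, 0.25, 0.05])   # profile of one BSS output
chosen = match_profile(observed, database)       # → "speaker_A"
```

Each BSS output would be profiled and matched this way; a hit then takes precedence over the subordinate selection criteria discussed later.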
  • According to the invention, in a method for operating a hearing aid, for tracking and selective amplification of a speaker acoustic source or of an electrical speaker signal, the signal processing of the hearing aid compares preferably all electrical acoustic signals available to it with voice profiles of desired or known speakers, the voice profiles being stored in a database that is preferably located in the hearing aid or hearing aids.
  • The speaker acoustic source or sources which largely coincide with the voice profiles of the database are tracked by the signal processing and are given particular consideration in an acoustic output signal of the hearing aid.
  • Furthermore, a hearing aid is provided wherein, by means of an acoustic module (signal processing) of the hearing aid, electrical acoustic signals generated by a blind source separation can be compared with voice profile entries of a database.
  • The acoustic module selects from the electrical acoustic signals at least one electrical speaker signal which corresponds to a voice profile of a desired or known speaker, wherein this electrical speaker signal can be given particular consideration in an output signal of the hearing aid.
  • With the hearing aid it is possible, depending on the number of microphones present in the hearing aid, to select one or a plurality of speaker acoustic sources from the ambient sound and to emphasize them in the output sound of the hearing aid. In this case, the volume of the speaker acoustic source or sources in the output sound of the hearing aid can be set arbitrarily.
  • The signal processing has a demixing module which functions as a means for blind source separation for separating the acoustic sources of the ambient sound. Furthermore, the signal processing has a post-processor module which establishes a corresponding operating mode "speaker" in the hearing aid when it detects an acoustic source with a high speaker probability. Furthermore, the signal processing may comprise a preprocessor module, the electrical output signals of which are the electrical input signals of the demixing module, which normalizes and processes the electrical acoustic signals originating from the microphones of the hearing aid. Regarding the preprocessor module and the demixing module (unmixer), reference is made to EP 1 017 253 A2, paragraphs [0008] to [0023].
  • A comparison of the speech profiles stored in the database with the acoustic profiles currently received by the hearing aid takes place; in other words, the profiles of the electrical acoustic signals currently generated by the signal processing are compared with the speech profiles stored in the database.
  • This preferably takes place in the signal processing or the post-processor module, wherein the database can be part of the signal processing, of the post-processor module or of the hearing aid.
  • The post-processor module tracks and selects the electrical speaker signal(s) and generates a corresponding electrical output acoustic signal for a receiver (loudspeaker).
  • The hearing aid has a data interface via which the hearing aid can communicate with a peripheral device.
  • This makes it possible, for example, to exchange voice profiles of the desired or known speakers with other hearing aids. It is also possible to edit voice profiles on a computer and then transfer them back to the hearing aid in order to update them.
  • The limited storage space within the hearing aid can be better utilized thanks to the data interface, since it permits external processing and thus a "streamlining" of the voice profiles.
  • On an external computer, several databases with different voice profiles, e.g. private and business, can be created, and the hearing aid can thus be configured appropriately for an upcoming situation.
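The idea of maintaining several situation-specific profile databases on an external computer could look like this in practice. The JSON file format and all names are hypothetical, since the patent leaves the data format of the interface open:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical on-disk format for situation-specific voice-profile
# databases ("private", "business"): one feature vector per speaker.
private_db = {"alice": [0.8, 0.5, 0.2], "bob": [0.1, 0.3, 0.9]}
business_db = {"colleague": [0.4, 0.6, 0.5]}

def save_db(db, path):
    """Serialize a profile database for editing on the computer."""
    Path(path).write_text(json.dumps(db, indent=2))

def load_db(path):
    """Read a profile database back, e.g. before transferring it
    to the hearing aid over the data interface."""
    return json.loads(Path(path).read_text())

# Prepare both databases, then load the one matching the upcoming
# situation for transfer to the hearing aid.
folder = Path(tempfile.mkdtemp())
save_db(private_db, folder / "private.json")
save_db(business_db, folder / "business.json")
active = load_db(folder / "private.json")
```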
  • The hearing aid or the signal processing can be trained on the speech characteristic of a new speaker. Furthermore, it is also possible to create additional voice profiles of the same speaker, which is advantageous e.g. for different acoustic situations, e.g. near/far.
  • If the hearing aid or the signal processing has a device which makes a downstream, subordinate acoustic source selection, this could e.g. operate in such a way that, upon detection of (unknown) speech in an electrical acoustic signal, that speaker or those speakers are selected who are in the viewing direction of the hearing aid wearer. In addition, it is possible to base this subordinate decision on which speaker is closest to the hearing aid wearer or which one speaks the loudest.
  • The hearing aid may include a remote control.
  • The part of the hearing aid worn at the ear can then be made smaller overall while more storage space for voice profiles is available.
  • The remote control can communicate wirelessly or by wire with the hearing aid.
  • a "tracking" of an electrical speaker signal by a hearing aid of a hearing aid wearer is mentioned.
  • This is to be understood as one of the hearing aid or a signal processing of the hearing aid or a post-processor module of the signal processing selection of one or a plurality of electrical speaker signals which are selected by the hearing aid electrically or electronically from other sources of acoustic ambient sound and which in a relation to the other acoustic sources of ambient sound amplified way, ie in a louder perceived for the hearing aid wearer, are reproduced.
  • a position of the hearing aid wearer in the room especially a position of the hearing aid in the room, ie a viewing direction of the hearing aid wearer, preferably not considered.
  • A hearing aid 1 has two microphones 200, 210, which together can form a directional microphone system, for generating two electrical acoustic signals 202, 212.
  • Such a microphone arrangement gives the two electrical output signals 202, 212 of the microphones 200, 210 an inherent directional characteristic.
  • Each of the microphones 200, 210 receives an ambient sound 100 that is a composite of unknown acoustic signals from an unknown number of acoustic sources.
  • The electrical acoustic signals 202, 212 are processed essentially in three stages.
  • First, the electrical acoustic signals 202, 212 are preprocessed in a preprocessor module 310 to improve the directional characteristic, which begins with a normalization of the original signals (equalizing the signal strength).
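The normalization step (equalizing the signal strength of the microphone channels) can be sketched as follows. Scaling each channel to unit RMS is one plausible reading; the patent defers the actual preprocessing details to EP 1 017 253 A2:

```python
import numpy as np

def normalize_channels(signals, eps=1e-12):
    """Equalize the signal strength of the microphone channels by
    scaling each channel (one row per microphone) to unit RMS.
    An illustrative stand-in for the preprocessor module 310."""
    signals = np.asarray(signals, dtype=float)
    rms = np.sqrt(np.mean(signals ** 2, axis=1, keepdims=True))
    return signals / (rms + eps)    # eps guards against silent channels

mics = np.array([[0.1, -0.1, 0.1, -0.1],    # quiet microphone channel
                 [2.0, -2.0, 2.0, -2.0]])   # loud microphone channel
equalized = normalize_channels(mics)        # both channels now unit RMS
```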
  • Second, a blind source separation takes place in a BSS module 320, wherein the output signals of the preprocessor module 310 are subjected to a demixing.
  • Third, the output signals of the BSS module 320 are post-processed in a post-processor module 330 to produce a desired electrical output signal 332 which serves as input to a receiver (loudspeaker) 400 of the hearing aid 1, the sound generated thereby being delivered to the hearing aid wearer.
  • Stages 1 and 3, i.e. the preprocessor module 310 and the post-processor module 330, are optional.
  • Fig. 2 shows a first embodiment of the invention, wherein a signal processing 300 of the hearing aid 1 contains a demixing module 320, hereinafter referred to as BSS module 320, which is followed downstream by a post-processor module 330.
  • Upstream of the BSS module 320, a preprocessor module 310 can be provided which appropriately conditions the input signals for the BSS module 320.
  • The signal processing 300 is preferably carried out in a DSP (digital signal processor) or in an ASIC (application-specific integrated circuit).
  • Two mutually independent acoustic or signal sources 102, 104 exist in the ambient sound 100, one of these acoustic sources being a speaker acoustic source 102 of a speaker known to the hearing aid wearer and the other acoustic source 104 being a noise source.
  • The speaker acoustic source 102 is to be selected and tracked by the hearing aid 1 and the signal processing 300, respectively, and is to be the primary acoustic component supplied to the receiver 400, so that an output sound 402 of the receiver 400 mainly contains this signal (102).
  • The two microphones 200, 210 of the hearing aid 1 each receive a mixture of the two acoustic signals 102, 104, illustrated by the dotted arrow (representing the preferred acoustic signal 102) and the solid arrow (representing the non-preferred acoustic signal 104), and pass them either to the preprocessor module 310 or directly to the BSS module 320 as electrical input signals.
  • The two microphones 200, 210 can be distributed as desired. They may be located in a single hearing device 1 of the hearing aid 1 or distributed over both hearing devices 1. In addition, it is possible, for example, to place one or both microphones 200, 210 outside the hearing aid 1.
  • The electrical input signals of the BSS module 320 need not necessarily originate from a single hearing device 1 of the hearing aid 1.
  • A hearing aid 1 consisting of two hearing devices 1 has a total of four or six microphones.
  • The preprocessor module 310 prepares the data for the BSS module 320, which in turn, depending on its capability, forms two separate output signals from its two mixed input signals, each of which represents one of the two acoustic signals 102, 104.
  • The two separate output signals of the BSS module 320 are input signals for the post-processor module 330, in which it is decided which of the two acoustic signals 102, 104 is output to the receiver 400 as the electrical output signal 332.
  • For the electrical acoustic signals 322, 324, the post-processor module 330 simultaneously performs a comparison with acoustic signals/data of desired or known speakers, whose acoustic signals/data are stored in a database 340. If the post-processor module 330 identifies a known speaker, i.e. an acoustic source 102, in an electrical acoustic signal 322, 324 of the ambient sound 100, it selects this electrical speaker signal 322 and outputs it, amplified relative to the other acoustic signals 324, as the electrical output acoustic signal 332 (which essentially corresponds to the acoustic signal 322).
  • The database 340 in which the speech profiles P of the speakers are stored is located in the post-processor module 330, in the signal processing 300 or elsewhere in the hearing aid 1.
  • If a remote control 10 belongs to the hearing aid 1 (i.e. the remote control 10 is part of the hearing aid 1), it is also possible to accommodate the database 340 in the remote control 10. This is quite advantageous, since the remote control 10 is not subject to such great size restrictions as the part of the hearing aid 1 which sits on or in the ear, so that more storage space can be available for the database 340.
  • This also simplifies communication with a peripheral device of the hearing aid 1, e.g. a computer, since in such a case a data interface necessary for the communication can also be located within the remote control 10 (see also below).
  • Fig. 3 shows the method according to the invention and the hearing aid 1 according to the invention in the processing of three acoustic signal sources s1(t), s2(t), sn(t), which together form the ambient sound 100.
  • This ambient sound 100 is recorded by three microphones, each of which outputs an electrical microphone signal x1(t), x2(t), xn(t) to the signal processing 300.
  • In this embodiment the signal processing 300 has no preprocessor module 310, but may preferably contain one. (This also applies analogously to the first embodiment of the invention.)
  • The electrical microphone signals x1(t), x2(t), xn(t) are the input signals of the BSS module 320, which outputs the acoustic signals contained in them, originating from the acoustic sources s1(t), s2(t), sn(t), as electrical output signals s'1(t), s'2(t), s'n(t) to the post-processor module 330.
  • The hearing aid 1 is at least sufficiently capable of delivering such an acoustic signal s'1(t), s'n(t) to the hearing aid wearer in such a way that he or she can sufficiently correctly interpret the information contained therein, i.e. at least sufficiently understands the speaker information contained therein.
  • The acoustic signal s'2(t) (which in this embodiment largely corresponds to the acoustic source s2(t)) contains no usable speaker information, or hardly any.
  • The post-processor module 330 now examines the electrical acoustic signals s'1(t), s'2(t), s'n(t) to determine whether they contain speech information of known speakers. This voice information of the known speakers is stored in the form of voice profiles P in the database 340 of the hearing aid 1.
  • The database 340 can in turn be provided in the remote control 10, in the hearing aid 1, in the signal processing 300 or in the post-processor module 330.
  • The post-processor module 330 compares the speech profiles P stored in the database 340 with the electrical acoustic signals s'1(t), s'2(t), s'n(t) and, in this example, thereby identifies the relevant electrical speaker signals s'1(t) and s'n(t).
  • A profile comparison preferably takes place in the post-processor module 330, which compares all voice profiles P of the database 340 with the electrical acoustic signals s'1(t), s'2(t), s'n(t).
  • For this purpose, the post-processor module 330 preferably carries out a profile evaluation of the electrical acoustic signals s'1(t), s'2(t), s'n(t), the profile evaluation creating acoustic profiles P1(t), P2(t), Pn(t) which can then be compared with the speech profiles P of the database 340.
  • The post-processor module 330 thereby identifies the corresponding electrical speaker signals s'1(t), s'n(t) and outputs them as the electrical acoustic signal 332 to the receiver 400.
  • The identification of the acoustic profiles P1(t), P2(t), Pn(t) can take place in that the hearing aid 1 determines probabilities p1(t), p2(t), pn(t) for the respective acoustic profile P1(t), P2(t), Pn(t) with respect to the respective voice profiles P. This preferably takes place during the profile comparison, which is followed by a corresponding signal selection. That is, by means of the profiles stored in the database 340, it is possible to assign to each acoustic profile P1(t), P2(t), Pn(t) a probability p1(t), p2(t), pn(t) for a respective speaker 1, 2, ..., n. In the signal selection it is then possible to select those electrical acoustic signals s'1(t), s'2(t), s'n(t) which correspond to a speaker 1, 2, ..., n with at least a certain probability.
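The probability-based signal selection just described can be sketched as follows. The threshold and the boost factor are illustrative assumptions, since the patent only requires that selected speaker signals stand out relative to the other sources:

```python
import numpy as np

def select_and_mix(separated, probabilities, threshold=0.5, boost=4.0):
    """Form the output signal: BSS outputs whose speaker probability
    p_i reaches the threshold are boosted relative to the remaining
    signals. Threshold and boost factor are illustrative values."""
    p = np.asarray(probabilities)
    weights = np.where(p >= threshold, boost, 1.0)
    return (weights[:, None] * np.asarray(separated)).sum(axis=0)

# Three separated signals with hypothetical speaker probabilities
# p1, p2, pn obtained from the profile comparison.
separated = np.array([[1.0, 1.0, 1.0],   # s'1: known speaker
                      [1.0, 1.0, 1.0],   # s'2: noise-like signal
                      [1.0, 1.0, 1.0]])  # s'n: known speaker
out = select_and_mix(separated, [0.9, 0.1, 0.8])
```

Here the two speaker signals enter the output four times as strongly as the noise-like signal; a real device would map the weights to perceived loudness rather than raw amplitude.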
  • The hearing aid 1 can be brought into a training mode in which the database 340 can be supplied with electrical acoustic signals of desired speakers.
  • The provision of the database 340 with new voice profiles P of desired or known speakers can also take place via a data interface of the hearing aid 1. This makes it possible to connect the hearing aid 1 (also via its remote control 10) to a peripheral device.
  • According to the invention, a blind source separation method is combined with a speaker classification algorithm. This ensures that the hearing aid wearer can always perceive his or her preferred speaker or speakers best or most clearly.
  • In the hearing aid 1, it can be weighted which of the electrical speaker signals 322; s'1(t), s'n(t) is preferably reproduced to the hearing aid wearer as the output sound 402, s"(t).
  • For this, an angle of incidence of the corresponding acoustic source 102, 104; s1(t), s2(t), sn(t) on the hearing aid 1 can be used, certain angles of incidence being preferred; for example, the 0° viewing direction or a 90° lateral direction of the hearing aid wearer may be preferred. It is also possible to weight the electrical speaker signals s'1(t), s'n(t), even apart from their different probabilities p1(t), p2(t), pn(t) for speaker information contained therein (this of course also applies to all embodiments of the invention), according to whether one of the electrical speaker signals 322; s'1(t), s'n(t) is a predominant or
  • If this further module of the hearing aid 1 is intended to be incorporated into the post-processor module 330, then in such an embodiment the post-processor module 330 comprises this further module.
  • The present document relates inter alia to a post-processor module 20 of EP 1 017 253 A2 (reference numerals according to EP 1 017 253 A2), in which, by means of a profile evaluation, one or more known speakers are selected for an electrical output signal of the post-processor module 20 and reproduced therein at least in amplified form. See also paragraph [0025] of EP 1 017 253 A2. Furthermore, in the invention the preprocessor module and the BSS module may be constructed like the preprocessor 16 and the unmixer 18 of EP 1 017 253 A2. See in particular paragraphs [0008] to [0024] of EP 1 017 253 A2.
  • The invention ties in with EP 1 655 998 A2 in order to provide stereo voice signals for a hearing aid wearer, i.e. to enable binaural acoustic care with speech.
  • According to the invention (notation according to EP 1 655 998 A2), the output signals z1(k), z2(k) for the right and left side, respectively, of a second filter device of EP 1 655 998 A2 (see Fig. 2 and 3 there) are followed downstream by an accentuation/amplification of the corresponding acoustic source.
  • It is also possible to apply the invention within EP 1 655 998 A2 in such a way that it intervenes after the blind source separation taught there and even before the second filter device. That is, according to the invention, a selection of a signal y1(k), y2(k) takes place (see Fig. 3 of EP 1 655 998 A2).

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)

Claims (41)

  1. Method for operating a hearing aid (1) in which, for the tracking and selection of a speaker acoustic source (102; s1(t), sn(t)) from an ambient sound (100; 102, 104; s1(t), s2(t), ..., sn(t)), a "speaker" operating mode is established by a signal processing (300) of the hearing aid (1),
     electrical acoustic signals (322, 324; 332, s'1(t), s'2(t), ..., s'n(t)), from which at least one electrical speaker signal (322; s'1(t), s'n(t)) is identified by means of a database (340) containing voice profiles (P) of preferred speakers and is selected, being formed by the hearing aid (1) from the recorded ambient sound (100; 102, 104; s1(t), s2(t), ..., sn(t)) by a blind source separation,
     the electrical speaker signal or signals (322; s'1(t), s'n(t)) being selectively taken into account in an output sound (402; s"(t), s"1(t) + s"n(t)) of the hearing aid (1) in such a way that they stand out acoustically at least better than another acoustic source (104; s2(t)) for the hearing aid wearer and are thus better perceived by the hearing aid wearer.
  2. Method according to claim 1, wherein the signal processing (300) of the hearing aid (1) is designed in such a way that several speaker acoustic sources (102; s1(t), sn(t)) which are acoustically independent of one another are tracked separately from one another.
  3. Method according to claim 1 or 2, wherein the signal processing (300) performs a comparison of the voice profiles (P) stored in the database (340) with the electrical acoustic signals (322, 324; s'1(t), s'2(t), ..., s'n(t)).
  4. Method according to one of claims 1 to 3, wherein the signal processing (300) performs a profile evaluation of the electrical acoustic signals (322, 324; s'1(t), s'2(t), ..., s'n(t)), an acoustic profile (P1(t), P2(t), ..., Pn(t)) being assigned to each acoustic signal (322, 324; s'1(t), s'2(t), ..., s'n(t)).
  5. Method according to claim 4, wherein the signal processing (300) performs a matching of the voice profiles (P) stored in the database (340) with the acoustic profiles (P1(t), P2(t), ..., Pn(t)).
  6. Method according to one of claims 4 to 5, wherein, in the comparison or matching of the electrical acoustic signals (322, 324; s'1(t), s'2(t), ..., s'n(t)) with the speaker profiles (P) stored in the database (340) or with the acoustic profiles (P1(t), P2(t), ..., Pn(t)) of the speakers, a speaker probability (p1(t), p2(t), ..., pn(t)) is determined for each electrical acoustic signal (s'1(t), s'2(t), ..., s'n(t)).
  7. Method according to claim 6, wherein the signal processing (300) determines the electrical speaker signal or signals (322; s'1(t), ..., s'n(t)) whose speaker probabilities (p1(t), ..., pn(t)) are the highest and which are presented to the hearing aid wearer via the output sound (402; s"(t); s"1(t) + s"n(t)) of the hearing aid (1).
  8. Method according to one of claims 1 to 7, wherein electrical acoustic signals (324; s'2(t)) which contain no identified speaker or voice, or electrical acoustic signals (324; s'2(t)) which are too strongly corrupted by interfering signals, are not taken into account by the signal processing (300).
  9. Method according to one of claims 1 to 8, wherein the speaker profiles (P) stored in the database (340) have a ranking which can be defined by the hearing aid wearer and according to which they are preferentially reproduced by the hearing aid (1).
  10. Method according to one of claims 1 to 9, wherein the signal processing (300) determines the electrical speaker signal or signals (322; s'1(t), ..., s'n(t)) which are closest to the hearing aid wearer and/or which arrive at an observation angle of preferably 0° relative to the hearing aid wearer, and which are presented to the hearing aid wearer via the output sound (402; s"(t); s"1(t) + s"n(t)) of the hearing aid (1).
  11. Method according to one of claims 1 to 10, wherein the signal processing (300) determines the electrical speaker signal or signals (322; s'1(t), ..., s'n(t)) which are the loudest and/or which are predominant in the ambient sound (100; 102, 104; s1(t), s2(t), ..., sn(t)), and which are presented to the hearing aid wearer via the output sound (402; s"(t); s"1(t) + s"n(t)) of the hearing aid (1).
  12. Method according to one of claims 1 to 11, wherein, in the event that no electrical speaker signal or too many electrical speaker signals (322; s'1(t), ..., s'n(t)) are identified, the signal processing (300) applies a hierarchical selection of acoustic sources.
  13. Method according to claim 12, wherein, for the subordinate selection of acoustic sources, a prioritized electrical acoustic signal (322, 324; s'1(t), s'2(t), ..., s'n(t)) is characterized by at least one of the following criteria:
     its loudness,
     its frequency range or its frequency extrema,
     its richness in timbre or octaves,
     at least one unknown speaker or voice,
     music,
     as high a freedom from interfering signals as possible,
     similar time intervals between similar acoustic events, and/or
     the opposite of the above selection criteria.
  14. Procédé selon l'une des revendications 1 à 13, dans lequel l'auxiliaire auditif (1) ou le traitement (300) des signaux peuvent être amenés en mode d'apprentissage dans lequel l'auxiliaire auditif (1) ou le traitement (300) des signaux apprennent des locuteurs inconnus.
  15. Procédé selon la revendication 14, dans lequel l'auxiliaire auditif (1) peut enregistrer en mode d'apprentissage une source acoustique de locuteurs inconnus par des microphones (200, 210) de telle sorte qu'un profil acoustique de la source acoustique du locuteur inconnu soit généré et conservé de préférence en permanence par l'auxiliaire auditif (1) ou de traitement (300) des signaux.
  16. Procédé selon l'une des revendications 1 à 15, dans lequel l'auxiliaire auditif (1) présente une interface de données par laquelle l'auxiliaire auditif (1) ou le traitement (300) de signaux peuvent recevoir des profils de locuteurs inconnus ou actualiser des profils de voix (P) connus.
  17. Procédé selon l'une des revendications 1 à 16, dans lequel l'auxiliaire auditif (1) est commandé par un élément de commande qui établit le mode de fonctionnement "locuteur" ou qui demande le mode d'apprentissage.
  18. Procédé selon la revendication 17, dans lequel l'élément de commande de l'auxiliaire auditif (1) est prévu au moins en partie sur l'auxiliaire auditif et/ou au moins en partie sur une commande à distance (10) de l'auxiliaire auditif (1).
  19. Procédé selon l'une des revendications 1 à 18, dans lequel le traitement (300) des signaux présente un mode de démixage (320) configuré comme module (320) de séparation de sources aveugle, qui sépare des signaux acoustiques électriques (312, 314; x1(t), x2(t), ..., xn(t)), et un module de post-processeur (330) par lequel le mode de fonctionnement "locuteur" est établi.
  20. Procédé selon la revendication 19, dans lequel l'intensité des signaux acoustiques électriques (322, 324; s'1(t), (s'2(t),) ..., s'n(t)) pour un signal acoustique électrique de sortie (332) du traitement (300) des signaux est réglée dans le module (330) de post-processeur.
  21. Procédé selon l'une des revendications 1 à 20, dans lequel le traitement (300) des signaux présente un module (310) de pré-processeur par lequel des signaux acoustiques électriques (202, 212; x1(t), x2(t), ..., xn(t)) sont préparés pour le module de démixage (320).
  22. Procédé selon l'une des revendications 1 à 21, dans lequel la base de données (340) est placée dans la commande à distance (10) de l'auxiliaire auditif (1) et la commande à distance (10) peut communiquer de préférence sans fil avec l'auxiliaire auditif (1).
  23. Procédé selon l'une des revendications 1 à 22, dans lequel la source acoustique de locuteurs (102; s1(t), s3(t)) est caractérisée dans le traitement (300) de signaux sous la forme de paramètres caractéristiques.
  24. Procédé selon l'une des revendications 1 à 23, dans lequel le mode de fonctionnement "locuteur" est conçu de telle sorte que dans le son de sortie (402; s" (t), s''1(t) + s"n(t)) de l'auxiliaire auditif (1), uniquement ou essentiellement uniquement la ou les sources acoustiques de locuteurs (102; s1(t), sn(t)) du son ambiant (100; 102, 104; s1(t), s2(t), ..., sn(t)) peuvent être détectés par l'utilisateur de l'auxiliaire auditif.
  25. Hearing aid for tracking and selecting a speaker acoustic source (102; s1(t), sn(t)) in an ambient sound (100; 102, 104; s1(t), s2(t), ..., sn(t)),
    the hearing aid (1) forming electrical acoustic signals (322, 324; 332; s'1(t), s'2(t), ..., s'n(t)) from the ambient sound (100; 102, 104; s1(t), s2(t), ..., sn(t)) by means of blind source separation and having a signal processing (300) which sets a "speaker" operating mode,
    a post-processor module (330) of the signal processing (300) identifying and selecting, from the separated electrical acoustic signals (322, 324; s'1(t), s'2(t), ..., s'n(t)), at least one electrical speaker signal (322; s'1(t), s'n(t)) by means of a database (340) which contains profiles (P) of preferred speakers,
    the electrical speaker signal or signals (322; s'1(t), s'n(t)) being able to be incorporated selectively into an output sound (402; s"(t), s"1(t) + s"n(t)) of the hearing aid (1) such that, for the hearing aid user, they stand out acoustically at least better than another acoustic source (104; s2(t)) and can thus be better perceived by the hearing aid user.
  26. Hearing aid according to claim 25, wherein the signal processing (300) of the hearing aid (1) is designed so as to be able to track, separately from one another, several mutually acoustically independent speaker acoustic sources (102; s1(t), sn(t)).
  27. Hearing aid according to claim 25 or 26, wherein the post-processor module (330) tracks the electrical speaker signal or signals (322; s'1(t), s'n(t)), selects them, and generates a corresponding electrical output signal (332) for the user (400) of the hearing aid (1), which delivers the output sound (402; s"(t), s"1(t) + s"3(t)) of the hearing aid (1).
  28. Hearing aid according to one of claims 25 to 27, wherein a comparison of the voice profiles (P) stored in the database (340) with the electrical acoustic signals (322, 324; s'1(t), s'2(t), ..., s'n(t)) can be carried out in the signal processing (300).
  29. Hearing aid according to one of claims 25 to 28, wherein a profile evaluation of the electrical acoustic signals (322, 324; s'1(t), s'2(t), ..., s'n(t)) takes place by means of the signal processing (300), so that an acoustic profile (P1(t), P2(t), ..., Pn(t)) can thereby be assigned to each acoustic signal (322, 324; s'1(t), s'2(t), ..., s'n(t)).
  30. Hearing aid according to claim 29, wherein the signal processing (300) makes it possible to match the voice profiles (P) stored in the database (340) against the acoustic profile (P1(t), P2(t), ..., Pn(t)).
  31. Hearing aid according to one of claims 25 to 30, wherein the hearing aid (1) or the signal processing (300) can be put into a learning mode in which the hearing aid (1) or the signal processing (300) learns unknown speakers.
  32. Hearing aid according to claim 31, wherein, in learning mode, the hearing aid (1) can record an acoustic source of an unknown speaker via microphones (200, 210) such that an acoustic profile of the unknown speaker's acoustic source is generated and preferably stored permanently by the hearing aid (1) or the signal processing (300).
  33. Hearing aid according to one of claims 25 to 32, wherein the hearing aid (1) has a data interface via which the hearing aid (1) or the signal processing (300) can receive profiles of unknown speakers or update known speaker profiles (P).
  34. Hearing aid according to one of claims 25 to 33, wherein the hearing aid (1) is controlled by an operating element which sets the "speaker" operating mode or which requests the learning mode.
  35. Hearing aid according to claim 34, wherein the operating element of the hearing aid (1) is provided at least partly on the hearing aid and/or at least partly on a remote control (10) of the hearing aid (1).
  36. Hearing aid according to one of claims 25 to 35, wherein two hearing devices (1) of the hearing aid (1), or a single hearing aid (1), have several microphones (200, 210) which record the ambient sound (100; 102, 104; s1(t), s2(t), ..., sn(t)) containing the speaker acoustic source or sources (102; s1(t), sn(t)), each of the microphones (200, 210) delivering an electrical output signal (202, 212; x1(t), x2(t), ..., xn(t)) to the signal processing (300).
  37. Hearing aid according to one of claims 25 to 36, wherein the signal processing (300) has a demixing module (320), configured as a blind source separation module (320), which separates electrical acoustic signals (202, 212; 312, 314; x1(t), x2(t), ..., xn(t)), and wherein the "speaker" operating mode of the hearing aid (1) can be set by the post-processor module (330).
  38. Hearing aid according to one of claims 25 to 37, wherein the loudness levels of the electrical acoustic signals (322, 324; s'1(t), (s'2(t),) ..., s'n(t)) are matched to one another in the post-processor module (330).
  39. Hearing aid according to one of claims 25 to 38, wherein the signal processing (300) has a pre-processor module (310) by which electrical acoustic signals (202, 212; x1(t), x2(t), ..., xn(t)) can be prepared for the demixing module (320).
  40. Hearing aid according to one of claims 25 to 39, wherein the database (340) is located in the remote control (10) of the hearing aid (1), and the remote control (10) can communicate with the hearing aid (1), preferably wirelessly.
  41. Hearing aid according to one of claims 25 to 40, the hearing aid (1) comprising a single hearing device or two hearing devices (1) and preferably the remote control (10).
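The claim set above describes a pipeline: blind source separation yields separated electrical signals, a post-processor derives an acoustic profile per signal (claim 29), matches it against stored preferred-speaker profiles P (claims 28, 30), and selectively boosts the matching signal in the output sound (claims 25, 27). The sketch below is a hypothetical, simplified illustration of that post-processor step only, not the patented implementation: `spectral_profile`, `match_profile`, and `postprocess` are invented names, the band-energy "profile" stands in for real voice features, and the separation stage itself is assumed to have already run.

```python
import numpy as np

def spectral_profile(signal, n_bands=16):
    """Crude acoustic 'profile': log mean magnitude in n_bands
    frequency bands (stand-in for the patent's speaker profiles P)."""
    spec = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spec, n_bands)
    return np.log1p(np.array([b.mean() for b in bands]))

def match_profile(profile, database):
    """Compare one profile against all stored profiles (cf. claims 28, 30);
    return the best-matching name and its cosine similarity."""
    best, best_sim = None, -1.0
    for name, ref in database.items():
        sim = float(np.dot(profile, ref) /
                    (np.linalg.norm(profile) * np.linalg.norm(ref) + 1e-12))
        if sim > best_sim:
            best, best_sim = name, sim
    return best, best_sim

def postprocess(separated, database, preferred, gain_db=12.0):
    """Boost separated signals matching a preferred speaker profile and
    attenuate the rest, then sum into one output (cf. claims 25, 27)."""
    gain = 10 ** (gain_db / 20.0)
    out = np.zeros_like(separated[0])
    for s in separated:
        name, sim = match_profile(spectral_profile(s), database)
        out += s * (gain if name in preferred and sim > 0.9 else 1.0 / gain)
    return out
```

In this toy form, a narrowband "speaker" mixed with broadband noise produces a matched profile (similarity near 1) and dominates the output, while the noise source falls below the similarity threshold and is attenuated.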
EP07117250.6A 2006-10-10 2007-09-26 Procédé pour le fonctionnement d'une prothèse auditive et prothèse auditive Not-in-force EP1912474B1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
DE102006047982A DE102006047982A1 (de) 2006-10-10 2006-10-10 Verfahren zum Betreiben einer Hörfilfe, sowie Hörhilfe

Publications (2)

Publication Number Publication Date
EP1912474A1 EP1912474A1 (fr) 2008-04-16
EP1912474B1 true EP1912474B1 (fr) 2015-11-11

Family

ID=38922434

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07117250.6A Not-in-force EP1912474B1 (fr) 2006-10-10 2007-09-26 Procédé pour le fonctionnement d'une prothèse auditive et prothèse auditive

Country Status (5)

Country Link
US (1) US8194900B2 (fr)
EP (1) EP1912474B1 (fr)
CN (1) CN101163354B (fr)
DE (1) DE102006047982A1 (fr)
DK (1) DK1912474T3 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3836567A1 (fr) 2019-12-13 2021-06-16 Sivantos Pte. Ltd. Procédé de fonctionnement d'un système auditif et système auditif

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11431312B2 (en) 2004-08-10 2022-08-30 Bongiovi Acoustics Llc System and method for digital signal processing
US10158337B2 (en) 2004-08-10 2018-12-18 Bongiovi Acoustics Llc System and method for digital signal processing
US10848118B2 (en) 2004-08-10 2020-11-24 Bongiovi Acoustics Llc System and method for digital signal processing
US10701505B2 (en) 2006-02-07 2020-06-30 Bongiovi Acoustics Llc. System, method, and apparatus for generating and digitally processing a head related audio transfer function
US10848867B2 (en) 2006-02-07 2020-11-24 Bongiovi Acoustics Llc System and method for digital signal processing
US9554061B1 (en) 2006-12-15 2017-01-24 Proctor Consulting LLP Smart hub
DE102008023370B4 (de) 2008-05-13 2013-08-01 Siemens Medical Instruments Pte. Ltd. Verfahren zum Betreiben eines Hörgeräts und Hörgerät
DK2262285T3 (en) * 2009-06-02 2017-02-27 Oticon As Listening device providing improved location ready signals, its use and method
CN102428716B (zh) * 2009-06-17 2014-07-30 松下电器产业株式会社 助听器装置
DE102009051508B4 (de) * 2009-10-30 2020-12-03 Continental Automotive Gmbh Vorrichtung, System und Verfahren zur Sprachdialogaktivierung und -führung
DK2352312T3 (da) * 2009-12-03 2013-10-21 Oticon As Fremgangsmåde til dynamisk undertrykkelse af omgivende akustisk støj, når der lyttes til elektriske input
DK2360943T3 (da) * 2009-12-29 2013-07-01 Gn Resound As Beamforming i høreapparater
US8369549B2 (en) * 2010-03-23 2013-02-05 Audiotoniq, Inc. Hearing aid system adapted to selectively amplify audio signals
DE102010026381A1 (de) * 2010-07-07 2012-01-12 Siemens Medical Instruments Pte. Ltd. Verfahren zum Lokalisieren einer Audioquelle und mehrkanaliges Hörsystem
BR112012031656A2 (pt) * 2010-08-25 2016-11-08 Asahi Chemical Ind dispositivo, e método de separação de fontes sonoras, e, programa
US9883318B2 (en) 2013-06-12 2018-01-30 Bongiovi Acoustics Llc System and method for stereo field enhancement in two-channel audio systems
US9906858B2 (en) 2013-10-22 2018-02-27 Bongiovi Acoustics Llc System and method for digital signal processing
US20150146099A1 (en) * 2013-11-25 2015-05-28 Anthony Bongiovi In-line signal processor
US10720153B2 (en) 2013-12-13 2020-07-21 Harman International Industries, Incorporated Name-sensitive listening device
US11310614B2 (en) 2014-01-17 2022-04-19 Proctor Consulting, LLC Smart hub
US10820883B2 (en) 2014-04-16 2020-11-03 Bongiovi Acoustics Llc Noise reduction assembly for auscultation of a body
US10575117B2 (en) 2014-12-08 2020-02-25 Harman International Industries, Incorporated Directional sound modification
CN105976829B (zh) * 2015-03-10 2021-08-20 松下知识产权经营株式会社 声音处理装置、声音处理方法
US9905244B2 (en) * 2016-02-02 2018-02-27 Ebay Inc. Personalized, real-time audio processing
US20170347348A1 (en) * 2016-05-25 2017-11-30 Smartear, Inc. In-Ear Utility Device Having Information Sharing
US9741360B1 (en) 2016-10-09 2017-08-22 Spectimbre Inc. Speech enhancement for target speakers
US10231067B2 (en) * 2016-10-18 2019-03-12 Arm Ltd. Hearing aid adjustment via mobile device
DE102017207581A1 (de) * 2017-05-05 2018-11-08 Sivantos Pte. Ltd. Hörsystem sowie Hörvorrichtung
IT201700073663A1 (it) * 2017-06-30 2018-12-30 Torino Politecnico Audio signal digital processing method and system thereof
WO2019199706A1 (fr) * 2018-04-10 2019-10-17 Acouva, Inc. Dispositif sans fil dans l'oreille avec communication mic à conduction osseuse
CN112236812A (zh) 2018-04-11 2021-01-15 邦吉欧维声学有限公司 音频增强听力保护系统
US10959035B2 (en) 2018-08-02 2021-03-23 Bongiovi Acoustics Llc System, method, and apparatus for generating and digitally processing a head related audio transfer function
EP3868128A2 (fr) * 2018-10-15 2021-08-25 Orcam Technologies Ltd. Systèmes de prothèse auditive et procédés
DE102020202483A1 (de) * 2020-02-26 2021-08-26 Sivantos Pte. Ltd. Hörsystem mit mindestens einem im oder am Ohr des Nutzers getragenen Hörinstrument sowie Verfahren zum Betrieb eines solchen Hörsystems
CN113766383B (zh) * 2021-09-08 2024-06-18 度小满科技(北京)有限公司 一种控制耳机静音的方法和装置
CN113825082B (zh) * 2021-09-19 2024-06-11 武汉左点科技有限公司 一种用于缓解助听延迟的方法及装置

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4032711A (en) 1975-12-31 1977-06-28 Bell Telephone Laboratories, Incorporated Speaker recognition arrangement
US4837830A (en) * 1987-01-16 1989-06-06 Itt Defense Communications, A Division Of Itt Corporation Multiple parameter speaker recognition system and methods
EP0472356B1 (fr) * 1990-08-16 1994-03-30 Fujitsu Ten Limited Appareil de reconnaissance de parole dans un véhicule, utilisant un arrangement de microphones pour déterminer le siège d'où provient une commande
US6327347B1 (en) 1998-12-11 2001-12-04 Nortel Networks Limited Calling party identification authentication and routing in response thereto
EP1017253B1 (fr) * 1998-12-30 2012-10-31 Siemens Corporation Séparation aveugle de sources pour prothèses auditives
AU2001261344A1 (en) 2000-05-10 2001-11-20 The Board Of Trustees Of The University Of Illinois Interference suppression techniques
JP3903105B2 (ja) 2000-05-23 2007-04-11 富士フイルム株式会社 動的変化検出方法、動的変化検出装置及び超音波診断装置
US7457426B2 (en) * 2002-06-14 2008-11-25 Phonak Ag Method to operate a hearing device and arrangement with a hearing device
EP1881738B1 (fr) 2002-06-14 2009-03-25 Phonak AG Procédé d'utilisation d'une prothèse auditive et assemblage avec une prothèse auditive
DE102004053790A1 (de) 2004-11-08 2006-05-18 Siemens Audiologische Technik Gmbh Verfahren zur Erzeugung von Stereosignalen für getrennte Quellen und entsprechendes Akustiksystem
US7319769B2 (en) * 2004-12-09 2008-01-15 Phonak Ag Method to adjust parameters of a transfer function of a hearing device as well as hearing device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3836567A1 (fr) 2019-12-13 2021-06-16 Sivantos Pte. Ltd. Procédé de fonctionnement d'un système auditif et système auditif

Also Published As

Publication number Publication date
CN101163354A (zh) 2008-04-16
US20080107297A1 (en) 2008-05-08
US8194900B2 (en) 2012-06-05
DE102006047982A1 (de) 2008-04-24
CN101163354B (zh) 2013-01-02
DK1912474T3 (da) 2016-02-22
EP1912474A1 (fr) 2008-04-16

Similar Documents

Publication Publication Date Title
EP1912474B1 (fr) Procédé pour le fonctionnement d'une prothèse auditive et prothèse auditive
EP1912472A1 (fr) Procédé pour le fonctionnement d'une prothèse auditive and prothèse auditive
DE69431037T2 (de) Hörgerät mit mikrofonumschaltungssystem
EP2077059B1 (fr) Procédé de fonctionnement d'une aide auditive et aide auditive
DE102019206743A1 (de) Hörgeräte-System und Verfahren zur Verarbeitung von Audiosignalen
DE10146886A1 (de) Hörgerät mit automatischer Umschaltung auf Hörspulenbetrieb
DE102011087984A1 (de) Hörvorrichtung mit Sprecheraktivitätserkennung und Verfahren zum Betreiben einer Hörvorrichtung
EP3104627B1 (fr) Procédé d'amélioration d'un signal d'enregistrement dans un système auditif
DE19721982A1 (de) Kommunikationssystem für Benutzer tragbarer Hörhilfen
EP1489885A2 (fr) Procédé pour l'opération d'une prothèse auditive aussi qu'une prothèse auditive avec un système de microphone dans lequel des diagrammes de rayonnement différents sont sélectionnables
EP3430819B1 (fr) Oreillette à microphones séparés pour recevoir de manière binaurale et téléphoner
EP3873108A1 (fr) Système auditif pourvu d'au moins un instrument auditif porté dans ou sur l'oreille de l'utilisateur, ainsi que procédé de fonctionnement d'un tel système auditif
DE102019200956A1 (de) Signalverarbeitungseinrichtung, System und Verfahren zur Verarbeitung von Audiosignalen
EP3337187A1 (fr) Procédé de fonctionnement d'un dispositif de correction auditive
DE102011085361A1 (de) Mikrofoneinrichtung
WO2008043758A1 (fr) Procédé d'utilisation d'une aide auditive et aide auditive
DE102006048295B4 (de) Verfahren und Vorrichtung zur Aufnahme, Übertragung und Wiedergabe von Schallereignissen für Kommunikationsanwendungen
EP0989775A1 (fr) Prothèse auditif avec dispositif de contrôle de la qualité d'un signal
EP2120484B1 (fr) Procédé destiné au fonctionnement d'un appareil auditif et appareil auditif
DE202019107201U1 (de) Binaurales Hörgerät für eine verbesserte räumliche Hörwahrnehmung
EP3836567B1 (fr) Procédé de fonctionnement d'un système auditif et système auditif
EP3585073A1 (fr) Procédé de commande de la transmission de données entre au moins un appareil auditif et un périphérique d'un système d'appareil auditif ainsi que système d'appareil auditif associé
DE102019208742B4 (de) Sprachübersetzungssystem zum Bereitstellen einer Übersetzung eines Spracheingabesignals eines Sprechers in ein anderssprachiges Sprachausgabesignal für einen Hörer sowie Übersetzungsverfahren für ein derartiges Sprachübersetzungssystem
DE102014210760B4 (de) Betrieb einer Kommunikationsanlage
DE102015212609A1 (de) Verfahren zum Betrieb eines Hörgerätesystems und Hörgerätesystem

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

17P Request for examination filed

Effective date: 20080710

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20150618

RIN1 Information on inventor provided before grant (corrected)

Inventor name: FROEHLICH, MATTHIAS

Inventor name: HAIN, JENS

Inventor name: STEINBUSS, ANDRE

Inventor name: FISCHER, EGHART

Inventor name: PUDER, HENNING

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SIVANTOS GMBH

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 760969

Country of ref document: AT

Kind code of ref document: T

Effective date: 20151215

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 502007014385

Country of ref document: DE

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative's name: E. BLUM AND CO. AG PATENT- UND MARKENANWAELTE, CH

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20160216

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20160211

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160311

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20160311

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 502007014385

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 10

26N No opposition filed

Effective date: 20160812

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160926

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160926

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

REG Reference to a national code

Ref country code: AT

Ref legal event code: MM01

Ref document number: 760969

Country of ref document: AT

Kind code of ref document: T

Effective date: 20160926

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20160930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160926

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20070926

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20151111

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20180921

Year of fee payment: 12

Ref country code: DE

Payment date: 20180924

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: PL

Payment date: 20180709

Year of fee payment: 6

Ref country code: CH

Payment date: 20180924

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DK

Payment date: 20190920

Year of fee payment: 13

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 502007014385

Country of ref document: DE

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190930

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190930

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200401

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20190926

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190930

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190926

REG Reference to a national code

Ref country code: DK

Ref legal event code: EBP

Effective date: 20200930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200930