EP2088802B1 - Method for estimating the weighting function of audio signals in a hearing aid - Google Patents


Publication number
EP2088802B1
Authority
EP
European Patent Office
Prior art keywords
time
signal
frequency
hearing aid
directional
Prior art date
Legal status
Active
Application number
EP08101366.6A
Other languages
English (en)
French (fr)
Other versions
EP2088802A1 (de)
Inventor
Thomas Bo Elmedyb
Karsten Bo Rasmussen
Ulrik Kjems
Michael Syskind Pedersen
Jesper Bünsow Boldt
Current Assignee
Oticon AS
Original Assignee
Oticon AS
Priority date
Filing date
Publication date
Application filed by Oticon AS filed Critical Oticon AS
Priority to DK08101366.6T (DK2088802T3)
Priority to EP08101366.6A (EP2088802B1)
Priority to US12/222,810 (US8204263B2)
Priority to AU2008207437 (AU2008207437B2)
Priority to CN2008101716047 (CN101505447B)
Publication of EP2088802A1
Application granted
Publication of EP2088802B1
Status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R25/407 Circuits for combining signals of a plurality of transducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/20 Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/406 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/021 Behind the ear [BTE] hearing aids
    • H04R2225/0216 BTE hearing aids having a receiver in the ear mould
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/45 Prevention of acoustic reaction, i.e. acoustic oscillatory feedback
    • H04R25/453 Prevention of acoustic reaction, i.e. acoustic oscillatory feedback electronically
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01 Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • This invention generally relates to generating an audible signal in a hearing aid. More particularly, the invention relates to a method of estimating and applying a weighting function to audio signals.
  • Sound signals arriving frontally at the ear are accentuated due to the shape of the pinna, which is the external portion of the ear. This effect is called directionality, and for the listener it improves the signal-to-noise ratio for sound signals arriving from the front direction compared to sound signals arriving from behind. Furthermore, the reflections from the pinna enhance the listener's ability to localize sounds. Sound localization may enhance speech intelligibility, which is important for distinguishing different sound signals such as speech signals, when sound signals from more than one direction in space are present. Localization cues used by the brain to localize sounds can be related to frequency dependent time and level differences of the sound signals entering the ear as well as reflections due to the shape of the pinna. E.g. at low frequencies, localization of sound is primarily determined by means of the interaural time difference.
  • For hearing aid users, good sound localization and speech intelligibility are often harder to obtain.
  • In behind-the-ear hearing aids, the hearing aid microphone is placed behind the external portion of the ear, and therefore sound signals coming from behind and from the sides are not attenuated by the pinna. This is an unnatural sensation for the hearing aid user, because the shape of the pinna would normally accentuate only sound signals coming frontally.
  • A hearing aid user's ability to localize sound decreases as the hearing aid microphone is placed further away from the ear canal and thereby the eardrum.
  • Sound localization may therefore be degraded in BTE hearing aids compared to hearing aids such as in-the-ear (ITE) or completely-in-the-canal (CIC) hearing aids, where the microphone is placed closer to or in the ear canal.
  • A directional microphone can be incorporated in hearing aids, e.g. in BTE hearing aids.
  • The directional microphone can be more sensitive towards sound signals arriving frontally at the ear of the hearing aid user and may therefore reproduce the natural function of the external portion of the ear; a directional microphone thus allows the hearing aid user to focus hearing primarily in the direction the user's head is facing.
  • The directional microphone allows the hearing aid user to focus on whoever is directly in front of him/her while at the same time reducing interference from sound signals, such as conversations, coming from the sides and from behind.
  • A directional microphone can therefore be very useful in crowded places, where many sound signals come from many directions and the hearing aid user wishes to hear only one person talking.
  • A directionality pattern or beamforming pattern may be obtained from at least two omni-directional microphones or at least one directional microphone in order to perform signal processing of the incoming sound signals in the hearing aid.
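One common way to obtain front- and rear-aiming directional signals from two omnidirectional microphones is the classic delay-and-subtract (first-order differential) technique. The sketch below is illustrative only, not the patent's specific circuit; the function name, the 12 mm microphone spacing, and the speed of sound are assumed values.

```python
import numpy as np

def cardioid_pair(front_mic, rear_mic, fs, mic_distance=0.012, c=343.0):
    """Form forward- and rear-facing first-order directional signals from
    two omnidirectional microphones by delay-and-subtract.  The forward
    beam has a null towards the rear, and vice versa."""
    delay = int(round(fs * mic_distance / c))  # inter-mic delay in samples
    n = len(front_mic)
    rear_delayed = np.concatenate([np.zeros(delay), rear_mic[:n - delay]])
    front_delayed = np.concatenate([np.zeros(delay), front_mic[:n - delay]])
    forward = front_mic - rear_delayed    # cancels sound arriving from behind
    backward = rear_mic - front_delayed   # cancels sound arriving from the front
    return forward, backward
```

For a frontal source whose wavefront hits the front microphone one inter-mic delay before the rear microphone, the backward beam cancels it completely, which is exactly the behaviour the mask estimation below relies on.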
  • EP1414268 relates to the use of an ITE microphone to estimate a transfer function between ITE microphone and other microphones in order to correct the misplacement of the other microphones and in order to estimate the arrival direction of impinging signals.
  • US2005/0058312 relates to different ways to combine three or more microphones in order to obtain directionality and reduce microphone noise.
  • D1 discloses a hearing device with at least one microphone to be placed behind the ear, an output converter, a further microphone and a beamformer unit.
  • One input of the beamformer unit is connected to the one microphone and the second input is connected to the further microphone.
  • The output of the beamformer unit is connected to the output converter and establishes, together with the one and the further microphones, a transfer characteristic whose amplification depends on the direction from which acoustical signals impinge on the microphones and on the frequency of such acoustical signals.
  • The invention aims at providing the hearing-device user with a transfer characteristic at least similar to that of the natural ear.
  • The hearing device may be part of a binaural hearing system.
  • EP1351544 discloses a method for producing a directional signal using two omnidirectional microphones.
  • US2005/0041824 relates to level dependent choice of directionality pattern.
  • A second-order directionality pattern provides better directionality than a first-order pattern, but at the cost of more microphone noise. However, at high sound levels this noise will be masked by the sound entering the hearing aid from the sides, and thus a choice between first- and second-order directionality can be made based on the sound level.
  • EP1005783 relates to estimating a direction-based time-frequency gain by comparing different beamformer patterns.
  • The time delay between two microphones can be used to determine a frequency weighting (filtering) of an audio signal.
  • EP1005783 describes comparing a directional signal obtained from at least two microphone signals with the amplitude of one of the microphone signals.
  • "Enhanced microphone-array beamforming based on frequency-domain spatial analysis-synthesis" by M.M. Goodwin describes a delay-and-sum beamforming system for distant-talking hands-free communication, where reverberation and interference from unwanted sound sources are a hindrance.
  • The system improves the spatial selectivity by forming multiple steered beams and carrying out a spatial analysis of the acoustic scene.
  • The analysis derives a time-frequency gain that, when applied to a reference look-direction beam, enhances target sources and improves rejection of interferers that are outside of the specified target region.
  • The direction-dependent time-frequency gain is estimated by comparing two directional signals with each other: the ratio between the powers of the envelopes of the two directional signals is maximized, since the directional signal aiming in the direction of the target signal aims at cancelling the noise sources, while the other directional signal aims at cancelling the target source and maintaining the noise sources.
  • Thereby the target and the noise/interferer signals are separated very well, and by maximizing the ratio between the front- and rear-aiming directional signals it is easier to control the weighting function, so that the sound localization and the speech intelligibility of the target speaker may be improved for the hearing aid user.
  • The hearing aid user may, for example, want to focus on listening to one person speaking while noise signals or interfering signals are present at the same time.
  • Two microphones, such as a front and a rear microphone, may be used.
  • The hearing aid user may turn his head in the direction from which the desired target source is coming.
  • The front microphone in the hearing aid may pick up the desired audio signals from the target source, while the rear microphone may pick up the undesired audio signals not coming from the target source.
  • In practice, audio signals will typically be mixed, and the problem is then to decide which sources contribute what to the incoming signal.
  • Time-frequency representations may be complex-valued fields over time and frequency, where the absolute value of the field represents "energy density" (the concentration of the root mean square over time and frequency) or amplitude, and the argument of the field represents phase.
  • The time-frequency mask may be estimated in the hearing aid which the user wears.
  • Alternatively, the time-frequency mask may be estimated in a device arranged externally relative to the hearing aid and located near the hearing aid user. It is an advantage that the estimated time-frequency mask may still be used in the hearing aid even though it is estimated in an external device, because the hearing aid and the external device may communicate with each other by means of a wired or wireless connection.
  • Using the time-frequency representation of the target signal and the noise signal to estimate a time-frequency mask comprises comparing the at least two directional signals with each other for each time-frequency coefficient in the time-frequency representation.
  • Using the estimated time-frequency mask to estimate the direction-dependent time-frequency gain comprises determining, based on said comparison, for each time-frequency coefficient, whether the time-frequency coefficient is related to the target signal or the noise signal.
  • The method may further comprise using the envelope of the time-frequency representation of the target signal and the noise signal to estimate a time-frequency mask, which comprises comparing the two envelopes of the directional signals with each other for each time-frequency envelope sample value.
  • Determining for each time-frequency coefficient whether it is related to the target signal or the noise signal comprises: determining the envelope of the time-frequency representation of the directional signals; determining the ratio of the power of the envelope of the directional signal in the direction of the target signal (the front direction) to the power of the envelope of the directional signal in the direction of the noise signal (the rear direction); assigning the time-frequency coefficient to the target signal if this ratio exceeds a given threshold; and assigning it to the noise signal otherwise.
  • This threshold is typically implemented as a relative power threshold, i.e. in units of dB.
  • An envelope could e.g. be the power of the absolute magnitude value of each time-frequency coefficient.
  • An advantage of this embodiment is that if the directional signal in the direction of the target signal for a given threshold exceeds the directional signal in the direction of the noise signal for a time-frequency coefficient, then this time-frequency coefficient is labelled as belonging to the target signal, and this time-frequency coefficient will be retained. If the directional signal in the direction of the noise signal exceeds the directional signal in the direction of the target signal for a time-frequency coefficient, then this time-frequency coefficient is labelled as belonging to the noise/interferer signal, and this time-frequency coefficient will be removed.
  • The direction-dependent time-frequency mask is binary: it is 1 for time-frequency coefficients belonging to the target signal and 0 for time-frequency coefficients belonging to the noise signal.
  • The pattern of assignments of time-frequency units as belonging either to the target or to the noise signal may be termed a binary mask. It is an advantage of this embodiment that the direction-dependent time-frequency mask is binary, because it simplifies the assignment of the time-frequency coefficients as belonging either to the target source or to a noise/interferer source. Hence, it allows a simple binary gain assignment, which may improve speech intelligibility for the hearing aid user when the gain is applied to the signal presented to the listener.
  • A criterion for defining the amount of target and noise/interferer signal must be applied. This criterion controls the number of retained and removed time-frequency coefficients.
  • A "0 dB signal-to-noise ratio" (SNR) criterion may be used, meaning that a time-frequency coefficient is labelled as belonging to the target signal if the power of the target signal envelope is simply larger than that of the noise/interferer signal envelope.
  • A criterion different from the "0 dB SNR" may also provide the same major improvement in speech intelligibility for the hearing aid user.
  • a criterion of 3 dB means that the level of the target has to be 3 dB higher than the noise.
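The envelope-ratio decision described in the bullets above can be sketched as follows. The function name and the use of squared STFT magnitudes as envelope powers are illustrative assumptions, not specifics from the patent; the `threshold_db` parameter corresponds to the decision criterion (0 dB, 3 dB, etc.).

```python
import numpy as np

def binary_tf_mask(front_tf, rear_tf, threshold_db=0.0):
    """Estimate a binary time-frequency mask by comparing the power
    envelopes of a front-aiming and a rear-aiming directional signal.

    front_tf, rear_tf: complex STFT coefficient arrays of equal shape
    (frequency bins x time frames).  A coefficient is assigned to the
    target (mask = 1) when the front-to-rear envelope power ratio in dB
    exceeds the threshold, otherwise to the noise (mask = 0)."""
    eps = 1e-12                        # guard against division by zero
    front_pow = np.abs(front_tf) ** 2  # envelope power per TF coefficient
    rear_pow = np.abs(rear_tf) ** 2
    ratio_db = 10.0 * np.log10((front_pow + eps) / (rear_pow + eps))
    return (ratio_db > threshold_db).astype(np.float64)
```

With `threshold_db=0.0` this reproduces the "0 dB SNR" criterion; raising it to 3.0 requires the target envelope to be 3 dB stronger before a coefficient is retained.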
  • A time-frequency gain estimated from the time-frequency mask can be multiplied onto the directional signal.
  • Thereby an enhancement on top of the directional signal can be achieved.
  • Low frequencies may be frequencies below 200 Hz, 300 Hz, 400 Hz, 500 Hz, 600 Hz or the like.
  • The time-frequency mask may be binary, but other forms of masks may also be provided. However, when a binary mask is provided, an interpretation and/or decision about what 0 and 1 mean may be required. 0 and 1 may be converted to a level measured in dB, such as a level enhancement, e.g. relative to a previously measured level.
  • The method may further comprise multiplying the estimated direction-dependent time-frequency gain onto a directional signal and processing and transmitting the output signal to the output transducer in the hearing aid at low frequencies. It is an advantage of this embodiment that the direction-dependent time-frequency gain is multiplied onto a directional signal, since applying it improves the directionality.
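Converting the binary mask into an applied gain might look like the sketch below. The attenuation applied to noise-labelled coefficients (`noise_atten_db`) and the function name are assumed illustrative parameters; the patent only says that 0 and 1 may be mapped to levels in dB.

```python
import numpy as np

def apply_mask_gain(directional_tf, mask, noise_atten_db=12.0):
    """Convert a binary time-frequency mask to a gain and apply it to a
    directional signal's STFT.  Target coefficients (mask = 1) pass with
    0 dB gain; noise coefficients (mask = 0) are attenuated by
    noise_atten_db decibels rather than removed outright."""
    noise_gain = 10.0 ** (-noise_atten_db / 20.0)   # dB -> linear amplitude
    gain = np.where(mask > 0.5, 1.0, noise_gain)    # per-coefficient gain
    return directional_tf * gain
```

Using a finite attenuation instead of a hard zero is a common practical choice to limit musical-noise artifacts; the patent itself only specifies retaining and removing coefficients.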
  • The time-frequency mask mainly relies on the time difference between the microphones. Whether the mask is estimated near the ear or a little further away, such as behind the ear, does not have much influence on the areas in time and frequency where the noise signal or the target signal dominates. Therefore, the directional signals from the two microphones, which are arranged in a behind-the-ear part of the hearing aid, can be used when estimating the weighting function, and audio signals may be processed in the hearing aid based on this.
  • The time-frequency mask may still be used in the hearing aid even though it is estimated in an external device located near the hearing aid user.
  • A temporal alignment of the gain and the signal to which the gain is applied may be provided.
  • The signal may be delayed relative to the gain in order to obtain the temporal alignment.
  • Smoothing, i.e. low-pass filtering, of the gain may be provided.
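The alignment and smoothing steps above could be sketched like this. The moving-average kernel, the frame counts, and the function name are assumptions for illustration; the patent does not specify the filter or delay values.

```python
import numpy as np

def smooth_and_align(gain, signal_tf, smooth_frames=3, delay_frames=1):
    """Low-pass filter (moving average) the time-frequency gain across
    time frames, and delay the signal by delay_frames frames so it lines
    up with the smoothed gain.  Both arrays have shape
    (frequency bins x time frames); the last axis is time."""
    kernel = np.ones(smooth_frames) / smooth_frames
    smoothed = np.apply_along_axis(
        lambda g: np.convolve(g, kernel, mode="same"), axis=-1, arr=gain)
    delayed = np.roll(signal_tf, delay_frames, axis=-1)
    delayed[..., :delay_frames] = 0.0   # zero the wrapped-around frames
    return smoothed, delayed
```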
  • The method may further comprise multiplying the estimated direction-dependent time-frequency gain onto a signal from one or more of the microphones, and processing and transmitting the output signal to the output transducer in the hearing aid at low frequencies.
  • The method may further comprise applying the estimated direction-dependent time-frequency gain to a signal from a third microphone, the third microphone being arranged near or in the ear canal, and processing and transmitting the output signal to the output transducer in the hearing aid at high frequencies.
  • An advantage of this embodiment is that the direction-dependent time-frequency gain is applied to a third microphone arranged near or in the ear canal, because at higher frequencies the location of the microphone is important for sound localization. At high frequencies, localization cues are maintained by using a microphone near or in the ear canal, because the microphone is thus placed close to the ear drum, which improves the hearing aid user's ability to localize sounds.
  • The hearing aid may comprise three microphones. Two microphones may be located behind the ear, while the third microphone is located much closer to the ear canal than the other two, e.g. as in an in-the-ear hearing aid.
  • The two microphones used for estimating the gain may be arranged in a device external to the hearing aid and the third microphone.
  • A further advantage of this embodiment is that, because these two microphones are the ones used in estimating the weighting function, microphone matching only needs to be performed between them, which simplifies the signal processing.
  • The direction-dependent time-frequency gain may be applied to the third microphone for all frequencies or for the higher frequencies in order to enhance directionality, while the direction-dependent time-frequency gain for the low frequencies may be applied to the directional signal from the microphones behind the ear or in the external device.
  • The third microphone may be a microphone near or in the ear canal, e.g. an in-the-ear microphone, or the like.
  • The method may further comprise applying the estimated direction-dependent time-frequency gain to one or more of the microphone signals from one or more of the microphones, and processing and transmitting the output signal to the output transducer in the hearing aid. It is an advantage to apply the direction-dependent time-frequency gain to one or more signals from the microphones for all frequencies, both high and low, since this may improve the audible signal generated in the hearing aid.
  • The directional signals are provided by means of at least two beamformers, where at least one of the beamformers is chosen from the group consisting of:
  • The estimated time-frequency gain is applied to a directional signal, where the directional signal aims at attenuating signals in the direction where the ratio between the transfer function of the front beamformer and the transfer function of the rear beamformer equals the decision threshold, i.e. in the direction of the decision boundary between the front-aiming and the rear-aiming beamformer.
  • The time-frequency mask estimate may be based on a weak decision.
  • The method may further comprise transmitting and interchanging the direction-dependent time-frequency masks between two hearing aids when the user is wearing one hearing aid on each ear.
  • Thereby two time-frequency masks may be provided.
  • The estimated time-frequency gains from these masks may be transmitted from one of the hearing aids to the other and vice versa.
  • The direction-dependent time-frequency gains measured in the two hearing aids may differ from each other due to microphone noise, microphone mismatch, head-shadow effects, etc., and consequently an advantage of this embodiment is that a joint binary mask estimation is more robust towards noise. By interchanging the binary direction-dependent time-frequency masks between the two ears, a better estimate of the binary gain may be obtained.
  • A further advantage is that by synchronizing the binary gain pattern at both ears, the localization cues are less disturbed than they would have been with different gain patterns at the two ears. Furthermore, only the binary mask values have to be transmitted between the ears, not the entire gains or audio signals, which simplifies the interchange and synchronization of the direction-dependent time-frequency gains.
  • The method may further comprise performing parallel comparisons of the difference between the target signal and the noise signal and merging the parallel comparisons between sets of different beam patterns.
  • The merging comprises applying functions between the different time-frequency masks, at least one of the functions being chosen from the group consisting of:
  • An advantage of this embodiment is that by applying functions such as OR, AND and/or a psychoacoustic model to the different estimates, an overall more robust binary gain estimate can be obtained.
  • A time-frequency mask provided by one of the two hearing aids may e.g. be used for both hearing aids, and the mask provided by the other hearing aid may thus be disregarded. Whether an OR or an AND function is used depends on the chosen comparison threshold.
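The interchange and merging of binary masks between the two ears can be sketched as below. The function name and the 0.5 decision level are illustrative assumptions; only the OR and AND merges named in the text are shown.

```python
import numpy as np

def merge_binaural_masks(mask_left, mask_right, mode="and"):
    """Merge the binary time-frequency masks estimated at the two ears so
    that both hearing aids apply a synchronized gain pattern.  'and'
    keeps a coefficient only if both ears label it target (conservative);
    'or' keeps it if either ear does (permissive)."""
    if mode == "and":
        return np.logical_and(mask_left > 0.5, mask_right > 0.5).astype(float)
    if mode == "or":
        return np.logical_or(mask_left > 0.5, mask_right > 0.5).astype(float)
    raise ValueError("mode must be 'and' or 'or'")
```

Because only these 0/1 values cross the ear-to-ear link, the payload per time-frequency unit is a single bit, consistent with the text's point that neither full gains nor audio need be transmitted.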
  • The present invention relates to different aspects including the method described above and in the following, and to corresponding methods, devices, and/or product means, each yielding one or more of the benefits and advantages described in connection with the first-mentioned aspect, and each having one or more embodiments corresponding to the embodiments described in connection with the first-mentioned aspect and/or disclosed in the appended claims.
  • A hearing aid adapted to be worn by a user is disclosed, according to claim 18.
  • Another embodiment according to claim 39 is disclosed. It is an advantage to use an external device for estimating time-frequency masks and then transmitting the masks to the hearing aid(s), since a hearing aid may then require only one microphone.
  • The external device may be a hand-held device.
  • The features of the method described above may be implemented in software and carried out on a data processing system or other processing means caused by the execution of computer-executable instructions.
  • The instructions may be program code means loaded into a memory, such as a RAM, from a storage medium or from another computer via a computer network.
  • The described features may be implemented by hardwired circuitry instead of software or in combination with software.
  • A computer program comprising program code means for causing a data processing system to perform the method is disclosed, when said computer program is executed on the data processing system.
  • A data processing system comprising program code means for causing the data processing system to perform the method is disclosed.
  • Fig. 1a shows a schematic view of a hearing aid user wearing a hearing aid with a number of input transducers, such as microphones.
  • The hearing aid is shown comprising a part away from the ear, such as a behind-the-ear (BTE) shell or part 101, and a part near or in the ear canal, such as an in-the-ear (ITE) part 102.
  • The part near or in the ear canal will be referred to as an ITE part, but it is understood that the part arranged near or in the ear canal is not limited to an ITE part and may be any kind of part arranged near or in the ear canal.
  • The part arranged away from or behind the ear will be referred to as a BTE part, but it is understood that the part arranged away from or behind the ear is not limited to a BTE part and may be any kind of part arranged away from or behind the ear.
  • The two parts may be connected by means of a wire 103.
  • The BTE part 101 may comprise two input transducers 104, 105, which may be arranged as a front microphone and a rear microphone, respectively, and the ITE part 102 may comprise one input transducer 106, such as a microphone.
  • Figure 1b shows a more detailed view of a hearing aid with three input transducers, e.g. microphones.
  • Two of the input transducers 204 and 205, e.g. microphones, may be arranged as a front and a rear microphone in the BTE shell behind the ear or pinna 210 of a user, as in a conventional BTE hearing aid.
  • A third input transducer 206, e.g. a microphone, may be arranged as an ITE microphone in an ear mould 207, such as a so-called micro mould, which may be connected to the BTE shell by means of e.g. a small wire 203.
  • The connection between the BTE shell and the ear mould may also be made by other means, such as a wireless connection, e.g. radio frequency communication, microwave communication, infrared communication, and/or the like.
  • An output transducer 208, e.g. a receiver or loudspeaker, may be comprised in the ear mould part 207 in order to transmit incoming sounds close to the eardrum 209. Even though only one output transducer is shown in fig. 1b, the hearing aid may comprise more than one output transducer. Alternatively, the hearing aid may comprise only two BTE microphones and no ITE microphone. Alternatively and/or additionally, the hearing aid may comprise more than two BTE microphones and/or more than one ITE microphone.
  • A signal processing unit may be comprised in the ear mould part in order to process the received audio signals.
  • A signal processing unit may be comprised in the BTE shell.
  • The sound presented to the hearing aid user may be a mixture of the signals from the three input transducers.
  • The input transducers in the BTE hearing aid part may be omnidirectional microphones.
  • The BTE input transducers may be any kind of microphone array providing a directional hearing aid, i.e. by providing directional signals.
  • The part near or in the ear canal may be referred to as the second module in the following.
  • The microphone in the second module may be an omni-directional microphone or a directional microphone.
  • The part behind the ear may comprise the signal processing unit and the battery in order to save space in the part near or in the ear canal.
  • The second module, adapted to be arranged at the ear canal, may be an ear insert, a plastic insert, and/or it may be shaped relative to the user's ear.
  • The second module may comprise a soft material.
  • The soft material may be shaped as a dome, a tip, a cap and/or the like.
  • The hearing aid may comprise communication means for communicating with a second hearing aid arranged at the other ear of the user.
  • Fig. 2 shows a flowchart of a method of generating an audible signal in a hearing aid.
  • A microphone matching system may be provided between steps 1 and 2.
  • A post-processing of the directional signals may be provided before the time-frequency mask is estimated in step 4.
  • A post-processing of the time-frequency mask may be provided before the gain is estimated in step 5.
  • Figure 3 shows how the signals from the three input transducers may be analysed, processed and combined before being transmitted to the output transducer.
  • A weighting function of the signals may be estimated in order to improve sound localization and thereby speech intelligibility for the hearing aid user.
  • A directional signal and a time-frequency direction-dependent gain can be estimated 301 from the two BTE microphones (mic 1 and mic 2), and a signal from the ITE microphone (mic 3) can be obtained 302.
  • The direction-dependent gain 303, calculated from the signals from the two BTE microphones, is fast-varying in time and frequency, and it may be binary. A reference on how a directional signal can be calculated is "Directional Patterns Obtained from Two or Three Microphones" by Stephen C. Thompson, Knowles Electronics, 2000.
  • These signals may be combined in different ways depending on the frequency, and the estimation of the weighting function may thus depend on whether the frequency is high or low.
  • The processed high- and low-frequency signals may be added and synthesized before being transmitted to the output transducer.
  • At low frequencies, the estimated direction-dependent time-frequency gain may be multiplied onto a directional signal 305 from the BTE microphones, and the output signal 306 may be processed and transmitted to the output transducer in the hearing aid 307.
  • Thereby the directionality can be improved. Since localization of sounds is primarily determined by the interaural time difference at low frequencies, and since at low frequencies the interaural time difference does not depend much on where at the ear the microphones are placed, the audio signals from the BTE microphones may be transmitted in the hearing aid at low frequencies.
  • The combination of the microphone signals from the BTE microphones may be a directional sound signal or an omni-directional sound signal. Furthermore, a sum of the two microphone signals may provide a better signal-to-noise ratio than e.g. a difference between the microphone signals.
  • The directionality may be further improved by multiplying the direction-dependent time-frequency gain onto the directional signal.
  • the estimated direction-dependent time-frequency gain may be applied to the signal 302 from the third microphone, the ITE microphone, and the output signal 309 may be processed and transmitted to the output transducer 307 in the hearing aid.
  • the location of the microphone is important for the sound localization, and at high frequencies, localization cues are better maintained by using an ITE microphone, because the microphone is thus placed closer to the ear drum, which improves the hearing aid user's ability to localize sounds. It is therefore possible to obtain directional amplification by means of the BTE microphones and still preserve binaural listening by processing sound signals very close to or in the ear canal close to the ear drum by means of the ITE microphone.
  • the direction-dependent time-frequency gain may be applied to the signal 302 from the ITE microphone for all frequencies or for the higher frequencies in order to enhance directionality, while the direction-dependent time-frequency gain for the low frequencies 304 may be applied to the directional signal 305 from the BTE microphones.
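The low-/high-frequency split described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation; the function name `combine_bands`, the array layout and the single split bin are hypothetical.

```python
import numpy as np

def combine_bands(gain, bte_directional, ite_signal, split_bin):
    """Apply a direction-dependent time-frequency gain and combine bands.

    All inputs are (frequency_bins, time_frames) arrays. Below split_bin
    the gain is applied to the directional signal from the BTE
    microphones (interaural time differences dominate localization there
    and depend little on microphone placement); from split_bin upward it
    is applied to the ITE microphone signal, preserving high-frequency
    localization cues close to the ear drum.
    """
    out = np.empty_like(ite_signal, dtype=float)
    out[:split_bin] = gain[:split_bin] * bte_directional[:split_bin]
    out[split_bin:] = gain[split_bin:] * ite_signal[split_bin:]
    return out
```

Applying the gain to the ITE signal at all frequencies, as the text also allows, corresponds to `split_bin = 0`.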
  • a hearing loss or hearing impairment may be accounted for in the hearing aid before transmitting the output signal to the user, and noise reduction and/or dynamic compression may also be provided in the hearing aid.
  • Figure 4 shows possible ways of comparing beamformer patterns in order to obtain a weighting function of the BTE microphone signals.
  • Fig. 4a shows a prior art method of comparing beamformer patterns
  • fig. 4b shows the method of the present invention for estimating the direction-dependent time-frequency gain by comparing beamformer patterns in the target and in the noise directions.
  • Time-frequency masking can be used to perform signal processing of the sound signals entering the microphones in a hearing aid.
  • the time-frequency (TF) masking technique is based on the time-frequency (TF) representation of signals, which makes it possible to analyse and exploit the temporal and spectral properties of signals.
  • using the TF representation of signals, it is possible to identify and divide sound signals into desired and undesired sound signals.
  • the desired sound signal can be the sound signal coming from a speaking person located in front of the hearing aid user.
  • Undesired sound signals may then be the sound signals coming from e.g. other speakers in the other directions, i.e. from the left, right and behind the hearing aid user.
  • the sound received by the microphone(s) in the hearing aid will be a mixture of all the sound signals, both the desired signal entering frontally and the undesired signals coming from the sides and from behind.
  • the microphone's directionality or polar pattern indicates the sensitivity of the microphone depending on the angle about its central axis from which the sound arrives.
  • the two BTE microphones, from which the beamformer patterns arise, may be omnidirectional microphones; one of the microphones may be a front microphone in the direction of a target signal, and the other microphone may be a rear microphone in the direction of a noise/interferer signal.
  • the hearing aid user may, for example, want to focus on listening to one person speaking, i.e. the target signal, while there is a noise signal or a signal which interferes at the same time, i.e. the noise/interferer signal.
  • a directional signal may be provided, and the hearing aid user may turn his head in the direction from which the desired target signal is coming.
  • the front microphone in the hearing aid may pick up the desired audio signals from the target source, and the rear microphone may pick up the undesired audio signals from the noise/interferer source, but the audio signals may be mixed, and the method of the present invention solves the problem of deciding which contribution to the incoming signal is made by which source. It may be assumed that two sound sources are present and separated in space.
  • beamformer output functions of the target signal and the noise signal can be obtained.
  • the distance between the two microphones will be smaller than the acoustic wavelength.
  • TF is used as an abbreviation of time-frequency.
  • some steps are applied to both the target and the noise signal: filtering through a k-point filterbank, squaring, low-pass filtering, and downsampling by a given factor. Assuming that the target and noise signals are uncorrelated, the four steps result in two directional signals, each containing the TF representation of the target and the noise signal.
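The four steps can be sketched in a toy form as follows, assuming an FFT-based filterbank, a moving-average low-pass filter and a downsampling factor of 4; the function name `directional_power` and all parameter values are hypothetical, not taken from the patent.

```python
import numpy as np

def directional_power(x, n_bands=8, frame=64, decimate=4):
    """Sketch of the four steps applied to one directional signal:
    filterbank analysis, squaring, low-pass filtering, downsampling."""
    n_frames = len(x) // frame
    frames = x[:n_frames * frame].reshape(n_frames, frame)
    # step 1: k-point filterbank (here: framed FFT, first n_bands bins)
    spec = np.fft.rfft(frames, axis=1)[:, :n_bands]
    # step 2: squaring to obtain per-unit power
    power = np.abs(spec) ** 2
    # step 3: low-pass filtering over time (moving average per band)
    kernel = np.ones(3) / 3.0
    smooth = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, power)
    # step 4: downsampling by the given factor
    return smooth[::decimate]
```

Running this on each of the two directional signals yields the two TF power representations that the mask estimation below compares.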
  • the direction-dependent TF mask can now be estimated using the two directional signals.
  • the TF mask is estimated by comparing the powers of the two directional signals and labelling each time-frequency (TF) coefficient as belonging either to the target signal or to the noise/interferer signal. This means that if the power of the directional signal in the direction of the target signal exceeds the power of the directional signal in the direction of the noise signal for a time-frequency coefficient, then this time-frequency coefficient is labelled as belonging to the target signal. If the power of the directional signal in the direction of the noise signal exceeds the power of the directional signal in the direction of the target signal, then this time-frequency coefficient is labelled as belonging to the noise/interferer signal, and this time-frequency coefficient will be removed.
  • the time-frequency (TF) coefficients are also known as TF units.
  • the direction-dependent time-frequency mask may be binary, and the direction-dependent time-frequency mask may be 1 for time-frequency coefficients belonging to the target signal, and 0 for time-frequency coefficients belonging to the noise signal.
  • when the direction-dependent time-frequency mask is binary, the assignment of the time-frequency coefficients as belonging either to the target source or to a noise/interferer source is both possible and simple. Hence, a binary mask can be estimated, which will improve speech intelligibility for the hearing aid user.
  • a criterion for defining the amount of target and noise/interferer signals must be applied, which controls the number of retained and removed time-frequency coefficients. Decreasing the SNR value corresponds to increasing the amount of noise in the processed signal and vice versa.
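The power comparison and the local SNR criterion above can be sketched as a per-unit threshold test; the function name `binary_mask` and the dB-valued threshold parameter are assumptions for illustration.

```python
import numpy as np

def binary_mask(p_target, p_noise, threshold_db=0.0):
    """Label each TF unit 1 (target) when the power of the target-aiming
    directional signal exceeds the power of the noise-aiming directional
    signal by the local SNR criterion, else 0.  Decreasing threshold_db
    retains more TF units, i.e. more noise in the processed signal."""
    t = 10.0 ** (threshold_db / 10.0)
    return (p_target > t * p_noise).astype(int)
```

With `threshold_db = 0` the test reduces to the plain power comparison of the previous bullet.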
  • SNR may also be defined as local SNR criterion or applied local SNR criterion.
  • the ratio between the two directional signals is maximized, since the directional signal in the direction of the target signal aims at cancelling the noise sources, while the other directional signal aims at cancelling the target source and maintaining the noise sources.
  • the target and the noise/interferer signals are separated very well, and by maximizing the ratio between the front-aiming and the rear-aiming directional signals it becomes easier to control the weighting function, e.g. the sparsity of the weighting function, and thereby sound localization and speech intelligibility are improved for the hearing aid user.
  • a sparse weighting function may contain only a few TF units that retain the target signal compared to the number of TF units that cancel the noise.
  • Fig. 5 shows a transmission of binary TF masks between the ears.
  • the direction-dependent time-frequency gains may be transmitted and interchanged between two hearing aids, when the user is wearing one hearing aid on each ear.
  • the direction-dependent time-frequency gains measured in the two hearing aids may differ from each other due to microphone noise, microphone mismatch, head-shadow effects etc., and a joint binary mask and estimation may therefore be more robust towards noise. So by interchanging the binary direction-dependent time-frequency mask between the two ears, a better estimate of the binary gain may be obtained.
  • the localization cues may not be disturbed, as they would have been with different gain patterns at the two ears. Only the binary gain values, and not the entire functions, may be transmitted between the ears, which simplifies the interchange and synchronization of the direction-dependent time-frequency gains.
  • frequent frame-by-frame transmission may be required when merging binary TF masks between the ears, due to possible transmission delay.
  • the joint mask may either not be completely time-aligned with the audio signal to which it is applied, or the signal has to be delayed in order to become time-aligned.
  • the transmission of TF masks between the ears may be performed by means of a wireless connection, such as radio frequency communication, microwave communication or infrared communication or by means of a small wire connection between the hearing aids.
  • Figure 6 shows merging of parallel comparisons between different beamformers.
  • Fig. 6a shows the beamformer patterns to compare. When several comparisons are made in parallel instead of just one, a more robust estimate of the binary mask is obtained, since each comparison has a direction in which its estimate is more robust than in other directions. Towards the directions with the biggest difference between the front and the rear signals, the binary gain estimates are very good and robust.
  • Fig. 6b shows how merging may be performed by applying AND/OR functions between the different direction-dependent time-frequency gains. By applying an OR or an AND function to the different estimates, an overall more robust binary gain estimate can be obtained. Alternatively, other suitable functions such as psychoacoustic functions may be applied. By having different beamformer patterns as seen in fig. 6a and fig. 6b it is possible to disregard or turn off certain sources, depending on the signals.
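The AND/OR merging of several parallel binary-mask estimates can be sketched as follows; the function name `merge_masks` and the `mode` parameter are illustrative assumptions.

```python
import numpy as np

def merge_masks(masks, mode="and"):
    """Merge binary TF masks from parallel beamformer comparisons.

    mode="and" keeps a TF unit only if every comparison labels it as
    target (conservative, suppresses more); mode="or" keeps it if any
    comparison labels it as target (retains more of the signal)."""
    stacked = np.stack(masks)
    if mode == "and":
        return stacked.min(axis=0)
    return stacked.max(axis=0)
```

The same AND-style merge applies when combining the binary masks interchanged between the two ears into a joint mask.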
  • Figs. 7a) and 7b) each show an example of the application of an estimated time-frequency gain to a directional signal, where the directional signal aims at attenuating signals in the direction of the decision boundary between the front-aiming and the rear-aiming beamformer.
  • the direction of the decision boundary is where the ratio between the transfer function of the front beamformer and the transfer function of the rear beamformer equals the decision threshold.
  • the first polar diagram in figs. 7a) and 7b) shows the decision threshold 701, the front-aiming beam pattern 702, the rear-aiming beam pattern 703 and the beam pattern with nulls aiming towards the weak decision 704.
  • the null direction of the beam former has the same direction as the binary decision threshold.
  • the time-frequency mask estimate is based on a weak decision.
  • the resulting time-frequency gain is multiplied to a directional signal, which aims at attenuating signals in the direction of the weak decision.
  • the second polar diagram in figs. 7a) and 7b) shows the resulting sensitivity pattern 705 after the time-frequency gain is applied to the directional signal.
  • an external device arranged externally in relation to the one or more hearing aids may perform the estimation of one or more of the time-frequency masks, and the one or more time-frequency masks may then be transmitted to the one or more hearing aids.
  • An advantage of using an external device to estimate the time-frequency mask is that only a single microphone may be required in each hearing aid, and this may save space in the hearing aids.
  • the external device may be a hand-held device, and the connection between the external device and the one or more hearing aids may be a wireless connection or a connection by means of a wire.


Claims (41)

  1. Method of generating an audible signal in a hearing aid by estimating a weighting function of received audio signals, wherein the hearing aid is adapted to be worn by a user and wherein the method comprises the following steps:
    - estimating a directional signal (305) by estimating a weighted sum of two or more microphone signals (302) from two or more microphones (104, 204, 105, 205, 106, 206), wherein a first microphone (104, 204) of the two or more microphones is a front microphone and wherein a second microphone (105, 205) of the two or more microphones is a rear microphone;
    - estimating a direction-dependent time-frequency gain (303), and
    - synthesizing an output signal (306, 309);
    wherein estimating the direction-dependent time-frequency gain (303) comprises:
    - obtaining at least two directional signals (305), each containing a time-frequency representation of a target signal and a noise signal, wherein a first one of the directional signals (305) is defined as a front-aiming signal and wherein a second one of the directional signals (305) is defined as a rear-aiming signal;
    - using the time-frequency representation of the target signal and the noise signal to estimate a time-frequency mask; and
    - using the estimated time-frequency mask to estimate the direction-dependent time-frequency gain (303).
  2. Method according to claim 1, wherein using the time-frequency representation of the target signal and the noise signal to estimate a time-frequency mask comprises comparing the at least two directional signals (305) with each other for each time-frequency coefficient in the time-frequency representation.
  3. Method according to claim 2, wherein using the estimated time-frequency mask to estimate the direction-dependent time-frequency gain (303) comprises determining for each time-frequency coefficient, based on the comparison, whether the time-frequency coefficient relates to the target signal or to the noise signal.
  4. Method according to any one of claims 1 to 3, further comprising:
    - obtaining an envelope for each time-frequency representation of the at least two directional signals (305);
    - using the envelopes of the time-frequency representation of the target signal and the noise signal to estimate the time-frequency mask.
  5. Method according to claim 4, wherein using the envelopes of the time-frequency representation of the target signal and the noise signal to estimate a time-frequency mask comprises comparing the two envelopes of the directional signals (305) with each other for each time-frequency envelope sample value.
  6. Method according to claim 4 or 5, wherein determining the envelope of a time-frequency representation comprises:
    - raising the absolute magnitude of each time-frequency coefficient to the p-th power, where p is a predefined value;
    - filtering the power-raised absolute magnitudes over time by using a predefined low-pass filter.
  7. Method according to any one of claims 4 to 6, wherein determining for each time-frequency coefficient whether the time-frequency coefficient relates to the target signal or to the noise signal comprises:
    - determining whether the ratio of the envelope of the time-frequency representation of the directional signal (305) in the direction of the target signal to the envelope of the directional signal (305) in the direction of the noise signal exceeds a predefined threshold;
    - assigning the time-frequency coefficient to the target signal if the ratio of the envelope of the directional signal (305) in the direction of the target signal to the envelope of the directional signal (305) in the direction of the noise signal exceeds the predefined threshold; and
    - assigning the time-frequency coefficient to the noise signal if the ratio of the envelope of the directional signal (305) in the direction of the target signal to the envelope of the directional signal (305) in the direction of the noise signal does not exceed the predefined threshold.
  8. Method according to any one of claims 1 to 7, wherein the time-frequency mask is a binary mask, the time-frequency mask being 1 for time-frequency coefficients belonging to the target signal and 0 for time-frequency coefficients belonging to the noise signal.
  9. Method according to any one of claims 1 to 8, further comprising multiplying the estimated direction-dependent time-frequency gain (303) onto a directional signal (305) and processing and transmitting the output signal (306) to an output transducer (208, 307) in the hearing aid at low frequencies (304).
  10. Method according to any one of claims 1 to 8, further comprising multiplying the estimated direction-dependent time-frequency gain (303) onto a signal (302) from one or more of the two or more microphones (104, 204, 105, 205, 106, 206) and processing and transmitting the output signal (306) to an output transducer (208, 307) in the hearing aid at low frequencies (304).
  11. Method according to any one of claims 1 to 8, further comprising applying the estimated direction-dependent time-frequency gain (303) to a signal (302) from a third microphone (106, 206), the third microphone (106, 206) being arranged in or near the ear canal, and processing and transmitting the output signal (309) to an output transducer (208, 307) in the hearing aid at high frequencies (308).
  12. Method according to any one of claims 1 to 8, further comprising applying the estimated direction-dependent time-frequency gain (303) to one or more of the microphone signals (302) from one or more of the two or more microphones (104, 204, 105, 205, 106, 206) and processing and transmitting the output signal (306, 309) to an output transducer (208, 307) in the hearing aid.
  13. Method according to any one of claims 1 to 12, wherein the directional signals (305) are provided by means of at least two beamformers, at least one of the beamformers being selected from the group consisting of:
    - fixed beamformers
    and
    - adaptive beamformers.
  14. Method according to claim 13, wherein the estimated time-frequency gain (303) is applied to a directional signal (305) which aims at attenuating signals in the direction of the decision boundary (701) between a front-aiming and a rear-aiming beamformer.
  15. Method according to any one of claims 1 to 14, further comprising transmitting and interchanging the time-frequency masks between two hearing aids when the user is wearing a hearing aid at each ear.
  16. Method according to any one of claims 1 to 15, further comprising performing comparisons of the differences between the target signal and the noise signal and merging the parallel comparisons between sets of different beam patterns (702, 703).
  17. Method according to claim 16, wherein the merging comprises applying functions between the different time-frequency masks, at least one of the functions being selected from the group consisting of:
    - AND functions
    - OR functions
    and
    - psychoacoustic models.
  18. Hearing aid adapted to be worn by a user, the hearing aid comprising one or more microphones (104, 204, 105, 205, 106, 206), a signal processing unit, one or more output transducers (208, 307) and processing means adapted to perform the method according to any one of claims 1 to 17, wherein a first module (101, 102) comprises at least one of the one or more microphones (104, 204, 105, 205, 106, 206).
  19. Hearing aid according to claim 18, wherein the first module (101) is adapted to be arranged behind the ear.
  20. Hearing aid according to claim 18, wherein the first module (102) is adapted to be arranged in or near the ear canal.
  21. Hearing aid according to claim 18, further comprising a second module (101, 102) which comprises at least one of the one or more microphones (104, 204, 105, 205, 106, 206).
  22. Hearing aid according to claim 21, wherein the first module (101) is adapted to be arranged behind the ear and the second module (102) is adapted to be arranged in or near the ear canal.
  23. Hearing aid according to claim 21 or 22, wherein the one or more microphones (104, 204, 105, 205, 106, 206) comprised in the second module (101, 102) is an omnidirectional microphone.
  24. Hearing aid according to claim 21 or 22, wherein the one or more microphones (104, 204, 105, 205, 106, 206) comprised in the second module (101, 102) is a directional microphone.
  25. Hearing aid according to claim 22, wherein the first module (101) further comprises the signal processing unit.
  26. Hearing aid according to claim 22, wherein the first module (101) further comprises a battery.
  27. Hearing aid according to claim 22 and any one of claims 23 to 26, wherein the second module (102), which is adapted to be arranged in or near the ear canal, further comprises the one or more output transducers (208, 307).
  28. Hearing aid according to claim 22 and any one of claims 23 to 27, wherein the second module (102), which is adapted to be arranged in or near the ear canal, is an in-the-ear device (207).
  29. Hearing aid according to claim 22 and any one of claims 23 to 28, wherein the second module (102), which is adapted to be arranged in or near the ear canal, is a micro in-the-ear device (207).
  30. Hearing aid according to claim 22 and any one of claims 23 to 29, wherein the second module (102), which is adapted to be arranged in or near the ear canal, is an earplug.
  31. Hearing aid according to claim 22 and any one of claims 23 to 30, wherein the second module (102), which is adapted to be arranged in or near the ear canal, is a plastic plug.
  32. Hearing aid according to claim 22 and any one of claims 23 to 31, wherein the second module (102), which is adapted to be arranged in or near the ear canal, is shaped relative to the user's ear.
  33. Hearing aid according to claim 22 and any one of claims 23 to 32, wherein the second module (102), which is adapted to be arranged in or near the ear canal, comprises a soft material.
  34. Hearing aid according to claim 33, wherein the soft material has the shape of a dome.
  35. Hearing aid according to claim 22 and any one of claims 23 to 34, wherein the first module (101), which is adapted to be arranged behind the ear, and the second module (102), which is adapted to be arranged in or near the ear canal, are connected by means of a wire (103).
  36. Hearing aid according to claim 22 and any one of claims 23 to 35, wherein the first module (101), which is adapted to be arranged behind the ear, is a behind-the-ear module.
  37. Hearing aid according to claim 22 and any one of claims 23 to 36, wherein the second module (102), which is adapted to be arranged in or near the ear canal, is an in-the-ear module.
  38. Hearing aid according to any one of claims 18 to 37, further comprising communication means for communicating with a second hearing aid arranged at another ear of the user.
  39. Device adapted to be arranged externally in relation to one or more hearing aids, the device comprising processing means adapted to perform the method according to any one of claims 1 to 17, and wherein the one or more estimated time-frequency masks are adapted to be transmitted to the one or more hearing aids.
  40. Computer program comprising program code means for causing a data processing system to perform the method according to any one of claims 1 to 17 when the computer program is executed on the data processing system.
  41. Data processing system comprising program code means for causing the data processing system to perform the method according to any one of claims 1 to 17.
EP08101366.6A 2008-02-07 2008-02-07 Verfahren zur Schätzung der Gewichtungsfunktion von Audiosignalen in einem Hörgerät Active EP2088802B1 (de)

Priority Applications (5)

Application Number Priority Date Filing Date Title
DK08101366.6T DK2088802T3 (da) 2008-02-07 2008-02-07 Fremgangsmåde til estimering af lydsignalers vægtningsfunktion i et høreapparat
EP08101366.6A EP2088802B1 (de) 2008-02-07 2008-02-07 Verfahren zur Schätzung der Gewichtungsfunktion von Audiosignalen in einem Hörgerät
US12/222,810 US8204263B2 (en) 2008-02-07 2008-08-15 Method of estimating weighting function of audio signals in a hearing aid
AU2008207437A AU2008207437B2 (en) 2008-02-07 2008-08-20 Method of estimating weighting function of audio signals in a hearing aid
CN2008101716047A CN101505447B (zh) 2008-02-07 2008-10-21 估计助听器中的音频信号加权函数的方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP08101366.6A EP2088802B1 (de) 2008-02-07 2008-02-07 Verfahren zur Schätzung der Gewichtungsfunktion von Audiosignalen in einem Hörgerät

Publications (2)

Publication Number Publication Date
EP2088802A1 EP2088802A1 (de) 2009-08-12
EP2088802B1 true EP2088802B1 (de) 2013-07-10

Family

ID=39563500


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104703106A (zh) * 2013-12-06 2015-06-10 奥迪康有限公司 用于免提通信的助听器装置

Families Citing this family (83)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8543390B2 (en) * 2004-10-26 2013-09-24 Qnx Software Systems Limited Multi-channel periodic signal enhancement system
US8744101B1 (en) * 2008-12-05 2014-06-03 Starkey Laboratories, Inc. System for controlling the primary lobe of a hearing instrument's directional sensitivity pattern
EP2262285B1 (de) * 2009-06-02 2016-11-30 Oticon A/S Hörvorrichtung mit verbesserten Lokalisierungshinweisen, deren Verwendung und ein Verfahren
EP2306457B1 (de) 2009-08-24 2016-10-12 Oticon A/S Automatische Tonerkennung basierend auf binären Zeit-Frequenz-Einheiten
EP2352312B1 (de) 2009-12-03 2013-07-31 Oticon A/S Verfahren zur dynamischen Unterdrückung von Umgebungsgeräuschen beim Hören elektrischer Eingänge
AU2010346384B2 (en) 2010-02-19 2014-11-20 Sivantos Pte. Ltd. Method for the binaural left-right localization for hearing instruments
EP2372700A1 (de) 2010-03-11 2011-10-05 Oticon A/S Sprachverständlichkeitsprädikator und Anwendungen dafür
EP2381700B1 (de) 2010-04-20 2015-03-11 Oticon A/S Signalhallunterdrückung mittels Umgebungsinformationen
EP2439958B1 (de) 2010-10-06 2013-06-05 Oticon A/S Verfahren zur Bestimmung von Parametern in einem adaptiven Audio-Verarbeitungsalgorithmus und Audio-Verarbeitungssystem
EP2463856B1 (de) 2010-12-09 2014-06-11 Oticon A/s Verfahren zur Reduzierung von Artefakten in Algorithmen mit schnell veränderlicher Verstärkung
US9589580B2 (en) 2011-03-14 2017-03-07 Cochlear Limited Sound processing based on a confidence measure
US10418047B2 (en) * 2011-03-14 2019-09-17 Cochlear Limited Sound processing with increased noise suppression
EP2503794B1 (de) 2011-03-24 2016-11-09 Oticon A/s Audioverarbeitungsvorrichtung, System, Verwendung und Verfahren
EP2519032A1 (de) 2011-04-26 2012-10-31 Oticon A/s System mit einer tragbaren elektronischen Vorrichtung mit Zeitfunktion
EP2528358A1 (de) 2011-05-23 2012-11-28 Oticon A/S Verfahren zur Identifizierung eines drahtlosen Kommunikationskanals in einem Tonsystem
EP2541973B1 (de) 2011-06-27 2014-04-23 Oticon A/s Rückkoppelungssteuerung in einer Hörvorrichtung
JP2013025757A (ja) * 2011-07-26 2013-02-04 Sony Corp 入力装置、信号処理方法、プログラム、および記録媒体
DK2560410T3 (da) 2011-08-15 2019-09-16 Oticon As Kontrol af udgangsmodulation i et høreinstrument
DK2563045T3 (da) 2011-08-23 2014-10-27 Oticon As Fremgangsmåde og et binauralt lyttesystem for at maksimere en bedre øreeffekt
EP2563044B1 (de) 2011-08-23 2014-07-23 Oticon A/s Verfahren, Hörvorrichtung und Hörsystem zur Maximierung eines Effekts des besseren Ohrs.
EP2574082A1 (de) 2011-09-20 2013-03-27 Oticon A/S Steuerung eines adaptiven Feedback-Abbruchsystems basierend auf der Sondensignaleingabe
EP2584794A1 (de) 2011-10-17 2013-04-24 Oticon A/S Für Echtzeitkommunikation mit räumlicher Informationsbereitstellung in einem Audiostrom angepasstes Hörsystem
JP6069830B2 (ja) 2011-12-08 2017-02-01 ソニー株式会社 耳孔装着型収音装置、信号処理装置、収音方法
US8638960B2 (en) 2011-12-29 2014-01-28 Gn Resound A/S Hearing aid with improved localization
EP2611218B1 (de) * 2011-12-29 2015-03-11 GN Resound A/S Hörgerät mit verbesserter Ortung
EP2613567B1 (de) 2012-01-03 2014-07-23 Oticon A/S Verfahren zur Verbesserung der langfristigen Rückkopplungspfadschätzung in einer Hörvorrichtung
EP2613566B1 (de) 2012-01-03 2016-07-20 Oticon A/S Hörvorrichtung und Verfahren zur Überwachung der Befestigung einer Ohrform einer Hörvorrichtung
WO2013135263A1 (en) 2012-03-12 2013-09-19 Phonak Ag Method for operating a hearing device as well as a hearing device
DK2663095T3 (da) 2012-05-07 2016-02-01 Starkey Lab Inc Høreapparat med fordelt bearbejdning i øreprop
US9746916B2 (en) 2012-05-11 2017-08-29 Qualcomm Incorporated Audio user interaction recognition and application interface
US20130304476A1 (en) 2012-05-11 2013-11-14 Qualcomm Incorporated Audio User Interaction Recognition and Context Refinement
EP2701145B1 (de) 2012-08-24 2016-10-12 Retune DSP ApS Geräuschschätzung zur Verwendung mit Geräuschreduzierung und Echounterdrückung in persönlicher Kommunikation
EP2750411B1 (de) * 2012-12-28 2015-09-30 GN Resound A/S Hörgerät mit verbesserter Lokalisation
US9148735B2 (en) 2012-12-28 2015-09-29 Gn Resound A/S Hearing aid with improved localization
EP2750410B1 (de) * 2012-12-28 2018-10-03 GN Hearing A/S Hörgerät mit verbesserter Lokalisation
US9338561B2 (en) 2012-12-28 2016-05-10 Gn Resound A/S Hearing aid with improved localization
US9148733B2 (en) 2012-12-28 2015-09-29 Gn Resound A/S Hearing aid with improved localization
EP2787746A1 (de) 2013-04-05 2014-10-08 Koninklijke Philips N.V. Vorrichtung und Verfahren zur Verbesserung der Hörbarkeit spezifischer Töne für einen Benutzer
US9100762B2 (en) 2013-05-22 2015-08-04 Gn Resound A/S Hearing aid with improved localization
EP2806660B1 (de) * 2013-05-22 2016-11-16 GN Resound A/S Hearing aid with improved localization
EP3214857A1 (de) * 2013-09-17 2017-09-06 Oticon A/s Hearing assistance device comprising an input transducer system
CN103686574A (zh) * 2013-12-12 2014-03-26 苏州市峰之火数码科技有限公司 Stereo electronic hearing aid
CN103824562B (zh) * 2014-02-10 2016-08-17 太原理工大学 Perceptual post-filter for speech based on a psychoacoustic model
EP3111672B1 (de) 2014-02-24 2017-11-15 Widex A/S Hearing aid with assisted noise suppression
EP2919484A1 (de) * 2014-03-13 2015-09-16 Oticon A/s Method for producing hearing aid mouldings
EP2928210A1 (de) * 2014-04-03 2015-10-07 Oticon A/s Binaural hearing assistance system comprising binaural noise reduction
CN104980869A (zh) * 2014-04-04 2015-10-14 Gn瑞声达A/S Hearing aid with improved localization of a monaural signal source
US9432778B2 (en) 2014-04-04 2016-08-30 Gn Resound A/S Hearing aid with improved localization of a monaural signal source
EP2928211A1 (de) * 2014-04-04 2015-10-07 Oticon A/s Self-calibration of a multi-microphone noise reduction system for hearing assistance devices using an auxiliary device
DK3057335T3 (en) 2015-02-11 2018-01-08 Oticon As Hearing system comprising a binaural speech intelligibility predictor
EP3057340B1 (de) * 2015-02-13 2019-05-22 Oticon A/s Partner microphone unit and hearing system comprising a partner microphone unit
CN107431869B (zh) 2015-04-02 2020-01-14 西万拓私人有限公司 Hearing device
CA3007511C (en) * 2016-02-04 2023-09-19 Magic Leap, Inc. Technique for directing audio in augmented reality system
US10616695B2 (en) * 2016-04-01 2020-04-07 Cochlear Limited Execution and initialisation of processes for a device
CN106019232B (zh) * 2016-05-11 2018-07-10 北京地平线信息技术有限公司 Sound source localization system and method
EP3285501B1 (de) * 2016-08-16 2019-12-18 Oticon A/s Hearing system comprising a hearing device and a microphone unit for picking up a user's own voice
US10469962B2 (en) * 2016-08-24 2019-11-05 Advanced Bionics Ag Systems and methods for facilitating interaural level difference perception by enhancing the interaural level difference
US10911877B2 (en) * 2016-12-23 2021-02-02 Gn Hearing A/S Hearing device with adaptive binaural auditory steering and related method
US10887691B2 (en) * 2017-01-03 2021-01-05 Koninklijke Philips N.V. Audio capture using beamforming
US11202159B2 (en) * 2017-09-13 2021-12-14 Gn Hearing A/S Methods of self-calibrating of a hearing device and related hearing devices
WO2019086433A1 (en) * 2017-10-31 2019-05-09 Widex A/S Method of operating a hearing aid system and a hearing aid system
EP3704872B1 (de) 2017-10-31 2023-05-10 Widex A/S Method of operating a hearing aid system and a hearing aid system
EP4236359A3 (de) 2017-12-13 2023-10-25 Oticon A/s Hearing device and binaural hearing system comprising a binaural noise reduction system
DK3503581T3 (da) * 2017-12-21 2022-05-09 Sonova Ag Noise reduction in an audio signal for a hearing device
US10827265B2 (en) * 2018-01-25 2020-11-03 Cirrus Logic, Inc. Psychoacoustics for improved audio reproduction, power reduction, and speaker protection
DK3525488T3 (da) * 2018-02-09 2020-11-30 Oticon As Hearing device comprising a beamformer filtering unit for reducing feedback
US11438712B2 (en) * 2018-08-15 2022-09-06 Widex A/S Method of operating a hearing aid system and a hearing aid system
WO2020035158A1 (en) * 2018-08-15 2020-02-20 Widex A/S Method of operating a hearing aid system and a hearing aid system
WO2020035778A2 (en) 2018-08-17 2020-02-20 Cochlear Limited Spatial pre-filtering in hearing prostheses
WO2020044166A1 (en) * 2018-08-27 2020-03-05 Cochlear Limited Integrated noise reduction
CN109839612B (zh) * 2018-08-31 2022-03-01 大象声科(深圳)科技有限公司 Sound source direction estimation method and device based on time-frequency masking and a deep neural network
DK3672282T3 (da) 2018-12-21 2022-07-04 Sivantos Pte Ltd Method for beamforming in a binaural hearing aid
EP3694229A1 (de) * 2019-02-08 2020-08-12 Oticon A/s Hearing device comprising a noise reduction system
US11062723B2 (en) * 2019-09-17 2021-07-13 Bose Corporation Enhancement of audio from remote audio sources
CN110996238B (zh) * 2019-12-17 2022-02-01 杨伟锋 Binaural synchronized signal processing hearing aid system and method
CN111128221B (zh) * 2019-12-17 2022-09-02 北京小米智能科技有限公司 Audio signal processing method, apparatus, terminal and storage medium
DK181045B1 (en) 2020-08-14 2022-10-18 Gn Hearing As Hearing device with in-ear microphone and related method
WO2022076404A1 (en) * 2020-10-05 2022-04-14 The Trustees Of Columbia University In The City Of New York Systems and methods for brain-informed speech separation
US11259139B1 (en) 2021-01-25 2022-02-22 Iyo Inc. Ear-mountable listening device having a ring-shaped microphone array for beamforming
US11636842B2 (en) 2021-01-29 2023-04-25 Iyo Inc. Ear-mountable listening device having a microphone array disposed around a circuit board
US11617044B2 (en) 2021-03-04 2023-03-28 Iyo Inc. Ear-mountable listening device with voice direction discovery for rotational correction of microphone array outputs
US11388513B1 (en) 2021-03-24 2022-07-12 Iyo Inc. Ear-mountable listening device with orientation discovery for rotational correction of microphone array outputs
CN114136434B (zh) * 2021-11-12 2023-09-12 国网湖南省电力有限公司 Anti-interference estimation method and system for substation boundary noise

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1351544A2 (de) * 2002-03-08 2003-10-08 Gennum Corporation Low-noise directional microphone system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5721783A (en) * 1995-06-07 1998-02-24 Anderson; James C. Hearing aid with wireless remote processor
US5751817A (en) * 1996-12-30 1998-05-12 Brungart; Douglas S. Simplified analog virtual externalization for stereophonic audio
EP0820210A3 (de) 1997-08-20 1998-04-01 Phonak Ag Method for the electronic beam forming of acoustic signals and acoustic sensor device
DE19810043A1 (de) * 1998-03-09 1999-09-23 Siemens Audiologische Technik Hearing aid with a directional microphone system
DE10249416B4 (de) 2002-10-23 2009-07-30 Siemens Audiologische Technik Gmbh Method for adjusting and operating a hearing aid, and hearing aid
DE10331956C5 (de) 2003-07-16 2010-11-18 Siemens Audiologische Technik Gmbh Hearing aid and method for operating a hearing aid with a microphone system in which different directional characteristics are adjustable
DE10334396B3 (de) 2003-07-28 2004-10-21 Siemens Audiologische Technik Gmbh Hearing aid and method for operating a hearing aid with a microphone system in which different directional characteristics are adjustable
EP1443798B1 (de) * 2004-02-10 2006-06-07 Phonak Ag Hearing device with a zoom function for the ear of an individual
US7688991B2 (en) * 2006-05-24 2010-03-30 Phonak Ag Hearing assistance system and method of operating the same
DK2055140T3 (da) * 2006-08-03 2011-02-21 Phonak Ag Method for fitting a hearing instrument


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104703106A (zh) * 2013-12-06 2015-06-10 奥迪康有限公司 Hearing aid device for hands-free communication
CN104703106B (zh) * 2013-12-06 2020-03-17 奥迪康有限公司 Hearing aid device for hands-free communication

Also Published As

Publication number Publication date
EP2088802A1 (de) 2009-08-12
US20090202091A1 (en) 2009-08-13
AU2008207437B2 (en) 2013-11-07
DK2088802T3 (da) 2013-10-14
CN101505447A (zh) 2009-08-12
US8204263B2 (en) 2012-06-19
AU2008207437A1 (en) 2009-08-27
CN101505447B (zh) 2013-11-06

Similar Documents

Publication Publication Date Title
EP2088802B1 (de) Method for estimating the weighting function of audio signals in a hearing aid
US10431239B2 (en) Hearing system
CN105872923B (zh) 包括双耳语音可懂度预测器的听力系统
Hamacher et al. Signal processing in high-end hearing aids: State of the art, challenges, and future trends
US20100002886A1 (en) Hearing system and method implementing binaural noise reduction preserving interaural transfer functions
EP2899996B1 (de) Signalverbesserung mittels drahtlosem Streaming
CN107071674B (zh) 配置成定位声源的听力装置和听力系统
US10070231B2 (en) Hearing device with input transducer and wireless receiver
EP3761671B1 (de) Hörgerät mit adaptiver teilbandstrahlformung und entsprechendes verfahren
JP2019531659A (ja) バイノーラル補聴器システムおよびバイノーラル補聴器システムの動作方法
JP2018186494A (ja) 適応型サブバンドビームフォーミングを用いた聴覚装置と関連する方法
CN108243381B (zh) 具有自适应双耳听觉引导的听力设备和相关方法
US11617037B2 (en) Hearing device with omnidirectional sensitivity
EP4178221A1 (de) Hörgerät oder system mit einem rauschsteuerungssystem
US20230080855A1 (en) Method for operating a hearing device, and hearing device
EP4277300A1 (de) Hörgerät mit adaptiver teilbandstrahlformung und zugehöriges verfahren
CN115314820A (zh) 配置成选择参考传声器的助听器
Neher et al. The influence of hearing-aid microphone location and room reverberation on better-ear effects
JP2013153427A (ja) 周波数アンマスキング機能を有する両耳用補聴器

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

17P Request for examination filed

Effective date: 20100212

17Q First examination report despatched

Effective date: 20100322

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 621537

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130715

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602008025866

Country of ref document: DE

Effective date: 20130905

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20131007

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 621537

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130710

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20130710

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131110

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131010

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130807

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131111

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131021

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131011

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

26N No opposition filed

Effective date: 20140411

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602008025866

Country of ref document: DE

Effective date: 20140411

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140207

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140207

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 9

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20080207

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 10

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230127

Year of fee payment: 16

Ref country code: DK

Payment date: 20230127

Year of fee payment: 16

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240201

Year of fee payment: 17

Ref country code: CH

Payment date: 20240301

Year of fee payment: 17

Ref country code: GB

Payment date: 20240201

Year of fee payment: 17