WO2020035158A1 - Method of operating a hearing aid system, and a hearing aid system - Google Patents


Publication number
WO2020035158A1
Authority
WO
WIPO (PCT)
Prior art keywords
microphone
hearing aid
resultant length
determining
input signal
Prior art date
Application number
PCT/EP2018/081502
Other languages
English (en)
Inventor
Pejman Mowlaee
Lars Dalskov Mosgaard
Thomas Bo Elmedyb
Michael Johannes Pihl
Georg Stiefenhofer
David PELEGRIN-GARCIA
Adam Westermann
Original Assignee
Widex A/S
Priority date
Filing date
Publication date
Priority claimed from DKPA201800462A external-priority patent/DK201800462A1/en
Application filed by Widex A/S filed Critical Widex A/S
Priority to US17/268,144 priority Critical patent/US11438712B2/en
Priority to EP18807579.0A priority patent/EP3837861B1/fr
Publication of WO2020035158A1 publication Critical patent/WO2020035158A1/fr


Classifications

    • H Electricity
    • H04 Electric communication technique
    • H04R Loudspeakers, microphones, gramophone pick-ups or like acoustic electromechanical transducers; deaf-aid sets; public address systems
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; electric tinnitus maskers providing an auditory perception
    • H04R25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R25/407 Circuits for combining signals of a plurality of transducers
    • H04R25/55 Deaf-aid sets using an external connection, either wireless or wired
    • H04R25/552 Binaural
    • H04R25/554 Deaf-aid sets using a wireless connection, e.g. between microphone and amplifier or using Tcoils
    • H04R25/70 Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41 Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
    • H04R2225/43 Signal processing in hearing aids to enhance the speech intelligibility
    • H04R2460/00 Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/01 Hearing devices using active noise cancellation
    • H04S Stereophonic systems
    • H04S1/00 Two-channel systems
    • H04S1/002 Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S1/005 For headphones
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01 Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • the present invention relates to a method of operating a hearing aid system.
  • the present invention also relates to a hearing aid system adapted to carry out said method.
  • a hearing aid system is understood as meaning any device which provides an output signal that can be perceived as an acoustic signal by a user or contributes to providing such an output signal, and which has means which are customized to compensate for an individual hearing loss of the user or contribute to compensating for the hearing loss of the user.
  • They are, in particular, hearing aids which can be worn on the body or by the ear, in particular on or in the ear, and which can be fully or partially implanted.
  • some devices whose main aim is not to compensate for a hearing loss may also be regarded as hearing aid systems, for example consumer electronic devices (televisions, hi-fi systems, mobile phones, MP3 players etc.), provided that they have means for compensating for an individual hearing loss.
  • a traditional hearing aid can be understood as a small, battery-powered, microelectronic device designed to be worn behind or in the human ear by a hearing-impaired user.
  • Prior to use, the hearing aid is adjusted by a hearing aid fitter according to a prescription.
  • the prescription is based on a hearing test, resulting in a so-called audiogram, of the performance of the hearing-impaired user’s unaided hearing.
  • the prescription is developed to reach a setting where the hearing aid will alleviate a hearing loss by amplifying sound at frequencies in those parts of the audible frequency range where the user suffers a hearing deficit.
  • a hearing aid comprises one or more microphones, a battery, a microelectronic circuit comprising a signal processor, and an acoustic output transducer.
  • the signal processor is preferably a digital signal processor.
  • the hearing aid is enclosed in a casing suitable for fitting behind or in a human ear.
  • a hearing aid system may comprise a single hearing aid (a so-called monaural hearing aid system) or comprise two hearing aids, one for each ear of the hearing aid user (a so-called binaural hearing aid system).
  • the hearing aid system may comprise an external device, such as a smart phone having software applications adapted to interact with other devices of the hearing aid system or such as an independent external microphone with wireless link means.
  • the term “hearing aid system device” may denote a hearing aid or an external device.
  • In Behind-The-Ear (BTE) hearing aids, an electronics unit comprising a housing containing the major electronics parts thereof is worn behind the ear.
  • An earpiece for emitting sound to the hearing aid user is worn in the ear, e.g. in the concha or the ear canal.
  • a sound tube is used to convey sound from the output transducer, which in hearing aid terminology is normally referred to as the receiver and is located in the housing of the electronics unit, to the ear canal.
  • a conducting member comprising electrical conductors conveys an electric signal from the housing to a receiver placed in the earpiece in the ear.
  • Such hearing aids are commonly referred to as Receiver-In-The-Ear (RITE) hearing aids.
  • RITE hearing aids may also be referred to as Receiver-In-Canal (RIC) hearing aids.
  • In-The-Ear (ITE) hearing aids are designed for arrangement in the ear, normally in the funnel-shaped outer part of the ear canal.
  • In a specific type of ITE hearing aid, the hearing aid is placed substantially inside the ear canal. This category is sometimes referred to as Completely-In-Canal (CIC) hearing aids.
  • Hearing loss of a hearing-impaired person is quite often frequency-dependent. This means that the hearing loss of the person varies depending on the frequency. Therefore, when compensating for hearing losses, it can be advantageous to utilize frequency-dependent amplification. Hearing aids therefore often split an input sound signal received by an input transducer of the hearing aid into various frequency intervals, also called frequency bands, which are independently processed. In this way, it is possible to adjust the input sound signal of each frequency band individually to account for the hearing loss in respective frequency bands.
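As a concrete illustration of the frequency-band processing described above, the sketch below splits a short signal into bands by grouping DFT bins and applies an individual gain to each band. The band count, band layout and gain values are illustrative assumptions, not taken from the patent.

```python
import cmath
import math

def dft(x):
    """Naive DFT, adequate for a short illustrative signal."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part (the input signal was real)."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def apply_band_gains(x, gains_db):
    """Split the signal into frequency bands (here simply groups of DFT bins)
    and apply an individual gain to each band, mirroring the per-band
    amplification described above."""
    X = dft(x)
    n = len(X)
    n_bands = len(gains_db)
    out = []
    for k, Xk in enumerate(X):
        f = min(k, n - k)  # folded frequency index (mirror bins share a band)
        band = min(f * n_bands // (n // 2 + 1), n_bands - 1)
        out.append(Xk * 10 ** (gains_db[band] / 20))
    return idft(out)
```

With two bands and gains of 0 dB and 20 dB, a high-frequency tone is amplified tenfold while a low-frequency tone passes unchanged, which is the essence of compensating a frequency-dependent loss.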
  • the invention, in a first aspect, provides a method of operating a hearing aid system comprising the steps of:
  • This provides an improved method of operating a hearing aid system with respect to noise reduction.
  • the invention, in a second aspect, provides a hearing aid system comprising a first and a second microphone, a sound estimator, a digital signal processor and an electrical-acoustical output transducer, wherein the sound estimator is adapted to:
  • and wherein the digital signal processor is configured to:
  • the first input signal is at least derived from the output from the first microphone
  • the second input signal is at least derived from the output from the second microphone
  • the third input signal is at least derived from at least one of the outputs from the first microphone, the second microphone and a third microphone.
  • the invention, in a third aspect, provides a non-transitory computer readable medium carrying instructions which, when executed by a computer, cause the following method to be performed:
  • the invention, in a fourth aspect, provides an internet server comprising a downloadable application that may be executed by a personal communication device, wherein the downloadable application is adapted to cause the following method to be performed:
  • Fig. 1 illustrates highly schematically a binaural hearing aid system according to an embodiment of the invention
  • Fig. 2 illustrates highly schematically a hearing aid system according to an embodiment of the invention.
  • the terms noise reduction and noise suppression, and likewise single channel noise reduction and single channel noise suppression, may be used interchangeably in the following.
  • these methods, which are the subject of the present disclosure, are distinguished from e.g. beamforming (which may also be named spatial processing) in that the noise suppression is provided by applying a frequency dependent gain (adapted to provide noise suppression) to a single signal (i.e. channel), as opposed to beamforming methods, which provide another type of noise suppression that is achieved by combining at least two signals.
  • said single signal may very well be the result of a spatial processing.
  • the noise suppression methods considered in the following are single-channel methods even if not explicitly named as such.
  • the inventors have found that sound signal noise reduction may be improved by considering unbiased sound environment characteristics that are based on the inter-microphone phase difference (IPD) for a set of microphones as will be explained in further details below.
  • The average of a multitude of IPD estimates (which in the following may also be named IPD samples) may be given by:

$$\langle A e^{i\theta} \rangle = \frac{1}{n} \sum_{i=1}^{n} A_i e^{i\theta_i} = R_A e^{i\Theta_A} \qquad \text{(eq. 1)}$$

  • where $\langle \cdot \rangle$ is the average operator,
  • $n$ represents the number of IPD samples used for the averaging,
  • $R_A$ is an averaged amplitude that depends on the phase and that may assume values in the interval $[0, \langle A \rangle]$, and
  • $\Theta_A$ is the weighted mean phase. It can be seen that the amplitude $A_i$ of each individual sample weights each corresponding phase $\theta_i$ in the averaging. Therefore, both the averaged amplitude $R_A$ and the weighted mean phase $\Theta_A$ are biased (i.e. dependent on each other).
  • the amplitude weighting providing the weighted mean phase $\Theta_A$ will generally result in the weighted mean phase $\Theta_A$ being different from the unbiased mean phase $\Theta$ that is defined by:

$$\langle e^{i\theta} \rangle = \frac{1}{n} \sum_{i=1}^{n} e^{i\theta_i} = R e^{i\Theta} \qquad \text{(eq. 2)}$$

  • where, as in equation (1), $\langle \cdot \rangle$ is the average operator and $n$ represents the number of IPD samples used for the averaging. It follows that the unbiased mean phase $\Theta$ can be estimated by averaging a multitude of IPD samples.
  • $R$ is named the mean resultant length; it provides information on how closely the individual phase estimates $\theta_i$ are grouped together, and the circular variance $V$ and the mean resultant length $R$ are related by:

$$V = 1 - R \qquad \text{(eq. 3)}$$
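The difference between the amplitude-weighted statistics of eq. 1 and the unbiased statistics of eq. 2 can be illustrated in a few lines; the sample values below are invented for illustration:

```python
import cmath

def biased_stats(ipd_samples):
    """Eq. 1 style: plain average of complex IPD samples A_i * exp(i*theta_i).
    Returns (R_A, Theta_A); both depend on the sample amplitudes (biased)."""
    m = sum(ipd_samples) / len(ipd_samples)
    return abs(m), cmath.phase(m)

def unbiased_stats(ipd_samples):
    """Eq. 2 style: average of the unit-magnitude phase factors exp(i*theta_i)
    only. Returns the mean resultant length R in [0, 1] and the unbiased mean
    phase Theta; the circular variance is then V = 1 - R (eq. 3)."""
    m = sum(z / abs(z) for z in ipd_samples) / len(ipd_samples)
    return abs(m), cmath.phase(m)
```

For perfectly grouped phases the mean resultant length is 1 regardless of the amplitudes, i.e. the circular variance V = 1 - R is zero, while the eq. 1 average still carries the amplitude bias.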
  • the unbiased mean phase and the mean resultant length according to the present invention can be determined purely based on input signals, and the approach is as such highly flexible with respect to its use in hearing aid systems with different numbers and positions of microphones, considering e.g. hearing aid systems that are monaural or binaural and may comprise additional microphones in external devices.
  • the mean resultant length may be used to improve noise reduction in a variety of different manners and for a variety of different systems. It is therefore worth emphasizing that the mean resultant length need not be determined using only two microphone signals to provide the IPD samples; additional microphone signals may be included, whereby enhanced noise reduction performance can be achieved in certain situations.
  • regarding the dynamics of the input signals (i.e. the sound environment), the two main sources of dynamics are the temporal and spatial dynamics of the sound environment.
  • in speech, the duration of a short consonant may be as short as only 5 milliseconds, while long vowels may have a duration of up to 200 milliseconds, depending on the specific sound.
  • the spatial dynamics is a consequence of relative movement between the hearing aid user and surrounding sound sources.
  • generally, speech is considered quasi-stationary for a duration in the range between say 20 and 40 milliseconds, and this includes the impact from spatial dynamics.
  • it is advantageous if the duration of the involved time windows is as long as possible, but it is, on the other hand, detrimental if the duration is so long that it covers natural speech variations or spatial variations and therefore cannot be considered quasi-stationary. Furthermore, it is noted that the quasi-stationarity is generally frequency dependent.
  • a first time window is defined by the transformation of the digital input signals into the time-frequency domain, and the longer the duration of the first time window, the higher the frequency resolution in the time-frequency domain, which obviously is advantageous. Additionally, the present invention requires that the final estimate of an inter-microphone phase difference is based on a calculation of an expectation value, and it has been found that the number of individual samples used for calculation of the expectation value preferably is at least 5.
  • the combined effect of the first time window and the calculation of the expectation value provides an effective time window that is shorter than 40 milliseconds or in the range between 5 and 200 milliseconds such that the sound environment in most cases can be considered quasi-stationary.
  • improved accuracy of the unbiased mean phase or the mean resultant length may be provided by obtaining a multitude of successive samples of the unbiased mean phase and the mean resultant length, in the form of complex numbers, using the methods according to the present invention, and subsequently adding these successive estimates (i.e. the complex numbers) and normalizing the result of the addition by the number of added estimates.
  • This embodiment is particularly advantageous in that the mean resultant length effectively weights the samples that have a high probability of comprising a target source, while estimates with a high probability of mainly comprising noise will have a negligible impact on the final value of the unbiased mean phase of the inter-microphone phase difference, because such samples are characterized by a low value of the mean resultant length.
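A minimal sketch of this averaging of successive complex estimates; the sample values are invented, and the weighting by the mean resultant length falls out of the complex addition automatically:

```python
import cmath

def combine_estimates(estimates):
    """Add successive complex estimates R_j * exp(i*Theta_j) and normalize by
    their count. Estimates with a small mean resultant length (likely noise)
    contribute little; estimates with R near 1 dominate the result."""
    m = sum(estimates) / len(estimates)
    return abs(m), cmath.phase(m)
```

With two consistent target-like estimates (R around 0.9) and one noise-dominated estimate (R = 0.05, arbitrary phase), the combined phase stays close to the target phase.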
  • only some (i.e. at least one, but not necessarily all) of the successive complex numbers representing the unbiased mean phase and the mean resultant length are used for improving the estimation of the inter-microphone phase difference, wherein the selection of the complex numbers to be used is based on an evaluation of the corresponding mean resultant length (i.e. the variance) such that only complex numbers representing a high mean resultant length are considered.
  • the estimation of the unbiased mean phase of the inter-microphone phase difference is additionally based on an evaluation of the value of the individual samples of the unbiased mean phase, such that only samples representing the same target source are combined.
  • the angular direction of a target source, which may also be named the direction of arrival (DOA), is derived from the unbiased mean phase, and this may be used to select directions from which the user wants to listen, whereby a corresponding noise reduction may become direction-dependent.
  • DOA may also be determined using methods well known in the art that are not based on the unbiased mean phase.
  • speech detection may be used as input to determine a preferred unbiased mean phase (which is readily transformed to a direction of arrival (DOA)) e.g. by giving preference to target sources positioned at least approximately in front of the hearing aid system user, when speech is detected.
  • the knowledge of the direction of arrival is used to suppress sound signals from targets positioned in unwanted directions, such as the back half-plane of the hearing aid system user, even if sound signals from the targets contain speech, whereby the hearing aid system user is allowed to focus on speakers in the front half-plane.
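A sketch of how a direction of arrival could be derived from an unbiased mean phase under a simple far-field, free-field model. The patent does not specify this mapping; head diffraction is ignored, and the microphone spacing used in the usage example is an assumption:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, room-temperature assumption

def doa_from_phase(theta, freq_hz, mic_distance_m):
    """Far-field, free-field direction of arrival (degrees, 0 = broadside /
    frontal source) from an unbiased mean phase difference theta observed at
    frequency freq_hz for a microphone pair spaced mic_distance_m apart."""
    s = SPEED_OF_SOUND * theta / (2 * math.pi * freq_hz * mic_distance_m)
    s = max(-1.0, min(1.0, s))  # clamp against estimation noise
    return math.degrees(math.asin(s))
```

A zero phase difference maps to a frontal (0 degree) source, which is the case the text gives preference to when speech is detected.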
  • speech detection may be carried out using a variety of traditional speech detection methods, such as methods based on e.g. temporal variations, level variations, spectral distribution or based on feature-based trained models or, more specifically, methods such as disclosed in WO-A1-2012076045.
  • speech detection methods, or the corresponding methods for estimating speech presence probability (SPP) or speech absence probability (SAP), may also be based on the mean resultant length R, exploiting that a high value of the mean resultant length is very likely to represent a sound environment with a single primary sound source.
  • since a single primary sound source may be a single speaker or something else, such as a person or a loudspeaker playing music, it will be advantageous to combine the mean resultant length based methods with one or more of the traditional speech detection methods.
  • the mean resultant length based methods and the traditional speech detection methods may be combined as a joint probability.
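One possible reading of such a joint-probability combination; using the mean resultant length R directly as a probability is an illustrative assumption, not the patent's prescribed mapping:

```python
def joint_speech_presence(p_traditional, mean_resultant_length):
    """Combine a traditional speech presence probability with a cue derived
    from the mean resultant length R (a high R suggests a single primary
    sound source). Treating the two cues as independent, the joint
    probability is their product."""
    return p_traditional * mean_resultant_length
```

Either cue alone can veto: scattered phases (R near 0) suppress the joint probability even when the traditional detector fires.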
  • the mean resultant length can be used to determine how to weight information, such as a determined DOA of a target source, from each hearing aid of a binaural hearing aid system. More generally the mean resultant length can be used to compare or weight information obtained from a multitude of microphone pairs, such as the multitude of microphone pairs that are available in e.g. a binaural hearing aid system comprising two hearing aids each having two microphones.
  • Fig. 1 illustrates highly schematically a binaural hearing aid system 100 comprising a first 101 and a second 102 (i.e. left and right) hearing aid and an external device 103.
  • the external device 103 comprises an external microphone (not shown) and at least one wireless link 104, 105 and 106 between the hearing aids and the external device enables sound processing in the hearing aids to be carried out based on at least some of the microphones in the hearing aids and the external device.
  • the binaural hearing aid system 100 comprises a multitude of external devices and additional microphones accommodated therein.
  • the binaural hearing aid system 100 is wirelessly connected to a multitude of external devices and hereby has access to the microphone signals obtained by these external devices, even though these external devices are not part of the hearing aid system 100 as such.
  • external devices may be smart phones of persons the hearing aid system user is speaking with.
  • FIG. 2 illustrates highly schematically a hearing aid 200 of a hearing aid system according to an embodiment of the invention.
  • the hearing aid system comprises a first hearing aid 200 (that in the following will be named the ipse-lateral or left hearing aid), a second hearing aid (that in the following will be named the contra-lateral or right hearing aid) and an external device (for clarity reasons the second hearing aid and the external device are not shown).
  • a first acoustical-electrical input transducer 201-a accommodated in the ipse-lateral hearing aid is shown together with a second acoustical-electrical input transducer 201-b accommodated in the contra-lateral hearing aid and a third acoustical-electrical input transducer 201-c accommodated in the external device (in the following the acoustical-electrical input transducers may also be named microphones).
  • the lines connecting the microphones 201-b and 201-c to the remaining parts of the hearing aid 200 are dashed in order to illustrate that these microphones (i.e. the input signals at least derived from at least these microphones, as will be discussed below in variations of the present embodiment) are operatively connected to the hearing aid 200 through at least one wireless link.
  • the hearing aid 200 further comprises a filter bank 202, a sound estimator 203, a digital signal processor 204 and an electrical-acoustical output transducer 205.
  • the microphones 201-a, 201-b and 201-c provide analog output signals that are converted into digital output signals by analog-digital converters (that for clarity reasons are not shown) and subsequently provided to a filter bank 202 adapted to transform the digital input signals into the time-frequency domain.
  • a Fast Fourier Transform may be used for the transformation, and in variations other time-frequency domain transformations can be used, such as a Discrete Fourier Transform (DFT) or a polyphase filterbank.
  • analog-digital converters and filter banks may be accommodated in at least one of the contra-lateral hearing aid and the external device such that e.g. input signals in the time-frequency domain are transmitted to the first hearing aid using the wireless links and that therefore corresponding analog-digital converter and filter bank are by-passed.
  • the transformed digital input signals provided by the time-frequency domain filter bank may also be named “input signal”.
  • all other signals referred to in the present disclosure may or may not be specifically named as digital signals.
  • the terms input signal, digital input signal, transformed digital input signal, frequency band input signal, sub-band signal and frequency band signal may be used interchangeably in the following, and unless otherwise noted the input signals can generally be assumed to be frequency band signals, independently of whether the filter bank 202 provides frequency band signals in the time domain or in the time-frequency domain.
  • the microphones are omni-directional.
  • the input signals are not transformed into the time-frequency domain. Instead, the input signals are first transformed into a number of frequency band signals by a time-domain filter bank comprising a multitude of time-domain bandpass filters, such as Finite Impulse Response (FIR) bandpass filters, and subsequently the frequency band signals are compared using correlation analysis, wherefrom the phase is derived.
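A time-domain sketch of deriving a phase difference from band-limited signals by correlation. Quadrature demodulation at the band center frequency is used here as one possible realization of the correlation analysis; it is an assumption, not the patent's specified method:

```python
import cmath
import math

def band_phase_difference(x, y, fs, f0):
    """Estimate the phase of x relative to y around center frequency f0 (Hz)
    at sample rate fs (Hz) by correlating each signal with a complex
    exponential (quadrature demodulation) and comparing the two results.
    Assumes the signals are already band-limited around f0."""
    n = len(x)
    osc = [cmath.exp(-2j * math.pi * f0 * t / fs) for t in range(n)]
    cx = sum(xi * o for xi, o in zip(x, osc))
    cy = sum(yi * o for yi, o in zip(y, osc))
    return cmath.phase(cx * cy.conjugate())
```

For two tones at the band center with a known phase offset, the estimate recovers the offset over an integer number of periods.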
  • the input signal provided by the ipse-lateral microphone 201-a is branched and provided both to the digital signal processor 204 and to the sound estimator 203, while the input signals provided by the contra-lateral microphone 201-b and the external device microphone 201-c are not branched and are consequently only provided to the sound estimator 203.
  • the digital signal processor 204 may be adapted to provide various forms of signal processing including at least: noise reduction, speech enhancement and hearing compensation. However, in the present context focus will be on the noise reduction.
  • the sound estimator 203 is configured to provide improved estimates of characteristics of the current sound environment of the hearing aid system and provide this information to the digital signal processor 204 such that an improved noise reduction is provided.
  • the input signals may be spatially processed (which in the following may also be named beamformed) and the resulting beamformed signals used as input signals.
  • two ipse-lateral input signals may be monaurally beamformed and provided to the digital signal processor 204.
  • the input signals provided from two hearing aids of a binaural hearing aid system are monaurally beamformed before being provided to the sound estimator 203, which is preferred when the signal provided to the digital signal processor 204 is likewise monaurally beamformed.
  • monaurally beamformed signals from each of the hearing aids of the binaural hearing aid system are provided to the sound estimator 203, independently of whether the signal provided to the digital signal processor 204 is likewise beamformed, whereby a spatial focus in the analysis carried out by the sound estimator 203 is allowed.
  • monaural beamforming of the signals from the two hearing aids is used to provide two front pointing cardioids as input to the sound estimator 203, whereby an SNR estimate or some other sound environment characteristic can be determined based primarily on the front half-plane of the sound environment.
  • At least one monaurally beamformed signal is provided to the sound estimator 203 wherein the at least one monaurally beamformed signal is combined with a signal from an external device microphone in order to provide a resultant length.
  • ipse-lateral microphone signals are monaurally beamformed and provided to the digital signal processor 204, while the input to the sound estimator 203 consists only of signals that are not the result of beamforming.
  • This variation may especially be advantageous in the case where only two microphones are available, because two signals are needed for the analysis required to provide a frequency dependent gain according to the invention, and two signals are likewise required to obtain a beamformed signal that may subsequently have the frequency dependent gain applied.
  • the normalized cross-correlations between each of the ipse-lateral microphone 201-a and the contra-lateral microphone 201-b with the external microphone signal 201-c may, when processed using the circular statistics explained with reference to equation 2, be given by:

$$\left\langle \frac{Y_L Y_E^{*}}{|Y_L|\,|Y_E|} \right\rangle = R_L\, e^{i\Theta_L} \qquad \text{(eq. 4-a)}$$

$$\left\langle \frac{Y_R Y_E^{*}}{|Y_R|\,|Y_E|} \right\rangle = R_R\, e^{i\Theta_R} \qquad \text{(eq. 4-b)}$$

  • where $Y_L$, $Y_R$ and $Y_E$ represent the digitally transformed input signals from respectively the ipse-lateral, contra-lateral and external microphone, and $R_L$, $R_R$, $\Theta_L$ and $\Theta_R$ the corresponding mean resultant lengths and unbiased mean phases.
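In code, the eq. 4 style statistics for one microphone pair could be sketched as follows; the frame values in the usage example are illustrative:

```python
import cmath

def mic_pair_stats(frames_a, frames_b):
    """Mean resultant length and unbiased mean phase of the normalized
    cross-correlation between two microphone signals: each pair of
    time-frequency frames contributes a unit-magnitude phase factor
    Y_a * conj(Y_b) / (|Y_a| * |Y_b|), which is then averaged."""
    z = [a * b.conjugate() / (abs(a) * abs(b)) for a, b in zip(frames_a, frames_b)]
    m = sum(z) / len(z)
    return abs(m), cmath.phase(m)
```

The normalization makes the statistic independent of frame amplitudes: frames with the same phase relation but very different levels still yield a mean resultant length of 1.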
  • an external microphone is advantageous because it provides an improved target sound source signal, with improved signal-to-noise ratio (SNR), due to the presumably short spacing between the target sound source and the external microphone compared to the distance between the target sound source and the hearing aid microphones.
  • when the distance between an external device and the hearing aids is sufficiently large, the noise in the hearing aid microphone signals will be spatially uncorrelated with the noise in an external microphone signal, and the advantageous effect hereof is that diffuse noise is rendered incoherent and therefore easier to identify and subsequently suppress.
  • the distance between an external device and the hearing aids may be considered sufficiently large if the external device is a smart phone that is positioned on a meeting table, or if the external device is an external microphone device that is worn by a person the hearing aid system user would particularly like to listen to.
  • the spacing between the external microphone 201-c and the hearing aid microphones 201-a and 201-b typically also renders other sound sources that are positioned sufficiently far away from either the hearing aids or the external microphone incoherent and therefore easier to identify and subsequently suppress.
  • the ipse- and contra-lateral mean resultant lengths $R_L$ and $R_R$, which may be determined using equations 4-a and 4-b, are advantageous as is, but the corresponding unbiased mean phases are of little use, since the interpretation of the value of the unbiased mean phases determined according to equations 4-a and 4-b depends on the position of the external microphone relative to the hearing aids, and this relative position is generally unknown.
  • the hearing aid system may comprise means to determine this relative position, e.g. using optical or infrared detection mechanisms or, according to another variation, using the wireless link of a binaural hearing aid system to do a triangulation based e.g. on the received signal strength indication (RSSI).
  • a binaural IPD sample may be estimated from the normalized cross-correlations between the ipse-lateral and contra-lateral microphone signals, each with the external microphone signal, and when combining this with equations 4-a and 4-b:

$$\left\langle \frac{Y_L Y_E^{*}}{|Y_L|\,|Y_E|} \left( \frac{Y_R Y_E^{*}}{|Y_R|\,|Y_E|} \right)^{\!*} \right\rangle = R_{BB}\, e^{i\Theta_{BB}} \qquad \text{(eq. 5)}$$

  • where $R_{BB}$ represents the binaural mean resultant length and $\Theta_{BB}$ represents the binaural unbiased mean phase.
  • the external microphone enhanced estimates of the binaural mean resultant length $R_{BB}$ and the binaural unbiased mean phase $\Theta_{BB}$ may be used for several signal processing features. However, in the present context and the present embodiment the focus is on utilizing the external microphone enhanced binaural mean resultant length $R_{BB}$ (which in the following may also be named the external microphone enhanced mean resultant length) for noise suppression.
$$R(k, l) = e^{-\sigma^2(k,l)/2} \qquad \text{(eq. 7)}$$
  • where $R(k,l)$ represents a mean resultant length, such as the external microphone enhanced mean resultant length $R_{BB}$, some mapped mean resultant length or some other resultant length determined using any of the embodiments and the corresponding variations, and $\sigma^2(k,l)$ is the variance of the underlying distribution.
  • the wrapped normal distribution is obtained by wrapping the normal distribution around the unit circle and adding all probability mass wrapped to the same point. Now, by combining equations 6 and 7 we get:
  • SNR_d represents a so-called directional or spatial signal-to-noise ratio, because SNR_d is based on the mean resultant length R(k,l).
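As a hedged sketch of the relation in eq. 7 and of one way an R-based directional SNR could look (the SNR mapping below is an assumption chosen for illustration, since the combined result of equations 6 and 7 is not reproduced in this text):

```python
import numpy as np

def phase_variance_from_R(R):
    # Invert eq. 7, R(k,l) = exp(-sigma^2 / 2), to recover the
    # wrapped normal phase variance from the mean resultant length.
    return -2.0 * np.log(R)

def directional_snr(R):
    # One plausible R -> SNR_d mapping (an assumption, not the
    # patent's equation): treat R as a coherent-to-diffuse style
    # ratio, so R -> 1 gives a large SNR and R -> 0 gives ~0.
    return R / (1.0 - R)

sigma2 = phase_variance_from_R(np.exp(-0.5))  # R = e^{-1/2} -> sigma^2 = 1
snr_0db = directional_snr(0.5)                # R = 0.5 -> SNR_d = 1 (0 dB)
```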
  • the input signals from the three microphones 201-a, 201-b and 201-c are provided to the sound estimator 203 and used to provide a directional SNR that subsequently is used to control the setting of the DSP 204 such that noise in the signal from the ipse-lateral microphone 201-a is suppressed.
  • the noise suppression, i.e. a frequency dependent gain to be applied, may be determined using a multitude of different single-channel noise suppression methods, including Wiener filtering and statistical-based methods such as maximum likelihood (ML), minimum mean square error (MMSE) and maximum a posteriori (MAP) estimation
  • ML maximum likelihood
  • MMSE minimum mean square error
  • MAP maximum a posteriori
  • the directional signal-to-noise-ratio SNR_d is used to control a directional Wiener filter gain function w_d(k,l) given by:
  • the enhanced target sound source spectral estimate (which in the following may also be named target speech spectral estimate)
  • the Wiener gain may be applied in a multitude of ways: it may be applied directly as a frequency dependent gain onto an input signal in the time-frequency domain as in the present embodiment, or directly onto an input signal divided into frequency bands in the time domain, or as a broadband filter such as a linear phase filter or a minimum phase filter that may provide high quality sound with few artefacts.
  • the filter is a mixed phase filter that combines linear phase in the high frequency range with minimum phase in the low frequency range or the other way around dependent at least partly on the type of binaural cue that is to be preserved.
  • the input signal may also be a beamformed signal.
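A minimal sketch of the time-frequency mask option described above, assuming the classic Wiener gain form w = SNR/(1 + SNR); the toy STFT bins and SNR values are invented for illustration:

```python
import numpy as np

def wiener_gain(snr):
    # Classic Wiener gain w = SNR / (1 + SNR); the directional
    # variant w_d(k,l) uses the directional SNR_d(k,l) instead.
    return snr / (1.0 + snr)

# Toy noisy STFT bins Y(k,l) and per-bin directional SNR estimates.
Y = np.array([[1.0 + 1.0j, 0.2 + 0.0j],
              [0.0 + 0.5j, 2.0 + 0.0j]])
snr_d = np.array([[9.0, 0.1],
                  [1.0, 3.0]])

# Applied directly as a frequency dependent gain in the
# time-frequency domain (one of the options listed above).
X_d = wiener_gain(snr_d) * Y
```

High-SNR bins pass almost unchanged while low-SNR bins are attenuated, which is the noise-suppression behaviour the embodiment relies on.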
  • the frequency dependent gain which is dependent on R
  • R can be derived based on a multitude of appropriate cost functions, which may contain R directly or indirectly, where the Wiener gain using R to find SNR is one special case.
  • Wiener filter gain function (which in the following may also be named a Wiener filter mask) w(k,l) may be given by:
  • β(k,l) represents an instantaneous a-posteriori SNR that may be determined from an equation given by:
  • ξ(k,l) represents an a-priori SNR
  • γ(k,l) represents an a-posteriori SNR
  • β(k,l) represents an instantaneous a-posteriori SNR in the sense that it provides an instantaneous estimation of the SNR.
  • the a-priori SNR ξ(k,l) may be determined using the equation:
  • φ_SS(k,l) represents the target sound power spectral density (target sound PSD) and φ_NN(k,l) represents the noise power spectral density (noise PSD).
  • φ_NN(k,l) noise power spectral density
  • noise PSD noise power spectral density
  • the directional signal-to-noise-ratio SNR_d may be used as the a-priori SNR.
  • the noise PSD may be estimated using standard minimum statistics without requiring a mean resultant length R.
  • the noise PSD φ_NN(k,l) may be estimated (using equation 13) directly by setting ξ(k,l) equal to the directional signal-to-noise-ratio SNR_d and by using that the target sound PSD φ_SS(k,l) may be estimated by applying the directional Wiener filter gain function w_d(k,l) to the noisy time-frequency signal bins Y(k,l), since the directional Wiener filter gain function can be determined based on the directional signal-to-noise-ratio SNR_d.
  • the a-posteriori SNR γ(k,l) may be determined using the equation:
  • Y(k,l) represents the noisy time-frequency signal bins.
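The three SNR quantities discussed above can be sketched as follows; the β = γ − 1 choice is a common convention assumed here for illustration, since the patent's exact expression for β(k,l) is not reproduced in this text:

```python
import numpy as np

def a_posteriori_snr(Y, noise_psd):
    # gamma(k,l) = |Y(k,l)|^2 / phi_NN(k,l): noisy bin power
    # relative to the noise PSD.
    return np.abs(Y) ** 2 / noise_psd

def a_priori_snr(target_psd, noise_psd):
    # xi(k,l) = phi_SS(k,l) / phi_NN(k,l): target PSD relative
    # to the noise PSD.
    return target_psd / noise_psd

def instantaneous_snr(gamma):
    # A common instantaneous a-posteriori SNR choice (assumption):
    # beta = gamma - 1.
    return gamma - 1.0

gamma = a_posteriori_snr(2.0 + 0.0j, 1.0)   # |Y|^2 = 4 -> gamma = 4
xi = a_priori_snr(3.0, 1.5)                  # -> 2.0
beta = instantaneous_snr(gamma)              # -> 3.0
```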
  • the target sound PSD and the noise PSD may be determined based on a mean resultant length.
  • p(k,l) represents a single-channel speech presence probability
  • q(k,l) may be considered a hyper parameter and therefore estimated as such, or q(k,l) may simply be given a fixed value if the hyper parameter estimation demands too many processing resources, and in an even more specific variation the fixed value of q(k,l) is selected from a range between 0.3 and 0.7.
  • q(k,l) is set equal to 1 - p(k,l).
  • q(k,l) is determined based on the values of the speech presence probability in adjacent frames or adjacent frequency bins.
  • an enhanced target sound source spectral estimate X(k,l) may be determined from the equation given by:
  • X(k,l) = p(k,l)X_d(k,l) + q(k,l)G_min(k)Y(k,l) (eq. 18)
  • p(k,l) may be given from equation 17 or from some other speech presence probability estimator (including estimators based on temporal variations, level variations, the spectral distribution or feature based trained models), wherein X_d(k,l) may be given from equation 10 and G_min(k) is a remixing level constant adapted to control the remixing levels between the two parts of the sum that equation 18 consists of.
  • G_min(k) may be selected to be in the range between -1 and -30 dB, but generally these values are highly dependent on the considered target source, and in other variations the remixing level constant G_min(k) may be omitted.
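A hedged sketch of the eq. 18 remixing, assuming the q = 1 − p variation mentioned above; the −12 dB floor is an illustrative pick from the stated −1 to −30 dB range, not a recommended value:

```python
import numpy as np

def remix(p, X_d, Y, G_min_db=-12.0):
    # eq. 18 with q = 1 - p: blend the enhanced estimate X_d with
    # a floor-attenuated copy of the noisy bin Y, weighted by the
    # speech presence probability p.
    G_min = 10.0 ** (G_min_db / 20.0)
    return p * X_d + (1.0 - p) * G_min * Y

# Certain speech presence passes the enhanced estimate through ...
X_hat_speech = remix(1.0, 5.0 + 0.0j, 9.0 + 0.0j)
# ... certain absence leaves only the floor-attenuated noisy bin.
X_hat_noise = remix(0.0, 5.0 + 0.0j, 1.0 + 0.0j)
```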
  • the enhanced target sound source spectral estimate X(k,l) may be determined from the equation given by:
  • X(k,l) = p(k,l)Y(k,l) + q(k,l)G_min(k)Y(k,l) (eq. 19)
  • equations 18 and 19 allow a corresponding noise suppression gain function (or in other words a frequency dependent gain) to be readily derived, which can be used to control the digital signal processor 204 comprising either an adaptive filter or a time-frequency mask, either of which can be used to apply the desired noise suppression gain function.
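Under the same q = 1 − p assumption, eq. 19 factors into a single frequency dependent gain, which can be sketched as:

```python
def noise_suppression_gain(p, G_min):
    # eq. 19 factors as X(k,l) = (p + q * G_min) * Y(k,l) with
    # q = 1 - p, so the gain handed to the DSP (adaptive filter
    # or time-frequency mask) is simply:
    return p + (1.0 - p) * G_min

g_full = noise_suppression_gain(1.0, 0.1)   # speech certain -> gain 1
g_none = noise_suppression_gain(0.0, 0.1)   # speech absent -> floor gain
```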
  • the speech absence probability is estimated by mapping a coherent-to-diffuse-ratio (CDR) onto the speech absence probability using a continuous interpolation of a function representing the CDR, wherein the CDR itself is estimated by exploiting the similarity between the mean resultant length R and the CDR.
  • the coherent-to-diffuse-ratio CDR may be set equal to the mean resultant length R.
  • the speech presence probability may likewise be estimated based on the coherent-to-diffuse-ratio CDR.
  • the directional signal-to-noise-ratio SNR_d may be estimated, based on the mean resultant length R, using a relation of the form given by:
  • SNR_d = f_SNR(ln(R)) (eq. 20), wherein f_SNR(.) is an arbitrary non-linear function adapted to suppress lower values of SNR more relative to higher values of SNR. In this way it is possible to compensate for the fact that the value of the mean resultant length is typically overestimated for low values of the mean resultant length R, due to the limited number of samples used to estimate it.
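One possible instance of the arbitrary function f_SNR in eq. 20 (an assumption for illustration, not the patent's choice) is a power law on R, which suppresses low R values relatively more than high ones:

```python
import numpy as np

def f_snr(R, alpha=2.0):
    # SNR_d = f_SNR(ln R) = exp(alpha * ln R) = R ** alpha with
    # alpha > 1: low R values are pushed down harder, compensating
    # the small-sample overestimation of R.
    return np.exp(alpha * np.log(R))

low = f_snr(0.1)    # 0.1 -> 0.01: reduced by a factor 10
high = f_snr(0.9)   # 0.9 -> 0.81: reduced by only a factor ~1.1
```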
  • Resultant length may generally refer to the mean resultant length as defined in equation 2, or to an ipse- or contra-lateral mean resultant length as defined in equations 4-a and 4-b, or to an external microphone enhanced mean resultant length as defined in equation 5, or to combinations of these, or to a mapped mean resultant length as given by:
  • indices l and k represent respectively the frame used to transform the input signals into the time-frequency domain and the frequency bin, wherein E{.} is an expectation operator, wherein e^(jθ_ab(k,l)) represents the inter-microphone phase difference between the first and the second input signals, wherein f_1 is a real variable; and wherein f_2 is an arbitrary function.
  • the mapped mean resultant length is a so-called wrapped mean resultant length R_ab which is obtained by setting f_1(k,l) equal to k_u/k and leaving out the function f_2, and wherein k_u is equal to 2Kf_u/f_s, with f_u representing the upper frequency limit below which phase ambiguities, due to the periodicity of the IPD, are avoided, f_s being the sampling frequency and K the number of frequency bins up to the Nyquist limit.
  • the wrapped mean resultant length R_ab is advantageous at least because for diffuse noise R_ab approaches zero for all k < k_u, while for anechoic sources R_ab approaches one as intended, whereby an improved ability to distinguish diffuse noise from desired target sources is obtained, especially in the low frequency range. This is especially advantageous for the relatively short microphone spacings that are typical for hearing aid systems.
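A sketch of the wrapped mean resultant length idea, assuming the f_1 = k_u/k mapping acts as a multiplicative stretch of the IPD before the phasor average; the toy data are invented for illustration:

```python
import numpy as np

def wrapped_R(ipd_frames, k, k_u):
    # R_ab(k) = | E{ exp(j * (k_u / k) * theta_ab(k,l)) } |, with
    # the expectation taken over the frames l.
    return np.abs(np.mean(np.exp(1j * (k_u / k) * np.asarray(ipd_frames))))

# An anechoic source (fixed IPD) keeps R_ab at 1 at any bin.
R_source = wrapped_R(np.full(32, 0.2), k=4, k_u=16)

# Diffuse noise at a low bin k: the narrow IPD spread caused by a
# short microphone spacing is stretched over the whole circle, so
# R_ab drops toward 0 instead of staying misleadingly high.
rng = np.random.default_rng(1)
diffuse_ipd = rng.uniform(-np.pi / 4, np.pi / 4, 50_000)
R_plain = np.abs(np.mean(np.exp(1j * diffuse_ipd)))   # stays near 0.90
R_wrapped = wrapped_R(diffuse_ipd, k=4, k_u=16)       # drops toward 0
```

The contrast between R_plain and R_wrapped illustrates the improved low-frequency discrimination between diffuse noise and anechoic sources described above.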
  • any of the above mentioned variations of the (time and frequency dependent) resultant length R may be used directly as a frequency dependent gain, which can be used to control the digital signal processor 204 comprising either an adaptive filter or a time-frequency mask, either of which can be used to apply the desired frequency dependent gain.
  • the methods and selected parts of the hearing aid according to the disclosed embodiments may also be implemented in systems and devices that are not hearing aid systems (i.e. they do not comprise means for compensating a hearing loss), but nevertheless comprise both acoustical-electrical input transducers and electro-acoustical output transducers.
  • Such systems and devices are at present often referred to as hearables.
  • a headset is another example of such a system.
  • the hearing aid system need not comprise a traditional loudspeaker as output transducer.
  • examples of hearing aid systems that do not comprise a traditional loudspeaker are cochlear implants, implantable middle ear hearing devices (IMEHD), bone-anchored hearing aids (BAHA) and various other electro-mechanical transducer based solutions, including e.g. systems based on using a laser diode for directly inducing vibration of the eardrum.
  • IMEHD implantable middle ear hearing devices
  • BAHA bone-anchored hearing aids
  • the various embodiments of the present disclosure may be combined unless it is explicitly stated that they cannot be combined. Especially it may be worth pointing to the possibilities of impacting various hearing aid system signal processing features, including directional systems, based on sound environment classification.

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention relates to a method of operating a hearing aid system for providing improved noise reduction, and to a hearing aid system (100) adapted to carry out the method.
PCT/EP2018/081502 2018-08-15 2018-11-16 Method of operating a hearing aid system and a hearing aid system WO2020035158A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/268,144 US11438712B2 (en) 2018-08-15 2018-11-16 Method of operating a hearing aid system and a hearing aid system
EP18807579.0A EP3837861B1 (fr) 2018-08-15 2018-11-16 Method of operating a hearing aid system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
DKPA201800462 2018-08-15
DKPA201800465 2018-08-15
DKPA201800462A DK201800462A1 (en) 2017-10-31 2018-08-15 METHOD OF OPERATING A HEARING AID SYSTEM AND A HEARING AID SYSTEM
DKPA201800465 2018-08-15

Publications (1)

Publication Number Publication Date
WO2020035158A1 true WO2020035158A1 (fr) 2020-02-20

Family

ID=64453468

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/EP2018/081502 WO2020035158A1 (fr) 2018-08-15 2018-11-16 Method of operating a hearing aid system and a hearing aid system
PCT/EP2019/061993 WO2020035180A1 (fr) 2018-08-15 2019-05-09 Method of operating an ear level audio system and an ear level audio system

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/061993 WO2020035180A1 (fr) 2018-08-15 2019-05-09 Method of operating an ear level audio system and an ear level audio system

Country Status (1)

Country Link
WO (2) WO2020035158A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11617037B2 (en) 2021-04-29 2023-03-28 Gn Hearing A/S Hearing device with omnidirectional sensitivity

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023110845A1 (fr) 2021-12-13 2023-06-22 Method of operating an audio device system and an audio device system
WO2023110836A1 (fr) 2021-12-13 2023-06-22 Method of operating an audio device system and an audio device system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009034524A1 (fr) * 2007-09-13 2009-03-19 Koninklijke Philips Electronics N.V. Apparatus and method for audio beam forming
US20090202091A1 (en) * 2008-02-07 2009-08-13 Oticon A/S Method of estimating weighting function of audio signals in a hearing aid
US20150289064A1 (en) * 2014-04-04 2015-10-08 Oticon A/S Self-calibration of multi-microphone noise reduction system for hearing assistance devices using an auxiliary device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0720473D0 (en) * 2007-10-19 2007-11-28 Univ Surrey Acoustic source separation
EP2882203A1 (fr) * 2013-12-06 2015-06-10 Oticon A/s Hearing aid device for hands-free communication

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009034524A1 (fr) * 2007-09-13 2009-03-19 Koninklijke Philips Electronics N.V. Apparatus and method for audio beam forming
US20090202091A1 (en) * 2008-02-07 2009-08-13 Oticon A/S Method of estimating weighting function of audio signals in a hearing aid
US20150289064A1 (en) * 2014-04-04 2015-10-08 Oticon A/S Self-calibration of multi-microphone noise reduction system for hearing assistance devices using an auxiliary device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CABOT: "AN INTRODUCTION TO CIRCULAR STATISTICS AND ITS APPLICATION TO SOUND LOCALIZATION EXPERIMENTS", AES, November 1977 (1977-11-01), XP002788240, Retrieved from the Internet <URL:http://www.aes.org/tmpFiles/elib/20190109/3062.pdf> [retrieved on 201901] *
KUTIL R: "Biased and unbiased estimation of the circular mean resultant length and its variance", INTERNET CITATION, 1 August 2012 (2012-08-01), pages 549 - 561, XP002788241, Retrieved from the Internet <URL:https://www.tandfonline.com/doi/pdf/10.1080/02331888.2010.543463?needAccess=true> [retrieved on 19000101] *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11617037B2 (en) 2021-04-29 2023-03-28 Gn Hearing A/S Hearing device with omnidirectional sensitivity

Also Published As

Publication number Publication date
WO2020035180A1 (fr) 2020-02-20

Similar Documents

Publication Publication Date Title
US11109164B2 (en) Method of operating a hearing aid system and a hearing aid system
US9723422B2 (en) Multi-microphone method for estimation of target and noise spectral variances for speech degraded by reverberation and optionally additive noise
EP2899996B1 (fr) Signal enhancement using wireless streaming
US10631105B2 (en) Hearing aid system and a method of operating a hearing aid system
CN107071674B (zh) 配置成定位声源的听力装置和听力系统
US20160080873A1 (en) Hearing device comprising a gsc beamformer
US10334371B2 (en) Method for feedback suppression
WO2019086432A1 (fr) Method of operating a hearing aid system and a hearing aid system
WO2020035158A1 (fr) Method of operating a hearing aid system and a hearing aid system
EP3837861B1 (fr) Method of operating a hearing aid system
Puder Adaptive signal processing for interference cancellation in hearing aids
US10111012B2 (en) Hearing aid system and a method of operating a hearing aid system
DK201800462A1 (en) METHOD OF OPERATING A HEARING AID SYSTEM AND A HEARING AID SYSTEM

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18807579

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018807579

Country of ref document: EP

Effective date: 20210315