CN108430002B - Adaptive level estimator, hearing device, method and binaural hearing system - Google Patents


Info

Publication number
CN108430002B
CN108430002B (application CN201810108458.7A)
Authority
CN
China
Prior art keywords
level
estimate
estimator
hearing
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810108458.7A
Other languages
Chinese (zh)
Other versions
CN108430002A (en)
Inventor
A. H. Thomsen
J. Petersen
J. M. de Haan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oticon AS
Original Assignee
Oticon AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed. "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Oticon AS
Publication of CN108430002A
Application granted
Publication of CN108430002B
Legal status: Active


Classifications

    • H04R25/505 Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H04R3/04 Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • H04R25/552 Hearing aids using an external connection, binaural
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • H03G9/005 Combinations of two or more types of control of digital or coded signals
    • H03G9/025 Frequency-dependent volume compression or expansion, e.g. multiple-band systems
    • H04R25/356 Hearing aids using translation techniques: amplitude, e.g. amplitude shift or compression
    • H04R25/554 Hearing aids using a wireless connection, e.g. between microphone and amplifier or using Tcoils
    • H04R29/001 Monitoring or testing arrangements for loudspeakers
    • H04R2225/41 Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
    • H04R2225/43 Signal processing in hearing aids to enhance the speech intelligibility
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups


Abstract

An adaptive level estimator, a hearing device, a method and a binaural hearing system are disclosed, wherein the adaptive level estimator is configured to provide a level estimate of an electrical input signal representing sound and comprises: a first level estimator configured to provide a first level estimate of the electrical input signal in a first number K1 of frequency bands; a second level estimator configured to provide a second level estimate of the electrical input signal and/or an attack/release time constant associated with the second level estimate in a second number K2 of frequency bands, wherein K2 is less than K1; and a level control unit configured to provide a composite level estimate based on the first and second level estimates and/or an attack/release time constant associated with the second level estimate.

Description

Adaptive level estimator, hearing device, method and binaural hearing system
Technical Field
The present application relates to the field of audio processing, such as hearing devices, e.g. hearing aids, headsets or mobile phones.
Background
Level estimation is important for properly adjusting the ambient sound level to the user's needs. This adjustment is sometimes referred to as compressive amplification, meaning that to optimize a particular user's perception of the current sound levels in his or her environment, some levels should be compressed while others should be amplified. The compression-amplification scheme for a given application or a particular user is defined, for example, by a compression characteristic curve, which maps the input level L_IN to a gain G (thus providing a desired output level, L_OUT = G·L_IN). The gain G (which may be greater or less than 1, i.e. representing amplification or compression, respectively) is typically a function of frequency in addition to the input level, G = G(L_IN, f), where f is frequency, or G = G(L_IN, k), where k is a frequency index (e.g., a band index). A schematic example of such a level-gain mapping G(L, f) is shown in fig. 1, which shows a combination of two general trends: a) the need for gain decreases as the input level increases (intuitively obvious); and b) the need for gain increases with increasing frequency (reflecting that the human auditory system is more sensitive at lower frequencies and more susceptible to damage/aging at higher frequencies). Fig. 1 shows five (schematic) gain (G) versus frequency (f) curves at different levels L of the input signal in a three-dimensional (orthogonal) coordinate system, the level and frequency axes spanning the horizontal plane and gain being associated with the vertical axis. Exemplary (prescribed) gain versus frequency values are shown for five different levels of the input signal (L1, L2, L3, L4, L5), each curve corresponding to a certain input level and denoted by a specific line type.
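The level-to-gain mapping described above can be sketched as a simple interpolated table lookup for one frequency band. All numbers below are invented for illustration; they are not taken from fig. 1 or from any fitting rule.

```python
import numpy as np

# Illustrative sketch of a compression characteristic G(L_IN, k): for one
# frequency band, a table of (input level, prescribed gain) points is
# interpolated to obtain the gain at the current estimated level.
def compression_gain(level_db, level_points_db, gains_db):
    """Return the prescribed gain (dB) at input level `level_db` (dB SPL)."""
    return float(np.interp(level_db, level_points_db, gains_db))

# Hypothetical characteristic for one band: gain decreases as the input
# level increases (trend (a) in the text above).
levels = [30.0, 50.0, 70.0, 90.0]   # input levels (dB SPL)
gains = [25.0, 18.0, 10.0, 2.0]     # prescribed gains (dB)

g_soft = compression_gain(40.0, levels, gains)  # soft input, large gain
g_loud = compression_gain(80.0, levels, gains)  # loud input, small gain
```

A full scheme would hold one such table per band k, realizing the frequency dependence of trend (b).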
The prescribed gain values may, for example, represent a particular user's need for compressive amplification in a particular type of acoustic situation, such as speech in a quiet environment (e.g., determined from empirical evidence, such as NAL-NL1). The gain at each level is indicated at eight different frequencies (f1, f2, f3, f4, f5, f6, f7, f8); the gain values at the odd frequencies f1, f3, f5, f7 are indicated by filled circles, and the gain values at the even frequencies f2, f4, f6, f8 by open circles. In the lower right part of fig. 1, the corresponding gain-frequency curves for the different levels are shown in a two-dimensional G(f) plot with level (L1-L5) as parameter. Similarly, in the lower middle part of fig. 1, the corresponding gain-level curves for the different frequencies are plotted in a two-dimensional G(L) plot with frequency (f1-f8) as parameter. Such data, e.g. indicating the (impaired or unimpaired) hearing ability of the user, can for example be generated during a test performed by an audiologist, e.g. by measuring the user's hearing threshold versus frequency. A measurement of the user's discomfort level versus frequency may also be made (or based on empirical observations) to obtain the "dynamic range" of appropriate sound levels for a particular user.
Level estimation has been addressed in a number of prior art documents. One such example is WO2003081947A1, which describes a dynamic level estimator in which attack and/or release times are (adaptively) determined on the basis of the dynamic properties of the input signal (see e.g. figs. 7A, 7B therein). In WO2003081947A1, the level estimation is performed on the full-band signal (a single frequency band).
Different level estimation strategies have been used in the past in connection with compressive amplification, trading off speech intelligibility against loudness perception. For example, in favor of speech intelligibility, relatively fast level estimation has been applied during periods with dynamic changes in the input signal; in favor of loudness perception, relatively slow level estimation has been applied during periods with a more stable input signal. Such level estimation strategies have been applied in time-domain (full-band) or time-frequency-domain (band-split) configurations. Level estimation in a limited number of frequency bands has, for example, been applied in line with the experience that fast compression in narrow frequency bands is difficult to manage without degrading the sound perception (sound quality).
However, the aforementioned strategies have drawbacks, for example because stationary narrow-band noise affects the level estimate across the full range of one of the limited number of (and therefore relatively wide) frequency bands. The result is that low-level signal content (e.g., speech) is not amplified by the compressive amplification scheme and thus is not made audible (it is masked). This is schematically illustrated in fig. 2, which shows an input signal comprising a mixture of a (relatively soft, i.e. low-level) speech signal and a relatively loud (e.g. quasi-)stationary noise component at a single frequency. The speech signal comprises signal components distributed in frequency (e.g. between 0 and 8 kHz), here represented by values in 16 frequency bands FB1^16, …, FB16^16, the level of each component (measured in dB) being indicated by the height of a vertical line. The noise signal is (in this example) assumed to be concentrated at a single frequency (or a narrow range around a single frequency, here FB11^16). The estimated levels of the input signal in four frequency bands FB1^4, FB2^4, FB3^4, FB4^4 are indicated by thick horizontal lines, denoted L(FB1), …, L(FB4), respectively. In the exemplary illustration of fig. 2, the frequency bands FB1^4, …, FB4^4 and FB1^16, …, FB16^16 may represent low- and high-resolution frequency representations of the operating frequency range of the hearing device, e.g. the K2 and K1 bands, respectively, of the adaptive level estimator according to the invention (see e.g. the description of figs. 3A, 3B, 3C below). In the example of fig. 2, each of the 4 bands of the low-resolution representation spans 4 bands of the high-resolution representation; e.g., FB3^4 spans FB9^16, FB10^16, FB11^16, FB12^16.
The frequency bands are shown as having equal widths, but their widths may differ, for example if they are assumed to be marked on a logarithmic frequency scale, or adapted to the application concerned. The operating frequency range may, for example, be between 0 and 8 kHz, or between 0 and 10 kHz, or any other part of the normal human hearing range (e.g. 20 Hz to 20 kHz). As is evident from this example, in the illustrated case the level of the third low-resolution frequency band (FB3^4) will be dominated by the narrow-band stationary noise signal (in FB11^16), e.g. originating from an appliance or a vehicle cabin. Thus, the level of the speech part of the signal in the third low-resolution frequency band FB3^4, spanning the high-resolution frequency bands FB9^16, FB10^16, FB11^16, FB12^16, will not be estimated correctly. This may lead to reduced intelligibility of the speech signal, since the relatively soft speech components in the third frequency band will not be properly amplified (and may even be attenuated) by the subsequent compression scheme (see fig. 1).
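The masking problem of fig. 2 can be reproduced numerically. All band levels below are invented for illustration: a speech floor of about 50 dB in each of 16 high-resolution bands, plus an 80 dB stationary noise tone in band 11 (FB11^16):

```python
import numpy as np

# Numerical sketch of fig. 2 (all levels invented): 16 high-resolution
# bands with speech at ~50 dB everywhere and a stationary noise tone at
# 80 dB in band FB11^16 (index 10).
levels_db = np.full(16, 50.0)
levels_db[10] = 80.0

# Low-resolution estimate: power sum over each group of 4 bands,
# yielding the 4 bands FB1^4..FB4^4.
power = 10.0 ** (levels_db / 10.0)
low_res_db = 10.0 * np.log10(power.reshape(4, 4).sum(axis=1))

# low_res_db[2] (FB3^4) is close to 80 dB although three of its four
# constituent bands hold only 50 dB speech, so the speech level in that
# band is overestimated and the compressor will under-amplify it.
```

The noise-free groups end up near 56 dB (the power sum of four 50 dB bands), while the group containing the noise bin reports roughly 80 dB.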
Disclosure of Invention
The present invention relates to dynamic estimation of the level of an input signal representing sound, for example for use in audio devices such as hearing aids.
The present invention proposes:
-providing a high-resolution level estimate of the input signal in a relatively large number (e.g. 16 or more) of frequency bands;
-providing a low-resolution level estimate of the input signal and/or associated attack/release time constants in a relatively small number (e.g. 8 or fewer) of frequency bands; and
-determining a composite level estimate based on the high-resolution level estimate and on the low-resolution level estimate and/or the associated attack/release time constants (associated with the low-resolution level estimate).
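The three steps above can be sketched as follows. The blending rule and the weight `w` are assumptions for illustration; the patent leaves the exact combination to the level control unit described below.

```python
import numpy as np

# Sketch of a composite level estimate (assumed form for illustration).
# `groups` maps each of the K1 high-resolution bands to one of the K2
# low-resolution bands; `w` weights the high-resolution estimate.
def composite_level(le1_db, le2_db, groups, w):
    le2_dist = np.asarray(le2_db)[groups]        # distribute K2 -> K1 bands
    return w * np.asarray(le1_db) + (1.0 - w) * le2_dist

# Hypothetical example: K1 = 4, K2 = 2, two high-res bands per low-res band.
le1 = np.array([50.0, 50.0, 80.0, 50.0])         # high-resolution estimate
le2 = np.array([60.0, 55.0])                     # low-resolution estimate
groups = np.array([0, 0, 1, 1])

c_fast = composite_level(le1, le2, groups, 0.0)  # follows low-res estimate
c_slow = composite_level(le1, le2, groups, 1.0)  # follows high-res estimate
```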
Level estimator
In one aspect of the application, an adaptive level estimator is provided for providing a level estimate of an electrical input signal representing sound. The adaptive level estimator includes:
-a first level estimator configured to provide a first level estimate of the electrical input signal in a first number K1 of frequency bands;
-a second level estimator configured to provide a second level estimate of the electrical input signal, or an attack/release time constant associated with the second level estimate, in a second number K2 of frequency bands, wherein K2 is smaller than K1; and
-a level control unit (receiving the first and second level estimates and) configured to provide a composite level estimate based on the first and second level estimates, or based on the first level estimate and an attack/release time constant associated with the second level estimate.
Thus, an improved level estimation may be provided.
It will be appreciated that the second level estimator is configured to provide, in the second number K2 of frequency bands, a second level estimate and/or an attack/release time constant associated with it, i.e. one of A) LE2(k); B) τ2,att(k), τ2,rel(k); and C) LE2(k) together with τ2,att(k), τ2,rel(k); where k = 1, …, K2, LE2(k) is the (frequency-dependent) level estimate of the second level estimator, and τ2,att(k) and τ2,rel(k) are the (possibly frequency-dependent) attack and release times, respectively, of the second level estimator.
The level estimate is provided dynamically based on the current input signal, e.g. on its dynamic properties, such as the timing of level changes as a function of frequency. In an embodiment, the first number of frequency bands K1 is greater than 4, such as greater than or equal to 16, such as greater than or equal to 24. In an embodiment, the second number of frequency bands K2 is less than or equal to 16, such as less than or equal to 8, such as less than or equal to 4 (e.g., equal to 2 or 1). In an embodiment, the first number of frequency bands K1 is greater than or equal to 16, and the second number of frequency bands K2 is less than 16, such as less than or equal to 8, such as less than or equal to 4 (e.g., equal to 2 or 1). In an embodiment, the number K of sub-bands of the electrical input signal is equal to the first number of frequency bands K1. In an embodiment, the number K of sub-bands of the electrical input signal is greater than or equal to 32, such as greater than or equal to 64, such as greater than or equal to 128.
In an embodiment, the first level estimator is configured to provide the first level estimate with a first time constant, and the second level estimator is configured to provide the second level estimate with a second time constant, wherein the first time constant is greater than or equal to the second time constant. In an embodiment, the first and second time constants τ1 and τ2 are frequency-band-specific time constants (τ1(k), k = 1, …, K1, and τ2(k), k = 1, …, K2, where k is the frequency index). In an embodiment, τ1(k) ≥ τ2(k) (or τ1(k) > τ2(k)) for all k. In an embodiment, at least two of the frequency-dependent time constants of each of the first and second time constants are different (e.g., among τ1(k), k = 1, …, K1, at least two are different (e.g., τ1(1) ≠ τ1(K1)), and among τ2(k), k = 1, …, K2, at least two are different (e.g., τ2(1) ≠ τ2(K2))). In an embodiment, the first and/or second time constants comprise respective attack (τi,a(k)) and release (τi,r(k)) time constants, (τ1a(k), τ1r(k)) and (τ2a(k), τ2r(k)). In an embodiment, τ1a(k) ≥ τ2a(k) and τ1r(k) ≥ τ2r(k) for all corresponding k. In an embodiment, τ1a(k) ≤ τ1r(k) and τ2a(k) ≤ τ2r(k) for all corresponding k. In an embodiment, the first and second time constants are equal (or substantially equal). In an embodiment, the first and/or second time constant is on the order of 1 ms, for example between 0.5 ms and 4 ms, such as between 1 and 3 ms. In an embodiment, the second time constant is on the order of 10 ms, for example between 5 ms and 20 ms.
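Attack/release smoothing of this kind is commonly realized as a one-pole envelope follower with separate rise and fall coefficients. The sketch below uses that generic form, which is not necessarily the exact update rule of the patented estimator; the time constants are illustrative (1 ms attack, 10 ms release, in line with the orders of magnitude mentioned above).

```python
import math

# Generic attack/release level smoother (illustrative envelope-follower
# form). A short attack time tracks level rises quickly; a longer release
# time lets the estimate decay slowly.
def smooth_level(samples_db, tau_att_s, tau_rel_s, fs_hz, init_db=0.0):
    a_att = math.exp(-1.0 / (tau_att_s * fs_hz))   # attack coefficient
    a_rel = math.exp(-1.0 / (tau_rel_s * fs_hz))   # release coefficient
    level, out = init_db, []
    for x in samples_db:
        a = a_att if x > level else a_rel          # rising level -> attack
        level = a * level + (1.0 - a) * x
        out.append(level)
    return out

# A 60 dB burst followed by silence, at a 1 kHz update rate:
track = smooth_level([60.0] * 50 + [0.0] * 50, 0.001, 0.010, 1000.0)
```

With these settings the estimate reaches the burst level within a few samples but takes tens of samples to decay, the asymmetry described in the embodiments above.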
In an embodiment, the level control unit is configured to ramp the composite level estimate between the first and second level estimates, e.g. according to a ramping scheme, such as an adaptive ramping scheme, e.g. depending on the electrical input signal and the first and second time constants.
In an embodiment, the level control unit is configured to provide the composite level estimate based on the first and second level estimates and on a signal-to-noise ratio of the electrical input signal.
In an embodiment, the level control unit comprises a comparison unit for comparing the first and second level estimates and providing a comparison signal indicative of the comparison result. In an embodiment, the composite level estimate is determined based on a comparison of the first and second level estimates. In an embodiment, the level control unit is configured to determine the composite level estimate based on the comparison signal. In an embodiment, the comparison signal indicates a difference (e.g. in a linear or logarithmic representation, such as an absolute value, or another suitable functional relationship) between the first and second level estimates. In an embodiment, the composite level estimate is determined based on a ratio between the first and second level estimates (or between the second and first level estimates).
In an embodiment, the level control unit comprises a filtering unit for down-sampling or low-pass filtering the comparison signal and providing a filtered comparison signal. In an embodiment, the level control unit is configured to use the filtered comparison signal in determining the composite level estimate. It is thereby achieved that, for slow variations of the sound level, the high-resolution (first) level estimate is used (or predominantly used) as the composite level estimate, whereas for fast variations of the sound level the low-resolution (second) level estimate is used (or predominantly used) as the composite level estimate.
In an embodiment, the level control unit comprises a combination unit for combining the filtered comparison signal, or a signal derived therefrom, with the second level estimate and providing a combined signal. In an embodiment, the level control unit is configured to use the combined signal in determining the composite level estimate.
In an embodiment, the level control unit comprises a limiter configured to limit the influence of the comparison signal on the composite level estimate. For a given change in the comparison signal, the limiter may provide a smaller change in the resulting composite level estimate, or set an upper limit, compared with the case where the limiter is not present. In an embodiment, the limiter is configured to limit the effect of the comparison signal on the composite level estimate to a predetermined or adaptively determined amount. In an embodiment, the predetermined amount is 10 dB. In an embodiment, the level control unit is configured to limit the deviation of the combined level estimate from the second level estimate to a predetermined amount, e.g. 10 dB.
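The limiter can be sketched as a simple clamp of the comparison signal. The function name and the symmetric plus/minus clamp are assumptions for illustration; the text only requires that the deviation from the second estimate be bounded (e.g. by 10 dB).

```python
# Sketch of the limiter (illustrative form): the composite estimate
# follows the first (high-resolution) estimate, but its deviation from
# the second (low-resolution) estimate is clamped to max_dev_db.
def limited_composite(le1_db, le2_db, max_dev_db=10.0):
    diff = le1_db - le2_db                          # comparison signal (dB)
    diff = max(-max_dev_db, min(max_dev_db, diff))  # limit its influence
    return le2_db + diff
```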
In an embodiment, the first and/or second level estimator comprises a dynamic level estimator, which provides an estimate of the level of its input signal and whose attack and/or release time constants are configurable in dependence on that input signal. In an embodiment, the dynamic level estimator comprises a relatively fast level estimator connected to a pilot level estimator, both receiving the input signal to the dynamic level estimator. The pilot level estimator is configured to provide the estimate of the level of the input signal (to the dynamic level estimator), wherein the attack and/or release times of the pilot level estimator are determined based on a difference between the level estimates of the pilot level estimator and the relatively fast level estimator. In an embodiment, the dynamic level estimator comprises the level estimator described in WO2003081947A1 (see figs. 7A, 7B therein).
In an embodiment, the adaptive level estimator comprises at least one calibrator for calibrating a level estimate for a specific type of sound signal. In an embodiment, the specific type of sound signal is speech (e.g. speech in quiet or speech in noise), music, or noise. In an embodiment, the calibrator is configured to calibrate the level estimate according to a particular standardized or otherwise documented calibration scheme. In an embodiment, the calibrator is configured to calibrate the level estimate for a long-term average speech spectrum (LTASS), e.g. according to IEC 60118-15. In an embodiment, the adaptive level estimator comprises at least two calibrators adapted to calibrate the first and second level estimates, e.g. optimized for the same type of sound signal, e.g. according to the same calibration scheme. In an embodiment, the at least two calibrators are adapted to calibrate the first and second level estimates for different types of sound signals, e.g. according to different calibration schemes. In an embodiment, the first level estimate is calibrated for a first type of signal, such as noise. In an embodiment, the second level estimate is calibrated for a second type of signal, such as speech. In an embodiment, the adaptive level estimator comprises at least three calibrators adapted to calibrate the first and second level estimates and the composite level estimate (e.g. optimized for the same type of sound signal, e.g. according to the same calibration scheme).
Hearing device
In one aspect, the invention also provides a hearing device, such as a hearing aid, comprising an adaptive level estimator as described above, in the detailed description and as defined in the claims.
In an embodiment, the hearing device comprises an input unit for providing the electrical input signal representing sound in a sub-band representation IN(k, m), where k is a sub-band index, k = 1, …, K, K is the number of sub-bands, and m is a time frame index. In an embodiment, the first number of frequency bands K1 is less than or equal to the number K of sub-bands of the electrical input signal. In an embodiment, the hearing device comprises a first band conversion unit for providing the electrical input signal, or a processed version thereof, in K1 frequency bands (e.g. based on the K sub-bands) for use by the first level estimator of the adaptive level estimator. In an embodiment, the hearing device comprises a second band conversion unit for providing the electrical input signal, or a processed version thereof, in K2 frequency bands (e.g. based on the K or K1 bands) for use by the second level estimator of the adaptive level estimator. In an embodiment, the first and second band conversion units are implemented as band summation units, wherein the content of a given output band is the sum (or a weighted average, or an average, such as a statistical average) of the content of the input bands spanned by the output band. In an embodiment, the first and second band conversion units are implemented as maximum-band units, wherein the content of a given output band is the maximum of the content of the input bands spanned by the output band. In an embodiment, only the magnitude of the input signal is considered for the level estimation. In an embodiment, the hearing device, e.g. the adaptive level estimator, comprises an ABS unit for providing the magnitude of the input signal or of a signal derived therefrom. In an embodiment, the hearing device comprises a third band conversion unit for providing the second level estimate, or a processed version thereof, in K1 frequency bands for use in the level control unit (conversion from K2 to K1 frequency bands).
In an embodiment, the hearing device comprises a fourth band conversion unit for providing the composite level estimate, or a processed version thereof, in K frequency bands for use in a level-to-gain conversion unit, e.g. forming part of the signal processor (conversion from K1 to K frequency bands). In an embodiment, the third and fourth band conversion units are band distribution units for providing output levels, such as the second level estimate in K1 bands (instead of K2 bands, where K1 > K2) and the composite level estimate in K bands (instead of K1 bands, where K ≥ K1). In an embodiment, K > K1.
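The band conversion units described above (band summation, maximum-band, and band distribution) can be sketched as follows, assuming uniform grouping with the input band count an integer multiple of the output band count; the function names are illustrative.

```python
import numpy as np

# Sketches of the band conversion units (uniform grouping assumed).
def band_sum(levels_db, k_out):
    """Band summation unit: power sum over each group of input bands."""
    p = 10.0 ** (np.asarray(levels_db) / 10.0)
    return 10.0 * np.log10(p.reshape(k_out, -1).sum(axis=1))

def band_max(levels_db, k_out):
    """Maximum-band unit: maximum level within each group of input bands."""
    return np.asarray(levels_db).reshape(k_out, -1).max(axis=1)

def band_distribute(levels_db, k_out):
    """Band distribution unit: repeat each low-resolution level over its span."""
    lv = np.asarray(levels_db)
    return np.repeat(lv, k_out // lv.size)
```

For example, `band_max` maps 16 high-resolution levels to 4 low-resolution levels, and `band_distribute` maps 4 low-resolution levels back onto 16 bands for use by the level control unit.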
In an embodiment, the hearing device comprises an output unit for providing a stimulus originating from the electrical input signal that is perceivable as sound by a user. In an embodiment, the hearing device, such as the output unit, comprises a synthesis filter bank for converting the sub-band signals into a single time domain signal. In an embodiment, the single time domain signal forms the basis for generating a stimulus perceivable as sound. In an embodiment, the output unit comprises a speaker for providing the stimulus as sound waves in air. In an embodiment, the output unit comprises a vibrator for providing the stimulus as sound waves in the bone, e.g. in the skull of the user. In an embodiment, the output unit comprises a multi-electrode array for providing the stimulation as electrical stimulation of a cochlear nerve of the user.
In an embodiment, the hearing device comprises a level-to-gain conversion unit for converting the composite level estimate into a corresponding gain. In an embodiment, the level-to-gain conversion unit is configured to implement an application-specific compression strategy. In an embodiment, the level-to-gain conversion unit is configured to implement a compression strategy for a specific user, such as a hearing-impaired user.
In an embodiment, the hearing device comprises or consists of a hearing aid, a headset, an earphone, an ear protection device, or a combination thereof.
In an embodiment, the hearing device is adapted to provide a frequency dependent gain and/or a level dependent compression and/or a frequency shift of one or more frequency ranges to one or more other frequency ranges (with or without frequency compression) to compensate for a hearing impairment of the user. In an embodiment, the hearing device comprises a signal processor for processing an input signal and providing a processed output signal.
In an embodiment, the hearing device comprises an output unit for providing a stimulus perceived by the user as an acoustic signal based on the processed electrical signal. In an embodiment, the output unit comprises a plurality of electrodes of a cochlear implant or a vibrator of a bone conduction hearing device. In an embodiment, the output unit comprises an output transducer. In an embodiment, the output transducer comprises a receiver (speaker) for providing the stimulus as an acoustic signal to the user. In an embodiment, the output transducer comprises a vibrator for providing the stimulus to the user as mechanical vibrations of the skull bone (e.g. in a bone-attached or bone-anchored hearing device).
In an embodiment, the hearing device comprises an input unit for providing an electrical input signal representing sound. In an embodiment, the input unit comprises an input transducer, such as a microphone, for converting input sound into an electrical input signal. In an embodiment, the input unit comprises a wireless receiver for receiving a wireless signal comprising sound and providing an electrical input signal representing the sound. In an embodiment, the hearing device comprises a directional microphone system adapted to spatially filter sound from the environment to enhance a target sound source among a plurality of sound sources in the local environment of a user wearing the hearing device. In an embodiment, the directional system is adapted to detect (e.g. adaptively detect) from which direction a particular part of the microphone signal originates. This can be achieved in a number of different ways, for example as described in the prior art.
In an embodiment, the hearing device comprises an antenna and a transceiver circuit for receiving a direct electrical input signal from another device, such as a communication device or another hearing device. In an embodiment, the hearing device comprises a (possibly standardized) electrical interface (e.g. in the form of a connector) for receiving a wired direct electrical input signal from another device, such as a communication device or another hearing device. In an embodiment the direct electrical input signal represents or comprises an audio signal and/or a control signal and/or an information signal. In an embodiment, the hearing device comprises a demodulation circuit for demodulating the received direct electrical input to provide a direct electrical input signal representing the audio signal and/or the control signal, for example for setting an operating parameter (such as volume) and/or a processing parameter of the hearing device. In general, the wireless link established by the transmitter and the antenna and transceiver circuitry of the hearing device may be of any type. In an embodiment, the wireless link is used under power constraints, for example since the hearing device comprises a portable (typically battery-driven) device. In an embodiment, the wireless link is a near field communication based link, e.g. an inductive link based on inductive coupling between antenna coils of the transmitter part and the receiver part. In another embodiment, the wireless link is based on far field electromagnetic radiation. 
In an embodiment, the communication over the wireless link is arranged according to a specific modulation scheme, for example an analog modulation scheme, such as FM (frequency modulation) or AM (amplitude modulation) or PM (phase modulation), or a digital modulation scheme, such as ASK (amplitude shift keying) such as on-off keying, FSK (frequency shift keying), PSK (phase shift keying) such as MSK (minimum shift keying), or QAM (quadrature amplitude modulation).
Preferably, the communication between the hearing device and the other device is based on some kind of modulation at frequencies above 100 kHz. Preferably, frequencies used to establish a communication link between the hearing device and the further device are below 70 GHz, e.g. located in the range from 50 MHz to 70 GHz, e.g. above 300 MHz, e.g. in an ISM range above 300 MHz, e.g. in the 900 MHz range or in the 2.4 GHz range or in the 5.8 GHz range or in the 60 GHz range (ISM = Industrial, Scientific and Medical, such standardized ranges being defined e.g. by the International Telecommunication Union, ITU). In an embodiment, the wireless link is based on a standardized or proprietary technology. In an embodiment, the wireless link is based on Bluetooth technology (e.g. Bluetooth Low Energy technology).
In an embodiment, the hearing device is a portable device, such as a device comprising a local energy source, such as a battery, e.g. a rechargeable battery.
In an embodiment, the hearing device comprises a forward or signal path between an input transducer (a microphone system and/or a direct electrical input (such as a wireless receiver)) and an output transducer. In an embodiment, a signal processor is located in the forward path. In an embodiment, the signal processor is adapted to provide a frequency dependent gain according to the specific needs of the user. In an embodiment, the hearing device comprises an analysis path with functionality for analyzing the input signal (e.g. determining level, modulation, signal type, acoustic feedback estimate, etc.). In an embodiment, part or all of the signal processing of the analysis path and/or the signal path is performed in the frequency domain. In an embodiment, part or all of the signal processing of the analysis path and/or the signal path is performed in the time domain.
In an embodiment, an analog electrical signal representing an acoustic signal is converted into a digital audio signal in an analog-to-digital (AD) conversion process, wherein the analog signal is sampled at a predetermined sampling frequency or sampling rate f_s, f_s being for example in the range from 8 kHz to 48 kHz, adapted to the specific needs of the application, to provide digital samples x_n (or x[n]) at discrete points in time t_n (or n). Each audio sample represents, by a predetermined number N_b of bits, the value of the acoustic signal at t_n, N_b being for example in the range from 1 to 48 bits, such as 24 bits. Each audio sample is hence quantized using N_b bits (resulting in 2^N_b different possible values of an audio sample). A digital sample x has a time length of 1/f_s, e.g. 50 μs for f_s = 20 kHz. In an embodiment, a plurality of audio samples are arranged in time frames. In an embodiment, a time frame comprises 64 or 128 audio data samples. Other frame lengths may be used depending on the application.
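The sampling arithmetic above can be checked directly; the values below use the example numbers from the text (f_s = 20 kHz, 24-bit samples, 64-sample frames):

```python
fs = 20_000          # sampling rate in Hz (example value from the text)
Nb = 24              # bits per audio sample
frame_len = 64       # audio samples per time frame

sample_period_us = 1e6 / fs       # duration of one digital sample in microseconds
levels = 2 ** Nb                  # number of distinct quantization values (2^Nb)
frame_ms = 1e3 * frame_len / fs   # duration of one time frame in milliseconds

print(sample_period_us)  # 50.0 us, matching the text for fs = 20 kHz
print(levels)            # 16777216 possible sample values with 24 bits
print(frame_ms)          # 3.2 ms per 64-sample frame
```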
In an embodiment, the hearing device comprises an analog-to-digital (AD) converter to digitize the analog input at a predetermined sampling rate, e.g. 20 kHz. In an embodiment, the hearing device comprises a digital-to-analog (DA) converter to convert the digital signal into an analog output signal, e.g. for presentation to a user via an output transducer.
In an embodiment, the hearing device, such as a microphone unit and/or a transceiver unit, comprises a TF conversion unit for providing a time-frequency representation of the input signal. In an embodiment, the time-frequency representation comprises an array or mapping of respective complex or real values of the involved signal at a particular time and frequency range. In an embodiment, the TF conversion unit comprises a filter bank for filtering a (time-varying) input signal and providing a plurality of (time-varying) output signals, each comprising a distinct frequency range of the input signal. In an embodiment, the TF conversion unit comprises a Fourier transform unit for converting the time-varying input signal into a (time-varying) signal in the frequency domain. In an embodiment, the hearing device takes into account a frequency range from a minimum frequency f_min to a maximum frequency f_max comprising a part of the typical human hearing range from 20 Hz to 20 kHz, for example a part of the range from 20 Hz to 12 kHz. In an embodiment, the signal of the forward path and/or the analysis path of the hearing device is split into NI frequency bands, where NI is for example larger than 5, such as larger than 10, such as larger than 50, such as larger than 100, such as larger than 500, at least parts of which are processed individually. In an embodiment, the hearing aid is adapted to process the signal of the forward and/or analysis path in NP different frequency channels (NP ≤ NI). The channels may be uniform or non-uniform in width (e.g. increasing in width with frequency), overlapping or non-overlapping.
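As a concrete illustration of a TF conversion unit, the sketch below splits a time-domain signal into complex sub-band signals with a windowed FFT. This is just one possible filter-bank realisation under assumed window, hop and band-count choices; the patent does not prescribe this particular one:

```python
import numpy as np

def analysis_filterbank(x, n_bands=64, hop=32):
    """Split a time-domain signal into complex sub-band signals via a
    Hann-windowed FFT (a simple uniform analysis filter bank)."""
    win = np.hanning(2 * n_bands)                # 128-sample analysis window
    frames = []
    for start in range(0, len(x) - len(win) + 1, hop):
        seg = x[start:start + len(win)] * win
        frames.append(np.fft.rfft(seg))          # n_bands + 1 frequency bins
    return np.array(frames)                      # shape: (n_frames, n_bands + 1)

# A 1 kHz tone sampled at 20 kHz concentrates its energy near one band:
# bin resolution is 20000/128 = 156.25 Hz, so 1000 Hz falls near bin 6.
fs = 20_000
t = np.arange(2048) / fs
tfr = analysis_filterbank(np.sin(2 * np.pi * 1000 * t))
peak_bin = int(np.abs(tfr[5]).argmax())
print(peak_bin)  # 6 (nearest bin to 1000 Hz at this resolution)
```

A hearing device would typically use a matching synthesis filter bank (as mentioned for the output unit) to reconstruct the time-domain signal after per-band processing.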
In an embodiment, the hearing device comprises a plurality of detectors configured to provide status signals related to a current network environment (e.g. a current acoustic environment) of the hearing device, and/or related to a current status of a user wearing the hearing device, and/or related to a current status or operation mode of the hearing device. Alternatively or additionally, the one or more detectors may form part of an external device in (e.g. wireless) communication with the hearing device. The external device may comprise, for example, another hearing device, a remote control, an audio transmission device, a telephone (e.g., a smartphone), an external sensor, etc.
In an embodiment, one or more of the plurality of detectors contribute to the full band signal (time domain). In an embodiment, one or more of the plurality of detectors operates on a band split signal ((time-) frequency domain).
In a particular embodiment, the hearing device comprises a voice detector (VD) for determining whether the input signal (at a particular point in time) comprises a voice signal. In this specification, a voice signal includes a speech signal from a human being. It may also include other forms of vocalization generated by the human speech system (e.g. singing). In an embodiment, the voice detector unit is adapted to classify the user's current acoustic environment as a "voice" or "no voice" environment. This has the advantage that time segments of the microphone signal comprising a human sound (e.g. speech) in the user's environment can be identified and thus separated from time segments comprising only other sound sources (e.g. artificially generated noise). In an embodiment, the voice detector is adapted to detect the user's own voice as "voice" as well. Alternatively, the voice detector is adapted to exclude the user's own voice from the detection of "voice".
In an embodiment, the hearing device comprises an own voice detector for detecting whether a particular input sound (e.g. voice) originates from the voice of the user of the system. In an embodiment, the microphone system of the hearing device is adapted to be able to distinguish between the user's own voice and the voice of another person, and possibly from unvoiced sounds.
In an embodiment, the hearing device comprises a classification unit configured to classify the current situation based on the input signal from (at least part of) the detector and possibly other inputs. In this specification, the "current situation" is defined by one or more of the following:
a) a physical environment (e.g. including a current electromagnetic environment, such as the presence of electromagnetic signals (including audio and/or control signals) that are or are not intended to be received by the hearing device, or other properties of the current environment other than acoustic);
b) current acoustic situation (input level, feedback, etc.);
c) the current mode or state of the user (motion, temperature, etc.);
d) the current mode or state of the hearing device and/or another device in communication with the hearing device (selected program, elapsed time since last user interaction, etc.).
In an embodiment, the hearing device further comprises other suitable functions for the application in question, such as feedback suppression, noise reduction, etc.
In an embodiment, the hearing device comprises a listening device, such as a hearing aid, such as a hearing instrument, e.g. a hearing instrument adapted to be positioned at the ear or fully or partially in the ear canal of a user, e.g. a headset, an ear microphone, an ear protection device or a combination thereof.
Applications of
In one aspect, there is provided a use of a hearing device as described above, detailed in the "detailed description" section and defined in the claims. In an embodiment, use in a system comprising audio distribution is provided, such as a system comprising a microphone and a loudspeaker. In an embodiment, use in a system comprising one or more hearing instruments, headsets, active ear protection systems, etc., is provided, for example in hands-free telephone systems, teleconferencing systems, broadcasting systems, karaoke systems, classroom amplification systems, etc.
Method
In one aspect, the present application also provides a method of dynamically estimating a level of an input signal representing sound. The method comprises the following steps:
-providing a first level estimate of the electrical input signal in a first number K1 of frequency bands;
- providing a second level estimate of the electrical input signal, or an attack/release time constant associated with the second level estimate, in a second number K2 of frequency bands, wherein K2 is smaller than K1; and
- providing a synthesis level estimate based on the first and second level estimates, or based on the first level estimate and an attack/release time constant associated with the second level estimate.
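The method steps above can be sketched as follows. The linear cross-fade between the two estimates and the explicit band mapping are illustrative assumptions, not the patent's exact combination rule:

```python
import numpy as np

def synthesis_level_estimate(le1, le2, band_map, w):
    """Combine a fine and a coarse level estimate into one K1-band estimate.

    le1:      first level estimate, K1 bands (slow, high spectral resolution)
    le2:      second level estimate, K2 bands (fast, low spectral resolution)
    band_map: for each of the K1 bands, the index of the K2 band containing
              it (the role of the K2->K1 band distributor in the text)
    w:        weight in [0, 1]; w = 1 trusts the fast coarse estimate
              (rapidly varying input), w = 0 the slow fine one (stationary
              input). A linear cross-fade is an illustrative choice.
    """
    le2_dist = le2[band_map]            # distribute the K2 levels over K1 bands
    return (1.0 - w) * le1 + w * le2_dist

le1 = np.array([40.0, 42.0, 60.0, 61.0, 45.0, 44.0])  # K1 = 6 band levels (dB)
le2 = np.array([41.0, 55.0])                          # K2 = 2 band levels (dB)
band_map = np.array([0, 0, 0, 1, 1, 1])               # 3 fine bands per coarse band
print(synthesis_level_estimate(le1, le2, band_map, w=1.0))  # coarse levels spread over K1 bands
```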
Some or all of the structural features of the apparatus described above, detailed in the "detailed description of the invention" or defined in the claims may be combined with the implementation of the method of the invention, when appropriately replaced by corresponding procedures, and vice versa. The implementation of the method has the same advantages as the corresponding device.
Computer readable medium
The present invention further provides a tangible computer readable medium storing a computer program comprising program code which, when run on a data processing system, causes the data processing system to perform at least part (e.g. most or all) of the steps of the method described above, in the detailed description of the invention, and defined in the claims.
By way of example, and not limitation, such tangible computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. In addition to being stored on a tangible medium, a computer program may also be transmitted over a transmission medium such as a wired or wireless link or a network such as the Internet and loaded into a data processing system to be executed at a location other than that of the tangible medium.
Data processing system
In one aspect, the invention further provides a data processing system comprising a processor and program code to cause the processor to perform at least some (e.g. most or all) of the steps of the method described in detail above, in the detailed description of the invention and in the claims.
Computer program
Furthermore, the present application provides a computer program (product) comprising instructions which, when executed by a computer, cause the computer to perform the method (steps) described above in detail in the "detailed description" and defined in the claims.
Hearing system
In another aspect, a hearing system (e.g., a binaural hearing system) comprises the first and second hearing devices described above, detailed in the "detailed description" and defined in the claims, the hearing system being adapted to establish a communication link between the first and second hearing devices.
In an embodiment, the hearing system comprises an auxiliary device, the hearing system being adapted such that information can be exchanged between or forwarded from at least one of the first and second hearing devices to the auxiliary device. In an embodiment, the hearing system is adapted to implement a binaural hearing system, such as a binaural hearing aid system.
In an embodiment, the hearing system is adapted to establish respective communication links between the first and second hearing devices and between the hearing devices and the auxiliary device such that information (such as control and status signals, e.g. level estimates, possibly audio signals) can be exchanged or forwarded from one device to the other (e.g. directly from one hearing device to the other or via the auxiliary device, or directly between the auxiliary device and either of the first and second hearing devices, or between the auxiliary device and a given hearing device (either directly or via the other hearing device)).
In an embodiment, the auxiliary device is or comprises a remote control for controlling the function and operation of the hearing device. In an embodiment, the functionality of the remote control is implemented in a smartphone, which may run an APP enabling the control of the functionality of the hearing system via the smartphone (the hearing device comprises a suitable wireless interface to the smartphone, e.g. based on bluetooth or some other standardized or proprietary scheme).
In this specification, a smart phone may include a combination of (a) a mobile phone and (B) a personal computer:
- (a) a mobile telephone comprising a microphone, a loudspeaker, and a (wireless) interface to the Public Switched Telephone Network (PSTN);
- (B) a personal computer comprising a processor, a memory, an operating system (OS), a user interface (such as a keyboard and a display, for example integrated in a touch-sensitive display) and a wireless data interface (including a web browser), enabling a user to download and execute applications (APPs) implementing specific functional features (for example displaying information retrieved from the Internet, remotely controlling another device, combining information from a plurality of different sensors of the smartphone (such as a camera, scanner, GPS, microphone, etc.) and/or external sensors to provide the specific feature, etc.).
APP
In another aspect, the invention also provides a non-transitory application, referred to as an APP. The APP comprises executable instructions configured to run on an auxiliary device to implement a user interface for a hearing device or a (e.g. binaural) hearing system as described above, detailed in the "detailed description" and defined in the claims. In an embodiment, the APP is configured to run on a mobile phone, such as a smartphone, or another portable device enabling communication with the hearing device or hearing system.
In an embodiment, the non-transitory application is configured to enable configuring the adaptive level estimator in the hearing device and/or in the first and second hearing devices of the (e.g. binaural) hearing system according to the invention to be performed via said user interface.
Definition of
In this specification, "hearing device" refers to a device adapted to improve, enhance and/or protect the hearing ability of a user, such as a hearing instrument or an active ear protection device or other audio processing device, by receiving an acoustic signal from the user's environment, generating a corresponding audio signal, possibly modifying the audio signal, and providing the possibly modified audio signal as an audible signal to at least one ear of the user. "Hearing device" also refers to a device such as an earphone or a headset adapted to electronically receive an audio signal, possibly modify the audio signal, and provide the possibly modified audio signal as an audible signal to at least one ear of a user. The audible signal may be provided, for example, in the form of: acoustic signals radiated into the user's outer ear, acoustic signals transmitted as mechanical vibrations through the bone structure of the user's head and/or through portions of the middle ear to the user's inner ear, and electrical signals transmitted directly or indirectly to the user's cochlear nerve.
The hearing device may be configured to be worn in any known manner, such as a unit worn behind the ear (with a tube for introducing radiated acoustic signals into the ear canal or with a speaker arranged close to or in the ear canal), as a unit arranged wholly or partly in the pinna and/or ear canal, as a unit attached to a fixture implanted in the skull bone, or as a wholly or partly implanted unit, etc. The hearing device may comprise a single unit or several units in electronic communication with each other.
More generally, a hearing device comprises an input transducer for receiving acoustic signals from the user's environment and providing corresponding input audio signals and/or a receiver for receiving input audio signals electronically (i.e. wired or wireless), a (typically configurable) signal processing circuit (such as a signal processor, e.g. comprising a configurable (programmable) processor, e.g. a digital signal processor) for processing the input audio signals, and an output unit for providing audible signals to the user in dependence of the processed audio signals. The signal processor may be adapted to process the input signal in the time domain or in a plurality of frequency bands. In some hearing devices, the amplifier and/or compressor may constitute a signal processing circuit. The signal processing circuit typically comprises one or more (integrated or separate) memory elements for executing programs and/or for saving parameters for use (or possible use) in the processing and/or for saving information suitable for the function of the hearing device and/or for saving information for use e.g. in connection with an interface to a user and/or to a programming device (such as processed information, e.g. provided by the signal processing circuit). In some hearing devices, the output unit may comprise an output transducer, such as a speaker for providing a space-borne acoustic signal or a vibrator for providing a structure-or liquid-borne acoustic signal. In some hearing devices, the output unit may include one or more output electrodes for providing electrical signals (e.g., a multi-electrode array for electrically stimulating the cochlear nerve).
In some hearing devices, the vibrator may be adapted to transmit the acoustic signal propagated by the structure to the skull bone percutaneously or percutaneously. In some hearing devices, the vibrator may be implanted in the middle and/or inner ear. In some hearing devices, the vibrator may be adapted to provide a structurally propagated acoustic signal to the middle ear bone and/or cochlea. In some hearing devices, the vibrator may be adapted to provide a liquid-borne acoustic signal to the cochlear liquid, for example, through the oval window. In some hearing devices, the output electrode may be implanted in the cochlea or on the inside of the skull, and may be adapted to provide electrical signals to the hair cells of the cochlea, one or more auditory nerves, the auditory brainstem, the auditory midbrain, the auditory cortex, and/or other parts of the cerebral cortex.
"hearing system" refers to a system comprising one or two hearing devices. "binaural hearing system" refers to a system comprising two hearing devices and adapted to cooperatively provide audible signals to both ears of a user. The hearing system or binaural hearing system may also include one or more "auxiliary devices" that communicate with the hearing device and affect and/or benefit from the function of the hearing device. The auxiliary device may be, for example, a remote control, an audio gateway device, a mobile phone (e.g. a smart phone), a broadcast system, a car audio system or a music player. Hearing devices, hearing systems or binaural hearing systems may be used, for example, to compensate for hearing loss of hearing impaired persons, to enhance or protect hearing of normal hearing persons, and/or to convey electronic audio signals to humans.
Embodiments of the invention may be used, for example, in the following applications: benefit from devices or applications where the input signal level is dynamically adjusted according to the listener's (possibly limited) perceived dynamic range of the sound level or any other specific dynamic range deviating from the dynamic range of the ambient sound. The invention can be used, for example, in the following applications: hearing aids, headsets, ear protection systems, hands-free telephone systems, mobile phones, teleconferencing systems, broadcasting systems, karaoke systems, classroom amplification systems, and the like.
Drawings
Various aspects of the invention will be best understood from the following detailed description when read in conjunction with the accompanying drawings. For the sake of clarity, the figures are schematic and simplified drawings, which only show details which are necessary for understanding the invention and other details are omitted. Throughout the specification, the same reference numerals are used for the same or corresponding parts. The various features of each aspect may be combined with any or all of the features of the other aspects. These and other aspects, features and/or technical effects will be apparent from and elucidated with reference to the following figures, in which:
fig. 1 schematically shows an exemplary level-gain mapping for a normal-hearing user or a hearing-impaired user.
Fig. 2 shows an exemplary level-frequency relationship diagram for four band level estimators at a given point in time for a particular acoustic situation comprising a soft speech signal and a loud (stationary) narrow-band noise signal.
Fig. 3A shows an adaptive level estimator according to a first embodiment of the present invention.
Fig. 3B shows an adaptive level estimator according to a second embodiment of the present invention.
Fig. 3C shows the possible effect of the filtering unit across frequency with a weight (W) between 0 and 1.
Fig. 4A shows a hearing device comprising an adaptive level estimator according to a first embodiment of the present invention.
Fig. 4B shows a hearing device comprising an adaptive level estimator according to a second embodiment of the present invention.
Fig. 5 shows a binaural hearing system comprising a first and a second hearing device according to an embodiment of the invention.
Fig. 6 shows a hearing device comprising an adaptive level estimator according to a third embodiment of the invention, which hearing device is adapted to exchange a first level estimate with another device.
Fig. 7A shows an exemplary structure of a level estimator used in the adaptive level estimator according to the present invention.
Fig. 7B schematically illustrates an exemplary scheme for determining attack and release times of a level estimator from an input signal.
Fig. 8A shows an exemplary application of an embodiment of a hearing system according to the invention, comprising a user, a binaural hearing aid system and an auxiliary device.
Fig. 8B shows an auxiliary device running APP that enables a user to influence the functionality of the adaptive level estimators of the respective first and second hearing devices.
Fig. 9 shows an adaptive level estimator according to a fourth embodiment of the present invention.
Fig. 10 shows an adaptive level estimator according to a fifth embodiment of the present invention.
Further areas of applicability of the present invention will become apparent from the detailed description provided hereinafter. It should be understood, however, that the detailed description and the specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only. Other embodiments of the present invention will be apparent to those skilled in the art based on the following detailed description.
Detailed Description
The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. It will be apparent, however, to one skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described in terms of various blocks, functional units, modules, elements, circuits, steps, processes, algorithms, and the like (collectively, "elements"). Depending on the particular application, design constraints, or other reasons, these elements may be implemented using electronic hardware, computer programs, or any combination thereof.
The electronic hardware may include microprocessors, microcontrollers, Digital Signal Processors (DSPs), Field Programmable Gate Arrays (FPGAs), Programmable Logic Devices (PLDs), gating logic, discrete hardware circuits, and other suitable hardware configured to perform the various functions described herein. A computer program should be broadly interpreted as instructions, instruction sets, code segments, program code, programs, subprograms, software modules, applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or by other names.
The present invention proposes a concept for solving the problems of the prior art, thereby maintaining a proper sound perception and an acceptable speech intelligibility.
Another driving force behind modifying the level estimation strategy stems from the trend (resulting from improvements in digital signal processing and chip development) of providing signal processing in an increasing number of frequency bands, such as 32 frequency bands or more, for example 64 frequency bands or more, in the forward path of an audio processing device such as a hearing aid. Hence, a higher-resolution level estimation is required.
However, experience shows that combining many bands with fast compression sounds very bad (intuitively, the variance of a level estimate is inversely proportional to the square root of the bandwidth times the averaging time, so a narrow band with a small time constant yields a large variance, which is undesirable for sound perception).
Therefore, a compromise solution is required.
According to the same logic, an acceptable variance may be provided by either:
many frequency bands (small bandwidths) and large time constants (slow compression); or
fewer frequency bands (large bandwidths) and small time constants (fast compression).
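A back-of-the-envelope check of this trade-off, assuming the stated proportionality of the level-estimate standard deviation to 1/sqrt(bandwidth × time); the numeric band widths and time constants are illustrative:

```python
import math

def rel_level_std(bandwidth_hz, tau_s, c=1.0):
    """Relative standard deviation of a level estimate, taken (as in the
    text) to scale as 1/sqrt(bandwidth * averaging time); c is an
    arbitrary proportionality constant."""
    return c / math.sqrt(bandwidth_hz * tau_s)

# A narrow band with a large time constant and a wide band with a small
# time constant give the same estimator variance; a narrow band with a
# small time constant does not (its std is about 3x larger here).
print(rel_level_std(250, 0.4))    # many bands, slow compression
print(rel_level_std(2500, 0.04))  # few bands, fast compression (same value)
print(rel_level_std(250, 0.04))   # narrow band + fast compression: larger
```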
The present invention hence proposes to provide a level estimate
in a relatively large number (e.g. more than 24) of frequency bands for relatively slowly varying input signals; and
in a relatively small number of frequency bands (e.g. 4 or fewer) during rapidly varying input signals,
and to introduce an appropriate "tapering scheme" between the two.
In other words, the proposed low/high-resolution hybrid level estimator is designed to maintain the quality of (prior art) level estimators (see e.g. WO2003081947A1) focused on adjusting the time constants in a few frequency bands according to the input signal, providing a fast level estimation when the input signal changes rapidly, but to combine this with higher spectral resolution for smoother input signals. For this purpose, a single calibrated high-resolution (e.g. more than 16 channels, such as 24 channels) level estimator is used.
In general, the inventive concept is useful in devices or applications that benefit from dynamic adaptation of an input signal to the listener's perceived (possibly limited) dynamic range of sound levels, or to any other specific dynamic range deviating from ambient sound.
Fig. 3A shows an adaptive level estimator ALD according to a first embodiment of the invention. The adaptive level estimator is adapted to provide a level estimate RLE of an electrical input signal representing sound. The adaptive level estimator ALD comprises a first level estimator LD1 (time constant τ1) and a second level estimator LD2 (time constant τ2). The first level estimator is configured to provide a first level estimate LE1 of the electrical input signal in a first number K1 of frequency bands (based on the input signal in K1 frequency bands), and the second level estimator is configured to provide a second level estimate LE2 of the electrical input signal in a second number K2 of frequency bands (based on the input signal in K2 frequency bands). The second number of frequency bands K2 is smaller than the first number of frequency bands K1. In general, the (attack and release) time constants of the first level estimator LD1 and the second level estimator LD2 (denoted herein collectively as τ1 and τ2, respectively) satisfy the relationship τ1 ≥ τ2. In an embodiment, the second level estimator LD2 comprises 4 fast channels (K2 = 4) and the first level estimator LD1 comprises 24 slow channels (K1 = 24), but any combination may be applied ("fast" and "slow" here meaning τ2 ≤ τ1). Realistic possibilities range from 1-8 fast channels and 1-64 slow channels. In an embodiment, 1 fast channel (K2 = 1) is used. The adaptive level estimator ALD further comprises a level control unit CONT receiving the first level estimate LE1 and the second level estimate LE2, and being configured to provide a combined level estimate RLE in the K1 frequency bands based on the first and second level estimates LE1, LE2 in the K1 frequency bands.
The adaptive level estimator ALD further comprises a K2-to-K1 band distributor (K2->K1) for transforming the K2 level estimates LE2 of the second level estimator LD2 into K1 level estimates LE2, allowing a direct comparison in the control unit CONT with the K1 level estimates LE1 of the first level estimator LD1.
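A minimal sketch of such a K2-to-K1 distributor: each of the K1 bands is assigned the estimate of the K2 band whose range it falls in. The contiguous, roughly equal-sized grouping of bands is an illustrative assumption; the patent does not fix the mapping.

```python
def distribute(values_k2, k1):
    """K2 -> K1 band distributor: replicate each of the K2 band values
    onto the K1 bands it covers (contiguous equal-sized groups assumed).
    """
    k2 = len(values_k2)
    if k1 < k2:
        raise ValueError("K1 must be >= K2")
    return [values_k2[min(i * k2 // k1, k2 - 1)] for i in range(k1)]

# E.g. K2 = 4 estimates spread onto K1 = 8 bands:
# distribute([l1, l2, l3, l4], 8) -> [l1, l1, l2, l2, l3, l3, l4, l4]
```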
Fig. 3B shows an adaptive level estimator according to a second embodiment of the present invention. The embodiment of fig. 3B comprises the same functional elements as the embodiment of fig. 3A. However, the level control unit CONT is described in more detail in fig. 3B and below. The level control unit CONT comprises a comparison unit COMP for comparing the first and second level estimates LE1, LE2 and providing a comparison signal ΔL (in the K1 frequency bands) indicating the result of the comparison. The resulting level estimate RLE of the adaptive level estimator is based on this comparison of the first and second level estimates. The comparison unit COMP may for example comprise a subtraction unit, such that the comparison signal ΔL indicates the difference between the first and second level estimates (e.g. on a linear or logarithmic scale). Alternatively, the combined level estimate RLE may be based on a ratio between the first and second level estimates (or a ratio between the second and first level estimates). The level control unit CONT further comprises a filtering unit LP for low-pass filtering the comparison signal ΔL and providing a filtered comparison signal WΔL (in the K1 frequency bands). The control unit is configured to use the filtered comparison signal WΔL in determining the combined level estimate RLE. The level control unit CONT further comprises a combination unit CU, for example a summing unit, for combining the filtered comparison signal WΔL with the second level estimate LE2 and providing a combined signal, here equal to the combined level estimate RLE (in the K1 frequency bands). It is thereby achieved that for slow variations in the sound level, the high resolution (first) level estimate is used (or mainly used) as the combined level estimate, and for fast variations in the sound level, the low resolution (second) level estimate is used (or mainly used) as the combined level estimate.
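The signal flow of the level control unit CONT can be sketched per frequency band as RLE = LE2 + LP(LE1 - LE2): for a slowly varying difference the filter output converges to LE1 - LE2 and RLE approaches the high resolution estimate LE1, while fast changes pass straight through LE2. The first-order exponential filter, frame rate and time constant below are illustrative assumptions, not values mandated by the text.

```python
import numpy as np

def combined_level_estimate(le1_db, le2_db, frame_rate_hz=100.0, tau_s=1.0):
    """Sketch of CONT for one frequency band: RLE = LE2 + LP(LE1 - LE2).

    le1_db: slow high resolution level estimate per time frame (dB).
    le2_db: fast low resolution level estimate per time frame (dB).
    The low-pass filter LP is modelled as a first-order smoother with
    time constant tau_s (an assumption, not mandated by the text).
    """
    alpha = 1.0 - np.exp(-1.0 / (frame_rate_hz * tau_s))
    delta = np.asarray(le1_db, dtype=float) - np.asarray(le2_db, dtype=float)
    state = 0.0
    filtered = np.empty_like(delta)
    for m in range(delta.size):          # frame-by-frame smoothing
        state += alpha * (delta[m] - state)
        filtered[m] = state
    return np.asarray(le2_db, dtype=float) + filtered
```

For constant inputs the result converges to LE1; a sudden jump in LE2 appears in RLE immediately, which is exactly the fast/slow behaviour the paragraph describes.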
An exemplary effect of the filtering unit LP, implemented as a frequency-dependent weighting factor W(f), is shown in fig. 3C. Fig. 3C shows three different examples of the weighting factor W(f), where WΔL = W × ΔL ("×" denoting multiplication). The first course (solid line) of W(f) implements an (ideal) low-pass filter function (a single step function), where W = 1 for f ≤ fth1 (first threshold frequency) and W = 0 for f ≥ fth1. The second course (dotted line) of W(f) implements a gradual transition of W from 1 to 0 for increasing frequency (e.g. W(fth1) = 0.5, and W = 0 for f ≥ fth2 (second threshold frequency)). The third course (dashed line) of W(f) implements a piecewise linear transition of W from 1 to 0 for increasing frequency (e.g. W = 1 for f ≤ fth1; W = 0 for f ≥ fth2; and W decreasing linearly from 1 to 0 for increasing frequency between fth1 and fth2). Other suitable courses of the W(f) function representing the effect of the filtering unit LP are also possible. In an embodiment, the first threshold frequency fth1 is substantially equal to the 3 dB cut-off frequency of the low-pass filter. In an embodiment, the first threshold frequency fth1 (and/or the second threshold frequency fth2) is less than or equal to 5 Hz, such as less than or equal to 1 Hz, such as less than or equal to 0.1 Hz, such as less than or equal to 0.01 Hz. In terms of time constants, the filter function of the LP unit preferably has a time constant in the range from 100 ms to 5 s, e.g. equal to 1 s (for a first order filter, the cut-off frequency fc and the time constant τ are related by τ = 1/(2·π·fc) = 1/ωc).
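As a concrete illustration, the third (piecewise linear) course of W(f) can be written as follows; the threshold frequencies are placed at the 0.1 Hz and 1 Hz values mentioned in the text purely as an example.

```python
def weight(f_hz, fth1=0.1, fth2=1.0):
    """Piecewise linear weighting W(f) (third course in fig. 3C):
    W = 1 for f <= fth1, W = 0 for f >= fth2, linear in between."""
    if f_hz <= fth1:
        return 1.0
    if f_hz >= fth2:
        return 0.0
    return 1.0 - (f_hz - fth1) / (fth2 - fth1)
```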
A simple min/max limit on the allowed variation of the second level estimate may be applied in order to control the minimum and maximum impact of the slow high resolution estimate. A final calibration stage may be added after the combination unit CU. A simple scalar scaling unit, such as a multiplication unit, may be inserted in front of the filtering unit LP to control the amount of slow high resolution level estimate used. A range of scaling values [0:1] limiting the size of the correction is proposed (e.g. defaulting to 1).
Fig. 4A shows a hearing device HD comprising a forward path from an input unit IU via a signal processor SPU to an output unit OU. The hearing device further comprises an adaptive level estimator ALD according to the first embodiment of the invention (as shown in fig. 3A). The hearing device HD comprises an input unit IU for providing an electrical input signal IN representing sound in a sub-band representation IN(k,m), where k is a sub-band index, k = 1, …, K, where K is the number of sub-bands and m is a time frame index. The input unit comprises an input transducer IT (or a plurality of input transducers) and a (corresponding number of) time-domain to time-frequency-domain conversion units t/f for converting the time-domain signal IN(n), where n is a time index, into the sub-band signals IN(k,m). The input unit IU may further comprise a beamformer for providing a spatially filtered signal. The hearing device comprises an adaptive level estimator ALD as shown and described in connection with fig. 3A. The first number K1 of frequency bands used by the first level estimator LD1 is smaller than or equal to the number K of sub-bands of the electrical input signal. To provide appropriate inputs to the first and second level estimators LD1, LD2 of the adaptive level estimator ALD, the hearing device comprises appropriate band summation units K->K1 and K->K2 (or K1->K2, if the output of the K->K1 band summation unit is used). Similarly, the hearing device comprises a band distribution unit K1->K for converting the combined level estimate RLE in K1 frequency bands into a combined level estimate RLE in K frequency bands, which is fed to the signal processor SPU of the forward path of the hearing device. The signal processor is configured to run one or more algorithms for processing the electrical input signal in the K frequency bands, for example to compensate for a hearing impairment of the user.
One such algorithm is a compression amplification algorithm that converts the combined level estimates RLE(k,m) in the K frequency bands into corresponding gains G(k,m) in the K frequency bands (see, e.g., fig. 1). In an embodiment, the compression amplification algorithm is configured to implement a compression strategy for a specific application (such as ear protection or noise suppression in noisy environments) or for a specific user, such as a hearing impaired user. The gain G(k,m) is preferably applied to the input signal IN(k,m) (possibly modified by other processing algorithms) to provide a processed signal OUT(k,m). The hearing device HD further comprises an output unit OU for providing stimuli perceivable by the user as sound, based on the processed output signal OUT. The output unit OU comprises a synthesis filter bank f/t for converting the sub-band signals OUT(k,m) into a single time domain signal OUT(n). The output unit OU further comprises an output transducer OT, for example comprising a loudspeaker for providing the stimuli as sound waves in air, or a vibrator for providing the stimuli as vibrations transmitted through the skull of the user. Alternatively or additionally, the output unit OU may comprise a multi-electrode array for providing the stimuli (or part of the stimuli) as electrical stimulation of the cochlear nerve of the user. The hearing device may for example comprise or implement a hearing aid, a headset, an earphone, an ear protection device or a combination thereof.
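The level-to-gain conversion of such a compression amplification algorithm can be sketched for one band as a simple static curve: unity gain below a kneepoint and a compressive slope above it. The kneepoint and compression ratio below are illustrative values, not parameters given in the patent.

```python
def compression_gain_db(level_db, kneepoint_db=50.0, ratio=2.0):
    """Static level-to-gain sketch (cf. the L2G mapping): below the
    kneepoint the gain is 0 dB; above it, each extra input dB yields
    only 1/ratio dB more output, i.e. gain = (knee - L) * (1 - 1/ratio).
    """
    if level_db <= kneepoint_db:
        return 0.0
    return (kneepoint_db - level_db) * (1.0 - 1.0 / ratio)

# A 70 dB input with knee 50 dB and ratio 2 gets -10 dB of gain, so the
# 20 dB of input above the knee becomes 10 dB of output above it.
```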
Fig. 4B shows a hearing device HD comprising an adaptive level estimator ALD according to a second embodiment of the invention. The hearing device embodiment of fig. 4B is similar to the embodiment of fig. 4A, but comprises a further embodiment of an adaptive level estimator ALD according to the present invention, i.e. a second embodiment (as shown in fig. 3B).
Fig. 5 shows a binaural hearing system comprising first and second hearing devices according to an embodiment of the invention. A hybrid high-resolution compression scheme may for example be used to implement "binaural compression" as shown in fig. 5, where level estimates (signals xLE in Kx frequency bands) from the respective first and second hearing devices are exchanged between the hearing devices via an interaural wireless link IA-WL. The first and second hearing devices HD1, HD2 are hearing devices according to the invention, for example as shown in figs. 4A, 4B. The interaural wireless link is implemented in the first and second hearing devices by means of respective antenna and transceiver circuits ANT, Rx/Tx. The exchanged level estimates may be the combined level estimates RLE in K1 frequency bands (before comparison with the estimates from the other hearing device), the first level estimates LE1 in K1 frequency bands, and/or the second level estimates in K1 or K2 frequency bands.
In an embodiment it is proposed to exchange only the second level estimates in K2 frequency bands, e.g. 4 frequency bands. This is very economical in terms of link bandwidth and power consumption.
In another embodiment it is proposed to exchange a first (slow, high resolution) level estimate LE1 in K1 frequency bands between the first and second hearing devices HD1, HD 2. This is illustrated in the hearing device shown in fig. 6.
The influence of the level estimate xLE received from the other device on the locally determined first level estimate LE1 may be adapted to the application in question, e.g. determined according to a predetermined criterion, such as the average, maximum or minimum of the two level estimates in each frequency band. In an embodiment, the criterion is determined adaptively, e.g. according to an estimate of the signal-to-noise ratio of the signals on which the level estimates are based.
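A per-band sketch of the predetermined combination criteria mentioned above (average, maximum or minimum of the local and received estimates); the function name and signature are illustrative assumptions.

```python
import numpy as np

def binaural_adjust(local_le_db, remote_le_db, criterion="max"):
    """Combine the locally determined level estimate with the estimate
    received from the other hearing device, band by band, using one of
    the predetermined criteria mentioned in the text."""
    local_le_db = np.asarray(local_le_db, dtype=float)
    remote_le_db = np.asarray(remote_le_db, dtype=float)
    if criterion == "avg":
        return (local_le_db + remote_le_db) / 2.0
    if criterion == "max":
        return np.maximum(local_le_db, remote_le_db)
    if criterion == "min":
        return np.minimum(local_le_db, remote_le_db)
    raise ValueError(f"unknown criterion: {criterion}")
```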
Fig. 6 shows a hearing device HD comprising an adaptive level estimator ALD according to a third embodiment of the invention. The hearing device HD is a hearing device as shown in fig. 4B, but is also adapted to exchange the first level estimate LE1 with another device, such as another hearing device of a binaural hearing system, see e.g. fig. 5. The interaural wireless link is implemented in the hearing device HD by means of suitable antenna and transceiver circuits ANT, Rx/Tx and is configured to enable (at least) exchange of a first level estimate LE1 in K1 frequency bands with another (e.g. hearing) device, see signal xLE and designation K1 on the double-arrow connection between the transceiver Rx/Tx and the BLX unit. The first (high resolution) level estimate LE1 in fig. 6 is adjusted by a (corresponding) first level estimate xLE received from another device in the binaural adjustment unit BLX to provide binaural level estimates xLE1 in the K1 frequency bands, which are fed to the comparison unit COMP (instead of the local first level estimate LE 1).
Fig. 7A shows an exemplary structure of a dynamic level estimator LDx (e.g. level estimator LD1 and/or LD2 in figs. 3A, 3B or figs. 4A, 4B) for use in an adaptive level estimator according to the present invention. The dynamic level estimator LDx is adapted to provide an estimate LEx of the level of its input signal INx (of magnitude |INx|). The attack and/or release time constants τatt, τrel may be configured according to the input signal INx (|INx|). The dynamic level estimator LDx comprises a relatively fast level estimator ALD connected to a pilot level estimator GLD, both receiving the input signal INx (|INx|) of the dynamic level estimator LDx. The pilot level estimator GLD is configured to provide the estimate LEx of the level of the input signal. The attack and/or release time constants τatt, τrel of the pilot level estimator are determined by a time constant controller TC-CNT based on the level estimates LEx, ALE of the pilot level estimator GLD and the relatively fast level estimator ALD. The time constant controller TC-CNT provides a control signal TCC for controlling or providing the time constants τatt, τrel of the pilot level estimator GLD. The control signal TCC (e.g. the time constants τatt, τrel) is optionally available for external use (as illustrated in fig. 10), see the dashed arrow labeled TCC as an optional output of LDx. A dynamic level estimator LDx as shown in fig. 7A is for example described in WO2003081947A1 (for one frequency band). In the embodiments of the adaptive level estimator shown in figs. 3A and 3B, the first and second level estimators LD1 and LD2 operate in K1 and K2 frequency bands, respectively (i.e. provide K1 and K2 level estimates, respectively). Likewise, the dynamic level estimator LDx may be configured to provide level estimates in a suitable number of frequency bands (e.g. K1 or K2 or any other suitable number), e.g. after appropriate adaptation.
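The core of such a level estimator is a first-order smoother whose coefficient switches between an attack and a release value. The sketch below is a common textbook form, not the exact structure described in WO2003081947A1; the sampling rate and time constants are illustrative.

```python
import math

def smooth_level(x_abs, fs_hz, tau_att_s, tau_rel_s):
    """First-order attack/release level smoother: the smoothing
    coefficient is the attack one while the input magnitude exceeds the
    current level (rising), and the release one otherwise (falling)."""
    a_att = 1.0 - math.exp(-1.0 / (fs_hz * tau_att_s))
    a_rel = 1.0 - math.exp(-1.0 / (fs_hz * tau_rel_s))
    level = 0.0
    out = []
    for v in x_abs:
        a = a_att if v > level else a_rel
        level += a * (v - level)
        out.append(level)
    return out
```

With tau_att_s much smaller than tau_rel_s, the estimate jumps up quickly at onsets and decays slowly afterwards; this is the behaviour a pilot level estimator modulates by changing the time constants on the fly.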
Fig. 7B schematically illustrates an exemplary scheme for determining the attack and release time constants τatt, τrel of the level estimator LDx of fig. 7A according to the input signal INx (|INx|). Fig. 7B is a graph illustrating an exemplary dependence of the attack and release time constants τatt, τrel (in units of e.g. ms) of the pilot level estimator GLD on the difference ΔL (in dB) between the level estimate ALE of the relatively fast level estimator ALD and the level estimate LEx of the pilot level estimator GLD, ΔL = ALE − LEx. Fig. 7B implements a strategy in which, for relatively small (positive or negative) level differences ΔL, relatively large attack and release time constants τslow are applied to the pilot level estimator GLD. For level differences larger than a threshold value ΔL+th1 (or smaller than ΔL−th1), the attack time (or release time) decreases with increasing (or decreasing) ΔL, until the threshold value ΔL+th2 (ΔL−th2) is reached. For level differences larger than ΔL+th2 (or smaller than ΔL−th2), the attack (or release) time constant is kept at a constant minimum value τfast. In the graph of fig. 7B, the course of the thick solid curve τ(ΔL) is symmetric around 0. However, this need not be so. Likewise, the thick solid curve τ(ΔL) shows attack and release times of equal magnitude for the same magnitude of level difference. Nor need this be so. In an embodiment, the release time is generally larger than the attack time, or at least, for large negative values of the level difference ΔL (ΔL < ΔL−th1), the release time constant may be larger than the attack time constant for the corresponding large positive values of the level difference ΔL (ΔL > ΔL+th1). This is indicated by the dashed line, which exhibits an alternative release time course τrel(ΔL) with a larger "fast release time" τrel,fast than that of the thick solid curve.
Also, for relatively small level differences (e.g. for ΔL−th1 ≤ ΔL ≤ ΔL+th1), the release time may generally be larger than the attack time. The curve takes the form of a trapezoid comprising a number of linear segments between inflection points. Other (e.g. curved) functional forms may also be implemented. The time constant versus level difference function may be the same for all frequency bands of a given dynamic level estimator. Alternatively, the function may differ for some or all of the frequency bands. The time constant versus level difference functions may also differ between the first and second level estimators LD1, LD2. In an embodiment, the time constants of the first level estimator LD1 are larger than, or at least larger than or equal to, the time constants of the second level estimator LD2.
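The trapezoidal time-constant curve of fig. 7B (in its symmetric variant) can be sketched as a function of the level difference ΔL; all numerical values (thresholds, τslow, τfast) are illustrative placeholders.

```python
def time_constant_s(delta_l_db, tau_slow=2.0, tau_fast=0.01,
                    th1_db=3.0, th2_db=10.0):
    """Symmetric trapezoidal tau(dL) of fig. 7B: tau_slow for
    |dL| <= th1, tau_fast for |dL| >= th2, linear in between."""
    x = abs(delta_l_db)
    if x <= th1_db:
        return tau_slow
    if x >= th2_db:
        return tau_fast
    frac = (x - th1_db) / (th2_db - th1_db)
    return tau_slow + frac * (tau_fast - tau_slow)
```

An asymmetric variant (e.g. the larger "fast release time" shown by the dashed curve) follows by using different parameter sets for positive and negative ΔL.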
Figs. 8A and 8B show an exemplary application of an embodiment of a hearing system according to the invention. Fig. 8A shows a user, a binaural hearing aid system and an auxiliary device. Fig. 8B shows the auxiliary device running an APP for controlling the binaural hearing system, in particular the level estimation. The APP is a non-transitory application comprising executable instructions configured to run on the auxiliary device and implement a user interface for the hearing device or hearing system. In the illustrated embodiment, the APP is configured to run on a smartphone or another portable device enabling communication with the hearing device or hearing system.
Fig. 8A shows an embodiment of a binaural hearing aid system comprising left (second) and right (first) hearing devices HD1, HD2 communicating with a portable (handheld) accessory device AD serving as a user interface UI of the binaural hearing aid system. In an embodiment, the binaural hearing aid system comprises the auxiliary device AD (and the user interface UI). In an embodiment, the accessory device AD comprising the user interface UI is adapted to be held in a hand of the user U.
In fig. 8A, wireless links denoted IA-WL (e.g. an inductive link between the left and right hearing devices) and WL-RF (e.g. RF links, such as Bluetooth, between the auxiliary device AD and the left hearing device HD1, and between the auxiliary device AD and the right hearing device HD2) are implemented in the devices HD1, HD2 by corresponding antenna and transceiver circuitry (denoted RF-IA-Rx/Tx-1 and RF-IA-Rx/Tx-2 in the left and right hearing devices, respectively, in fig. 8A). The wireless links are configured to enable the exchange of audio signals and/or information or control signals (see signals CNT1, CNT2) between the hearing devices HD1, HD2 and between the hearing devices HD1, HD2 and the auxiliary device AD.
In an embodiment, the accessory device AD is or comprises an audio gateway apparatus adapted to receive a plurality of audio signals (e.g. from an entertainment device such as a TV or music player, a telephone device such as a mobile phone, or a computer such as a PC, a wireless microphone, etc.) and to enable selection of an appropriate audio signal (or combination of signals) of the received audio signals for transmission to the hearing device. In an embodiment, the auxiliary device is or comprises a remote control for controlling the function and operation of the hearing device. In an embodiment, the functionality of the remote control is implemented in a smartphone, which may run an APP enabling the control of the functionality of the audio processing device via the smartphone (the hearing device comprises a suitable wireless interface to the smartphone, e.g. based on bluetooth or some other standardized or proprietary scheme).
An exemplary user interface UI of the auxiliary device AD is shown in fig. 8B. The user interface comprises a display (e.g. a touch sensitive display) showing a user of a hearing system comprising first and second hearing devices, e.g. hearing aids HD1, HD2, and a number of possible selections defining the configuration of the level estimation of the system. Via the display of the user interface (under the heading "level estimation", where the adaptive level estimator is configured), the user U is instructed to:
- press to select the effect on the Level Estimate (LE):
  - Fast LE in a few bands
  - Slow LE in many frequency bands
  - Hybrid LE
  - Monaural decision
  - Binaural decision
- press "activate" to start the selected configuration.
These instructions prompt the user to make two selections among the five possible contributors (in this example) to the level estimation: one defining the mode of level estimation, the other defining whether decisions are individual (monaural), based on local estimates only, or joint (binaural), where the level estimates are based on estimates from both hearing devices. Filled squares and bold type indicate that the user has selected the hybrid level estimation mode (Hybrid LE), as proposed in the present invention, and the binaural mode (Binaural decision), in which the level estimates are exchanged between the two hearing devices and used to limit the combined estimate of the local level estimator (also as in the present invention). When the level estimator has been configured, the selected combination may be started by pressing "activate".
Other possible operation modes of the level estimator may be selected, see "fast LE in few bands", "slow LE in many bands" and "monaural decision".
The user interface UI may be configured to make the hybrid level estimation and the binaural decision the default selections.
In an embodiment, the APP and system are configured such that other possible choices include "fast LE in many bands" and "slow LE in a few bands". In embodiments, "a few" means 4 or less. In embodiments, "many" means 16 or more. Different options may be tried in different acoustic situations.
In general, in a relatively stable (slowly varying) acoustic environment, slow estimation in many frequency bands may be appropriate. In general, in a relatively dynamic (rapidly changing) acoustic environment, fast estimation in a few frequency bands may be appropriate.
In an embodiment, the APP is configured to enable the user to set the number of frequency bands in which level estimation will be done in fast and slow LE mode.
Fig. 9 shows an adaptive level estimator ALD according to a fourth embodiment of the invention. The embodiment shown in fig. 9 comprises the same elements as the adaptive level estimator of the first embodiment shown in fig. 3A. The first and second level estimators LD1, LD2 are shown in more detail in the embodiment of fig. 9. Each of the first and second level estimators comprises an ABS unit for providing the magnitude of the input signal IN (in the corresponding number of frequency bands K1, K2). The absolute value of the input signal is optionally fed to a level estimator (fast LE) with small (attack and release) time constants (so that it essentially follows the course of (the magnitude of) the input signal IN). The output of the fast level estimator is fed to a level estimator LD, such as a dynamic level estimator, which provides a level estimate of the input signal (see, e.g., figs. 7A, 7B and the associated description). Each of the first and second level estimators LD1, LD2 comprises a calibration unit (CAL1 and CAL2, respectively) for calibrating the first and second level estimates for a specific type of sound signal (e.g. for a sound signal comprising speech, possibly for different types of sound signals). The calibrated first level estimate LE1 (in K1 frequency bands) is fed to the control unit CONT. The calibrated second level estimate LE2 (in K2 frequency bands) is fed to a band distribution unit K2->K1. In addition, the adaptive level estimator comprises a third calibration unit CAL3 for calibrating the second level estimate LE2 after the band distribution unit K2->K1. The third calibration unit CAL3 is configured to calibrate the level estimate for a specific type of sound signal. The calibrated second level estimate LE2 (in K1 frequency bands) is fed to the control unit CONT, where it is compared with the calibrated first level estimate LE1 (in K1 frequency bands) and further processed to provide a resulting level estimate RLE (in K1 frequency bands).
The resulting level estimate RLE may for example be used in a compression amplification algorithm (see, e.g., the L2G unit in fig. 10) or in a maximum power output algorithm.
Fig. 10 shows an adaptive level estimator according to a fifth embodiment of the invention, providing a resulting level estimate RLE (in K1 frequency bands) of an input signal IN. The embodiment of fig. 10 implements hybrid high resolution level estimation (e.g. for use in compression, see the L2G unit in fig. 10) by controlling the time constants τ1 (e.g. the attack and release time constants (τatt,1, τrel,1)) of the high resolution level estimator LD1 from the lower resolution level estimator LD2. The time constants τ2 of the K2 (e.g. 4) level estimates of the low resolution level estimator LD2 are distributed (see the distribution unit K2->K1 for the time constants τ2 in fig. 10) to the K1 (e.g. 24) level estimates of the high resolution level estimator LD1, which on this basis provides the combined level estimate RLE in K1 frequency bands. Any numbers K2 and K1 may be used, K2 being e.g. 1 to 8, and K1 (> K2) being e.g. K2+1 to 64.
The upper branch represents the low resolution adaptive level estimator LD2, e.g. based on a dynamic level estimator LDx (LDx2) as described in connection with figs. 7A, 7B, having relatively few channels (frequency bands). In this case, a realistic number K2 may be any number between 1 and 8, such as 4.
The lower branch shows the high resolution level estimator LD1, which comprises a dynamic level estimator LDx (LDx1) with a relatively large number of channels (K1 > K2). In this case, a realistically feasible number may be any number between 2 and 64, such as 24.
The idea is that the upper branch (the low resolution level estimator LD2) decides whether the time constants are to be small (fast) or large (slow), based on the dynamic level estimator LDx2. These time constants τ2 are then distributed to the multi-channel branch below (the high resolution level estimator LD1). This arrangement allows the attack and release time constants of the high resolution level estimator LD1 to be determined in "blocks" defined by the channels of the upper branch (see, e.g., fig. 2). This means that if one of the K2 channels in the low resolution level estimator LD2 (e.g. the second, see FB24 in fig. 2) at a given time has a small time constant τ2(FB24), i.e. fast attack/release time constants (τatt,2(FB24), τrel,2(FB24)), then the channels of the high resolution estimator LD1 associated with that channel (e.g. channels FB516, FB616, FB716, FB816 in fig. 2) will also get a small time constant (fast reaction), e.g. equal to τ2(FB24), or in a predetermined relationship to τ2(FB24).
The result is to provide an adaptive level estimator ALD that operates in many frequency channels (high resolution), but with (attack and release) time constants determined and updated in a few frequency channels (low resolution).
In another embodiment, a level control unit receiving the first and second level estimates is configured to provide a combined level estimate based on the first and second level estimates and a signal-to-noise ratio of the electrical input signal.
In the above description, the level estimation concept has been exemplified by use in compression applications. However, the same concept can be applied to other functions, such as Maximum Power Output (MPO). The MPO will likely have settings different from compression (with respect to time constant and/or number of frequency bands, and/or a tapering scheme between low and high resolution level detection).
The structural features of the device described above, detailed in the "detailed description of the embodiments" and defined in the claims, can be combined with the steps of the method of the invention when appropriately substituted by corresponding procedures.
As used herein, the singular forms "a", "an" and "the" include plural forms (i.e., having the meaning "at least one"), unless the context clearly dictates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present, unless expressly stated otherwise. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
It should be appreciated that reference throughout this specification to "one embodiment" or "an aspect" or "may" include features means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the invention. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects.
The claims are not to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more". The terms "a", "an", and "the" mean "one or more", unless expressly specified otherwise.
Accordingly, the scope of the invention should be determined from the following claims.
References

·WO2003081947A1 (OTICON) 02.10.2003

Claims (15)

1. An adaptive level estimator for providing a level estimate of an electrical input signal representing sound, wherein
The adaptive level estimator comprises:
-a first level estimator configured to provide a first level estimate of the electrical input signal in a first number K1 of frequency bands;
-a second level estimator configured to provide a second level estimate of the electrical input signal, or an attack/release time constant associated with said second level estimate, in a second number K2 of frequency bands, wherein K2 is smaller than K1; and
-a level control unit configured to a) provide a composite level estimate based on the first level estimate and the second level estimate, or b) provide a composite level estimate based on the first level estimate and a attack/release time constant associated with the second level estimate;
wherein the level control unit comprises a comparison unit for comparing the first and second level estimators and providing a comparison signal indicative of said comparison;
wherein the level control unit comprises a filtering unit for down-sampling or low-pass filtering the comparison signal and providing a filtered comparison signal.
2. The adaptive level estimator of claim 1, wherein the first level estimator is configured to provide a first level estimate having a first time constant, and the second level estimator is configured to provide a second level estimate having a second time constant, wherein the first time constant is greater than or equal to the second time constant.
3. The adaptive level estimator of claim 1, wherein the level control unit comprises a combining unit for combining the filtered comparison signal, or a signal derived therefrom, with the second level estimate and providing a combined signal.
4. The adaptive level estimator of claim 1, wherein the level control unit comprises a limiter configured to limit an effect of the comparison signal on the composite level estimate.
5. The adaptive level estimator of claim 1, wherein the first and/or second level estimator comprises a dynamic level estimator providing an estimate of the level of its input signal, wherein the attack and/or release time constants are configurable based on the input signal to the dynamic level estimator.
6. The adaptive level estimator of claim 1, comprising at least one calibrator for calibrating the level estimate for a particular type of sound signal.
7. A hearing device comprising the adaptive level estimator of claim 1.
8. The hearing device of claim 7, comprising an input unit for providing an electrical input signal representing sound in a sub-band representation IN(k, m), where k is a sub-band index, k = 1, …, K, K being the number of sub-bands, and m is a time frame index.
9. The hearing device of claim 7, comprising an output unit for providing a stimulus originating from the electrical input signal that is perceivable as sound by a user.
10. The hearing device of claim 7, comprising a level-to-gain conversion unit for converting the composite level estimate to a composite gain.
11. The hearing device of claim 7, comprising or consisting of a hearing aid, a headset, an ear microphone, an ear protection device, or a combination thereof.
12. A binaural hearing system comprising a first and a second hearing device according to claim 7, the binaural hearing system being adapted to establish a communication link between the first and the second hearing device.
13. The binaural hearing system according to claim 12, comprising an auxiliary device, the binaural hearing system being adapted such that information can be exchanged between or forwarded from at least one of the first and second hearing devices and the auxiliary device.
14. A method of dynamically estimating the level of an input signal representing sound, comprising:
-providing a first level estimate of the electrical input signal in a first number K1 of frequency bands;
- providing a second level estimate of the electrical input signal, or an attack/release time constant associated with said second level estimate, in a second number K2 of frequency bands, wherein K2 is smaller than K1;
- providing a composite level estimate based on the first and second level estimates, or based on the first level estimate and an attack/release time constant associated with the second level estimate;
-comparing the first and second level estimates and providing a comparison signal indicative of said comparison; and
-down-sampling or low-pass filtering the comparison signal and providing a filtered comparison signal.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a computer, carries out the steps of the method as claimed in claim 14.
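To give a concrete picture of the signal flow described in claims 1, 2, 10 and 14, the Python sketch below implements a two-resolution level estimator in the spirit of the claims: a fine estimator in K1 bands with slow time constants, a coarse estimator in K2 < K1 broad bands with fast attack/release, a comparison unit, a low-pass filtering unit, a limiter, and a simple level-to-gain conversion. It is an illustrative approximation only; the band counts, all time constants, the limiter range, the dB-domain combination rule, and the compression curve are assumptions made for the example, not values taken from the patent.

```python
import numpy as np

EPS = 1e-12  # guard against log of zero


def smooth_level(x_pow, tau_att, tau_rel, fs):
    """One-pole smoother of per-band power with separate attack/release
    time constants (seconds). x_pow has shape (frames, bands); fs is the
    frame rate in Hz. Returns the smoothed power, same shape."""
    a_att = np.exp(-1.0 / (tau_att * fs))
    a_rel = np.exp(-1.0 / (tau_rel * fs))
    out = np.empty_like(x_pow, dtype=float)
    state = x_pow[0].astype(float)
    for m, frame in enumerate(x_pow):
        a = np.where(frame > state, a_att, a_rel)  # rising level -> attack
        state = a * state + (1.0 - a) * frame
        out[m] = state
    return out


def composite_level_estimate(x_pow, fs, k2=4, lp_tau=0.5, limit_db=6.0):
    """Composite level estimate in dB from a fine and a coarse estimator.
    x_pow: (frames, k1) per-band power; k2 < k1 broad bands, each averaging
    k1/k2 adjacent narrow bands. All constants are example values."""
    n_frames, k1 = x_pow.shape
    # First level estimate: k1 bands, slower time constants
    L1 = smooth_level(x_pow, tau_att=0.05, tau_rel=0.5, fs=fs)
    # Second level estimate: k2 broad bands, faster time constants,
    # mapped back onto the k1 narrow bands
    broad = x_pow.reshape(n_frames, k2, k1 // k2).mean(axis=2)
    L2 = np.repeat(smooth_level(broad, tau_att=0.005, tau_rel=0.05, fs=fs),
                   k1 // k2, axis=1)
    # Comparison unit: difference of the two estimates in dB
    cmp_db = 10 * np.log10(L2 + EPS) - 10 * np.log10(L1 + EPS)
    # Filtering unit: low-pass filter the comparison signal
    a_lp = np.exp(-1.0 / (lp_tau * fs))
    cmp_f = np.empty_like(cmp_db)
    s = cmp_db[0]
    for m, c in enumerate(cmp_db):
        s = a_lp * s + (1.0 - a_lp) * c
        cmp_f[m] = s
    # Limiter: bound the comparison's effect on the composite estimate
    cmp_lim = np.clip(cmp_f, -limit_db, limit_db)
    return 10 * np.log10(L1 + EPS) + cmp_lim


def level_to_gain_db(level_db, threshold_db=-40.0, ratio=2.0):
    """Simple broken-stick compressor: unity gain below threshold,
    compression ratio `ratio` above it (cf. the level-to-gain conversion
    unit of claim 10; the curve itself is an assumption)."""
    excess = np.maximum(level_db - threshold_db, 0.0)
    return -excess * (1.0 - 1.0 / ratio)
```

With a steady input the two estimates agree, the comparison signal is zero, and the composite estimate equals the fine estimate; for transients the fast coarse estimator pulls the composite toward the new level, with the limiter bounding its influence.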
CN201810108458.7A 2017-02-02 2018-02-02 Adaptive level estimator, hearing device, method and binaural hearing system Active CN108430002B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP17154383 2017-02-02
EP17154383.8 2017-02-02

Publications (2)

Publication Number Publication Date
CN108430002A (en) 2018-08-21
CN108430002B (en) 2021-12-28

Family

ID=57960338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810108458.7A Active CN108430002B (en) 2017-02-02 2018-02-02 Adaptive level estimator, hearing device, method and binaural hearing system

Country Status (4)

Country Link
US (2) US10277990B2 (en)
EP (2) EP3358745B2 (en)
CN (1) CN108430002B (en)
DK (1) DK3358745T4 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
HK1250122A2 (en) * 2018-07-24 2018-11-23 Hearsafe Ltd Hearing protection device and method
GB2588191B (en) 2019-10-14 2021-12-08 Digico Uk Ltd A method of generating a control signal for use in a signal dynamics processor
CN112019278B (en) * 2020-08-18 2022-05-27 南京信息工程大学 Three-dimensional MAMSK-CAP photon access method

Citations (2)

Publication number Priority date Publication date Assignee Title
WO2003081947A1 (en) * 2002-03-26 2003-10-02 Oticon A/S Method for dynamic determination of time constants, method for level detection, method for compressing an electric audio signal and hearing aid, wherein the method for compression is used
CN103222283A (en) * 2010-11-19 2013-07-24 Jacoti Ltd. Personal communication device with hearing support and method for providing the same

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
EP1869948B1 (en) 2005-03-29 2016-02-17 GN Resound A/S Hearing aid with adaptive compressor time constants
DK2445231T3 (en) 2007-04-11 2013-09-16 Oticon As Hearing aid with binaural communication connection
DK2335427T3 (en) * 2008-09-10 2012-06-18 Widex As Method of sound processing in a hearing aid and a hearing aid
US8320858B2 (en) * 2010-11-22 2012-11-27 Motorola Solutions, Inc. Apparatus for receiving multiple independent RF signals simultaneously and method thereof
EP2928210A1 (en) 2014-04-03 2015-10-07 Oticon A/s A binaural hearing assistance system comprising binaural noise reduction
EP2941020B1 (en) 2014-05-01 2017-06-28 GN Resound A/S A multi-band signal processor for digital audio signals


Also Published As

Publication number Publication date
US10277990B2 (en) 2019-04-30
US20190200141A1 (en) 2019-06-27
EP3358745B1 (en) 2020-03-11
US20180220242A1 (en) 2018-08-02
EP3358745B2 (en) 2023-01-04
DK3358745T4 (en) 2023-01-09
EP3657673A1 (en) 2020-05-27
DK3358745T3 (en) 2020-04-27
US10511917B2 (en) 2019-12-17
EP3358745A1 (en) 2018-08-08
CN108430002A (en) 2018-08-21

Similar Documents

Publication Publication Date Title
US11245993B2 (en) Hearing device comprising a noise reduction system
CN108200523B (en) Hearing device comprising a self-voice detector
CN105848078B (en) Binaural hearing system
CN106231520B (en) Peer-to-peer networked hearing system
CN105872923B (en) Hearing system comprising a binaural speech intelligibility predictor
US11140494B2 (en) Hearing device or system for evaluating and selecting an external audio source
CN106507258B (en) Hearing device and operation method thereof
US11330375B2 (en) Method of adaptive mixing of uncorrelated or correlated noisy signals, and a hearing device
US10951995B2 (en) Binaural level and/or gain estimator and a hearing system comprising a binaural level and/or gain estimator
US20220124444A1 (en) Hearing device comprising a noise reduction system
CN107454537B (en) Hearing device comprising a filter bank and an onset detector
CN108430002B (en) Adaptive level estimator, hearing device, method and binaural hearing system
US20220256296A1 (en) Binaural hearing system comprising frequency transition
CN107426663B (en) Configurable hearing aid comprising a beamformer filtering unit and a gain unit

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant