EP3886463A1 - Method at a hearing device (Verfahren an einem Hörgerät) - Google Patents


Info

Publication number
EP3886463A1
Authority
EP
European Patent Office
Prior art keywords
signal
gain value
directional
value
input signal
Prior art date
Legal status: Pending
Application number
EP21162221.2A
Other languages
English (en)
French (fr)
Inventor
Changxue Ma
Current Assignee
GN Hearing AS
Original Assignee
GN Hearing AS
Priority date
Filing date
Publication date
Priority claimed from US 16/827,694 (US11153695B2)
Application filed by GN Hearing AS filed Critical GN Hearing AS
Publication of EP3886463A1

Classifications

    • H ELECTRICITY › H04 ELECTRIC COMMUNICATION TECHNIQUE › H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R25/407 Circuits for combining signals of a plurality of transducers
    • H04R25/43 Electronic input selection or mixing based on input signal analysis, e.g. mixing or selection between microphone and telecoil or between microphones with different directivity characteristics
    • H04R25/55 Deaf-aid sets using an external connection, either wireless or wired
    • H04R25/552 Binaural
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/20 Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic

Definitions

  • This disclosure relates to hearing devices and related methods.
  • Directional sound capture (spatial filtering) in listening devices, e.g. devices with compensation for a hearing loss, is presently the best way to improve intelligibility of speech in noisy environments, because the signal-to-noise ratio is improved.
  • Use of directional microphones e.g. including beamforming methods involving multiple microphones e.g. arrays of multiple microphones on both sides of a user in an ipsilateral device and in a contralateral device, respectively, is a way to obtain directional sound capture.
  • Beamforming microphone arrays in listening devices can improve the signal-to-noise ratio (SNR) and thus also speech intelligibility.
  • Unilateral beamformer arrays, also known as directional microphones, accomplish this improvement using two microphones in one listening device.
  • Bilateral beamformer arrays, which combine information across four microphones in a bilateral fitting, further improve the SNR.
  • Early bilateral beamformers were static with fixed attenuation patterns. More recently, adaptive bilateral beamformers have been introduced in commercial hearing aids.
  • Conventionally, either directional sensitivity is engaged, which gives useful advantages such as spatial noise reduction, or omnidirectional sensitivity is engaged to enable hearing from multiple directions.
  • However, omnidirectional sensitivity usually comes at the cost of an increased noise level.
  • users have experienced a so-called 'tunnel effect'. That is, sounds from an on-axis target sound source are favoured in reproduction to the user at the cost of discriminating off-axis target sound sources.
  • On-axis sounds appear to come from a tunnel, while sounds from all other directions are dampened or completely excluded. This leads to decreased spatial awareness for the user and may, among other disadvantages, introduce listening fatigue and a reduced attention span.
  • noise reduction obtained by conventional beamforming or directional microphones is not as good as desired.
  • 'on-axis' refers to a direction, or 'cone' of directions, relative to one or both of the hearing devices at which directions the directional signals are predominantly captured from. That is, 'on-axis' refers to the focus area of one or more beamformer(s) or directional microphone(s). This focus area is usually, but not always, in front of the user's face, i.e. the 'look direction' of the user. In some aspects, one or both of the hearing devices capture the respective directional signals from a direction in front, on-axis, of the user.
  • the term 'off-axis' refers to all other directions than the 'on-axis' directions relative to one or both of the hearing devices.
  • 'target sound source' or 'target source' refers to any sound signal source which produces an acoustic signal of interest e.g. from a human speaker.
  • a 'noise source' refers to any undesired sound source which is not a 'target source'.
  • a noise source may be the combined acoustic signal from many people talking at the same time, machine sounds, vehicle traffic sounds etc.
  • the term 'reproduced signal' refers to a signal which is presented to the user of the hearing device e.g. via a small loudspeaker, denoted a 'receiver' in the field of hearing devices.
  • the 'reproduced signal' may include a compensation for a hearing loss or the 'reproduced signal' may be a signal with or without compensation for a hearing loss.
  • the wording 'strength' of a signal refers to a non-instantaneous level of the signal e.g. proportional to a one-norm (1-norm) or a two-norm (2-norm) or a power (e.g. power of two) of the signal.
  • the term 'ipsilateral hearing device' or 'ipsilateral device' refers to one device, worn at one side of a user's head e.g. on a left side, whereas a 'contralateral hearing device' or 'contralateral device' refers to another device, worn at the other side of a user's head e.g. on the right side.
  • the 'ipsilateral hearing device' or 'ipsilateral device' may be operated together with a contralateral device, which is configured in the same way as the ipsilateral device or in another way.
  • the 'ipsilateral hearing device' or 'ipsilateral device' is an electronic listening device configured to compensate for a hearing loss.
  • the electronic listening device is configured without compensation for a hearing loss.
  • a hearing device may be configured to one or more of: protect against loud sound levels in the surroundings, playback of audio, communicate as a headset for telecommunication, and to compensate for a hearing loss.
  • the term 'processor' may include a combination of one or more hardware elements.
  • a processor may be configured to run a software program or software components thereof.
  • One or more of the hardware elements may be programmable or non-programmable.
  • a method of processing an audio signal comprising: at an ipsilateral hearing device (100) with: a first input unit (110) including one or more microphones (112,113) and configured to generate a first directional input signal (F L ); a communications unit (120) configured to receive a second directional input signal (F R ) from a contralateral hearing device; an output unit (140); and a processor (130) coupled to: the first input unit (110), the communications unit (120) and the output unit (140):
  • a significant improvement in acoustic fidelity is enabled at least when compared to methods involving selection between directionally focussed sensitivity and omnidirectional sensitivity.
  • improvements are achieved in social settings, where a user may want to listen to - or be able to listen to - more than one person in the vicinity, and at the same time enjoy reduction of noise from the surroundings.
  • the claimed method achieves a desired trade-off which enables a directional sensitivity, e.g. focussed at an on-axis target signal source, while at the same time enabling an off-axis signal source to be heard, at least with better intelligibility. Listening tests have revealed that users experience less of a 'tunnel effect' when provided with a system employing the claimed method.
  • off-axis noise suppression is improved, as evidenced by an improved directionality index. This is also true, in situations where an off-axis target signal source is present.
  • measurements show that a directivity index is improved over a range of frequencies, at least in the frequency range above 500 Hz and, in particular, in the frequency range above 1000 Hz.
  • the method enables that directionality of the hearing device can be maintained, despite the presence of an off-axis target sound source.
  • a signal from an off-axis sound source is reproduced at the acceptable cost that the signals from an on-axis sound source are slightly suppressed, but only proportionally to the strength of the signal from the off-axis sound source. Because the on-axis suppression is proportional to the off-axis signal strength, the signals from the off-axis sound source can be perceived.
  • the method comprises forgoing automatically entering an omnidirectional mode.
  • it is thereby avoided that the user is exposed to a reproduced signal in which the noise level increases when entering the omnidirectional mode.
  • the method is aimed at utilizing the head shadow effect on beamforming algorithms by scaling the first directional signal and the second directional signal.
  • the scaling - or equalization of the first directional signal relative to the second directional signal or vice versa - is estimated from the first directional signal and the second directional signal.
  • the method can be implemented in different ways.
  • the first gain value and the second gain value are not frequency band limited i.e. the method is performed at one frequency band, which is not explicitly band limited.
  • the first gain value and the second gain value are associated with a band limited portion of the first directional signal and the second directional signal.
  • multiple first gain values and respective multiple second gain values are associated with respective band limited portions of the first directional signal and the second directional signal.
  • the first gain value and the second gain value are comprised by respective arrays of multiple gain values at respective multiple frequency bands or frequency indexes, sometimes denoted frequency bins.
  • the first gain value scales the amplitude of the first directional signal to provide a scaled first directional signal and the second gain value scales the amplitude of the second directional signal to provide a scaled second directional signal. Then the scaled first directional signal and the scaled second directional signal are combined by addition.
  • the first gain value scales the amplitude of the first directional signal to provide a scaled first directional signal, which is combined, by addition, with the second directional signal to provide a combined signal. Then, the combined signal is scaled by the second gain value.
  • the method may include forgoing scaling by the second gain value.
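The gain-scaled combination described in the bullets above can be sketched in a few lines. A minimal Python illustration follows; the function name and the choice of a unit gain sum are assumptions for illustration, not the patent's normative implementation:

```python
import numpy as np

def combine(f_l: np.ndarray, f_r: np.ndarray, alpha: float) -> np.ndarray:
    """Scale each directional signal by its gain value, then add.

    The first gain value (alpha) scales the ipsilateral directional
    signal and the second gain value (1 - alpha) scales the
    contralateral one, so the two gains sum to a fixed value of 1.
    """
    return alpha * f_l + (1.0 - alpha) * f_r
```

The same function applies unchanged whether `f_l` and `f_r` are time-domain sample blocks or per-bin frequency-domain frames.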
  • the intermediate signal is a single-channel signal or monaural signal.
  • The single-channel signal may be a discrete time-domain signal or a discrete frequency-domain signal.
  • the combination of the first directional input signal and the second directional input signal is a linear combination.
  • the ipsilateral hearing device and the contralateral hearing device are in mutual communication, e.g. wireless communication, such that each of the ipsilateral hearing device and the contralateral hearing device are able to process the first directional input signal and the second directional input signal, wherein one of the directional signals is received from the other device.
  • the signals may be streamed bi-directionally, such that the ipsilateral device receives the second directional signal from the contralateral device and such that the ipsilateral device transmits the first directional signal to the contralateral device.
  • the transmitting and receiving may be in accordance with a power saving protocol.
  • the method is performed concurrently at the ipsilateral hearing device and at the contralateral hearing device.
  • the respective output units at the respective devices present the output signals to the user as monaural signals.
  • the monaural signals contain no deliberately introduced time delays to add spatial cues.
  • the output signal is communicated to the output unit of the ipsilateral hearing device.
  • each of the ipsilateral hearing device and the contralateral hearing device comprises one or more respective directional microphones or one or more respective omnidirectional microphones including beamforming processors to generate the directional signals.
  • each of the first directional signal and the second directional signal is associated with a fixed directionality relative to the user wearing the hearing devices.
  • an on-axis direction may refer to a direction right in front of the user, whereas an off-axis direction may refer to any other direction e.g. to the left side or to the right side.
  • a user may select a fixed directionality, e.g. at a user interface of an auxiliary electronic device in communication with one or more of the hearing devices.
  • directionality may be automatically selected e.g. based on focussing on a strongest signal.
  • the method includes combining the first directional signal and the second directional signal from monaural, fixed beamformer outputs of the ipsilateral device and the contralateral device, respectively, to further enhance the target talker.
  • the method may be implemented in hardware or a combination of hardware and software.
  • the method may include one or both of time-domain processing and frequency-domain processing.
  • the method encompasses embodiments using iterative estimation of the first gain value and/or the second gain value, and embodiments using deterministic computation of the first gain value and/or the second gain value.
  • the method is a method of processing an audio signal.
  • the method comprises: recurrently determining one or both of: the first gain value (α; H(k)) and the second gain value (1-α; 1-H(k)) based on a non-instantaneous level of the first directional input signal (F L ) and a non-instantaneous level of the second directional input signal (F R ).
  • An advantage thereof is that less distortion and fewer audible modulation artefacts are introduced when recurrently determining one or both of the first gain value (α) and the second gain value (1-α).
  • the non-instantaneous level of the first directional input signal and the non-instantaneous level of the second directional input signal may be obtained by computing, respectively, a first time average over an estimate of the power of the first directional input signal and a second time average over an estimate of the power of the second directional input signal.
  • the first time average may be a moving average.
  • the non-instantaneous level of the first directional input signal and the non-instantaneous level of the second directional input signal may be proportional to: a one-norm (1-norm) or a two-norm (2-norm) or a power (e.g. power of two) of the respective signals.
  • the non-instantaneous level of the first directional input signal and the non-instantaneous level of the second directional input signal may be obtained by a recursive smoothing procedure.
  • the recursive smoothing procedure may operate at the full bandwidth of the signal or at each of multiple frequency bins. For instance, in a frequency domain implementation, the recursive smoothing procedure may smooth at each bin across short time Fourier transformation frames e.g. by a weighted sum of a value in a current frame and a value in a frame carrying an accumulated average.
  • the non-instantaneous level of the first directional input signal and the non-instantaneous level of the second directional input signal may be obtained by a time-domain filter, e.g. an IIR filter.
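As one concrete realisation of such a recursive smoothing procedure, a first-order smoother over per-bin STFT powers could look as follows; the function name and the smoothing constant `beta` are illustrative assumptions:

```python
import numpy as np

def smoothed_power(frames: np.ndarray, beta: float = 0.9) -> np.ndarray:
    """Non-instantaneous per-bin level by first-order recursive smoothing.

    frames: complex STFT frames, shape (n_frames, n_bins).  The estimate
    for each bin is a weighted sum of the accumulated average and the
    power of the current frame, smoothed across frames.
    """
    level = np.abs(frames[0]) ** 2
    for frame in frames[1:]:
        level = beta * level + (1.0 - beta) * np.abs(frame) ** 2
    return level
```

With `beta` close to 1 the estimate reacts slowly, suppressing frame-to-frame modulation of the gain values; the same recursion applied to time-domain samples is simply a one-pole IIR filter.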
  • the method comprises:
  • the method can perform at least the generation of an intermediate signal, determination of the first gain value and the second gain value, and the generation of an output signal in the frequency domain. This enables a more efficient implementation, especially in connection with performing compensation for a hearing loss.
  • the Short-time Fourier transform (STFT) is a Fourier-related transform used to determine the sinusoidal frequency and phase content of local sections of a signal as it changes over time.
  • the procedure for computing STFTs is to divide a longer time signal into shorter segments of equal length and then compute the Fourier transform separately on each shorter segment. This reveals the Fourier spectrum on each shorter segment, denoted a frame. Each frame comprises one or more values in a number of so-called frequency bins.
  • a sequence of a time domain signal which is transformed into the frequency domain by short-time Fourier transformation is denoted an analysis window.
  • the time-domain signal generated by short-time inverse Fourier transformation is denoted a synthesis window.
  • the steps of transforming e.g. including the generation of the intermediate signal, as set out above, may be performed at a first recurring basis.
  • the first recurring basis may relate to a sampling rate and a length of the analysis window, in number of samples.
  • the analysis window(s) is/are selected with a predefined overlap (in terms of samples or a relative duration) with respect to a previous analysis window.
  • the overlap may be e.g. 50% of the length of the analysis window.
  • the overlap of the synthesis window may be 50% of the length of the synthesis window.
  • the analysis window and the synthesis window may have the same lengths.
  • values of the synthesis window may be added to the values of previous synthesis window.
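The analysis/synthesis flow with 50%-overlapping windows, as set out above, can be sketched as below. The periodic Hann window and frame length are illustrative assumptions chosen so that the overlapped windows sum to one:

```python
import numpy as np

def analyse(x: np.ndarray, n: int = 8) -> np.ndarray:
    """Split x into 50%-overlapping analysis windows and FFT each frame."""
    hop = n // 2
    win = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(n) / n)  # periodic Hann
    return np.array([np.fft.rfft(win * x[i:i + n])
                     for i in range(0, len(x) - n + 1, hop)])

def synthesise(frames: np.ndarray, n: int = 8) -> np.ndarray:
    """Inverse-FFT each frame and overlap-add the synthesis windows."""
    hop = n // 2
    out = np.zeros(hop * (len(frames) - 1) + n)
    for k, frame in enumerate(frames):
        out[k * hop:k * hop + n] += np.fft.irfft(frame, n)
    return out
```

Because successive periodic Hann windows at 50% overlap sum to one, overlap-adding the synthesis windows reconstructs the interior of the signal exactly; any per-bin gains would be applied to the frames between the two steps.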
  • the first gain value and the second gain value are scalar values determined by an iterative method.
  • the first gain value (α; H(k)) and the second gain value (1-α; 1-H(k)) are recurrently determined, subject to the constraint that the first gain value (α; H(k)) and the second gain value (1-α; 1-H(k)) sum to a predefined time-invariant value.
  • This constraint is useful to enable that the strength of a target signal in front, on-axis, is scaled proportionally to the strength of an off-axis signal. This is expedient to avoid disturbing an on-axis signal, which may be essential for the user to understand what a person in front, on-axis, is saying while ambient sounds change.
  • This constraint is also useful for a combination of the first directional signal and the second directional signal, wherein both of the first directional signal and the second directional signal are scaled in accordance with the first gain value and the second gain value, respectively, before the signals are combined into a single-channel signal. Also, this constraint is useful for an implementation of the method wherein the first gain value and the second gain value are implemented as respective gain units, without at least deliberate frequency band limitations. In some embodiments, the first gain value (α) and the second gain value (1-α) are applied by respective gain stages without emphasis of a particular frequency range, i.e. without applying frequency dependent filtering.
  • the first gain value (α) and the second gain value (1-α) are determined in accordance with an objective of: obtaining a substantially equal strength of the first directional input signal and the second directional input signal in the intermediate signal (Fo), subject to the constraint that the first gain value (α) and the second gain value (1-α) sum to a predefined time-invariant value.
  • the first gain value (α) and the second gain value (1-α) are determined in accordance with an objective of: making a proportion of the first directional input signal (F L ) and a proportion of the second directional signal (F R ) at least substantially equal when combined, by the linear combination, subject to the constraint that the first gain value (α) and the second gain value (1-α) sum to a predefined time-invariant value.
  • a sum of the first gain value (α) and the second gain value (1-α) is constrained to add up to a fixed constant value, which remains constant at least over a period of time when recurrent control of the gain values takes place.
  • the first gain value (α; H(k)) and the second gain value (1-α; 1-H(k)) are determined further in accordance with minimizing an auto-correlation or cross power spectrum of the intermediate signal (V).
  • the method is beneficial in terms of improved noise reduction in addition to the improved spatial noise reduction.
  • a noise signal source emitting a signal, even a strong signal, which correlates only poorly between the first input signal and the second input signal is suppressed.
  • one or both of the first gain value (α; H(k)) and the second gain value (1-α; 1-H(k)) are recurrently estimated in accordance with adaptively seeking to minimize a first cost function C(α, β), wherein the cost function includes the mean value of the sum of: the first gain value (α; H(k)) multiplied by a numeric value representation of the first directional signal (F L ), and the second gain value (1-α; 1-H(k)) multiplied by a numeric value representation of the second directional signal (F R ).
  • the signal strength of an on-axis target signal source scales proportionally to the signal strength of an off-axis target signal source and thus that the off-axis target signal source does not drown out the on-axis signal source. Also, it is ensured that the on-axis target signal is maintained at even proportions at both ears of a user in case a pair of hearing devices are worn simultaneously by the user.
  • the step of adaptively seeking to minimize a first cost function may be implemented using a Least-Means-Square algorithm or another gradient descent algorithm known in the art.
  • the numeric value representation may also be designated an absolute value representation or an unsigned value representation.
  • the mean value may be a one-norm or a two-norm or a power (e.g. a power of two).
  • the mean value may be a Root-Mean-Square, rms, value.
  • the step of adaptively seeking to minimize a cost function may be performed on a recurrent basis, e.g. denoted a second recurrent basis.
  • the second recurrent basis may be different from the first recurrent basis.
  • the second recurrent basis may be more frequent than the first recurrent basis.
  • the constraint that the first gain value (α; H(k)) and the second gain value (1-α; 1-H(k)) sum to a predefined time-invariant value is included in the first cost function.
  • the cost function may be determined and minimized in accordance with the method of Lagrange multipliers, which is a strategy for finding the local maxima and minima of a cost function subject to equality constraints, wherein the equality constraints include the constraint that the first gain value (α) and the second gain value (1-α) sum to a time-invariant value.
  • C(α, β) = E{(α F L + β F R )(α F L * + β F R *)} + λ*(α + β - 1) + λ(α + β - 1)*, wherein λ is the Lagrange multiplier and β denotes the second gain value, β = 1 - α.
  • the method comprises: iteratively, in the frequency domain: determining an updated first gain value (α; H(k)) based on a previous first gain value and an iteration step size multiplied by a difference between the first directional signal (F L ) and the second directional signal (F R ), and a ratio between the value of the intermediate signal (V) and a squared value (V*V) of the intermediate signal (V); and determining an updated value (V n+1 ) of the intermediate signal (V), including a linear combination of the first directional input signal (F L ) and the second directional input signal (F R ), based on the updated first gain value (α; H(k)) and the updated second gain value (1-α; 1-H(k)).
  • the output signal for the output unit is generated.
  • the steps of determining an updated first gain value (α), and determining an updated value (V n+1 ) of the intermediate signal V, are thus performed in the frequency domain.
  • An initial value of the intermediate signal, V, may be based on a value of the intermediate signal obtained at a preceding frame.
  • a first time value of the intermediate signal may include a mean value of the strength of the first directional signal and the strength of the second directional signal.
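The iterative frequency-domain update described above can be sketched per bin as follows. This is a hedged sketch: the patent names the ingredients (previous gain, step size, signal difference, ratio involving V and V*V), while the exact gradient expression, step size `mu` and the clipping to [0, 1] are illustrative assumptions:

```python
import numpy as np

def update_gain(alpha: float, f_l: complex, f_r: complex,
                v: complex, mu: float = 0.1) -> tuple:
    """One normalised iteration of the per-bin first-gain update (sketch).

    V = alpha*F_L + (1 - alpha)*F_R is the intermediate signal; alpha is
    moved against the gradient of |V|^2, normalised by |V|^2, so the two
    gain values keep summing to one.
    """
    grad = np.real((f_l - f_r) * np.conj(v)) / (np.abs(v) ** 2 + 1e-12)
    alpha = float(np.clip(alpha - mu * grad, 0.0, 1.0))
    v = alpha * f_l + (1.0 - alpha) * f_r  # updated intermediate signal
    return alpha, v
```

Run once per frame (the "second recurrent basis"), the update drives the combination toward the gain that minimises the power of the intermediate signal under the unit-sum constraint.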
  • the first gain value and the second gain value are frequency dependent gain values, H(k); 1-H(k), determined by a non-iterative, non-recursive method.
  • one or both of the first gain value ( ⁇ ; H(k)) and the second gain value (1- ⁇ ; 1-H(k)) is/are a frequency dependent gain of a first filter (H) and a second filter (1-H), respectively.
  • the first filter H and/or the second filter 1-H enables a frequency dependent improvement in terms of maintaining noise reduction while improving the directionality index associated with the output signal.
  • the filters may be implemented as frequency-domain filters or time-domain filters.
  • the method comprises:
  • one or both of the first filter H and the second filter 1-H are phase-neutral filters or zero-phase filters, wherein the first filter and the second filter are applied to frames of a frequency-domain transformation of the first directional signal and the second directional signal.
  • the method comprises:
  • This method enables a non-recursive estimation of the first filter, H, rather than an iterative, time-consuming and less predictable determination of the first filter.
  • fewer hardware resources are required compared to a recursive method.
  • the non-recursive estimation of the first filter may provide a less accurate determination of the first filter compared to an optimal first filter.
  • listening tests have revealed an improvement on par with a recursively optimized first filter.
  • the method comprises:
  • a post-filter, G, is provided to further filter the signal output by the equalization unit or equalization filter, H.
  • the post-filter, G, further improves the directivity index, as evidenced herein.
  • the method comprises: filtering the single-channel signal with a single channel post-filter (G) which is configured to suppress an off-axis signal component in the single-channel signal, relative to an on-axis signal component; wherein the off-axis signal component occurs out-of-phase in the first directional input signal (F L ) and the second directional signal (F R ); and wherein the on-axis signal component occurs in-phase in the first directional input signal (F L ) and the second directional input signal (F R ).
  • off-axis signal sources are suppressed in addition to any suppression of off-axis signal sources in one or both of: the first directional signal and the second directional signal.
  • a post-filter transfer function is obtained to suppress influence of a sound source outside of the beam focus and thus enhanced noise reduction compared to noise reduction obtained by beamforming alone.
  • the post-filter may be a Wiener filter.
  • a post-filter transfer function is obtained to further suppress the influence of any sound source outside of the beam focus.
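One common single-channel post-filter of this in-phase/out-of-phase kind is a Wiener-style gain built from the real part of the cross-spectrum (in the spirit of a Zelinski post-filter). This is a hedged sketch of the idea, not necessarily the patent's exact filter:

```python
import numpy as np

def post_filter_gain(f_l: np.ndarray, f_r: np.ndarray,
                     eps: float = 1e-12) -> np.ndarray:
    """Per-bin post-filter gain G suppressing out-of-phase components.

    On-axis components are in phase in F_L and F_R, so the real part of
    the cross-spectrum is large and G approaches 1; out-of-phase
    (off-axis) components drive the real part down and G toward 0.
    """
    cross = np.real(f_l * np.conj(f_r))
    auto = 0.5 * (np.abs(f_l) ** 2 + np.abs(f_r) ** 2)
    return np.clip(cross / (auto + eps), 0.0, 1.0)
```

In practice the spectra would be smoothed over frames before forming the ratio, and G would multiply the single-channel intermediate signal bin by bin.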
  • the claimed method achieves a desired trade-off which enables directionally focussed sensitivity, e.g. focussed at an on-axis target signal source, while at the same time enabling an off-axis signal source to be perceived, at least with better intelligibility, while noise from off-axis signal sources is suppressed.
  • the method comprises: processing the intermediate signal (V) based on a hearing loss compensation, which modifies the output signal (Z) in accordance with a predetermined hearing loss.
  • an ipsilateral hearing device and a contralateral hearing device are configured with respective hearing loss compensations, which modify respective output signals at a left ear and a right ear in accordance with a predetermined hearing loss for the respective ear.
  • the method comprises: generating a further output signal, at least substantially equal to the output signal (Z); wherein the further output signal is communicated to an output unit of the contralateral hearing device; and wherein the output signal and the further output signal at least substantially constitute a monaural signal.
  • the output signal obtained as described above is presented to the user at both ears. Advantages of the first mode are described above; as an additional advantage, presenting the output signal at both ears can e.g. improve speech intelligibility.
  • the combination is a linear combination.
  • the combination is a linear combination in amplitude. Expediently, distortion artefacts can be substantially avoided.
  • the combination is determined at least by the sum of: the first directional input signal (F L ) scaled in accordance with the first gain value (α); and the second directional input signal (F R ) scaled in accordance with the second gain value (1-α).
  • the intermediate signal, V includes a linear combination of the first directional input signal (F L ) and the second directional input signal (F R ).
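The bullets above describe the intermediate signal V as a gain-weighted linear combination of the two directional signals. A minimal sketch (numpy; the function name and sample frames are illustrative, not from the application):

```python
import numpy as np

def combine(F_L, F_R, alpha):
    """Linear combination of the ipsilateral and contralateral
    directional signals: V = alpha*F_L + (1 - alpha)*F_R."""
    return alpha * F_L + (1.0 - alpha) * F_R

# Two illustrative 4-bin frequency-domain frames.
F_L = np.array([1.0 + 0.0j, 0.5j, 2.0, 1.0])
F_R = np.array([1.0 + 0.0j, 0.5j, 2.0, 1.0])

# With equal inputs (an on-axis source), any alpha yields the same frame.
V = combine(F_L, F_R, 0.3)
```

With alpha = 1 the combination reduces to the ipsilateral signal alone, and with alpha = 0 to the contralateral signal alone.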
  • a hearing device comprising:
  • the hearing device may be an ipsilateral hearing device configured to communicate, e.g. bi-directionally, with a contralateral hearing device.
  • the ipsilateral hearing device is configured to be worn at or in a left ear of a user
  • the contralateral hearing device is configured to be worn at or in a right ear of the user, or vice versa.
  • the ipsilateral hearing device is a wearable electronic device.
  • the contralateral hearing device is a wearable electronic device.
  • a hearing system comprising an ipsilateral hearing device and a contralateral hearing device.
  • One or both of the ipsilateral hearing device and the contralateral hearing device are configured as set out in any of the above embodiments and/or aspects and/or examples.
  • the hearing system comprises an auxiliary electronic device.
  • the auxiliary electronic device is configured as a remote control.
  • a computer readable storage medium storing at least one program, the at least one program comprising instructions which, when executed by the at least one processor of a hearing device (100) with an input transducer, at least one processor and an output transducer (141), enable the hearing device to perform the method as set out in any of the above embodiments and/or aspects and/or examples.
  • the subject matter described herein may be implemented in software, in combination with hardware.
  • the subject-matter described herein may be implemented in software executed by a processor.
  • a method described herein may be implemented using a non-transitory computer readable medium having stored thereon executable instructions that when executed by the processor of a computer, control the processor to perform steps of the method.
  • Exemplary non-transitory computer readable media suitable for implementing the subject-matter described herein include a memory device, e.g. a memory device accessible by a processor device, a processor device, programmable logic devices, and application specific integrated circuits.
  • the computer readable storage medium is a memory portion of a processor e.g. in a hearing device or in another type of electronic device such as, but not limited to, a smartwatch, a smartphone and a tablet computer. In some examples, the computer readable storage medium is a portable memory device.
  • a method at an ipsilateral hearing device with: a first input unit (110) including one or more microphones (112, 113) and configured to generate a first directional input signal (F L ); a communications unit (120) configured to receive a second directional input signal (F R ) from a contralateral hearing device; an output unit (140); and a processor (130) coupled to the first input unit (110), the communications unit (120) and the output unit (140).
  • the predetermined algebraic relation is a ratio or a root of the ratio.
  • Fig. 1 shows an ipsilateral hearing device with a communications unit for communication with a contralateral hearing device (not shown).
  • the ipsilateral hearing device 100 comprises a communications unit 120 with an antenna 122 and a transceiver 121 for bidirectional communication with the contralateral device.
  • the ipsilateral hearing device 100 also comprises a first input unit 110 with a first microphone 112 and a second microphone 113 each coupled to a beamformer 111 generating a first directional signal F L .
  • the beamformer is a hyper-cardioid beamformer.
  • the communications unit 120 receives a second directional signal F R .
  • the second directional signal F R may be captured by an input unit corresponding to the first input unit 110.
  • the second directional signal F R is a frequency domain signal.
  • the first directional signal, F L , is a frequency domain signal.
  • the beamformer 111 performs beamforming in the frequency domain or a short time frequency domain.
  • the first directional signal, F L , and the second directional signal, F R are denoted an ipsilateral signal and a contralateral signal, respectively.
  • time domain to frequency domain transformation e.g. short time Fourier transformation (STFT)
  • corresponding inverse transformations e.g. short time inverse Fourier transformation (STIFT)
  • STFT short time Fourier transformation
  • STIFT short time inverse Fourier transformation
  • F, V, Y and Z represent frequency-domain signals.
  • Capital reference letters, e.g. H and G represent frequency-domain transfer functions.
  • Subscripts, e.g. L and R are used to designate that a signal is from an ipsilateral device and a contralateral device respectively.
  • a first device e.g. the ipsilateral device
  • a second device e.g. a contralateral device
  • the first device and the second device may have identical or similar processors.
  • one of the processors is configured to operate as a master and another is configured to operate as a slave.
  • the first directional signal F L and the second directional signal F R are input to a processor 130 comprising an equalization unit 131.
  • the equalization unit 131 may be based on gain units or filters as described in more detail herein.
  • the equalization unit 131 equalizes the strength or amplitude of the first directional signal F L and the strength or amplitude of the second directional signal F R prior to summation. Thus, two equalized signals are added.
  • the equalization unit 131 outputs an intermediate signal V.
  • the equalization unit 131 outputs a single-channel intermediate signal V.
  • the single-channel intermediate signal is a monaural signal.
  • the equalization unit is based on gain stages.
  • the equalization unit 131 performs equalization of the input signals to equalize their strength or amplitude based on one or more gain factor values including a gain value α.
  • the equalization unit is based on filters.
  • the equalization unit 131 performs equalization of the input signals to equalize their strength or amplitude, individually, at each of multiple frequency bands or frequency bins based on one or more gain filter transfer functions including filter transfer function H.
  • the one or more gain factor values including a gain value α, or the one or more gain filter transfer functions including filter transfer function H, are determined, as described in more detail herein, by a controller 134.
  • the controller 134 is coupled to the processor 130 and one or both of the equalizing unit 131 and a post-filter 132.
  • the controller 134 determines one or more of: the gain value α, an equalization filter transfer function H and a post-filter transfer function G.
  • the output, V, from the equalization unit 131 is input to the post-filter 132 which outputs an intermediate signal Y.
  • the post-filter 132 is integrated with the equalization filter 131.
  • the post-filter 132 is omitted or at least temporarily dispensed with or by-passed.
  • the intermediate signal V or Y is input to a hearing loss compensation unit 133, which applies a prescribed compensation for a hearing loss of a user, as known in the art.
  • the hearing loss compensation unit 133 is omitted or by-passed.
  • the intermediate signal V or Y or Z is input to an output unit 140, which may include a so-called 'receiver' or a loudspeaker 141 of the ipsilateral device for providing an acoustical signal to the user.
  • the intermediate signal V or Y or Z is input to a second communications unit for transmission to a further device.
  • the further device may be a contralateral device or an auxiliary device.
  • Fig. 2 shows a first embodiment of a method 200 performing equalization.
  • the first embodiment is based on a recurrent determination of the first gain value and the second gain value.
  • the first gain value, α, and the second gain value, 1 − α, are adaptively determined e.g. in accordance with the following.
  • the first gain value and the second gain value are applied to equalize the strength of the first directional signal (the ipsilateral signal) and the second directional signal (contralateral signal) prior to combination e.g. by summation.
  • the ipsilateral signal and the contralateral signal are firstly equalized and then combined with the objective of enhancing the strength of an on-axis target signal e.g. from a person speaking to the user from a position, on-axis, in front of the user.
  • α = argmin rms(α F L + (1 − α) F R )
  • rms represents a function computing the root mean square
  • argmin is a function seeking a minimum by optimization of α, which serves as a variable value while determination of the gain value takes place.
  • An optimal value of α serves the objective of equalizing the strength of the first directional signal (the ipsilateral signal) and the second directional signal (the contralateral signal) prior to summation.
  • Fig. 4 shows an example of how to equalize the signals prior to summation.
  • the symbol '*' denotes the complex conjugate.
  • NLMS normalized least mean square
  • the update is performed when V* · V > 0.
  • Other values of the step size μ may be used.
  • μ may be varied dynamically while minimizing the cost function.
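The recurrent determination of α described above can be illustrated with a normalized gradient descent on the stated cost. The step size mu = 0.1, the normalization by the power of F L − F R, and the iteration count are assumptions made here for illustration, not the claimed NLMS update:

```python
import numpy as np

def adapt_alpha(F_L, F_R, alpha=0.5, mu=0.1, iterations=200):
    """Recurrently update alpha to minimize rms(alpha*F_L + (1-alpha)*F_R).
    Normalized gradient descent (NLMS-style); details are illustrative."""
    d = F_L - F_R                       # V = F_R + alpha*d
    norm = np.real(np.vdot(d, d))       # ||d||^2, used to normalize the step
    for _ in range(iterations):
        V = alpha * F_L + (1.0 - alpha) * F_R
        if np.real(np.vdot(V, V)) <= 0.0:    # update only while V* . V > 0
            break
        grad = np.real(np.vdot(d, V))        # d/dalpha of 0.5*||V||^2
        alpha -= mu * grad / (norm + 1e-12)
    return alpha

rng = np.random.default_rng(0)
F_L = rng.standard_normal(256)          # ipsilateral frame (illustrative)
F_R = 2.0 * rng.standard_normal(256)    # stronger contralateral frame
alpha = adapt_alpha(F_L, F_R)
```

Because the cost is quadratic in α, the iteration converges geometrically to the closed-form minimizer, which weights the weaker of the two signals more strongly and thereby equalizes their contribution.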
  • the first embodiment is implemented as shown in fig. 2 .
  • the first embodiment includes steps 210 to transform the ipsilateral signal from a time-domain to a frequency domain.
  • the steps 210 may be dispensed with, at least in this portion of the method.
  • steps 210 can be used to perform the transformation to the frequency domain; if the beamformer, e.g. beamformer 111, already provides a frequency domain signal, steps 210 may be omitted.
  • steps 220 transform the contralateral signal from a time-domain to a frequency domain.
  • the steps 220 may be dispensed with, at least in this portion of the method. This may be the case if the contralateral signal is received from a contralateral device already in a frequency domain representation.
  • steps 210 and 220 may be performed in the same way at the ipsilateral device. Alternatively, steps 210 may be performed at the ipsilateral device and steps 220 may be performed at the contralateral device.
  • time domain samples from a first input unit, e.g. first input unit 110
  • time domain samples are appended at step 212 to a sequence of previously received input samples to form an analysis window of e.g. 48 samples at step 213.
  • a short time Fourier transformation is performed based on the analysis window to provide a frequency domain signal F L .
  • F L may be represented by real values or complex values in a vector or a frame with a number of k bins, e.g. 48 bins. Each bin may comprise one or more values.
  • steps 221, 222, 223 and 224 generate the contralateral signal F R .
  • the method may recurrently perform steps 201 and 202 until a stop criterion is reached.
  • a stop criterion is that a predefined number of iterations are performed.
  • a stop criterion is that the gradient flattens out or that α converges towards a value.
  • a short time inverse Fourier transformation is computed based on V when the recurrent method is completed.
  • IFFT inverse Fourier transformation
  • a synthesis window of e.g. 48 time domain samples is generated.
  • the time domain samples may partially overlap previously generated time domain samples. At the overlap, sample values are added.
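The analysis steps 211-214 and the synthesis-with-overlap-add steps described above amount to a standard short-time transform chain. A sketch assuming a periodic Hann window and 50% overlap (the window choice and hop are assumptions; the 48-sample window length follows the example in the text):

```python
import numpy as np

N = 48                     # analysis window length (per the example above)
HOP = N // 2               # 50% overlap (an assumed hop size)
# Periodic Hann window: at 50% overlap its shifted copies sum to 1,
# so analysis-windowed frames overlap-add back to the input signal.
win = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)

def stft_frames(x):
    """Split x into windowed frames and transform (step 214 analogue)."""
    starts = range(0, len(x) - N + 1, HOP)
    return [np.fft.rfft(win * x[s:s + N]) for s in starts]

def overlap_add(frames, length):
    """Inverse transform each frame and overlap-add, adding sample
    values where synthesis windows overlap (steps 203-205 analogue)."""
    y = np.zeros(length)
    for i, F in enumerate(frames):
        y[i * HOP:i * HOP + N] += np.fft.irfft(F, n=N)
    return y

x = np.sin(2 * np.pi * np.arange(480) / 30.0)
y = overlap_add(stft_frames(x), len(x))
# Away from the first and last half-window, y reconstructs x exactly.
```

Any per-bin processing (equalization, post-filtering) would be applied to the frames between analysis and synthesis.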
  • α F L and (1 − α) F R are equalized in terms of strength before being combined.
  • Fig. 3 shows a second embodiment of a method performing equalization.
  • the method 300 uses steps 210 and 220 as described above to obtain the signals F L and F R .
  • the method performs steps 310, which may be non-iterative steps, before generating the intermediate signal V at step 301 or step 302.
  • the cross power spectrum P LR of F L and F R is computed.
  • the power spectrum P L of F L is computed
  • the power spectrum P R of F R is computed.
  • the power spectra and cross power spectrum are generated for a number of frequency bins or indexes designated k.
  • the signals may include a frame with a number of frequency bins. Each bin may comprise one or more values.
  • a frame may include fewer or more than 48 frequency bins e.g. 24 or 96 frequency bins.
  • the minimum power spectrum value P N (k) in the set of power spectrum values {P R (k); P L (k)} is determined for each or multiple of the frequency bins.
  • the maximum power spectrum value P X (k) in the set of power spectrum values {P R (k); P L (k)} is determined for each or multiple of the frequency bins.
  • subscript N designates the minimum values
  • subscript X designates the maximum values.
  • a transfer function G for a post-filter is computed based on the cross power spectrum P LR of F L and F R and the power spectra P L and P R .
  • Post filter: G = Re(P LR ) / (P L + P R )
  • a transfer function H is computed for an equalization filter based on P N and P X including minimum values and maximum values, respectively, as described above.
  • H and G are computed element-wise for each frequency bin k.
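As one concrete reading of the steps above, the per-bin spectra and the element-wise filters can be computed as below. The form H = sqrt(P_N / P_X) is one instance of the 'ratio or a root of the ratio' mentioned earlier, and G = Re(P_LR) / (P_L + P_R) is a common Wiener-style post-filter; both forms, and the flooring of G at zero, are illustrative assumptions:

```python
import numpy as np

def spectra(F_L, F_R):
    """Per-bin power spectra and cross power spectrum."""
    P_L = np.abs(F_L) ** 2
    P_R = np.abs(F_R) ** 2
    P_LR = F_L * np.conj(F_R)
    return P_L, P_R, P_LR

def filters(P_L, P_R, P_LR, eps=1e-12):
    """Element-wise equalization filter H and post-filter G.
    H = sqrt(P_N / P_X) and the Wiener-style G are assumed forms."""
    P_N = np.minimum(P_L, P_R)          # per-bin minimum (subscript N)
    P_X = np.maximum(P_L, P_R)          # per-bin maximum (subscript X)
    H = np.sqrt(P_N / (P_X + eps))
    # Floored at zero so an out-of-phase bin attenuates rather than inverts
    # (an implementation choice, not stated in the text).
    G = np.maximum(np.real(P_LR) / (P_L + P_R + eps), 0.0)
    return H, G

# Three coherent bins (same signal at both ears) and one out-of-phase bin.
F_L = np.array([1.0 + 1.0j, 2.0, 0.5j, 1.0])
F_R = np.array([1.0 + 1.0j, 2.0, 0.5j, -1.0])
H, G = filters(*spectra(F_L, F_R))
```

For fully coherent bins this G evaluates to 0.5 and H to 1; incoherent or out-of-phase bins are driven towards zero, suppressing off-axis energy.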
  • the method includes determining that either the ipsilateral signal, F L , is strongest (Y) or determining that the contralateral signal, F R , is the strongest (N).
  • the determination may be based on a measure of energy, E, across all frequency bins, k, in the power spectra. E(P L ) and E(P R ) are thus scalar values.
  • F L is scaled by filter H to be equalized to F R before summation.
  • the post-filter transfer function G is applied to the sum.
  • F R is scaled by filter H to be equalized to F L before summation.
  • the post-filter transfer function G is applied to the sum.
  • the post-filter is omitted or temporarily dispensed with.
  • one or more of the power spectra, P L and P R , and the cross-power spectrum, P LR is/are estimated using a recursive smoothing method.
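The recursive smoothing mentioned above is commonly a first-order exponential average; the smoothing constant lam = 0.9 is an assumed value for illustration:

```python
import numpy as np

def smooth_update(state, F_L, F_R, lam=0.9):
    """One recursive-smoothing step for the estimates (P_L, P_R, P_LR)."""
    P_L, P_R, P_LR = state
    P_L = lam * P_L + (1.0 - lam) * np.abs(F_L) ** 2
    P_R = lam * P_R + (1.0 - lam) * np.abs(F_R) ** 2
    P_LR = lam * P_LR + (1.0 - lam) * F_L * np.conj(F_R)
    return P_L, P_R, P_LR

# Feeding a constant frame drives the estimates to the instantaneous spectra.
F_L = np.array([1.0 + 0.0j, 2.0j])
F_R = np.array([0.5, 1.0 + 1.0j])
state = (np.zeros(2), np.zeros(2), np.zeros(2, dtype=complex))
for _ in range(300):
    state = smooth_update(state, F_L, F_R)
P_L, P_R, P_LR = state
```

The smoothing trades responsiveness against estimate variance: larger lam gives steadier spectra but slower tracking of changing sound scenes.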
  • fig. 5 shows an embodiment including the equalization filter and the post-filter which are used in accordance with determining the strongest signal.
  • at step 203, a short time inverse Fourier transformation (IFFT) is computed based on V.
  • IFFT inverse Fourier transformation
  • a synthesis window, 204, of e.g. 48 time domain samples is generated.
  • the time domain samples may partially overlap previously generated time domain samples.
  • sample values are added.
  • the overlapping and addition is performed in step 205.
  • F R H and F L (1 - H ) are equalized before summation or, alternatively, F L H and F R (1 - H ) are equalized before summation.
  • H and 1 − H comprise a first gain value H(k) and a second gain value 1 − H(k) at least at one or more frequency bins k.
  • the first gain value H(k) and a second gain value 1 - H(k) are determined in accordance with the above.
  • Fig. 4 shows a first equalization unit based on gain stages.
  • the first equalization unit is designated by reference numeral 400 and receives the ipsilateral signal, F L , and the contralateral signal, F R .
  • the first gain value α is applied by means of a gain unit 401, which outputs a scaled signal α F L to an adder 403.
  • the gain stages are not as such frequency band limited.
  • the gain values α and 1 − α may each be computed for respective frequency bands or bins, wherein F L and F R are frequency band limited signals.
  • the first equalization unit is based on a structure equivalent to the structure shown in fig. 4 .
  • the first equalization unit performs linear combination of the ipsilateral signal, F L , and the contralateral signal, F R .
  • F L the ipsilateral signal
  • F R the contralateral signal
  • some deviations from a linear combination may be accepted or intended.
  • Fig. 5 shows a second equalization unit based on filters.
  • the second equalization unit, 500 based on filters, may perform the equalization for each of multiple frequency bands, k, by means of an equalization filter H and a post-filter G.
  • the post-filter G is omitted or temporarily dispensed with.
  • the second equalization unit designated reference numeral 500 receives the ipsilateral signal, F L , and the contralateral signal, F R .
  • the method selects, for each frequency bin, k, a maximum F X (k) respectively, a minimum F N (k) among the ipsilateral signal and the contralateral signal. This is performed by unit 501.
  • the minimum signal, F N is input to equalization filter 502.
  • the equalization filter 502 performs filtering in accordance with the transfer function 1 − H, where H is described above. Output, (1 − H) F N , from the equalization filter 502 is input to the adder 504.
  • the maximum signal, F X is input to equalization filter 503.
  • the equalization filter 503 performs filtering in accordance with the transfer function H described above.
  • the equalization filter 503 outputs signal HF X .
  • Output, HF X , from the equalization filter 503 is input to the adder 504.
  • Signals HF X and (1 - H ) F N are thereby equalized per frequency band or frequency bin prior to summation by adder 504.
  • a post-filter 505 implementing the transfer function G filters the signal output from the adder 504 before providing the intermediate signal V.
  • the post-filter 505 performs filtering in accordance with the transfer function G described above.
  • the second equalization unit is based on a structure equivalent to the structure shown in fig. 5 .
  • the second equalization unit performs linear combination of the ipsilateral signal, F L , and the contralateral signal, F R , per frequency bin.
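The flow through units 501-505 can be sketched end-to-end as follows, assuming H = sqrt(P_N / P_X) and a floored Wiener-style post-filter G = Re(F_L · F_R*) / (P_L + P_R); these specific forms are illustrative choices, not taken from the application:

```python
import numpy as np

def equalize_and_postfilter(F_L, F_R, eps=1e-12):
    """Per-bin max/min selection (unit 501), filtering by H and 1-H
    (units 503/502), summation (adder 504) and post-filtering (unit 505).
    The forms of H and G are assumed for illustration."""
    P_L, P_R = np.abs(F_L) ** 2, np.abs(F_R) ** 2
    stronger = P_L >= P_R
    F_X = np.where(stronger, F_L, F_R)            # per-bin maximum signal
    F_N = np.where(stronger, F_R, F_L)            # per-bin minimum signal
    P_N, P_X = np.minimum(P_L, P_R), np.maximum(P_L, P_R)
    H = np.sqrt(P_N / (P_X + eps))                # assumed 'root of the ratio'
    G = np.maximum(np.real(F_L * np.conj(F_R)) / (P_L + P_R + eps), 0.0)
    return G * (H * F_X + (1.0 - H) * F_N)        # adder 504, then filter 505

# Bin 0: an on-axis source, identical at both ears.
# Bin 1: an ear-asymmetric (off-axis-like) source.
F_L = np.array([2.0 + 0.0j, 2.0 + 0.0j])
F_R = np.array([2.0 + 0.0j, 0.2 + 0.0j])
V = equalize_and_postfilter(F_L, F_R)
```

With this choice of G an on-axis bin passes at half gain while the asymmetric bin is attenuated much more strongly, matching the described directional focus.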
  • Fig. 6 shows a top view of a user and a first target speaker and a second target speaker.
  • the user 610 wears an ipsilateral device 601 and a contralateral device 602.
  • the ipsilateral device 601 captures the first directional signal F L and receives the second directional signal F R from the contralateral device link 603, e.g. a wireless link.
  • the first target speaker 620 is on-axis, in front, of the user 610. Therefore, an acoustic speech signal from the first target speaker 620 arrives, at least substantially, at the same time at both the ipsilateral device and the contralateral device whereby the signals are captured simultaneously. In respect of the first target speaker 620, signals F L and F R thus have equal strength.
  • a second target speaker 630 is off-axis, slightly to the right, of the user 610.
  • the claimed method suppresses the signal from the first target speaker 620, who is on-axis relative to the user, proportionally to the strength of the signal received, at the ipsilateral device and at the contralateral device, from the second target speaker 630, who is off-axis relative to the user.
  • a determination that a target signal is present, e.g. from target speaker 630, may result in a prior art listening device switching to a so-called omnidirectional mode. The noise sources 650 and 640 then suddenly contribute to the sound presented to the user, who may experience a significantly increased noise level despite the sound level of the noise sources 650 and 640 being lower than the sound level of the target speaker 630.
  • Fig. 7 shows a first example of graphs showing a directionality index.
  • the graphs are shown in a Cartesian coordinate system with Frequency (Hz) along the abscissa (x-axis) and Directivity index (dB) along the ordinate (y-axis).
  • the graph designated 'Sum' indicates the directivity index for a hearing device without equalization as described herein.
  • the graph designated 'Equal' indicates the directivity index for a hearing device with equalization as described herein, but without a post-filter. A significant improvement of about 3 dB in directionality is thereby achieved, at least at frequencies above about 500 Hz; an improvement is also achieved at lower frequencies.
  • Fig. 8 shows a second example of graphs showing a directionality index.
  • the graph designated 'Sum' also indicates the directivity index for a hearing device without equalization as described herein.
  • the graph designated 'Equal+Post' indicates the directivity index for a hearing device with equalization followed by post filtering as described herein, thus including a post-filter.
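For reference, the directivity index plotted in figs. 7 and 8 is the ratio, in dB, of on-axis sensitivity to the sensitivity averaged over all directions. A free-field sketch for an ideal cardioid pattern (a standalone illustration of the metric, not a reproduction of the measured curves):

```python
import numpy as np

def directivity_index_db(pattern, n=20001):
    """DI = 10*log10(|D(0)|^2 / spherical average of |D(theta)|^2)
    for a rotationally symmetric sensitivity pattern D(theta)."""
    theta = np.linspace(0.0, np.pi, n)
    f = np.abs(pattern(theta)) ** 2 * np.sin(theta)
    # Trapezoidal rule for (1/2) * integral of f over [0, pi].
    avg = 0.5 * np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(theta))
    return 10.0 * np.log10(np.abs(pattern(0.0)) ** 2 / avg)

cardioid = lambda th: 0.5 * (1.0 + np.cos(th))
di = directivity_index_db(cardioid)   # ideal cardioid: about 4.8 dB
```

A higher directivity index means more suppression of diffuse, off-axis sound relative to the on-axis target, which is the quantity improved by the equalization and post-filtering described herein.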

EP21162221.2A 2020-03-23 2021-03-12 Verfahren an einem hörgerät Pending EP3886463A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/827,694 US11153695B2 (en) 2020-03-23 2020-03-23 Hearing devices and related methods
DKPA202070427A DK180745B1 (en) 2020-03-23 2020-06-29 Procedure by a hearing aid

Publications (1)

Publication Number Publication Date
EP3886463A1 (de)

Family

ID=74873529

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21162221.2A Pending EP3886463A1 (de) 2020-03-23 2021-03-12 Verfahren an einem hörgerät

Country Status (3)

Country Link
EP (1) EP3886463A1 (de)
JP (1) JP2021150959A (de)
CN (1) CN113438590A (de)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2360943A1 (de) * 2009-12-29 2011-08-24 GN Resound A/S Strahlformung in Hörgeräten
US20140314260A1 (en) * 2013-04-19 2014-10-23 Siemens Medical Instruments Pte. Ltd. Method of controlling an effect strength of a binaural directional microphone, and hearing aid system


Also Published As

Publication number Publication date
JP2021150959A (ja) 2021-09-27
CN113438590A (zh) 2021-09-24


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220325

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20220808

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED