EP4344252A1 - System and method for providing hearing assistance - Google Patents

System and method for providing hearing assistance

Info

Publication number
EP4344252A1
Authority
EP
European Patent Office
Prior art keywords
audio
signal
audio signal
stream
streaming device
Prior art date
Legal status
Pending
Application number
EP22197088.2A
Other languages
English (en)
French (fr)
Inventor
Arnaud Brielmann
Amre El-Hoiydi
Current Assignee
Sonova Holding AG
Original Assignee
Sonova AG
Priority date
Filing date
Publication date
Application filed by Sonova AG filed Critical Sonova AG
Priority to EP22197088.2A
Publication of EP4344252A1
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/554Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired using a wireless connection, e.g. between microphone and amplifier or using Tcoils
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/43Electronic input selection or mixing based on input signal analysis, e.g. mixing or selection between microphone and telecoil or between microphones with different directivity characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/55Communication between hearing aids and external devices via a network for data exchange
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/03Aspects of the reduction of energy consumption in hearing devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/07Use of position data from wide-area or local-area positioning systems in hearing devices, e.g. program or information selection

Definitions

  • the invention relates to a system and a method for providing hearing assistance to a user, wherein an audio source device, such as a TV set, provides an audio source signal both as a non-acoustic audio signal and as an acoustic stream and wherein a streaming device transmits the audio signal from the audio source device as a wireless audio stream to an audio listening device which stimulates the user's hearing according to the audio signal received from the streaming device.
  • the audio listening device may be a hearing device to be worn at ear level, such as a hearing aid, a headset or headphones.
  • a typical use case of such a system may be a couple watching TV, wherein one person has a mild hearing loss and therefore uses a hearing device with an open fitting, while the partner is a normal hearing person.
  • the hearing impaired person receives the audio signal from the TV set via a wireless audio stream, provided by a streaming device connected to the TV set, to the audio device worn by the hearing impaired user, while the normal hearing partner listens to the TV set via the TV loudspeakers or an external sound system connected to the TV set.
  • When the user of the hearing device has an open fitting, such as headphones or a hearing instrument with vented domes, which allows the hearing device user to still converse with the normal hearing partner while watching TV, the hearing device user can also hear the TV acoustic stream, to which the normal hearing partner listens, in addition to the wireless audio stream.
  • the acoustic stream and the wireless audio stream will not be received in synchronicity by the user of the hearing device, primarily due to the propagation delay of the acoustic stream caused by the distance between the TV loudspeaker and the hearing device, which results in degraded audio quality.
  • the comb filter effect degrades the audio quality at delays of only a few milliseconds, whereas delays of more than 50 msec will produce distinctive echoes.
  • Each meter of distance between the TV loudspeakers and the user of the hearing device will add a delay of approximately 3 msec.
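  • As a purely illustrative aside (not part of the disclosed system), the figure of roughly 3 msec per meter quoted above follows directly from the speed of sound in air of about 343 m/s; a minimal sketch:

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at room temperature

def acoustic_delay_ms(distance_m: float) -> float:
    """Propagation delay of the acoustic stream over the given distance, in milliseconds."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

print(acoustic_delay_ms(1.0))  # ~2.9 ms, i.e. roughly 3 ms per meter as stated above
print(acoustic_delay_ms(3.0))  # ~8.7 ms at a typical TV viewing distance
```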
  • There are TV sets which delay the acoustic stream provided by the loudspeaker by several tens of milliseconds to compensate for the image processing and rendering time.
  • the digital audio stream presented on the optical output has a programmable delay so that the user can synchronize the image and the sound ("lip synchronization"), taking into account the delay introduced by an external sound system. Consequently, with the default setting, in such a system the audio stream is presented on the digital output a certain time interval earlier than it is rendered acoustically via the loudspeakers. This makes it possible to compensate for the processing delay of external sound systems, such as sound bars or home theatres.
  • the programmable delay on the digital output is a coarse setting in such systems, typically with a granularity of 10 msec, so as to achieve lip synchronization correction.
  • a coarse granularity of the delay adjustment does not allow a precise synchronicity to be achieved at a given listening distance from the TV set, and a manual adjustment of the delay has to be repeated for any change of the listening location.
  • WO 2010/133246 A1 relates to a TV streaming system for hearing aids wherein either an audio signal captured by the hearing aid microphone from the TV acoustic stream or the audio signal received via the wireless link by the hearing aid is reproduced to the hearing aid user via the hearing aid loudspeaker. That one of the two audio signals which is not presented to the user is utilized for optimizing presentation of the other audio signal.
  • the wirelessly received signal is used to build a signal model of the signal source as a target reference for sound cleaning algorithms on the acoustic audio path.
  • WO 94/04010 A1 relates to an audio enhancement system for concert venues comprising main loudspeakers and a wirelessly connected headphone, wherein the wireless stream is provided in several channels having different preset delays of the wireless audio signal so that the user of the headphone can select, depending on his/her location relative to the main speakers, the channel with the most appropriate preset delay.
  • US 2019/0295525 A1 relates to an audio enhancement system for use in concert venues, comprising a mobile receiver device with a headphone for each listener, wherein in the mobile receiver device the wirelessly streamed audio signal and an audio signal captured by a microphone of the headset from the acoustic stream provided by the main speakers are compared so as to automatically compensate for an audio delay in the acoustic stream.
  • a similar system is known from WO 2012/048299 A1 , wherein the dedicated mobile device of the listener is provided with a microphone for capturing an audio signal from the acoustic stream provided by the main speakers.
  • US 2011/0142268 A1 relates to a wireless TV set to be used with a hearing aid, wherein the hearing aid receives both an acoustic stream provided by a loudspeaker of the TV set and, directly or via a mobile relay device, a wireless stream of the audio signal.
  • a delay between the acoustic stream and the audio signal in the wireless stream is determined in the hearing aid or the relay device, respectively, by comparing the received signals, and the corresponding delay information is transmitted wirelessly to the TV set which then applies the respective delay compensation on the loudspeaker input to delay the acoustic stream.
  • the invention is beneficial in that, by providing a wireless back link from the listening device to the streaming device, which may be used for measuring a distance between the streaming device and the listening device or for supplying a reference signal derived from the acoustic stream received by the listening device, the streaming device is enabled to apply a suitable compensation delay to the wirelessly transmitted audio signal so as to enhance synchronicity between the stimulation of the user's hearing by the acoustic stream and the stimulation of the user's hearing by the output transducer of the listening device.
  • the compensation delay can be determined by the streaming device, rather than by the audio listening device, which typically has only limited resources. Accordingly, the complexity can be pushed to that part of the system which is less resource constrained, in particular with regard to power supply (cabled power supply versus battery) and form factor (the larger housing of the streaming device allows more memory storage and computation power).
  • This solution also avoids the need for manual adjustment of the compensation delay on the TV set, allowing for a user transparent solution with automatic adaptation to the user's location and improved sound quality due to fine tuning of the compensation delay.
  • the reference signal may be derived from the second audio signal by extracting at least one feature from the second audio signal; according to another example, the reference signal may correspond to the second audio signal.
  • the extracted feature(s) include(s) at least one of: a normalized spectrogram in the Bark domain, a time domain signal envelope, occurrences of speech pauses or silence, a normalized magnitude output of one or more band-pass filters.
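  • Purely for illustration (the concrete feature implementation is not specified here), two such lightweight features could be computed along the following lines; the function names, window sizes and sample rate are assumptions made for this sketch:

```python
import numpy as np

def envelope_feature(block: np.ndarray, win: int = 160) -> np.ndarray:
    """Normalized time-domain envelope: rectify, average over short windows, scale to peak 1."""
    rectified = np.abs(block)
    n = len(rectified) // win
    if n == 0:
        return np.zeros(0)
    env = rectified[: n * win].reshape(n, win).mean(axis=1)  # e.g. 10 ms windows at 16 kHz
    peak = env.max()
    return env / peak if peak > 0 else env

def bandpass_magnitude_feature(block: np.ndarray, sample_rate: int = 16000,
                               band: tuple = (300.0, 3000.0)) -> float:
    """Normalized magnitude of one band-pass region, computed via an FFT."""
    spectrum = np.abs(np.fft.rfft(block))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sample_rate)
    in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()
    total = spectrum.sum()
    return float(in_band / total) if total > 0 else 0.0
```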
  • the delay control unit is configured to compare the first audio signal or at least one feature extracted from the first audio signal to the reference signal so as to determine an estimate of the latency of the reference signal with regard to the first audio signal.
  • the streaming device may be configured to extract the same feature(s) as included in the reference signal from the first audio signal and to compare the feature(s) extracted from the first audio signal to the feature(s) included in the reference signal so as to determine the reference signal latency estimate.
  • the determination of the reference signal latency estimate may include a pattern matching method applied to the extracted feature(s), such as a correlation analysis or a maximum absolute difference method (MAD).
  • the streaming device and the listening device may be configured to execute a measurement of the distance between them, wherein a traveling time of the acoustic stream from the streaming device to the listening device is estimated based on the measured distance, and wherein the estimated traveling time is used in the reference signal latency estimate.
  • the streaming device comprises a microphone arrangement for capturing the acoustic stream, wherein the streaming device is configured to determine a delay of the arrival of the acoustic stream at the streaming device relative to the first audio signal received from the audio source device, and wherein this delay is used in the reference signal latency estimate.
  • the delay control unit is configured to determine a confidence level of the reference signal latency estimate.
  • the wireless interface of the audio listening device is configured to transmit, in addition to the reference signal, metadata indicative of a physical activity of the user.
  • the delay control unit is configured to determine the stability in time of the reference signal latency estimate, of the confidence level of the reference signal latency estimate and of the metadata indicative of a physical activity of the user.
  • the delay control unit is configured to determine the compensation delay based on the reference signal latency estimate.
  • the delay control unit is configured to set the compensation delay by additionally taking into account at least one of the confidence level, the metadata indicative of a physical activity of the user, and the stability in time of the reference signal latency estimate, of the confidence level and of the metadata.
  • the delay control unit is configured to set the compensation delay by adding a delay offset so as to account for a latency of the feature extraction and for a latency of the transmission of the reference signal.
  • the streaming device and the listening device are configured to execute a measurement of the distance between them, wherein the delay control unit is configured to determine or update the compensation delay only during times when it is determined by a distance measurement that the distance between the streaming device and the listening device has changed by at least a certain predefined amount.
  • the delay control unit is configured to update the compensation delay only during times when it is determined from the audio signal captured by the microphone arrangement of the audio listening device that the user is actually watching the audio source device.
  • the delay control unit is configured to update the compensation delay only when it is determined that the updated compensation delay would differ by at least a given threshold from the presently applied compensation delay.
  • the delay control unit is configured to update the compensation delay only when it is determined that the confidence level is greater than a threshold value.
  • the streaming device is configured to adjust the compensation delay by cross fading of the present compensation delay and the new compensation delay.
  • the audio source device is configured to provide the first audio signal as an electric signal and/or as an optical signal.
  • the wireless interfaces are configured to operate in the 2.4 GHz ISM band.
  • the wireless interfaces may be configured to use a Bluetooth Low Energy protocol.
  • the wireless interface of the audio listening device is configured to use in the transmission of the reference signal a modulation setting different from that used by the wireless interface of the streaming device when transmitting the wireless stream.
  • the wireless interface of the audio listening device may be configured to use in the transmission of the reference signal a lower data rate than used by the wireless interface of the streaming device when transmitting the wireless stream.
  • the audio listening device is a hearing device configured to be worn at ear level.
  • the audio listening device may be a hearing instrument, a hearing aid, a cochlear implant, a headset, an earphone or a headphone.
  • the listening device is configured to compare the delayed first audio signal received by the listening device via the wireless audio stream to the second audio signal so as to obtain information regarding room acoustics, background noises, and/or acoustic interferences, which information is used as input to a sound cleaning algorithm performed in the listening device for processing audio signals captured by the microphone arrangement of the listening device for reproduction by the output transducer of the listening device.
  • the distance measurement uses bidirectional wireless communication between the wireless interfaces of the streaming device and the listening device to measure the distance between the streaming device and the listening device.
  • the distance measurement may be executed periodically.
  • the distance measurement is initiated by the streaming device or by the listening device.
  • the distance measurement may use at least one of a time of flight measurement and a radio phase measurement.
  • the distance measurement uses an ultra wide band (UWB) technology.
  • the audio source device is configured to provide the audio source signal both as the non-acoustic first audio signal and as the acoustic stream.
  • the audio source device may be a TV set comprising a loudspeaker arrangement for generating the acoustic stream.
  • a distance between the loudspeaker arrangement and the listening device is estimated from the estimation of the latency of the reference signal with regard to the first audio signal, wherein the listening device is configured to receive the wireless audio stream only during times when the estimated distance is below a predefined or user adjustable threshold value.
  • a “hearing device” as used hereinafter is any ear level device suitable for reproducing sound by stimulating a user's hearing, such as an electroacoustic hearing aid, a bone conduction hearing aid, an active hearing protection device, a hearing prosthesis device such as a cochlear implant, a wireless headset, an earbud, an earplug, an earphone, etc.
  • Fig. 1 is a schematic illustration of a first example of a system for providing hearing assistance to a user 10, comprising an audio source device 12, a streaming device 14 and an audio listening device 16 to be worn by the user 10.
  • the audio source device 12 provides an audio source signal both as an acoustic stream 18 and as a non-acoustic first audio signal 20.
  • the audio source device may be a TV set including - or being connected to - a loudspeaker arrangement 22 for generating the acoustic stream 18 and comprising an audio signal output 24, such as an optical output, where the audio source signal is provided as a digital signal.
  • the loudspeaker arrangement 22 may comprise a built-in loudspeaker set and/or an external sound bar or other external sound system.
  • the digital signal provided at the output 24 is supplied to the streaming device 14 where it is delayed in a delay application unit 26 (see Fig. 4 ) by applying a certain compensation delay to the first audio signal 20 received from the audio source device 12.
  • the delayed first audio signal 27 then is transmitted via a wireless interface 28 of the streaming device 14 as a wireless audio stream 30 to the listening device 16 where it is received by a wireless interface 32 of the listening device 16.
  • the first audio signal received by the wireless interface 32 of the listening device 16 is supplied to a processing unit 34 and from there to an output transducer 36 for stimulating the hearing of the user 10.
  • the amount of compensation delay applied in the delay application unit 26 is selected such that the synchronicity between the stimulation of the user's hearing by the output transducer, i.e., the stimulation by the reproduction of the first audio signal wirelessly received from the streaming device 14, and the stimulation of the user's hearing by the acoustic stream 18 provided by the loudspeaker arrangement 22 of the audio source device 12 is enhanced or optimized, so as to enhance audio quality.
  • the amount of compensation delay is determined in a delay control unit 55 of the streaming device 14 based on the first audio signal 20 received from the audio source device 12 and a reference signal 46 received via a wireless back link 40 from the listening device 16.
  • the wireless back link 40 is established by the wireless interface 32 of the listening device 16 and the wireless interface 28 of the streaming device 14.
  • the reference signal 46 is obtained from a second audio signal 38 in the listening device which is captured by a microphone arrangement 42 of the listening device 16 from the acoustic stream 18.
  • the audio listening device 16 is a hearing device to be worn at ear level and may be a hearing instrument, such as a hearing aid or cochlear implant, a headset, an earphone or a headphone.
  • Such devices usually are constrained with regard to power consumption and hence with regard to transmit power, so that usually the transmission power of the listening device 16 will be lower than the transmission power of the streaming device 14.
  • the back link 40 may use a modulation setting which is different from that used by the wireless (forward) stream 30.
  • a lower data rate may be used in the back link 40 for improving the sensitivity, so as to compensate for the range reduction resulting from the reduced transmission power (a lower data rate allows for a narrower RF channel, so that the receiver captures less noise energy and can therefore detect weaker RF signals, resulting in improved receiver sensitivity).
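  • As a purely illustrative aside on the relationship noted in the parenthesis, receiver sensitivity can be approximated from the thermal noise floor of the channel bandwidth; the noise figure and required SNR values below are assumptions chosen only to make the example concrete:

```python
import math

def receiver_sensitivity_dbm(bandwidth_hz: float,
                             noise_figure_db: float = 8.0,
                             required_snr_db: float = 10.0) -> float:
    """Approximate sensitivity = thermal noise floor (kTB) + noise figure + required SNR."""
    thermal_noise_dbm = -174.0 + 10.0 * math.log10(bandwidth_hz)  # kTB at ~290 K, in dBm
    return thermal_noise_dbm + noise_figure_db + required_snr_db

# Halving the channel bandwidth (as a lower data rate permits) gains roughly 3 dB:
print(receiver_sensitivity_dbm(2e6))  # ~-93 dBm for a 2 MHz channel
print(receiver_sensitivity_dbm(1e6))  # ~-96 dBm for a 1 MHz channel
```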
  • the extracted feature(s) should allow an accurate and stable estimate of the stream latency, should be robust against room acoustics and should be lightweight in terms of bit rate.
  • Suitable examples for the extracted feature(s) are at least one of a normalized spectrogram in the Bark domain, a time domain signal envelope, occurrences of speech pauses or silence, or a normalized magnitude output of one or more bandpass filters.
  • the listening device 16 comprises a feature extraction unit 48 for extracting the desired feature(s) from the second audio signal 38 captured by the microphone arrangement 42 of the listening device 16 from the acoustic stream 18, so as to generate the reference signal 46 to be transmitted via the back link 40 to the streaming device 14.
  • the wireless interfaces 28, 32 operate in the 2.4 GHz ISM band.
  • the interfaces 28, 32 may use a version of the Bluetooth low energy protocol which allows to use different modulation settings in both directions.
  • the streaming device 14 comprises a feature extraction unit 44 for extracting the same feature(s) as included in the reference signal 46 received by the wireless interface 28.
  • the output signal 50 of the feature extraction unit 44 and the reference signal 46 are supplied to a latency estimation unit 52 of the delay control unit 55 for estimating the time delay or latency of the reference signal 46 (which is representative of the reception of the acoustic stream 18 by the listening device 16 - and hence by the user 10) with regard to the first audio signal 20 (which is representative of the reception of the audio source signal via the wireless stream 30).
  • the latency estimation unit 52 may, for example, shift the feature signals 50 and 46 relative to each other so as to determine the best match, corresponding to the delay / latency of the reference signal 46 relative to the feature signal 50 and hence to the first audio signal 20.
  • the best match may be determined, for example by a correlation analysis or a maximum absolute difference (MAD) analysis or any other pattern matching method.
  • the latency estimation unit 52 also may provide a confidence level of the latency estimation. For example, if f(x) is the pattern matching method used in the latency estimation unit 52, then argmax(f(x)) provides the latency estimate and max(f(x)) can be used as a metric of confidence in the latency estimate. For example, such a confidence metric may be used to adjust the applied compensation delay only when it is determined that the user is actually watching TV, which situation is reflected by a high proportion of the TV audio stream in the acoustic field captured by the microphone arrangement 42 of the listening device 16 (typically, when the user is watching TV, there is no concurrent conversation with other persons and the acoustic pickup faces the TV set). The confidence level metric reflects the fact that the delay estimation in the latency estimation unit 52 loses precision when the sound at the listening device 16 contains a significant contribution from sources other than the TV audio stream 18, such as house noises or the partner's voice.
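  • A minimal, purely illustrative sketch of such a latency estimation with a correlation-based confidence metric is shown below; the frame period, search range and normalization are assumptions and not details taken from this disclosure:

```python
import numpy as np

def estimate_latency(local_feat: np.ndarray, ref_feat: np.ndarray,
                     frame_period_ms: float = 10.0, max_lag: int = 100):
    """Shift the reference features against the locally extracted features and return
    (latency_ms, confidence), where confidence is the maximum normalized correlation."""
    scores = []
    for lag in range(max_lag + 1):
        a = ref_feat[lag:]                   # reference (acoustic path) lags the local features
        b = local_feat[: len(a)]
        m = min(len(a), len(b))
        a, b = a[:m], b[:m]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        scores.append(float(np.dot(a, b) / denom) if denom > 0 else 0.0)
    scores = np.asarray(scores)
    best_lag = int(np.argmax(scores))        # argmax(f(x)) -> latency estimate
    confidence = float(scores[best_lag])     # max(f(x))    -> confidence metric
    return best_lag * frame_period_ms, confidence
```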
  • the back link 40 may be used to transmit, in addition to the reference signal 46 representative of the second audio signal 38 captured by the microphone arrangement 42 of the listening device 16, metadata like information on the user's physical activity (for example, whether or not the user is presently walking).
  • the latency estimation unit 52 may set the compensation delay to be applied in the delay application unit 26 based on the latency estimation obtained by the pattern matching method, taking into account in addition at least one of the confidence level, the metadata indicative of the user activity and the stability in time of those parameters.
  • the delay control unit 55 may add a delay offset to the latency estimation provided by the latency estimation unit 52 so as to account for a latency of the feature extraction and for a latency in the transmission of the reference signal 46, and any other potential factor of bias.
  • the desirable compensation delay to be applied in the delay application unit 26 may be continuously/regularly determined by the latency estimation unit 52 and may be updated accordingly when necessary. However, as already mentioned above, it may be desirable to update the compensation delay only during times when it is determined that the user of the listening device 16 is actually watching TV. Further, in order to avoid unpleasant user experience, the compensation delay may be updated only if the updated compensation delay differs from the presently applied compensation delay at least by a given minimum difference. Also, the compensation delay may be updated only when it is determined that the confidence level is greater than a threshold value.
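  • Purely for illustration, the update policy described in the previous paragraph can be summarized as a simple gating rule; the function name and threshold values below are assumptions, not values given in this disclosure:

```python
def should_update_delay(new_delay_ms: float, current_delay_ms: float,
                        confidence: float, user_watching: bool,
                        min_step_ms: float = 5.0, min_confidence: float = 0.6) -> bool:
    """Apply a newly estimated compensation delay only when all gating conditions hold."""
    if not user_watching:                                   # user not attending to the TV audio
        return False
    if confidence < min_confidence:                         # latency estimate not trustworthy
        return False
    if abs(new_delay_ms - current_delay_ms) < min_step_ms:  # change too small to be worthwhile
        return False
    return True
```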
  • a soft transition should be initiated in the audio path to avoid unpleasant audio artefacts.
  • a cross fading method may be implemented, wherein two delay application units 26A and 26B are provided for applying, in parallel, the old compensation delay A in unit 26A and the new compensation delay B in unit 26B to the first audio signal 20.
  • the final delayed signal 27, which is supplied to the wireless interface 28, is obtained as a weighted sum of the two parallel signals, wherein the weight (“cross fading gain”) is gradually increased from 0 to 1.
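  • The following is an illustrative block-level sketch of such a cross fade between the two parallel delay paths; it is not taken from the disclosure, and the sample-domain delays and ramp length are assumed parameters:

```python
import numpy as np

def crossfade_delays(signal: np.ndarray, old_delay: int, new_delay: int,
                     fade_len: int) -> np.ndarray:
    """Blend the signal delayed by old_delay samples into the signal delayed by
    new_delay samples, with a cross fading gain that ramps from 0 to 1."""
    delayed_old = np.concatenate([np.zeros(old_delay), signal])[: len(signal)]
    delayed_new = np.concatenate([np.zeros(new_delay), signal])[: len(signal)]
    gain = np.clip(np.arange(len(signal)) / float(fade_len), 0.0, 1.0)  # 0 -> 1, then held at 1
    return (1.0 - gain) * delayed_old + gain * delayed_new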
  • a modified example of the system of Fig. 2 is shown, wherein the system in addition implements a distance measurement feature for estimating and taking into account the distance between the streaming device 14 and the listening device 16 when setting the compensation delay.
  • Such distance measurement can be achieved by using the bidirectional wireless communication between the wireless interfaces 32, 28 of the streaming device 14 and the listening device 16 via the wireless forward link 30 and the wireless back link 40.
  • wireless distance measurements can be conducted based on the received signal strength, such as given by the Received Signal Strength Indicator (RSSI) value
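  • Purely as an illustration, an RSSI-based distance estimate may follow the common log-distance path loss model; the reference power at 1 m and the path loss exponent below are assumptions that would need calibration in practice:

```python
def distance_from_rssi_m(rssi_dbm: float, rssi_at_1m_dbm: float = -45.0,
                         path_loss_exponent: float = 2.2) -> float:
    """Log-distance path loss model: RSSI(d) = RSSI(1 m) - 10 * n * log10(d / 1 m)."""
    return 10.0 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))
```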
  • a distance measurement between the listening device 16 and the streaming device 14 may be executed periodically with period lengths of 0.1 to 10 sec.
  • the distance measurement may be initiated by the listening device 16, or it may be initiated by the streaming device 14. In the latter case, the listening device 16 may be notified regarding the measurement via a control message transmitted on the forward link 30.
  • the distance measurement information may be used to reduce power consumption required for transmission of the reference signal by transmitting the reference signal 46 from the listening device 16 to the streaming device 14 and conducting a determination of the reference signal latency estimate in the latency estimation unit 52 only when it has been determined that the distance between the streaming device 14 and the listening device 16 has changed by at least a certain predefined amount (which indicates, for example, that the user 10 has moved or is moving).
  • the distance measurement information may be used for improving the audio latency estimation algorithm.
  • a distance measurement result can be used to compute an estimate of the expected latency, which estimate can be used to improve or simplify the latency estimation algorithm.
  • a travelling time of the acoustic stream 18 from the audio source device 12 to the listening device 16 may be estimated based on the measured distance (for example, typically the streaming device 14 will be located close to the audio source device 12 and/or the typical distance between the streaming device 14 and the audio source device 12 will be known, as these components do not move during use); the measured distance can be used as some kind of calibration for the reference signal latency estimate.
  • the streaming device 14 may be provided with a microphone arrangement 60 for capturing an audio signal from the acoustic stream 18 provided by the loudspeaker arrangement 22 of the audio source device 12, so as to determine a delay between such captured audio signal 62 and the first audio signal 20 received from the audio source device 12 in a delay measurement unit 64.
  • the signal delay determined in the delay measurement unit 64 is representative of the delay of the arrival of the acoustic stream 18 at the streaming device 14 relative to the digital signal provided at the output 24 of the audio source device 12.
  • the expected total latency of the audio signal in the acoustic stream 18 received at the listening device 16 (and hence at the user 10) relative to the wirelessly received audio signal corresponds approximately to the sum of the delay between the audio signal 62 captured by the microphone arrangement 60 of the streaming device 14 and the first audio signal 20 from the audio source device 12 (i.e., the delay determined by the delay measurement unit 64) and the propagation delay of the acoustic stream 18 resulting from the distance of the user 10 to the loudspeaker arrangement 22 of the audio source device 12.
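  • Purely as an illustration of this sum (assuming, as noted above, that the streaming device 14 is located close to the loudspeaker arrangement 22 so that the radio-measured distance approximates the remaining acoustic path), the expected latency can be written as a one-line estimate:

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air

def expected_total_latency_ms(delay_at_streamer_ms: float,
                              streamer_to_listener_m: float) -> float:
    """Expected latency of the acoustic stream at the listening device relative to the
    first audio signal: locally measured delay plus acoustic propagation over the
    radio-measured distance."""
    propagation_ms = streamer_to_listener_m / SPEED_OF_SOUND_M_S * 1000.0
    return delay_at_streamer_ms + propagation_ms
```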
  • the audio latency estimation algorithm applied in the latency estimation unit 52 of the streaming device 14 may be further improved by taking into account the latency measured by the delay measurement unit 64 of the streaming device 14 in addition to the measured distance between the streaming device 14 and the listening device 16.
  • the latency of the acoustic stream 18 may be computed based on the distance measurement and a latency measurement via the microphone arrangement 60 at the streaming device 14.
  • an appropriate compensation delay may be applied by the delay application unit 26 without the need for having the listening device 16 transmit a reference signal containing audio features of the audio signal captured by the microphone arrangement 42 of the listening device 16.
  • the wireless back link 40 would be required for the distance measurement only.
  • the distance between the loudspeaker arrangement 22 of the audio source device 12 and the listening device 16 may be estimated from the estimate of the latency of the reference signal with regard to the first audio signal.
  • Such estimated distance is representative of the distance of the user of the listening device 16 from the loudspeaker arrangement 22 of the audio source device 12; this user distance information may be utilized in several ways, on the streaming side and/or on the listening side.
  • the listening device 16 may decide not to render the wireless audio stream at all if the distance is above a certain threshold. This threshold may even be given as a user setting, e.g., on a smartphone application of the listening device 16.
  • the system may provide two user distance estimates, one for each ear. If the user is not facing the audio source device 12 (or its loudspeaker arrangement 22), then the acoustic streams 18 arrive at some angle on the user's side, which angle can be estimated from the difference in the left ear and right ear user distance estimates.
  • the listening devices can apply angle-of-incidence-dependent filters (i.e., Head Related Transfer Functions) on the received wireless audio stream 30 such that the user perceives the wireless audio stream as arriving from the same direction as the acoustic audio stream.
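  • A rough, purely illustrative sketch of how such an angle could be derived from the two per-ear distance estimates is given below; the far-field approximation and the assumed ear spacing are not taken from this disclosure:

```python
import math

def incidence_angle_deg(dist_left_m: float, dist_right_m: float,
                        ear_spacing_m: float = 0.18) -> float:
    """Far-field approximation: the inter-ear path difference equals ear_spacing * sin(angle).
    0 degrees means the source is straight ahead; positive values mean it is closer to the left ear."""
    path_diff = dist_right_m - dist_left_m
    ratio = max(-1.0, min(1.0, path_diff / ear_spacing_m))
    return math.degrees(math.asin(ratio))
```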

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)
EP22197088.2A 2022-09-22 2022-09-22 System and method for providing hearing assistance Pending EP4344252A1 (de)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22197088.2A EP4344252A1 (de) 2022-09-22 2022-09-22 System und verfahren zur bereitstellung von hörhilfe

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP22197088.2A EP4344252A1 (de) 2022-09-22 2022-09-22 System und verfahren zur bereitstellung von hörhilfe

Publications (1)

Publication Number Publication Date
EP4344252A1 true EP4344252A1 (de) 2024-03-27

Family

ID=83438220

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22197088.2A Pending EP4344252A1 (de) 2022-09-22 2022-09-22 System und verfahren zur bereitstellung von hörhilfe

Country Status (1)

Country Link
EP (1) EP4344252A1 (de)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994004010A1 (en) 1992-07-30 1994-02-17 Clair Bros. Audio Enterprises, Inc. Concert audio system
WO2010133246A1 (en) 2009-05-18 2010-11-25 Oticon A/S Signal enhancement using wireless streaming
US20110142268A1 (en) 2009-06-08 2011-06-16 Kaoru Iwakuni Hearing aid, relay device, hearing-aid system, hearing-aid method, program, and integrated circuit
WO2012048299A1 (en) 2010-10-07 2012-04-12 Clair Brothers Audio Enterprises, Inc. Method and system for enhancing sound
US20120300958A1 (en) * 2011-05-23 2012-11-29 Bjarne Klemmensen Method of identifying a wireless communication channel in a sound system
US20190116434A1 (en) * 2017-10-16 2019-04-18 Intricon Corporation Head Direction Hearing Assist Switching
US20190295525A1 (en) 2018-03-22 2019-09-26 Sennheiser Electronic Gmbh & Co. Kg Method and Device for Generating and Providing an Audio Signal for Enhancing a Hearing Impression at Live Events
US20200314562A1 (en) * 2019-03-28 2020-10-01 Oticon A/S Hearing device or system for evaluating and selecting an external audio source
WO2021087524A1 (en) * 2019-10-30 2021-05-06 Starkey Laboratories, Inc. Generating an audio signal from multiple inputs
US20220167098A1 (en) * 2020-11-25 2022-05-26 James C. Young Hearing aid

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MYO MIN THEIN: "Comparing the Accuracy of Bluetooth Low Energy and UWB Technology for In-room Positioning", April 2019, WORCESTER POLYTECHNIC INSTITUTE

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR