CN105744455B - Method for superimposing spatial auditory cues on externally picked-up microphone signals - Google Patents

Method for superimposing spatial auditory cues on externally picked-up microphone signals

Info

Publication number
CN105744455B
Authority
CN
China
Prior art keywords
microphone signal
hearing
signal
external microphone
hearing assistance
Prior art date
Legal status
Expired - Fee Related
Application number
CN201511009209.5A
Other languages
Chinese (zh)
Other versions
CN105744455A (en
Inventor
K-F·J·格兰
J·乌德森
Current Assignee
GN Hearing AS
Original Assignee
GN Resound AS
Priority date
Filing date
Publication date
Application filed by GN Resound AS
Publication of CN105744455A
Application granted
Publication of CN105744455B

Classifications

    • H04R 25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; electric tinnitus maskers providing an auditory perception
    • H04R 25/43: Electronic input selection or mixing based on input signal analysis, e.g. mixing or selection between microphone and telecoil or between microphones with different directivity characteristics
    • H04R 2420/01: Input selection or mixing for amplifiers or loudspeakers
    • H04R 2430/20: Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic

Abstract

The present disclosure relates in a first aspect to a method of superimposing spatial auditory cues to an externally picked up sound signal in a hearing instrument. The method comprises the steps of generating an external microphone signal by an external microphone device and transmitting the external microphone signal via a first wireless communication link to a wireless receiver of the first hearing instrument. The method further comprises the steps of determining a response characteristic of the first spatial synthesis filter by correlating the external microphone signal with a first hearing assistance microphone signal of the first hearing instrument; and filtering, by the first spatial synthesis filter, the external microphone signal to produce a first synthesized microphone signal comprising a first spatial auditory cue.

Description

Method for superimposing spatial auditory cues on externally picked-up microphone signals
Technical Field
The present disclosure relates in a first aspect to a method of superimposing spatial auditory cues to an externally picked up sound signal in a hearing instrument. The method comprises the steps of generating an external microphone signal by an external microphone device and transmitting the external microphone signal to a wireless receiver of the first hearing instrument via a first wireless communication link. Further steps of the method include determining a response characteristic of the first spatial synthesis filter by correlating the external microphone signal with a first hearing assistance microphone signal of the first hearing instrument; and filtering the external microphone signal by a first spatial synthesis filter to produce a first synthesized microphone signal comprising the first spatial auditory cue.
Background
Hearing instruments or hearing aids typically comprise a microphone arrangement comprising one or more microphones for receiving incoming sounds, such as speech and music signals. The incoming sound is converted into an electrical microphone signal or signals which are amplified and processed in the control and processing circuitry of the hearing instrument according to the parameters set for one or more preset listening programs. The parameter settings for each listening program have typically been calculated from the specific hearing deficiency or loss of the hearing impaired individual, for example expressed in an algorithm. The output amplifier of the hearing instrument delivers a processed (i.e. hearing loss compensated) microphone signal to the ear canal of the user via an output transducer such as a micro-speaker, a receiver or possibly an electrode array. The micro-speaker or receiver may be provided in combination with the microphone device in the interior of the shell or casing of the hearing instrument, or separately in an ear plug or earpiece of the hearing instrument.
Hearing impaired persons often suffer from a loss of hearing sensitivity which depends on both the frequency and the level of the sound in question. Thus, a hearing impaired person may be able to hear certain frequencies (e.g., low frequencies) as well as a normal hearing person, but unable to hear sound at other frequencies (e.g., high frequencies) with the same sensitivity as a normal hearing individual. Likewise, a hearing impaired person may perceive loud sounds, e.g. above 90 dB SPL, with the same intensity as a normal hearing person, yet be unable to hear soft sounds with the same sensitivity as a normal hearing person. Thus, in the latter case, the hearing impaired person suffers from a loss of dynamic range at certain frequencies or frequency bands.
In addition to the above-mentioned frequency and level dependent hearing loss, the hearing loss often results in a reduced ability to distinguish between competing or interfering sound sources, for example in a noisy sound environment with multiple active speakers and/or noise sources. A healthy auditory system relies on the well-known cocktail party effect to distinguish between competing or interfering sound sources under such harsh listening conditions, where the signal-to-noise ratio (SNR) of the sound at the listener's ear may be very low, e.g. around 0 dB. The cocktail party effect relies, inter alia, on spatial auditory cues in the competing or interfering sound sources to perform discrimination based on the spatial localization of the competing sound sources. Under such poor listening conditions, the SNR of the sound received at the ear of a hearing impaired individual may be so low that the hearing impaired individual cannot detect and use spatial auditory cues to separate the sound streams of the different competing sound sources. This results in a severely degraded ability of hearing impaired persons to hear and understand speech in a noisy sound environment compared to normal hearing persons.
Many prior art analog and digital hearing aids have been designed to alleviate the above-identified hearing deficiencies in noisy sound environments. Common approaches to solving this problem have applied SNR enhancement techniques to the hearing assistance microphone signals, such as various types of fixed or adaptive beamforming, to provide enhanced directivity. Whether based on wireless technology or not, these techniques have been shown to have limited impact. With the introduction of wireless hearing assistance technology and accessories, it has become possible to place external microphone devices close to, or in some cases on, the target sound source (e.g., via a belt or shirt clip). The external microphone device may, for example, be housed in a portable unit that is disposed in proximity to a speaker, such as a teacher in a classroom environment. Due to the proximity of the microphone arrangement to the target sound source, an external microphone signal containing the target sound signal can be generated with a significantly higher SNR than the SNR of the same target sound signal recorded/received at the hearing instrument microphone. The external microphone signal is sent to the wireless receiver of the left and/or right ear hearing instrument via a suitable wireless communication link or links. The wireless communication link or links may be based on proprietary or industry standard wireless technologies, such as Bluetooth. The hearing instrument or instruments thereafter reproduce the external microphone signal, with the improved SNR of the target sound signal, via a suitable processor and output transducer to one or both ears of the hearing assistance user.
However, the external microphone signals generated by such prior art external microphone devices lack spatial auditory cues due to their distant or remote location in the sound field. The distant or remote location is typically away from the head and ears of the hearing assistance user, e.g., more than 5 meters or 10 meters away. The lack of these spatial auditory cues during reproduction of the external microphone signal in the hearing instrument or instruments leads to an artificial and unpleasant internalized perception of the target sound source: the sound source appears to be placed inside the head of the hearing assistance user. It would therefore be advantageous to be able to reproduce an externally recorded or picked-up sound signal to a hearing assistance user or patient with appropriate spatial cues that provide a more natural sound perception. This problem has been addressed and solved by one or more of the embodiments described herein by generating and superimposing appropriate spatial auditory cues on the remotely recorded or picked-up microphone signals in a hearing instrument.
Disclosure of Invention
A first aspect relates to a method of superimposing spatial auditory cues to an externally picked-up sound signal in a hearing instrument, comprising the steps of:
a) generating, by an external microphone device located in the sound field, an external microphone signal in response to the impinging sound,
b) transmitting the external microphone signal to the wireless receiver of the first hearing instrument via the first wireless communication link,
c) generating, by a microphone arrangement of the first hearing instrument, a first hearing assistance microphone signal while receiving the external microphone signal, wherein the first hearing instrument is placed in the sound field at or in a left or right ear of a user,
d) determining a response characteristic of the first spatial synthesis filter by correlating the external microphone signal with the first hearing assistance microphone signal,
e) in the first hearing instrument, the received external microphone signal is filtered by a first spatial synthesis filter to produce a first synthesized microphone signal comprising first spatial auditory cues.
The present disclosure addresses and solves the above-discussed prior art problems with artificial and unpleasant internalized perception of a target sound source when it is reproduced via a remotely placed external microphone device rather than picked up by the first hearing instrument's own microphone arrangement. The determination of the frequency response characteristic, or equivalent impulse response characteristic, of the first spatial synthesis filter according to some embodiments allows appropriate spatial auditory cues to be added or superimposed onto the received external microphone signal. These spatial auditory cues largely correspond to the auditory cues generated by sound propagating from the real spatial position of the target sound source in the sound field, relative to the hearing aid user's head, to the position where the first hearing instrument is placed. The proximity between the external microphone arrangement and the target sound source ensures that the external microphone signal generally possesses a significantly higher signal-to-noise ratio than that of the target sound picked up by the microphone arrangement of the first hearing instrument as the first hearing assistance microphone signal. The microphone arrangement of the first hearing instrument is preferably accommodated within a housing or casing of the first hearing instrument such that it is arranged at or in the left or right ear of the hearing assistance user, as the case may be. Those skilled in the art will appreciate that the first hearing instrument may comprise different types of hearing instruments, such as so-called BTE types, ITE types, CIC types, RIC types, etc. Thus, the microphone arrangement of the first hearing instrument may be located at various positions at or in the user's ear, such as behind the pinna of the user, within the outer ear of the user, or within the ear canal of the user.
A significant advantage is that the first spatial synthesis filter can be determined from the first hearing assistance microphone signal and the external microphone signal without involving a second hearing assistance microphone signal picked up at the other ear of the user. Thus, there is no need for binaural communication of the first and second hearing assistance microphone signals between the first or left ear hearing instrument and the second or right ear hearing instrument. Direct communication of this kind between the first and second hearing instruments would require the presence of a wireless transmitter in at least one of the first and second hearing instruments, resulting in increased power consumption and complexity of the hearing instrument in question.
The process preferably comprises the further steps of:
f) processing, by a first hearing assistance signal processor, the first composite microphone signal according to individual hearing loss data of the user to produce a first hearing loss compensated output signal of the first hearing instrument;
g) presenting a first hearing loss compensated output signal to a left ear or a right ear of a user through a first output transducer. The first output transducer may comprise a micro-speaker or receiver disposed inside a housing or casing of the first hearing instrument or separately disposed in an ear bud or earpiece of the first hearing instrument. The performance of the first hearing assistance signal processor is discussed below.
Another embodiment of the method comprises superimposing respective spatial auditory cues to the remotely picked-up sound signal for both the left ear or first hearing instrument and the right ear or second hearing instrument. This embodiment enables the generation of binaural spatial auditory cues for the hearing impaired individual, exploiting the benefits associated with binaural processing of acoustic signals propagating in the sound field (such as the target sound of a target sound source). The binaural method of superimposing spatial auditory cues to a remotely picked-up sound signal comprises the further steps of:
b1) transmitting the external microphone signal via a second wireless communication link to a wireless receiver of a second hearing instrument,
c1) generating, by a microphone arrangement of a second hearing instrument, a second hearing assistance microphone signal in synchronism with receiving the external microphone signal, wherein the second hearing instrument is placed in the sound field at or in the other ear of the user,
d1) determining a response characteristic of the second spatial synthesis filter by correlating the external microphone signal with the second hearing assistance microphone signal,
e1) in the second hearing instrument, the received external microphone signal is filtered by a second spatial synthesis filter to produce a second synthesized microphone signal comprising second spatial auditory cues. The binaural method may comprise performing the further steps of:
f1) processing, by a second hearing assistance signal processor of a second hearing instrument, the second composite microphone signal according to the individual hearing loss data of the user to produce a second hearing loss compensated output signal of the second hearing instrument;
g1) presenting a second hearing loss compensated output signal to the other ear of the user via a second output transducer.
In an embodiment of the method, the step of processing the first composite microphone signal comprises:
the first synthesized microphone signal and the first hearing assistance microphone signal are mixed at a first ratio to produce a hearing loss compensated output signal.
According to one such embodiment, the ratio between the first composite microphone signal and the first hearing assistance microphone signal is varied in dependence on the signal-to-noise ratio of the first hearing assistance microphone signal. Several advantages associated with mixing the first composite microphone signal and the first hearing assistance microphone signal are discussed in detail below in conjunction with the figures.
The skilled person will understand that there are many ways of correlating the external microphone signal and the first hearing assistance microphone signal to determine the response characteristic of the first spatial synthesis filter according to step d) and/or step d1) described above. In an embodiment of the method, the external microphone signal and the first hearing assistance microphone signal are cross-correlated to determine a time delay between these signals. This embodiment additionally comprises the steps of: determining a level difference between the external microphone signal and the first hearing assistance microphone signal based on the cross-correlation of the external microphone signal and the first hearing assistance microphone signal; and determining the response characteristic of the first spatial synthesis filter by multiplying a Delta function located at the determined time delay by the determined level difference.
The cross-correlation of the external microphone signal sE(t) and the first hearing assistance microphone signal sL(t) may be computed according to
rL(t) = ∫ sE(τ) sL(t + τ) dτ;
the time delay τL between the external microphone signal and the first hearing assistance microphone signal is determined from the cross-correlation rL(t) as:
τL = arg max_t rL(t);
the level difference AL between the external microphone signal sE(t) and the first hearing assistance microphone signal sL(t) may be determined according to
AL = √( ∫ sL²(t) dt / ∫ sE²(t) dt ).
Finally, the impulse response gL(t) of the first spatial synthesis filter, representing the response characteristic of the first spatial synthesis filter, may be determined according to gL(t) = ALδ(t − τL).
The first synthesized microphone signal may be generated in the time domain from the impulse response gL(t) of the first spatial synthesis filter by the further step of:
- convolving the external microphone signal with the impulse response of the first spatial synthesis filter. Those skilled in the art will appreciate that the first synthesized microphone signal may alternatively be generated in the frequency domain from the respective frequency response of the first spatial synthesis filter and a frequency-domain representation of the external microphone signal, e.g. via DFT or FFT representations of the first spatial synthesis filter and the external microphone signal.
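By way of illustration only, the delay-and-level variant of this embodiment could be prototyped on block-wise digital signals roughly as in the following Python/NumPy sketch. The function names, the causal clamping of negative lags, and the use of an RMS ratio for the level difference AL are assumptions of the sketch, not requirements of the disclosure.

```python
import numpy as np

def estimate_spatial_filter(s_e, s_l, fs):
    """Estimate the time delay and level difference between the external microphone
    signal s_e (sE) and the hearing-aid microphone signal s_l (sL), and build the
    delay-and-scale impulse response gL(t) = AL * delta(t - tauL)."""
    # Cross-correlation rL between the external and hearing-aid microphone signals
    r = np.correlate(s_l, s_e, mode="full")
    lags = np.arange(-len(s_e) + 1, len(s_l))
    tau = int(lags[np.argmax(r)])          # tauL = arg max_t rL(t), in samples
    tau = max(tau, 0)                      # keep the sketch filter causal
    # Level difference AL taken here as the RMS ratio of the two signals (assumption)
    a_l = np.sqrt(np.sum(s_l ** 2) / np.sum(s_e ** 2))
    g = np.zeros(tau + 1)                  # impulse response: scaled delta at tauL
    g[tau] = a_l
    return g, tau / fs, a_l

def synthesize(s_e, g):
    """Superimpose the spatial cue by convolution: yL(t) = (gL * sE)(t)."""
    return np.convolve(s_e, g)[: len(s_e)]
```

Run block-wise on the incoming signals, the returned impulse response simply delays and scales the external microphone signal, which is the discrete-time counterpart of gL(t) = ALδ(t − τL) followed by the convolution step described above.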
In an alternative embodiment of the method, the correlation of the external microphone signal and the first hearing assistance microphone signal for determining the response characteristic of the first spatial synthesis filter according to step d) and/or step d1) above comprises:
determining the impulse response gL(t) of the first spatial synthesis filter according to
gL(t) = arg min_g ∫ ( sL(t) − (g * sE)(t) )² dt,
wherein gL(t) represents the impulse response of the first spatial synthesis filter.
A significant advantage of the latter embodiment is that the impulse response gL(t) of the first spatial synthesis filter, and correspondingly that of the second spatial synthesis filter, may be calculated in real time as an adaptive filter by a suitably configured or programmed signal processor of the first and/or second hearing instrument. The solution for gL(t) may include filtering the external microphone signal by a first adaptive filter to produce the first synthesized microphone signal as the output of the adaptive filter, subtracting the first synthesized microphone signal output by the first adaptive filter from the first hearing assistance microphone signal to produce an error signal, and adapting the filter coefficients of the first adaptive filter according to a predetermined adaptive algorithm to minimize the error signal. These adaptive-filter-based embodiments of the first spatial synthesis filter are discussed in detail below in conjunction with the figures.
A second aspect relates to a hearing aid system comprising a first hearing instrument and a portable external microphone unit. The portable external microphone unit includes:
microphone means for placement in a sound field and for generating an external microphone signal in response to impinging sound,
a first wireless transmitter configured to transmit an external microphone signal via a first wireless communication link. A first hearing instrument of a hearing aid system comprises:
a hearing assistance housing or shell configured to be placed at or in a left or right ear of a user,
a first wireless receiver configured to receive an external microphone signal via a first wireless communication link,
a first hearing assistance microphone configured to generate a first hearing assistance microphone signal in response to sound while the external microphone signal is being received,
a first signal processor configured to determine a response characteristic of the first spatial synthesis filter by correlating the external microphone signal with the first hearing assistance microphone signal. The first signal processor is further configured to filter the received external microphone signal by a first spatial synthesis filter to produce a first synthesized microphone signal comprising a first spatial auditory cue.
As discussed above, the hearing aid system may be configured for binaural use and processing of external microphone signals such that the first hearing instrument is disposed at or in the user's left or right ear and the second hearing instrument is disposed at or in the user's other ear. Accordingly, the hearing aid system may comprise a second hearing instrument comprising:
a second hearing aid auxiliary housing or casing configured to be placed at or in the other ear of the user,
a second wireless receiver configured to receive an external microphone signal via a second wireless communication link,
a second hearing assistance microphone configured to generate a second hearing assistance microphone signal in response to sound while the external microphone signal is being received,
a second signal processor configured to correlate the external microphone signal and the second hearing assistance microphone signal to determine a response characteristic of the second spatial synthesis filter, wherein the second signal processor is further configured to filter the received external microphone signal by the second spatial synthesis filter to produce a second synthesized microphone signal comprising a second spatial auditory cue.
The signal processing functions of each of the first and/or second signal processors may be performed or embodied by dedicated digital hardware or by one or more computer programs, program routines, and threads of execution running on a software programmable signal processor or processors. Each of the computer program, routine, and thread of execution may include a plurality of executable program instructions. Alternatively, the signal processing functions may be performed by a combination of dedicated digital hardware and executing computer programs, routines, and threads running on a software programmable signal processor or processors. Each of the above-described methods of correlating the external microphone signal and the second hearing assistance microphone signal may be performed by a computer program, program routine or thread executable on a suitable software programmable microprocessor, such as a programmable digital signal processor. The microprocessor and/or dedicated digital hardware may be integrated on an ASIC or implemented on an FPGA device. Likewise, the filtering of the external microphone signal received by the first spatial synthesis filter may be performed by a computer program, program routine or thread of execution executable on a suitable software programmable microprocessor, such as a programmable digital signal processor. Software programmable microprocessors and/or dedicated digital hardware may be integrated on an ASIC or implemented on an FPGA device.
Each of the first and second wireless communication links may be based on RF transmission of the external microphone signal to the first and/or second hearing instrument, e.g. analog FM technology or various types of digital transmission technology, e.g. compliant with a Bluetooth standard such as Bluetooth LE, or other standardized RF communication protocols. In the alternative, each of the first and second wireless communication links may be based on optical signal transmission. The same type of wireless communication technology is preferably used for the first and second wireless communication links to minimize system complexity.
A method of superimposing spatial auditory cues to an externally picked-up sound signal in a hearing instrument comprises:
receiving, via a first wireless communication link, an external microphone signal from an external microphone located in the sound field, wherein the receiving act is performed using a wireless receiver of the first hearing instrument; generating, by a microphone system of the first hearing instrument, a first hearing assistance microphone signal, wherein the first hearing instrument is located at or in a left or right ear of a user; determining a response characteristic of a first spatial synthesis filter by correlating the external microphone signal and the first hearing assistance microphone signal; and in the first hearing instrument, filtering the received external microphone signal by the first spatial synthesis filter to produce a first synthesized microphone signal comprising a first spatial auditory cue.
Optionally, the microphone system may comprise one or more microphones.
Optionally, the method further comprises: processing, by a first signal processor, the first composite microphone signal according to individual hearing loss data of the user to produce a first hearing loss compensated output signal of the first hearing instrument; and presenting the first hearing loss compensated output signal to the left or right ear of the user through the first output transducer.
Optionally, the method further comprises: receiving the external microphone signal via a second wireless communication link, wherein the act of receiving the external microphone signal via the second wireless communication link is performed using a wireless receiver of a second hearing instrument; generating, by a microphone system of a second hearing instrument, a second hearing assistance microphone signal when the external microphone signal is received by the second hearing instrument, wherein the first and second hearing instruments are located at or in the left and right ear, respectively, or vice versa; determining a response characteristic of a second spatial synthesis filter by correlating the external microphone signal and the second hearing assistance microphone signal; and in the second hearing instrument, filtering the received external microphone signal by a second spatial synthesis filter to produce a second synthesized microphone signal comprising a second spatial auditory cue.
Optionally, the act of processing the first composite microphone signal comprises mixing the first composite microphone signal and the first hearing assistance microphone signal at a first ratio to produce the hearing loss compensated output signal.
Optionally, the method further comprises varying a ratio between the first composite microphone signal and the first hearing assistance microphone signal in dependence on the signal-to-noise ratio.
Optionally, the act of determining the response characteristic comprises: cross-correlating the external microphone signal and the first hearing assistance microphone signal to determine a time delay between the external microphone signal and the first hearing assistance microphone signal; determining a level difference between the external microphone signal and the first hearing assistance microphone signal based on a result from the act of cross-correlating; and determining a response characteristic of the first spatial synthesis filter by multiplying a Delta function associated with the determined time delay by the determined level difference.
Optionally, the act of correlating the external microphone signal with the first hearing assistance microphone signal comprises determining rL(t) according to
rL(t) = ∫ sE(τ) sL(t + τ) dτ,
wherein sE(t) denotes the external microphone signal and sL(t) denotes the first hearing assistance microphone signal; the time delay between the external microphone signal and the first hearing assistance microphone signal is determined according to τL = arg max_t rL(t),
wherein τL represents the time delay; the act of determining the level difference between the external microphone signal sE(t) and the first hearing assistance microphone signal sL(t) is performed according to
AL = √( ∫ sL²(t) dt / ∫ sE²(t) dt ),
wherein AL represents the level difference; and the act of determining the response characteristic comprises determining the impulse response gL(t) of the first spatial synthesis filter according to gL(t) = ALδ(t − τL).
Optionally, the first composite microphone signal is also generated by convolving the external microphone signal with the impulse response of the first spatial composite filter.
Optionally, the act of determining the response characteristic comprises determining the impulse response gL(t) of the first spatial synthesis filter according to
gL(t) = arg min_g ∫ ( sL(t) − (g * sE)(t) )² dt,
wherein gL(t) denotes the impulse response of the first spatial synthesis filter, sE(t) denotes the external microphone signal, and sL(t) denotes the first hearing assistance microphone signal.
Optionally, the method further comprises: subtracting the first synthesized microphone signal from the first hearing assistance microphone signal to generate an error signal; and determining the filter coefficients of a first adaptive filter according to a predetermined adaptive algorithm so as to minimize the error signal.
Optionally, the first hearing assistance microphone signal is generated by a microphone system of the first hearing instrument when the external microphone signal is received from the external microphone.
A hearing aid system comprising: a first hearing instrument; and a portable external microphone unit. The portable external microphone unit includes: a microphone for placement in the sound field and for generating an external microphone signal, and a first wireless transmitter configured to transmit the external microphone signal via a first wireless communication link. The first hearing instrument includes: a hearing assistance housing or shell configured to be placed at or in a left or right ear of a user; a first wireless receiver configured to receive the external microphone signal via the first wireless communication link; a first hearing assistance microphone configured to generate a first hearing assistance microphone signal in response to sound when the external microphone signal is received by the first wireless receiver; and a first signal processor configured to determine a response characteristic of a first spatial synthesis filter by correlating the external microphone signal with the first hearing assistance microphone signal, wherein the first spatial synthesis filter is configured to filter the received external microphone signal to produce a first synthesized microphone signal comprising a first spatial auditory cue.
Optionally, the hearing aid system further comprises a second hearing instrument, wherein the second hearing instrument comprises: a second hearing assistance housing or shell; a second wireless receiver configured to receive an external microphone signal via a second wireless communication link; a second hearing assistance microphone configured to generate a second hearing assistance microphone signal when the external microphone signal is received by the second wireless receiver; and a second signal processor configured to determine a response characteristic of a second spatial synthesis filter based on the external microphone signal and the second hearing assistance microphone signal, wherein the second spatial synthesis filter is configured to filter the received external microphone signal to produce a second synthesized microphone signal comprising a second spatial auditory cue.
Other features, embodiments, and advantages are described below in the detailed description.
Drawings
Embodiments will be described in more detail in conjunction with the appended drawings, in which:
fig. 1 is a schematic block diagram of a hearing aid system comprising a left-ear and a right-ear hearing instrument communicating with an external microphone arrangement via a wireless communication link according to a first embodiment; and
fig. 2 is a schematic block diagram illustrating an adaptive filter solution for real-time adaptive calculation of filter coefficients of a first spatial synthesis filter of a left-ear or right-ear hearing instrument.
Detailed Description
Various embodiments are described below with reference to the drawings. Like reference numerals refer to like elements throughout. Similar elements will therefore not be described in detail with respect to the description of the figures. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the claimed invention or as a limitation on the scope of the claimed invention. Moreover, the illustrated embodiments need not have all of the aspects or advantages shown. Aspects or advantages described in connection with a particular embodiment are not necessarily limited to that embodiment, and may be practiced in any other embodiment, even if not so shown, or if not so explicitly described.
Fig. 1 is a schematic diagram of a hearing aid system operating in an adverse sound or listening environment according to a first embodiment. The hearing aid system 101 comprises an external microphone device mounted within a portable housing structure of a portable external microphone unit 105. The external microphone device may comprise one or more individual omnidirectional or directional microphones. The portable housing structure may include a rechargeable battery pack that supplies power to one or more individual microphones and further provides power to various electronic circuits, such as digital control logic, a user readable screen or display, and a wireless transceiver (not shown). The external microphone arrangement may comprise a mate microphone, a clip microphone, a conference microphone or form part of a smartphone or mobile phone.
The hearing aid system 101 comprises a first hearing instrument or hearing aid 107 mounted in or at the right or left ear of a hearing impaired individual; and a second hearing instrument or hearing aid 109 mounted in or at the other ear of the hearing impaired individual. Thus, the hearing impaired individual 102 is in this exemplary embodiment equipped with hearing aids in both ears, such that a hearing loss compensated output signal is provided to both the left and right ears. It will be appreciated by the person skilled in the art that different types of hearing instruments, such as the so-called BTE type, ITE type, CIC type, etc., may be used, depending on various factors, such as the size of the hearing loss of the hearing impaired individual, personal preferences and processing power.
Each of the first and second hearing instruments 107, 109 comprises a wireless receiver or transceiver (not shown) allowing each hearing instrument to receive wireless signals or data, in particular the previously discussed external microphone signal transmitted from the portable external microphone unit 105. The external microphone signal may be modulated and transmitted as an analog signal or as a digitally encoded signal via the wireless communication link 104. The wireless communication link may be based on RF signal transmission, e.g. FM technology or digital transmission technology, such as RF communication protocols conforming to the Bluetooth standard or other standards. In the alternative, the wireless communication link 104 may be based on optical signal transmission.
The hearing impaired individual 102 wishes to receive sound from a target sound source, which is a speaker placed at some distance away from the hearing impaired individual 102, beyond the median plane of the latter. As schematically indicated by the interfering noise vL,R(t), the sound environment surrounding the hearing impaired individual 102 is adverse, with a low SNR at the respective microphones of the first and second hearing instruments 107, 109. The noise vL,R(t) may in practice include many different types of common noise mechanisms or sources, such as competing speakers, motor vehicles, wind noise, cross-talk noise, music, and so forth. In addition to direct noise sound components from the various noise sources, the interfering noise sound vL,R(t) may also include various boundary reflections from room boundaries, such as the walls, floor, and ceiling of the room 110 in which the hearing impaired individual 102 is placed. Thus, the noise sources often produce noise sound components arriving at the hearing impaired individual from multiple spatial directions, making the sound field in the room 110 very challenging for understanding the speech of the targeted speaker 103 without assistance from an external microphone device.
A first linear transfer function between the targeted speaker 103 and the first hearing instrument 107 is schematically indicated by the dashed line hL(t), and a second linear transfer function between the targeted speaker 103 and the second hearing instrument 109 is likewise schematically indicated by a second dashed line hR(t). Owing to the Fourier transform equivalence, the first and second transfer functions hL(t) and hR(t) may be represented by their respective impulse responses or by their respective frequency responses. The first and second linear transfer functions describe the acoustic transmission of sound from the targeted speaker 103 or talker to the microphones of the first and second hearing instruments, respectively.
The acoustic or sound signal picked up by the microphone of the first hearing instrument 107 is denoted in the following as the first hearing assistance microphone signal sL(t), and the acoustic or sound signal picked up by the microphone of the right ear hearing instrument is denoted in the following as the second hearing assistance microphone signal sR(t). The noise sound signal at the microphone of the right hearing instrument is denoted vR(t) and the noise sound signal at the microphone of the left hearing instrument is denoted vL(t). The target speech signal produced by the target speaker 103 is denoted below as x(t). Furthermore, based on the assumption that each of the hearing assistance microphones picks up a noisy version of the target speech signal x(t) that has undergone a linear transformation, we can write:
sL(t) = (hL * x)(t) + vL(t) (1)
sR(t) = (hR * x)(t) + vR(t) (2)
wherein * denotes the convolution operator.
At the same time as a noise-infected or contaminated version of the target speech signal is received at the left and right hearing instrument microphones, the target speech signal x(t) is recorded or received at the external microphone device:
sE(t)=x(t)+υE(t) (3)
wherein υE(t) is the noise sound signal at the external microphone.
In addition, it is assumed that the target speech component of the external microphone signal picked up by the external microphone device dominates such that the target speech signal is much more powerful than the noise sound signal, i.e.:
∫ x²(t) dt >> ∫ υE²(t) dt (4)
the present embodiment of the method of deriving and superimposing spatial auditory cues onto an external microphone arrangement of the portable microphone unit 105 in each of the left and right ear hearing instruments preferably comprises the steps of:
1) auditory spatial cue estimation
2) An auditory spatial cue synthesizer; and optionally also,
3) the signals are mixed.
According to one such embodiment of the method, the auditory spatial cue determination or estimation comprises a time delay estimator and a signal level estimator. The first step consists in correlating the external microphone signal sE(t) with each of the first and second hearing assistance microphone signals according to the following:
rL(t) = ∫ sE(τ) sL(t + τ) dτ (5a)
rR(t) = ∫ sE(τ) sR(t + τ) dτ (5b)
The time delays for the right and left microphone signals sR(t), sL(t) are determined by:
τL = arg max_t rL(t) (6a)
τR = arg max_t rR(t) (6b)
and the level differences AL, AR between the external microphone signal and the left and right microphone signals sL(t), sR(t) are determined as follows:
AL = √( ∫ sL²(t) dt / ∫ sE²(t) dt ) (7a)
AR = √( ∫ sR²(t) dt / ∫ sE²(t) dt ) (7b)
in a second step, the impulse response of the left spatial synthesis filter for application in the left hearing instrument and the impulse response of the right spatial synthesis filter for application in the right hearing instrument are derived as:
gL(t)=ALδ(t-τL) (8a)
gR(t)=ARδ(t-τR) (8b).
In the left hearing instrument, the calculated impulse response gL(t) of the left spatial synthesis filter is used for generating the first composite microphone signal yL(t), with the first spatial auditory cue superimposed or added, according to:
yL(t) = (gL * sE)(t) = ∫ gL(τ) sE(t − τ) dτ (9a)
In the right hearing instrument, the calculated impulse response gR(t) of the right spatial synthesis filter is used in a corresponding manner for generating the second composite microphone signal yR(t), with the second spatial auditory cue superimposed or added, according to:
yR(t) = (gR * sE)(t) = ∫ gR(τ) sE(t − τ) dτ (9b)
Thus, the first composite microphone signal yL(t) results from convolving the impulse response gL(t) of the left spatial synthesis filter with the external microphone signal sE(t) received by the left hearing instrument via the wireless communication link 104. The above-mentioned calculations of the functions rL(t), AL, gL(t) and yL(t) are preferably performed by the first signal processor of the left hearing instrument. The first signal processor may comprise a microprocessor and/or dedicated digital computing hardware, including, for example, a hardwired Digital Signal Processor (DSP). In the alternative, the first signal processor may comprise a software programmable DSP or a combination of dedicated digital computing hardware and a software programmable DSP. The software programmable DSP may be configured to perform the above-described calculations by means of suitable program routines or threads, each comprising a set of executable program instructions stored in a non-volatile memory device of the hearing instrument. The second composite microphone signal yR(t) is generated in a corresponding manner by convolving the impulse response gR(t) of the right spatial synthesis filter with the external microphone signal sE(t) received by the right hearing instrument via the wireless communication link 104, and the signal processing continues in the right hearing instrument in a manner corresponding to that in the left hearing instrument.
It will be appreciated by the skilled person that each of the above-mentioned microphone signals and impulse responses in the left and right hearing instruments is preferably represented in the digital domain, such that the calculations of the functions rL(t), AL, gL(t) and yL(t) are performed numerically on digital signals by a digital signal processor of the type previously discussed. Each of the first synthesized microphone signal yL(t), the first hearing assistance microphone signal sL(t) and the external microphone signal sE(t) may be a digital signal sampled at a sampling frequency of, for example, between 16 kHz and 48 kHz.
The first composite microphone signal is preferably further processed by a first hearing assistance signal processor to adapt the characteristics of the hearing loss compensated output signal to the individual hearing loss profile of the left ear of the hearing impaired user. Those skilled in the art will appreciate that this further processing may include various types of common and well-known signal processing functions such as multi-band dynamic range compression, noise reduction, and the like. After this further processing, the first composite microphone signal is reproduced, as the hearing loss compensated output signal, to the left ear of the hearing impaired person via the first output transducer. The first (and also the second) output transducer may comprise a miniature speaker, a receiver, or possibly an implanted electrode array for a cochlear implant hearing aid. The second composite microphone signal may be processed by the second hearing instrument in a corresponding manner to generate a second hearing loss compensated output signal and reproduce it to the right ear of the hearing impaired person.
Thus, the external microphone signals picked up by the remote microphone arrangement accommodated in the portable external microphone unit 105 are presented to the left and right ears of the hearing impaired person with appropriate spatial auditory cues corresponding to the spatial cues that would be present in the hearing aid microphone signals if the target speech signal produced by the target speaker 103 at his or her actual position in the listening room was acoustically conveyed to the left and right ear microphones of the hearing instruments 109, 107. This feature solves the previously discussed problems associated with artificial and intrinsic perception of a target sound source inside the head of a hearing assistance user in conjunction with reproduction of a remotely picked-up microphone signal in prior art hearing aid systems.
According to an embodiment of the method, the first hearing loss compensated output signal comprises not only the first composite microphone signal but also a component of the first hearing assistance microphone signal recorded by the first hearing assistance microphone or microphones, such that a mix of these different microphone signals is presented to the left ear of the hearing impaired individual. According to the latter embodiment, the step of processing the first composite microphone signal yL(t) comprises: mixing the first composite microphone signal yL(t) and the first hearing assistance microphone signal sL(t) at a first ratio to produce the hearing loss compensated output signal zL(t).
The mixing of the first composite microphone signal yL(t) and the first hearing assistance microphone signal sL(t) can be carried out, for example, according to the following:
zL(t)=bsL(t)+(1-b)yL(t) (10)
where b is a fractional number between 0 and 1 that controls the mixing ratio.
The mixing feature may be utilized to adjust the relative levels of the "raw" or unprocessed hearing assistance microphone signal and the external microphone signal, so that the SNR of the left hearing loss compensated output signal can be adjusted. The inclusion of a certain component of the first hearing assistance microphone signal sL(t) in the left hearing loss compensated output signal zL(t) is advantageous in many situations. The presence of this component or portion of the first hearing assistance microphone signal sL(t) supplies a beneficial amount of "situational awareness" to the hearing impaired person, whereby other sound sources of potential interest remain audible alongside the targeted speaker. Other sound sources of interest may include, for example, another person seated near the hearing impaired person or a portable communication device.
In a further advantageous embodiment, the ratio between the first composite microphone signal and the first hearing assistance microphone signal sL(t) is varied in dependence on the signal-to-noise ratio of the first hearing assistance microphone signal sL(t). The signal-to-noise ratio of the first hearing assistance microphone signal sL(t) may, for example, be derived from certain target sound data of the external microphone signal sE(t). The latter microphone signal is assumed to be predominantly or completely dominated by the target sound source, e.g. the target speech discussed above, and may thus be used for detecting the level of target speech present in the first hearing assistance microphone signal sL(t). When the signal-to-noise ratio of the first hearing assistance microphone signal sL(t) is high, the mixing characteristic according to equation (10) above may be implemented such that b is close to 1, and when the signal-to-noise ratio of the first hearing assistance microphone signal sL(t) is low, b approaches 0. The value of b may, for example, be greater than 0.9 when the signal-to-noise ratio of the first hearing assistance microphone signal sL(t) is greater than 10 dB. In the opposite sound situation, the value of b may, for example, be less than 0.1 when the signal-to-noise ratio of the first hearing assistance microphone signal sL(t) is less than 3 dB or 0 dB.
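As a purely illustrative sketch of such an SNR-dependent mixing rule, the short Python/NumPy snippet below applies equation (10) with b obtained by linearly interpolating between the example end points mentioned above (0 dB and 10 dB); the linear mapping itself is an assumption of the sketch, not something prescribed by the disclosure.

```python
import numpy as np

def mix_output(s_l, y_l, snr_db):
    """Mix per equation (10): zL(t) = b*sL(t) + (1 - b)*yL(t),
    with b driven by the estimated SNR of the hearing-aid microphone signal sL."""
    # Assumed mapping: b ~ 0 at/below 0 dB SNR, b ~ 1 at/above 10 dB SNR, linear in between
    b = float(np.clip(snr_db / 10.0, 0.0, 1.0))
    return b * np.asarray(s_l) + (1.0 - b) * np.asarray(y_l)
```

With this choice, a high SNR leaves the output dominated by the local hearing assistance microphone signal, while a low SNR lets the spatialized external signal dominate, matching the behaviour described above.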
According to another embodiment of the method, the estimation or calculation of the auditory spatial cues comprises a direct or online estimation of the impulse responses of the left and/or right spatial synthesis filters gL(t), gR(t), which describe or model the linear transfer functions between the target sound source and the left and right ear hearing assistance microphones, respectively.
According to an online estimation procedure, the calculation or estimation of the impulse response of the spatial synthesis filter of the first or left ear is preferably achieved by solving the following optimization problem or equation:
gL(t) = arg min_g ∫ ( sL(t) − (g * sE)(t) )² dt (11)
those skilled in the art will appreciate that the external microphone signal sE(t) may reasonably be assumed to be dominated by the target sound signal (due to the proximity between the external microphone arrangement and the target sound source). This assumption means that only the error of equation (11) (and correspondingly the error of equation (12) below) is minimized from the first hearing assistance microphone signal sL(t) completely removing the target sound signal or component. This is done by selecting the response of the filter g (t) to match the first linear transfer function h between the target sound source or loudspeaker 103 and the first hearing instrument 107L(t) to complete. This inference is based on the target sound signal and the interfering noise sound vL,R(t) uncorrelated hypothesis. Experience has shown that this is generally a valid assumption in many real-life sound environments.
Thus, the calculation or estimation of the impulse response of the spatial synthesis filter of the second or right ear is also preferably achieved by solving the following optimization problem or equation:
gR(t) = arg min_g ∫ ( sR(t) − (g * sE)(t) )² dt (12)
Each of these calculations of gL(t) and gR(t) can be implemented in real time by applying an efficient adaptive algorithm, such as Least Mean Squares (LMS) or Recursive Least Squares (RLS). This solution is illustrated by fig. 2, which shows a simplified schematic block diagram of how the above-described optimization equation (11) can be solved in real time using an adaptive filter 209 in a signal processor of the schematically illustrated left hearing instrument 200. A corresponding solution may of course be applied in the right hearing instrument (not shown).
The external microphone signal sE(t) is received by the previously discussed wireless receiver (not shown), which decodes it and, if it is received in analog form, may convert it to digital format. The digital external microphone signal sE(t) is applied to the input of the adaptive filter 209 and filtered by the current transfer function/impulse response of the adaptive filter 209 to produce the first composite microphone signal yL(t) at the output of the adaptive filter. The first hearing assistance microphone signal sL(t) is applied substantially simultaneously to a first input of the subtractor 204 or subtraction function 204. The first or left ear synthesized microphone signal yL(t) is applied to the second input of the subtractor 204 so that the latter generates an error signal ε on signal line 206, which represents the difference between yL(t) and sL(t). The error signal ε is applied to the adaptive control input of the adaptive filter 209 via signal line 206 in a conventional manner, such that the filter coefficients of the adaptive filter are adjusted to minimize the error signal ε in accordance with the particular adaptive algorithm implemented by the adaptive filter 209. Thus, the first or left ear spatial synthesis filter is formed by the adaptive filter 209, which makes the filter coefficients gL(t) available.
In general terms, the digital external microphone signal sE(t) is filtered by the adaptive transfer function of the adaptive filter 209, which in turn represents the spatial synthesis filter of the left ear, to produce the left ear synthesized microphone signal yL(t) comprising the first spatial auditory cue. The filtering of the digital external microphone signal sE(t) by the adaptive transfer function of the adaptive filter 209 may be performed as a discrete-time convolution between the adaptive coefficients gL(t) and the digital external microphone signal sE(t), i.e. by directly performing the convolution operation specified by equation (9a) above:
yL(t) = Στ gL(τ) sE(t − τ) (13)
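For illustration only, the adaptation loop of fig. 2 could be prototyped sample by sample as in the normalized LMS sketch below (Python/NumPy). The filter length, step size mu and normalization constant are assumptions of the sketch; the disclosure itself only names LMS or RLS as example adaptive algorithms.

```python
import numpy as np

def adapt_spatial_filter(s_e, s_l, num_taps=64, mu=0.1, eps=1e-8):
    """Adaptively estimate the left spatial synthesis filter gL (adaptive filter 209).
    s_e: digital external microphone signal sE, s_l: hearing-aid microphone signal sL.
    Returns the adapted coefficients g and the synthesized signal yL."""
    g = np.zeros(num_taps)              # adaptive coefficients gL, initially zero
    x = np.zeros(num_taps)              # delay line holding recent samples of sE
    y = np.zeros(len(s_e))              # synthesized microphone signal yL
    for n in range(len(s_e)):
        x = np.roll(x, 1)               # shift the delay line by one sample
        x[0] = s_e[n]
        y[n] = g @ x                    # yL(n): discrete convolution, cf. equation (13)
        err = s_l[n] - y[n]             # error signal epsilon on signal line 206
        g += mu * err * x / (x @ x + eps)   # normalized LMS coefficient update
    return g, y
```

Under the target-dominance assumption of equation (4) and uncorrelated noise, the coefficients in such a sketch tend towards a filter approximating hL(t), i.e. essentially the delayed, scaled impulse of equation (8a).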
the left hearing instrument 200 additionally includes the previously discussed miniature receiver or speaker 211 that converts the hearing loss compensated output signal produced by the signal processor 208 into audible sound for transmission to the ear drum of the hearing impaired person. The signal processor 208 may include a suitable output amplifier, such as a class D amplifier, for driving a micro receiver or speaker 211.
Those skilled in the art will appreciate that the features and functions of the right ear hearing instrument may be the same as those discussed above for the left hearing instrument 200 to generate a binaural signal to the hearing aid user.
The optional mixing between the first synthesized microphone signal yL(t) and the first hearing assistance microphone signal sL(t) in a first ratio, and the similar, optional mixing between the second composite microphone signal yR(t) and the second hearing assistance microphone signal sR(t) in a second ratio, to produce the left and right hearing loss compensated output signals zL,R(t), respectively, are preferably performed as discussed above, i.e. according to:
zL,R(t) = b·sL,R(t) + (1 − b)·yL,R(t) (14)
the mixing coefficient b may be a fixed value or may be user-operated. The mixing coefficient b may alternatively be controlled by a separate algorithm which monitors the SNR by comparing the contribution of the target signal component measured by the external microphone signal present in the hearing aid microphone signal and comparing the level of the noise component of the target signal component with the noise component. When the SNR is high, b will go to 1, and when the SNR is low, b will be close to 0.
While particular features have been shown and described, it will be understood that they are not intended to limit the claimed invention, and it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the claimed invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The claimed invention is intended to embrace all such alternatives, modifications and equivalents.

Claims (12)

1. A method of superimposing spatial auditory cues to an externally picked up sound signal in a hearing instrument, comprising:
receiving, via a first wireless communication link, an external microphone signal from an external microphone located in the sound field, wherein the receiving act is performed using a wireless receiver of the first hearing instrument;
generating, by a microphone system of the first hearing instrument, a first hearing assistance microphone signal, wherein the first hearing instrument is located at or in a left or right ear of a user;
determining a response characteristic of a first spatial synthesis filter by correlating the external microphone signal and the first hearing assistance microphone signal; and
in the first hearing instrument, filtering the received external microphone signal by the first spatial synthesis filter to produce a first synthesized microphone signal comprising first spatial auditory cues,
wherein the act of determining the response characteristic comprises:
cross-correlating the external microphone signal and the first hearing assistance microphone signal to determine a time delay between the external microphone signal and the first hearing assistance microphone signal;
determining a level difference between the external microphone signal and the first hearing assistance microphone signal based on results from the cross-correlation behavior; and
determining the response characteristic of the first spatial synthesis filter by multiplying a delta function, time-shifted by the determined time delay, by the determined level difference.
2. The method of claim 1, further comprising:
processing, by a first signal processor, the first composite microphone signal according to individual hearing loss data of the user to produce a first hearing loss compensated output signal of the first hearing instrument; and
presenting the first hearing loss compensated output signal to the left or right ear of the user through a first output transducer.
3. The method of claim 1, further comprising:
receiving the external microphone signal via a second wireless communication link, wherein the act of receiving the external microphone signal via the second wireless communication link is performed using a wireless receiver of a second hearing instrument;
generating a second hearing assistance microphone signal by a microphone system of the second hearing instrument when the external microphone signal is received by the second hearing instrument, wherein the first and second hearing instruments are located at or in the left and right ears, respectively, or vice versa;
determining a response characteristic of a second spatial synthesis filter by correlating the external microphone signal and the second hearing assistance microphone signal; and
filtering, in the second hearing instrument, the received external microphone signal by the second spatial synthesis filter to produce a second synthesized microphone signal comprising second spatial auditory cues.
4. The method of claim 2, wherein the act of processing the first synthesized microphone signal comprises mixing the first synthesized microphone signal and the first hearing assistance microphone signal at a first ratio to produce the hearing loss compensated output signal.
5. The method of claim 4, further comprising varying the ratio between the first composite microphone signal and the first hearing assistance microphone signal in dependence on a signal-to-noise ratio.
6. The method of claim 1, wherein:
the act of cross-correlating the external microphone signal and the first hearing assistance microphone signal comprises determining rL(t) according to
rL(t) = Στ sE(τ)·sL(τ + t),
wherein sE(t) represents the external microphone signal, and sL(t) represents the first hearing assistance microphone signal;
the time delay between the external microphone signal and the first hearing assistance microphone signal is determined according to τL = argmaxt rL(t),
wherein τL represents the time delay;
the act of determining the level difference between the external microphone signal sE(t) and the first hearing assistance microphone signal sL(t) is performed according to
AL = rL(τL) / Σt sE(t)²,
wherein AL represents the level difference; and
wherein the act of determining the response characteristic comprises determining an impulse response gL(t) of the first spatial synthesis filter according to gL(t) = AL·δ(t − τL).
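By way of illustration only, and not as part of the claims, the determination recited in claim 6 could be sketched as follows; the non-negative delay range and the energy normalization used for AL are assumptions of this sketch:

```python
import numpy as np

def delta_synthesis_filter(s_E, s_L, max_delay=64):
    """Estimate g_L(t) = A_L * delta(t - tau_L) from the two microphone signals.

    The normalization of A_L by the external-mic energy is an assumption
    (a least-squares gain estimate); the exact expression may differ.
    """
    N = len(s_E)
    # cross-correlation r_L(t) over candidate (non-negative) delays
    r_L = np.array([np.dot(s_E[:N - t], s_L[t:]) for t in range(max_delay)])
    tau_L = int(np.argmax(r_L))              # delay maximizing the correlation
    A_L = r_L[tau_L] / np.dot(s_E, s_E)      # level difference (assumed LS gain)

    g_L = np.zeros(max_delay)
    g_L[tau_L] = A_L                         # scaled, delayed delta function
    return g_L, tau_L, A_L
```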
7. The method of claim 1, wherein the first synthesized microphone signal is also generated by convolving the external microphone signal with an impulse response of the first spatial synthesis filter.
8. The method of claim 1, wherein the act of determining the response characteristic comprises:
determining an impulse response gL(t) of the first spatial synthesis filter according to
gL(t) = argming Σt [ sL(t) − Στ g(τ)·sE(t − τ) ]²,
wherein gL(t) represents the impulse response of the first spatial synthesis filter,
sE(t) represents the external microphone signal, and
sL(t) represents the first hearing assistance microphone signal.
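Again by way of illustration only, the least-squares problem recited in claim 8 can also be solved in closed form over a finite filter length; the filter length and the dense convolution matrix below are implementation assumptions of this sketch:

```python
import numpy as np

def ls_synthesis_filter(s_E, s_L, n_taps=64):
    """Least-squares estimate of g_L minimizing sum_t (s_L(t) - (g*s_E)(t))^2."""
    N = len(s_E)
    # Convolution matrix: column k holds s_E delayed by k samples.
    X = np.zeros((N, n_taps))
    for k in range(n_taps):
        X[k:, k] = s_E[:N - k]
    g_L, *_ = np.linalg.lstsq(X, s_L, rcond=None)
    return g_L
```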
9. The method of claim 1, wherein the first spatial synthesis filter is a first adaptive filter, the method further comprising:
subtracting the first synthesized microphone signal from the first hearing assistance microphone signal to generate an error signal; and
determining filter coefficients of the first adaptive filter according to a predetermined adaptive algorithm so as to minimize the error signal.
10. The method of claim 1, wherein the first hearing assistance microphone signal is generated by a microphone system of the first hearing instrument when the external microphone signal is received from the external microphone.
11. A hearing aid system comprising:
a first hearing instrument; and
a portable external microphone unit;
wherein the portable external microphone unit includes:
a microphone placed in the sound field and used for generating an external microphone signal, and
a first wireless transmitter configured to transmit the external microphone signal via a first wireless communication link, and
wherein the first hearing instrument comprises:
a hearing assistance housing or shell configured to be placed at or in a user's left or right ear,
a first wireless receiver configured to receive the external microphone signal via the first wireless communication link,
a first hearing assistance microphone configured to generate a first hearing assistance microphone signal in response to sound when the external microphone signal is received by the first wireless receiver, and
a first signal processor configured to determine a response characteristic of a first spatial synthesis filter by correlating the external microphone signal and the first hearing assistance microphone signal,
wherein the first spatial synthesis filter is configured to filter the received external microphone signal to produce a first synthesized microphone signal comprising a first spatial auditory cue,
wherein the act of determining the response characteristic comprises:
cross-correlating the external microphone signal and the first hearing assistance microphone signal to determine a time delay between the external microphone signal and the first hearing assistance microphone signal;
determining a level difference between the external microphone signal and the first hearing assistance microphone signal based on results from the cross-correlation behavior; and
determining the response characteristic of the first spatial synthesis filter by multiplying a delta function, time-shifted by the determined time delay, by the determined level difference.
12. The hearing aid system according to claim 11, further comprising a second hearing instrument, wherein the second hearing instrument comprises:
a second hearing assistance housing or shell,
a second wireless receiver configured to receive the external microphone signal via a second wireless communication link,
a second hearing assistance microphone configured to generate a second hearing assistance microphone signal when the external microphone signal is received by the second wireless receiver, and
a second signal processor configured to determine a response characteristic of a second spatial synthesis filter based on the external microphone signal and the second hearing assistance microphone signal,
wherein the second spatial synthesis filter is configured to filter the received external microphone signal to produce a second synthesized microphone signal comprising a second spatial auditory cue.
CN201511009209.5A 2014-12-30 2015-12-29 Method for superimposing spatial auditory cues on externally picked-up microphone signals Expired - Fee Related CN105744455B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DKPA201470835 2014-12-30
DKPA201470835 2014-12-30

Publications (2)

Publication Number Publication Date
CN105744455A CN105744455A (en) 2016-07-06
CN105744455B true CN105744455B (en) 2021-06-04

Family

ID=56296308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511009209.5A Expired - Fee Related CN105744455B (en) 2014-12-30 2015-12-29 Method for superimposing spatial auditory cues on externally picked-up microphone signals

Country Status (2)

Country Link
JP (1) JP6762091B2 (en)
CN (1) CN105744455B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018201252A1 (en) * 2017-05-03 2018-11-08 Soltare Inc. Audio processing for vehicle sensory systems
CN111133774B (en) * 2017-09-26 2022-06-28 科利耳有限公司 Acoustic point identification
EP3468228B1 (en) 2017-10-05 2021-08-11 GN Hearing A/S Binaural hearing system with localization of sound sources
DE102018203907A1 (en) * 2018-02-28 2019-08-29 Sivantos Pte. Ltd. Method for operating a hearing aid
WO2023192312A1 (en) * 2022-03-29 2023-10-05 The Board Of Trustees Of The University Of Illinois Adaptive binaural filtering for listening system using remote signal sources and on-ear microphones

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001013674A3 (en) * 1999-08-12 2001-06-14 Ralf Hinrichs Hearing aid and corresponding programming method
CN1939092A (en) * 2004-02-20 2007-03-28 Gn瑞声达A/S Hearing aid with feedback cancellation
CN101091323A (en) * 2004-10-22 2007-12-19 兰斯·弗里德 Audio/video portable electronic devices providing wireless audio communication and speech and/or voice recognition command operation
CN102440007A (en) * 2009-05-18 2012-05-02 奥迪康有限公司 Signal enhancement using wireless streaming
CN103118321A (en) * 2011-10-17 2013-05-22 奥迪康有限公司 A listening system adapted for real-time communication providing spatial information in an audio stream

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5409656B2 (en) * 2009-01-22 2014-02-05 パナソニック株式会社 Hearing aid


Also Published As

Publication number Publication date
CN105744455A (en) 2016-07-06
JP6762091B2 (en) 2020-09-30
JP2016140059A (en) 2016-08-04

Similar Documents

Publication Publication Date Title
US10431239B2 (en) Hearing system
US9699574B2 (en) Method of superimposing spatial auditory cues on externally picked-up microphone signals
EP2124483B1 (en) Mixing of in-the-ear microphone and outside-the-ear microphone signals to enhance spatial perception
EP2899996B1 (en) Signal enhancement using wireless streaming
US11510017B2 (en) Hearing device comprising a microphone adapted to be located at or in the ear canal of a user
US10349191B2 (en) Binaural gearing system and method
CN107371111B (en) Method for predicting intelligibility of noisy and/or enhanced speech and binaural hearing system
CN105744455B (en) Method for superimposing spatial auditory cues on externally picked-up microphone signals
US10070231B2 (en) Hearing device with input transducer and wireless receiver
US11463820B2 (en) Hearing aid comprising a directional microphone system
US11330375B2 (en) Method of adaptive mixing of uncorrelated or correlated noisy signals, and a hearing device
US20200396549A1 (en) Binaural hearing system comprising frequency transition
US20080205677A1 (en) Hearing apparatus with interference signal separation and corresponding method
EP3041270B1 (en) A method of superimposing spatial auditory cues on externally picked-up microphone signals
JP2022528579A (en) Bilateral hearing aid system with temporally uncorrelated beamformer
EP4198975A1 (en) Electronic device and method for obtaining a user's speech in a first sound signal
EP4178221A1 (en) A hearing device or system comprising a noise control system
CN112087699B (en) Binaural hearing system comprising frequency transfer
US20230080855A1 (en) Method for operating a hearing device, and hearing device
EP4084502A1 (en) A hearing device comprising an input transducer in the ear

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210604