EP3041270B1 - A method of superimposing spatial auditory cues on externally picked-up microphone signals - Google Patents

A method of superimposing spatial auditory cues on externally picked-up microphone signals

Info

Publication number
EP3041270B1
Authority
EP
European Patent Office
Prior art keywords
microphone signal
signal
hearing
external microphone
hearing aid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP14200593.3A
Other languages
German (de)
French (fr)
Other versions
EP3041270A1 (en)
Inventor
Karl-Fredrik Johan Gran
Jesper UDESEN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GN Hearing AS
Original Assignee
GN Hearing AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GN Hearing AS filed Critical GN Hearing AS
Priority to EP14200593.3A (EP3041270B1)
Priority to DK14200593.3T (DK3041270T3)
Priority to US14/589,587 (US9699574B2)
Publication of EP3041270A1
Application granted
Publication of EP3041270B1
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R25/407 Circuits for combining signals of a plurality of transducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/552 Binaural
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/554 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired, using a wireless connection, e.g. between microphone and amplifier or using Tcoils
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/43 Electronic input selection or mixing based on input signal analysis, e.g. mixing or selection between microphone and telecoil or between microphones with different directivity characteristics

Definitions

  • the present embodiment of the methodology of deriving and superimposing spatial auditory cues onto the external microphone signal picked-up by the external microphone arrangement of the portable external microphone unit 105, 205 in each of the left and right ear hearing instruments preferably comprises steps of:
  • the auditory spatial cue determination or estimation comprises a time delay estimator and a signal level estimator.
  • correspondingly, the level difference AR for the right ear hearing instrument may be determined as AR = E{|rR(t)|²} / E{|sE(t) ⋆ sE(t)|²}, where rR(t) denotes the cross-correlation of the external microphone signal and the second hearing aid microphone signal.
  • the first synthesized microphone signal y L ( t ) is produced by convolving the impulse response g L ( t ) of the left spatial synthesis filter with the external microphone signal s E ( t ) received by the left hearing instrument via the wireless communication link 104.
  • the above-mentioned computations of the functions r L ( t ), A L , g L ( t ) and y L ( t ) are performed by a first signal processor of the left hearing instrument.
  • the first signal processor may comprise a microprocessor and/or dedicated digital computational hardware for example comprising a hard-wired Digital Signal Processor (DSP).
  • the first signal processor may comprise a software programmable DSP or a combination of dedicated digital computational hardware and the software programmable DSP.
  • the software programmable DSP may be configured to perform the above-mentioned computations by suitable program routines or threads, each comprising a set of executable program instructions stored in a non-volatile memory device of the hearing instrument.
  • the second synthesized microphone signal y R ( t ) is produced in a corresponding manner by convolving the impulse response g R ( t ) of the right spatial synthesis filter with the external microphone signal s E ( t ) received by the right hearing instrument via the wireless communication link 104 and proceeding in corresponding manner to the signal processing in the left hearing instrument.
  • each of the above-mentioned microphone signals and impulse responses in the left and right hearing instruments preferably are represented in the digital domain such that the computational operations to produce the functions r L ( t ), A L , g L ( t ) and y L ( t ) are executed numerically on digital signals by the previously discussed types of Digital Signal Processors.
  • Each of the first synthesized microphone signal y L ( t ), the first hearing aid microphone signal s L (t) and the external microphone signal s E ( t ) may be a digital signal for example sampled at a sampling frequency between 16 kHz and 48 kHz.
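  • Purely as an illustration of the time delay estimator and signal level estimator mentioned above, the following Python/NumPy sketch estimates the delay and the level difference between a digitized external microphone signal and a digitized hearing aid microphone signal from their cross-correlation. The function name, the 16 kHz sampling rate and the search window for plausible delays are illustrative assumptions and are not taken from the patent.
```python
import numpy as np

def estimate_delay_and_level(s_e, s_l, fs=16000, max_delay_s=0.05):
    """Illustrative sketch: estimate the time delay (in samples) and the level
    difference between the external microphone signal s_e and the hearing aid
    microphone signal s_l from their cross-correlation."""
    s_e = np.asarray(s_e, dtype=float)
    s_l = np.asarray(s_l, dtype=float)
    # Cross-correlation; the lag axis runs from -(len(s_e)-1) to len(s_l)-1,
    # and a positive lag means that s_l lags behind s_e.
    r = np.correlate(s_l, s_e, mode="full")
    lags = np.arange(-(len(s_e) - 1), len(s_l))
    # Restrict the search to a plausible range of delays (assumed +/- 50 ms).
    valid = np.abs(lags) <= int(max_delay_s * fs)
    tau = int(lags[valid][np.argmax(np.abs(r[valid]))])
    # Level difference: cross-correlation energy relative to the energy of the
    # autocorrelation of the external microphone signal.
    r_ee = np.correlate(s_e, s_e, mode="full")
    level = float(np.mean(np.abs(r) ** 2) / (np.mean(np.abs(r_ee) ** 2) + 1e-12))
    return tau, level

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s_e = rng.standard_normal(16000)                       # external microphone signal
    s_l = 0.5 * np.concatenate([np.zeros(12), s_e[:-12]])  # delayed, attenuated copy ...
    s_l = s_l + 0.2 * rng.standard_normal(16000)           # ... plus local noise
    print(estimate_delay_and_level(s_e, s_l))              # delay close to 12 samples
```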
  • the first synthesized microphone signal is preferably further processed by the first hearing aid signal processor to adapt characteristics of a hearing loss compensated output signal to the individual hearing loss profile of the hearing impaired user's left ear.
  • the skilled person will appreciate that this further processing may include numerous types of ordinary and well-known signal processing functions such as multi-band dynamic range compression, noise reduction etc.
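  • The hearing loss compensation itself is conventional processing. Purely as a simplified illustration of frequency dependent gain shaping, and leaving out the dynamic range compression and noise reduction mentioned above, the sketch below applies static per-band gains to the synthesized microphone signal; the band edges and gain values stand in for an individual hearing loss profile and are hypothetical.
```python
import numpy as np

def apply_band_gains(y, fs=16000,
                     band_edges_hz=(0, 500, 1000, 2000, 4000, 8000),
                     band_gains_db=(0, 5, 10, 20, 25)):
    """Apply static per-band gains to the synthesized microphone signal y.
    The band edges and gains are hypothetical placeholders; a real hearing aid
    would apply multi-band dynamic range compression instead of static gains."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    spectrum = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    gains = np.ones_like(freqs)
    for (lo, hi), g_db in zip(zip(band_edges_hz[:-1], band_edges_hz[1:]),
                              band_gains_db):
        gains[(freqs >= lo) & (freqs < hi)] = 10.0 ** (g_db / 20.0)
    return np.fft.irfft(spectrum * gains, n)
```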
  • the first synthesized microphone signal is reproduced to the hearing impaired person's left ear as the hearing loss compensated output signal via the first output transducer.
  • the first (and also the second) output transducer may comprise a miniature speaker, receiver or possibly an implantable electrode array for cochlear implant hearing aids.
  • the second synthesized microphone signal may be processed in a corresponding manner by the signal processor of the second hearing instrument to produce a second hearing loss compensated output signal and reproduce the same to the hearing impaired person's right ear.
  • the external microphone signal picked-up by the remote microphone arrangement housed in the portable external microphone unit 105, 205 is presented to the hearing impaired person's left and right ears with appropriate spatial auditory cues, corresponding to the spatial cues that would have existed in the hearing aid microphone signals if the target speech signal produced by the target speaker 103 at his or her actual position in the listening room had been conveyed acoustically to the left and right ear microphones of the hearing instruments.
  • the first hearing loss compensated output signal does not exclusively include the first synthesized microphone signal, but also comprises a component of the first hearing aid microphone signal recorded by the first hearing aid microphone 207 or microphones, such that a mixture of these different microphone signals is presented to the left ear of the hearing impaired individual.
  • the step of processing the first synthesized microphone signal y L (t) comprises: mixing the first synthesized microphone signal y L (t) and the first hearing aid microphone signal s L (t) in a first ratio to produce the left hearing loss compensated output signal z L ( t ).
  • the mixing feature may be exploited to adjust the relative level of the "raw" or unprocessed microphone signal and the external microphone signal such that the SNR of the left hearing loss compensated output signal can be adjusted.
  • the inclusion of a certain component of the first hearing aid microphone signal s L (t) in the left hearing loss compensated output signal z L ( t ) is advantageous in many circumstances.
  • the presence of a component or portion of the first hearing aid microphone signal s L (t) supplies the hearing impaired person with a beneficial amount of "environmental awareness" where other sound sources of potential interest than the target speaker become audible.
  • the other sound sources of interest could for example comprise another person or a portable communication device sitting next to the hearing impaired person.
  • the ratio between the first synthesized microphone signal and the first hearing aid microphone signal s L (t) is varied in dependence on a signal-to-noise ratio of the first hearing aid microphone signal s L (t).
  • the signal to noise ratio of the first hearing aid microphone signal s L (t) may for example be estimated based on certain target sound data derived from the external microphone signal s E (t).
  • the latter microphone signal is assumed to mainly or entirely be dominated by the target sound source, e.g. the target speech discussed above, and may hence be used to detect the level of target speech present in the first hearing aid microphone signal s L (t).
  • the mixing feature according to equation (10) above may be implemented such that b is close to 1 when the signal-to-noise ratio of the first hearing aid microphone signal s L (t) is high, and b approaches 0 when the signal-to-noise ratio of the first hearing aid microphone signal s L (t) is low.
  • the value of b may for example be larger than 0.9 when the signal-to-noise ratio of the first hearing aid microphone signal s L (t) is larger than 10 dB.
  • the value of b may for example be smaller than 0.1 when the signal-to-noise ratio of the first hearing aid microphone signal s L (t) is smaller than 3 dB or 0 dB.
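  • Equation (10) itself is not reproduced in this excerpt. A plausible reading, consistent with the surrounding text, is a linear mix controlled by a coefficient b between 0 and 1, in which b weights the first hearing aid microphone signal and (1 - b) weights the first synthesized microphone signal, so that b tends towards 1 at high SNR and towards 0 at low SNR. A minimal sketch under that assumption:
```python
import numpy as np

def mix_signals(y_l, s_l, b):
    """Assumed form of the mixing rule: b weights the local hearing aid
    microphone signal s_l, (1 - b) weights the synthesized signal y_l."""
    b = float(np.clip(b, 0.0, 1.0))
    return b * np.asarray(s_l, dtype=float) + (1.0 - b) * np.asarray(y_l, dtype=float)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_l = rng.standard_normal(16000)        # synthesized (external) microphone signal
    s_l = rng.standard_normal(16000)        # local hearing aid microphone signal
    z_l = mix_signals(y_l, s_l, b=0.2)      # low b: mostly the synthesized signal
```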
  • the estimation or computation of the auditory spatial cues comprises a direct or on-line estimation of the impulse responses of the left and/or right spatial synthesis filter g L (t), g R (t) that describe or model the linear transfer functions between the target sound source and the left ear and right ear hearing aid microphones, respectively.
  • the external microphone signal s E (t) can reasonably be assumed to be dominated by the target sound signal (because of the proximity between the external microphone arrangement and the target sound source).
  • This assumption implies that the only way to minimize the error of equation (11) (and correspondingly the error of equation (12) below) is to completely remove the target sound signal or component from the first hearing aid microphone signal s L (t). This is accomplished by choosing the response of the filter g(t) to match the first linear transfer function h L (t) between the target sound source or speaker 103 and the first hearing instrument 200.
  • This reasoning is based on the assumption that the target sound signal is uncorrelated with the interfering noise sound v L,R (t). Experience shows that this generally is a valid assumption in numerous real-life sound environments.
  • FIG. 2 shows a simplified schematic block diagram of how the above-mentioned optimization equation (11) can be solved in real time in the signal processor of the schematically illustrated left hearing instrument 200 using an adaptive filter 209. A corresponding solution may of course be applied in a corresponding right ear hearing instrument (not shown).
  • the adaptive filter 209 may for example be based on a Least Mean Square (LMS) or Recursive Least Square (RLS) type adaptation algorithm.
  • the external microphone signal s E (t) is received by the previously discussed wireless receiver (not shown), decoded and possibly converted to a digital format if received in analog format.
  • the digital external microphone signal s E (t) is applied to an input of the adaptive filter 209 and filtered by a current transfer function/impulse response of the adaptive filter 209 to produce a first synthesized microphone signal y L (t) at an output of the adaptive filter.
  • the first hearing aid microphone signal s L (t) is substantially simultaneously applied to a first input of a subtractor 204 or subtraction function 204.
  • the first, or left ear, synthesized microphone signal y L (t) is applied to a second input of a subtractor 204 such that the latter produces an error signal ⁇ on signal line 206 which represents a difference between y L (t) and s L (t).
  • the error signal ⁇ is applied to an adaptive control input of the adaptive filter 209 via the signal line 206 in a conventional manner such that the filter coefficients of the adaptive filter are adjusted to minimize the error signal ⁇ in accordance with the particular adaptive algorithm implemented by the adaptive filter 209.
  • the first, or left ear, spatial synthesis filter is formed by the adaptive filter 209 which makes a real-time adaptive computation of filter coefficients g L (t).
  • the digital external microphone signal s E (t) is filtered by the adaptive transfer function of the adaptive filter 209 which in turn represents the left ear spatial synthesis filter, to produce the left ear synthesized microphone signal y L (t) comprising the first spatial auditory cues.
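  • A minimal sketch of the adaptive filter arrangement of FIG. 2 is given below, using a normalized LMS coefficient update; the filter length, step size and regularization constant are illustrative assumptions, and an RLS type adaptation algorithm could be used instead.
```python
import numpy as np

def nlms_spatial_synthesis(s_e, s_l, n_taps=64, mu=0.1, eps=1e-8):
    """Sketch of FIG. 2: the external microphone signal s_e is filtered by an
    adaptive FIR filter, the output y_l is compared with the hearing aid
    microphone signal s_l, and the error drives a normalized LMS update of the
    filter coefficients (standing in for the spatial synthesis filter g_L)."""
    g = np.zeros(n_taps)                  # adaptive filter coefficients
    x_buf = np.zeros(n_taps)              # most recent samples of s_e
    y_l = np.zeros(len(s_e))
    for n in range(len(s_e)):
        x_buf = np.roll(x_buf, 1)         # shift the delay line
        x_buf[0] = s_e[n]
        y_l[n] = g @ x_buf                # filtered external microphone signal
        e = s_l[n] - y_l[n]               # error signal (cf. subtractor 204)
        g = g + mu * e * x_buf / (x_buf @ x_buf + eps)   # NLMS coefficient update
    return y_l, g

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    s_e = rng.standard_normal(20000)                        # external microphone signal
    s_l = 0.5 * np.concatenate([np.zeros(8), s_e[:-8]])     # delayed, attenuated target ...
    s_l = s_l + 0.1 * rng.standard_normal(20000)            # ... plus local noise v_L
    _, g_hat = nlms_spatial_synthesis(s_e, s_l)
    print("dominant tap index:", int(np.argmax(np.abs(g_hat))))   # expected near 8
```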
  • the left hearing instrument 200 additionally comprises the previously discussed miniature receiver or loudspeaker 211 which converts the hearing loss compensated output signal produced by the signal processor 208 to audible sound for transmission to the hearing impaired person's ear drum.
  • the signal processor 208 may comprise a suitable output amplifier, e.g. a class D amplifier, for driving the miniature receiver or loudspeaker 211.
  • a right ear hearing instrument may comprise features and functions identical to those of the above-discussed left hearing instrument 200 to produce a binaural signal to the hearing aid user.
  • the mixing coefficient b may either be a fixed value or may be user operated.
  • the mixing coefficient b may alternatively be controlled by a separate algorithm which monitors the SNR by comparing the level of the target signal component present in the hearing aid microphone signals, as measured with the aid of the external microphone signal, to the level of the noise component. When the SNR is high, b would go to 1, and when the SNR is low, b would approach 0.
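  • One way such a monitoring algorithm could be sketched is shown below: the target-dominated external microphone signal is used to split the hearing aid microphone signal into a target-like component and a residual, and the resulting SNR estimate is mapped onto b using the approximate 3 dB and 10 dB figures mentioned above. The simple projection, the neglect of the acoustic delay and the linear mapping between the thresholds are all simplifying assumptions.
```python
import numpy as np

def estimate_snr_db(s_l, s_e):
    """Rough SNR estimate for the hearing aid microphone signal s_l, using the
    target-dominated external microphone signal s_e as a reference (the two
    signals are assumed to be time aligned for simplicity)."""
    s_l = np.asarray(s_l, dtype=float)
    s_e = np.asarray(s_e, dtype=float)
    gain = np.dot(s_l, s_e) / (np.dot(s_e, s_e) + 1e-12)
    target = gain * s_e                   # target component present in s_l
    residual = s_l - target               # everything uncorrelated with s_e
    return 10.0 * np.log10((np.mean(target ** 2) + 1e-12) /
                           (np.mean(residual ** 2) + 1e-12))

def snr_to_mixing_coefficient(snr_db, low_db=3.0, high_db=10.0):
    """Map the estimated SNR onto b: close to 1 above roughly 10 dB, close to
    0 below roughly 3 dB, and linear in between (assumed mapping)."""
    return float(np.clip((snr_db - low_db) / (high_db - low_db), 0.0, 1.0))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    s_e = rng.standard_normal(16000)                    # clean target reference
    s_l = 0.4 * s_e + 0.4 * rng.standard_normal(16000)  # local microphone: ~0 dB SNR
    print(round(snr_to_mixing_coefficient(estimate_snr_db(s_l, s_e)), 2))  # ~0.0
```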

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Description

  • The present invention relates in a first aspect to a method of superimposing spatial auditory cues to an externally picked-up sound signal in a hearing instrument. The method comprises steps of generating an external microphone signal by an external microphone arrangement and transmitting the external microphone signal to a wireless receiver of a first hearing instrument via a first wireless communication link. Further steps of the methodology comprise determining response characteristics of a first spatial synthesis filter by correlating the external microphone signal and a first hearing aid microphone signal of the first hearing instrument, and filtering the external microphone signal by the first spatial synthesis filter to produce a first synthesized microphone signal comprising first spatial auditory cues.
  • BACKGROUND OF THE INVENTION
  • Hearing instruments or aids typically comprise a microphone arrangement which includes one or more microphones for receipt of incoming sound such as speech and music signals. The incoming sound is converted to an electric microphone signal or signals that are amplified and processed in a control and processing circuit of the hearing instrument in accordance with parameter settings of one or more preset listening program(s). The parameter settings for each listening program have typically been computed from the hearing impaired individual's specific hearing deficit or loss, for example expressed in an audiogram. An output amplifier of the hearing instrument delivers the processed, i.e. hearing loss compensated, microphone signal to the user's ear canal via an output transducer such as a miniature speaker, receiver or possibly an electrode array. The miniature speaker or receiver may be arranged inside the housing or shell of the hearing instrument together with the microphone arrangement or arranged separately in an ear plug or earpiece of the hearing instrument.
  • A hearing impaired person typically suffers from a loss of hearing sensitivity which is dependent upon both the frequency and the level of the sound in question. Thus, a hearing impaired person may be able to hear certain frequencies (e.g., low frequencies) as well as a normal hearing person, but be unable to hear sounds with the same sensitivity as a normal hearing individual at other frequencies (e.g., high frequencies). Similarly, the hearing impaired person may perceive loud sounds, e.g. above 90 dB SPL, with the same intensity as the normal hearing person, but still be unable to hear soft sounds with the same sensitivity as the normal hearing person. Thus, in the latter situation, the hearing impaired person suffers from a loss of dynamic range at certain frequencies or frequency bands.
  • In addition to the frequency and level dependence discussed above, the hearing loss of the hearing impaired person often leads to a reduced ability to discriminate between competing or interfering sound sources, for example in a noisy sound environment with multiple active speakers and/or noise sound sources. The healthy hearing system relies on the well-known cocktail party effect to discriminate between the competing or interfering sound sources under such adverse listening conditions. The signal-to-noise ratio (SNR) of sound at the listener's ears may be very low, for example around 0 dB. The cocktail party effect relies inter alia on spatial auditory cues in the competing or interfering sound sources to perform the discrimination based on spatial localization of the competing sound sources. Under such adverse listening conditions, the SNR of sound received at the hearing impaired individual's ears may be so low that the hearing impaired individual is unable to detect and use the spatial auditory cues to discriminate between different sound streams from the competing sound sources. This leads to a severely worsened ability to hear and understand speech in noisy sound environments for many hearing impaired persons compared to normal hearing subjects.
  • Numerous prior art analog and digital hearing aids have been designed to mitigate the above-identified hearing deficiency in noisy sound environments. A common way of addressing the problem has been to apply SNR enhancing techniques to the hearing aid microphone signal(s), such as various types of fixed or adaptive beamforming to provide enhanced directionality. These techniques, whether based on wireless technology or not, have only been shown to have limited effect. With the introduction of wireless hearing aid technology and accessories, it has become possible to place an external microphone arrangement close to, or on (i.e. via a belt or shirt clip), the target sound source in certain listening situations. The external microphone arrangement may for example be housed in a portable unit which is arranged in the proximity of a speaker such as a teacher in a classroom environment. Due to the proximity of the external microphone arrangement to the target sound source, it is able to generate an external microphone signal in which the target sound signal has a significantly higher SNR than the SNR of the same target sound signal recorded/received at the hearing instrument microphone(s). The external microphone signal is transmitted to a wireless receiver of the left ear and/or right ear hearing instrument(s) via a suitable wireless communication link or links. The wireless communication link or links may be based on proprietary or industry-standard wireless technologies such as Bluetooth. The hearing instrument or instruments thereafter reproduce the external microphone signal, with its SNR-improved target sound signal, to the hearing aid user's ear or ears via a suitable processor and output transducer.
  • However, the external microphone signal generated by such prior art external microphone arrangements lacks spatial auditory cues because of its distant or remote position in the sound field. This distant or remote position typically lies far away from the hearing aid user's head and ears for example more than 5 meters or 10 meters away. The lack of these spatial auditory cues during reproduction of the external microphone signal in the hearing instrument or instruments leads to an artificial and unpleasant internalized perception of the target sound source. The sound source appears to be placed inside the hearing aid user's head. Hence, it is advantageous to provide signal processing methodologies, hearing instruments and hearing aid systems capable of reproducing externally recorded or picked-up sound signals with appropriate spatial cues providing the hearing aid user or patient with a more natural sound perception. This problem has been addressed and solved by the present invention by generating and superimposing appropriate spatial auditory cues on a remotely recorded or picked-up microphone signal in connection with reproduction of the remotely picked-up microphone signal in the hearing instrument.
  • Each of US 2013/094683 A1 and US 2012/063610 A1 discloses a method of superimposing spatial auditory cues to an externally picked-up sound signal in a hearing instrument, comprising steps of:
    1. a) generating an external microphone signal by an external microphone arrangement placed in a sound field in response to impinging sound,
    2. b) transmitting the external microphone signal to a wireless receiver of a first hearing instrument via a first wireless communication link,
    3. c) generating a first hearing aid microphone signal by a microphone arrangement of the first hearing instrument simultaneously with receiving the external microphone signal, wherein the first hearing instrument is placed in the sound field at, or in, a user's left or right ear,
    4. d) determining response characteristics of a first spatial synthesis filter by cross-correlating the external microphone signal and the first hearing aid microphone signal to determine a time delay between the signals, and
    5. e) filtering, in the first hearing instrument, the received external microphone signal by the first spatial synthesis filter to produce a first synthesized microphone signal comprising first spatial auditory cues.
  • SUMMARY OF THE INVENTION
  • A first aspect of the invention relates to a method of superimposing spatial auditory cues to an externally picked-up sound signal in a hearing instrument, comprising steps of:
    1. a) generating an external microphone signal by an external microphone arrangement placed in a sound field in response to impinging sound,
    2. b) transmitting the external microphone signal to a wireless receiver of a first hearing instrument via a first wireless communication link,
    3. c) generating a first hearing aid microphone signal by a microphone arrangement of the first hearing instrument simultaneously with receiving the external microphone signal, wherein the first hearing instrument is placed in the sound field at, or in, a user's left or right ear,
    4. d) determining response characteristics of a first spatial synthesis filter by:
      • cross-correlating the external microphone signal and the first hearing aid microphone signal to determine a time delay between the signals,
      • determining a level difference between the external microphone signal and the first hearing aid microphone signal based on the cross-correlation of the external microphone signal and the first hearing aid microphone signal, and
      • determining the response characteristics of the first spatial synthesis filter by multiplying a delta function at the determined time delay and the determined level difference,
    5. e) filtering, in the first hearing instrument, the received external microphone signal by the first spatial synthesis filter to produce a first synthesized microphone signal comprising first spatial auditory cues.
  • The present invention addresses and solves the above discussed prior art problems with artificial and unpleasant internalized perception of the target sound source when it is reproduced via the remotely placed external microphone arrangement instead of through the microphone arrangement of the first hearing aid or instrument. The determination of the frequency response characteristics, or equivalently the impulse response characteristics, of the first spatial synthesis filter in accordance with the invention allows appropriate spatial auditory cues to be added or superimposed on the received external microphone signal. These spatial auditory cues correspond largely to the auditory cues that would be generated by sound propagating from the true spatial position of the target sound source relative to the hearing aid user's head where the first hearing instrument is arranged. The proximity between the external microphone arrangement and the target sound source ensures that the target sound signal in the external microphone signal typically possesses a significantly higher signal-to-noise ratio than the target sound picked up by the microphone arrangement of the first hearing instrument. The microphone arrangement of the first hearing instrument is preferably housed within a housing or shell of the first hearing instrument such that this microphone arrangement is arranged at, or in, the hearing aid user's left or right ear as the case may be. The skilled person will understand that the first hearing instrument may comprise different types of hearing instruments such as so-called BTE types, ITE types, CIC types, RIC types etc. Hence, the microphone arrangement of the first hearing instrument may be located at various positions at, or in, the user's ear, such as behind the user's pinna, inside the user's outer ear or inside the user's ear canal.
  • It is a significant advantage of the present invention that the first spatial synthesis filter may be determined solely from the first hearing aid microphone signal and the external microphone signal without involving a second hearing aid microphone signal picked-up at the user's other ear. Hence, there is no need for binaural communication of the first and second hearing aid microphone signals between the first, or left ear, hearing instrument and the second, or right ear, hearing instrument. This type of direct communication between the first and second hearing instruments would require the presence of a wireless transmitter in at least one of the first and second hearing instruments leading to increased power consumption and complexity of the hearing instruments in question.
  • The present methodology preferably comprises further steps of:
    • f) processing the first synthesized microphone signal by a first hearing aid signal processor according to individual hearing loss data of the user to produce a first hearing loss compensated output signal of the first hearing instrument,
    • g) reproducing the first hearing loss compensated output signal to the user's left or right ear through a first output transducer. The first output transducer may comprise a miniature speaker or receiver arranged inside the housing or shell of the first hearing instrument or arranged separately in an ear plug or earpiece of the first hearing instrument. Properties of the first hearing aid signal processor are discussed below.
  • Another embodiment of the present methodology comprises superimposing respective spatial auditory cues to the remotely picked-up sound signal for a left ear, or first, hearing instrument and a right ear, or second, hearing instrument. This embodiment is capable of generating binaural spatial auditory cues to the hearing impaired individual to exploit the advantages associated with binaural processing of acoustic signals propagating in the sound field such as the target sound of the target sound source. This binaural methodology of superimposing spatial auditory cues to the remotely picked-up sound signal comprises further steps of:
    • b1) transmitting the external microphone signal to a wireless receiver of a second hearing instrument via a second wireless communication link,
    • c1) generating a second hearing aid microphone signal by a microphone arrangement of the second hearing instrument simultaneously with receiving the external microphone signal, wherein the second hearing instrument is placed in the sound field at, or in, a user's other ear,
    • d1) determining response characteristics of a second spatial synthesis filter by correlating the external microphone signal and the second hearing aid microphone signal,
    • e1) filtering, in the second hearing instrument, the received external microphone signal with the second spatial synthesis filter to produce a second synthesized microphone signal comprising second spatial auditory cues. This binaural methodology may comprise executing further steps of:
      • f1) processing the second synthesized microphone signal by a second hearing aid signal processor of the second hearing instrument according to the individual hearing loss data of the user to produce a second hearing loss compensated output signal of the second hearing instrument,
      • g1) reproducing the second hearing loss compensated output signal to the user's other ear through a second output transducer.
  • The cross-correlation of the external microphone signal, sE(t), and the first hearing aid microphone signal, sL(t), may be carried out according to:
    rL(t) = sE(t) ⋆ sL(t)
  • The time delay, τL, between the external microphone signal and the first hearing aid microphone signal is determined from the cross-correlation rL(t):
    τL = arg max_t rL(t)
  • Determining the level difference, AL, between the external microphone signal sE(t) and the first hearing aid microphone signal sL(t) may be carried out according to:
    AL = E{|rL(t)|²} / E{|sE(t) ⋆ sE(t)|²}
  • Finally, an impulse response gL(t) of the first spatial synthesis filter, representing the response characteristics of the first spatial synthesis filter, may be determined according to:
    gL(t) = AL · δ(t - τL)
  • The first synthesized microphone signal may be generated in the time domain from the impulse response gL (t) of the first spatial synthesis filter by a further step of:
    • convolving the external microphone signal with the impulse response of the first spatial synthesis filter. The skilled person will understand that the first synthesized microphone signal may be generated from a corresponding frequency response of the first spatial synthesis filter and a frequency domain representation of the external microphone signal for example by DFT or FFT representations of the first spatial synthesis filter and the external microphone signal.
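  • As an illustrative sketch of this filtering step, the impulse response gL(t) = AL·δ(t - τL) can be discretized as a single scaled tap at the estimated delay and applied to the received external microphone signal either by time-domain convolution or, equivalently, by multiplication of FFT representations. The delay and level values used below are placeholders, for example obtained from a cross-correlation based estimator as described above.
```python
import numpy as np

def spatial_synthesis_filter(tau_samples, level, n_taps=None):
    """Discretized impulse response g_L(t) = A_L * delta(t - tau_L):
    a single scaled tap at the estimated (non-negative) delay."""
    n_taps = n_taps if n_taps is not None else tau_samples + 1
    g = np.zeros(n_taps)
    g[tau_samples] = level
    return g

def apply_filter_time(s_e, g):
    """Time-domain synthesis: convolve the external microphone signal with g_L."""
    return np.convolve(s_e, g)[: len(s_e)]

def apply_filter_fft(s_e, g):
    """Equivalent frequency-domain synthesis: multiply FFT representations of
    the external microphone signal and the spatial synthesis filter."""
    n = len(s_e) + len(g) - 1
    y = np.fft.irfft(np.fft.rfft(s_e, n) * np.fft.rfft(g, n), n)
    return y[: len(s_e)]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    s_e = rng.standard_normal(1024)                             # received external signal
    g_l = spatial_synthesis_filter(tau_samples=12, level=0.4)   # placeholder estimates
    print(np.allclose(apply_filter_time(s_e, g_l), apply_filter_fft(s_e, g_l)))  # True
```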
  • A second aspect of the invention relates to a hearing aid system comprising a first hearing instrument and a portable external microphone unit. The portable external microphone unit comprises:
    • a microphone arrangement for placement in a sound field and generation of an external microphone signal in response to impinging sound,
    • a first wireless transmitter configured to transmit the external microphone signal via a first wireless communication link. The first hearing instrument of the hearing aid system comprises:
      • a hearing aid housing or shell configured for placement at, or in, a user's left or right ear,
      • a first wireless receiver configured for receiving the external microphone signal via the first wireless communication link,
      • a first hearing aid microphone configured for generating a first hearing aid microphone signal in response to sound simultaneously with the receipt of the external microphone signal, and a first signal processor configured to determine response characteristics of a first spatial synthesis filter by:
        • cross-correlating the external microphone signal and the first hearing aid microphone signal to determine a time delay between the signals,
        • determining a level difference between the external microphone signal and the first hearing aid microphone signal based on the cross-correlation of the external microphone signal and the first hearing aid microphone signal, and
        • determining the response characteristics of the first spatial synthesis filter by multiplying a delta function at the determined time delay and the determined level difference.
  • The first signal processor is further configured to filter the received external microphone signal by the first spatial synthesis filter to produce a first synthesized microphone signal comprising first spatial auditory cues.
  • As discussed above, the hearing aid system may be configured for binaural use and processing of the external microphone signal such that the first hearing instrument is arranged at, or in, the user's left or right ear and the second hearing instrument placed at, or in, the user's other ear. Hence, the hearing aid system may comprise the second hearing instrument which comprises:
    • a second hearing aid housing or shell configured for placement at, or in, the user's other ear,
    • a second wireless receiver configured for receiving the external microphone signal via a second wireless communication link,
    • a second hearing aid microphone configured for generating a second hearing aid microphone signal in response to sound simultaneously with the receipt of the external microphone signal,
    • a second signal processor configured to determine response characteristics of a second spatial synthesis filter by correlating the external microphone signal and the second hearing aid microphone signal, wherein the second signal processor is further configured to filter the received external microphone signal by the second spatial synthesis filter to produce a second synthesized microphone signal comprising second spatial auditory cues.
  • Signal processing functions of each of the first and/or second signal processors may be executed or implemented by dedicated digital hardware or by one or more computer programs, program routines and threads of execution running on a software programmable signal processor or processors. Each of the computer programs, routines and threads of execution may comprise a plurality of executable program instructions. Alternatively, the signal processing functions may be performed by a combination of dedicated digital hardware and computer programs, routines and threads of execution running on the software programmable signal processor or processors. Each of the above-mentioned methodologies of correlating the external microphone signal and the second hearing aid microphone signal may be carried out by a computer program, program routine or thread of execution executable on a suitable software programmable microprocessor such as a programmable Digital Signal Processor. The microprocessor and/or the dedicated digital hardware may be integrated on an ASIC or implemented on an FPGA device. Likewise, the filtering of the received external microphone signal by the first spatial synthesis filter may be carried out by a computer program, program routine or thread of execution executable on a suitable software programmable microprocessor such as a programmable Digital Signal Processor. The software programmable microprocessor and/or the dedicated digital hardware may be integrated on an ASIC or implemented on an FPGA device.
  • Each of the first and second wireless communication links may be based on RF signal transmission of the external microphone signal to the first and/or second hearing instruments, e.g. analog FM technology or various types of digital transmission technology for example complying with a Bluetooth standard, such as Bluetooth LE or other standardized RF communication protocols. In the alternative, each of the first and second wireless communication links may be based on optical signal transmission. The same type of wireless communication technology is preferably used for the first and second wireless communication links to minimize system complexity.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the invention will be described in more detail in connection with the appended drawings in which:
    • FIG. 1 is a schematic block diagram of a hearing aid system comprising left and right ear hearing instruments communicating with an external microphone arrangement via wireless communication links in accordance with an embodiment of the present invention; and
    • FIG. 2 is a schematic block diagram illustrating an adaptive filter solution for real-time adaptive computation of filter coefficients of a first spatial synthesis filter of the left or right ear hearing instrument. This solution is outside the scope of the claims.
  • DESCRIPTION OF PREFERRED EMBODIMENTS
  • FIG. 1 is a schematic illustration of a hearing aid system in accordance with an embodiment of the present invention operating in an adverse sound or listening environment. The hearing aid system 101 comprises an external microphone arrangement mounted within a portable housing structure of a portable external microphone unit 105. The external microphone arrangement may comprise one or more separate omnidirectional or directional microphones. The portable housing structure 105 may comprise a rechargeable battery package supplying power to the one or more separate microphones and further supplying power to various electronic circuits such as digital control logic, user readable screens or displays and a wireless transceiver (not shown). The external microphone arrangement may comprise a spouse microphone, clip microphone, a conference microphone or form part of a smartphone or mobile phone.
  • The hearing aid system 101 comprises a first hearing instrument or aid 100 mounted in, or at, a hearing impaired individual's right or left ear and a second hearing instrument or aid 200 mounted in, or at, the hearing impaired individual's other ear. Hence, the hearing impaired individual 102 is binaurally fitted with hearing aids in the present exemplary embodiment of the invention such that a hearing loss compensated output signal is provided to both the left and right ear. The skilled person will understand that different types of hearing instruments such as so-called BTE types, ITE types, CIC types etc., may be utilized depending on factors such as the size of the hearing impaired individual's hearing loss, personal preferences and handling capabilities.
  • Each of the first and second hearing instruments 200, 100 comprises a wireless receiver or transceiver (not shown) allowing each hearing instrument to receive a wireless signal or data, in particular the previously discussed external microphone signal transmitted from the portable external microphone unit 105. The external microphone signal may be modulated and transmitted as an analog signal or as a digitally encoded signal via the wireless communication link 104. The wireless communication link may be based on RF signal transmission, e.g. FM technology or digital transmission technology for example complying with a Bluetooth standard or other standardized RF communication protocols. In the alternative, the wireless communication link 104 may be based on optical signal transmission.
  • The hearing impaired individual 102 wishes to receive sound from the target sound source 103, which is a particular speaker placed some distance away from the hearing impaired individual 102 outside the latter's median plane. As schematically illustrated by an interfering noise sound vL,R(t), the sound environment surrounding the hearing impaired individual 102 is adverse with a low SNR at the respective microphones of the first and second hearing instruments 200, 100. The interfering noise sound vL,R(t) may in practice comprise many different types of common noise mechanisms or sources such as competing speakers, motorized vehicles, wind noise, babble noise, music etc. The interfering noise sound vL,R(t) may, in addition to direct noise sound components from the various noise sources, also comprise various boundary reflections from room boundaries such as walls, floors and ceiling of a room 110 where the hearing impaired individual 102 is located. Hence, the noise sources will often produce noise sound components from multiple spatial directions at the hearing impaired individual's ears, making the sound field in the room 110 very challenging for understanding speech of the target speaker 103 without assistance from the external microphone arrangement 105.
  • A first linear transfer function between the target speaker 103 and the first hearing instrument 200 is schematically illustrated by a dotted line hL(t) and a second linear transfer function between the target speaker 103 and the second hearing instrument 100 is likewise schematically illustrated by a second dotted line hR(t). The first and second transfer functions hL(t) and hR(t) may be represented by their respective impulse responses or by their respective frequency responses due to the Fourier transform equivalence. The first and second linear transfer functions describe the sound propagation from the target speaker or talker 103 to the left and right microphones, respectively, of the first/left and second/right hearing instruments.
  • The acoustic or sound signal picked-up by the microphone 207 of the first hearing instrument 200 produces a first hearing aid microphone signal denoted sL(t) and the acoustic or sound signal picked-up by the microphone of the right ear hearing instrument 100 produces a second hearing aid microphone signal denoted sR(t) in the following. The noise sound signal at the microphone of the right hearing instrument 100 is denoted vR(t) and the noise sound signal at the microphone 207 of the left hearing instrument is denoted vL(t) in the following. The target speech signal produced by the target speaker 103 is denoted x(t) in the following. Furthermore, based on the assumption that each of the hearing aid microphones picks up a noisy version of the target speech signal x(t) which has undergone a linear transformation, we can write:
    $s_L(t) = h_L(t) \otimes x(t) + v_L(t)$
    $s_R(t) = h_R(t) \otimes x(t) + v_R(t)$
    where $\otimes$ is the convolution operator.
  • At the same time as the noise-polluted versions of the target speech signal are received at the left and right hearing instrument microphones, the target speech signal x(t) is recorded or received at the external microphone arrangement:
    $s_E(t) = x(t) + v_E(t)$
    where $v_E(t)$ is the noise sound signal at the external microphone.
  • Furthermore, it is assumed that the target speech component of the external microphone signal picked-up by the external microphone arrangement is dominant such that the power of the target speech signal is much larger than the power of the noise sound signal, i.e.:
    $E\{x^2(t)\} \gg E\{v_E^2(t)\}$
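  • By way of illustration only, the signal model above may be sketched numerically in Python/NumPy as follows; the example impulse responses, noise levels, sampling frequency and variable names are illustrative assumptions and are not taken from the patent.

```python
# A minimal synthetic sketch of the signal model above (all values illustrative).
import numpy as np

rng = np.random.default_rng(0)
fs = 16_000                                         # assumed sampling frequency in Hz
x = rng.standard_normal(fs)                         # stand-in for the target speech x(t)

# Toy acoustic impulse responses from the talker to the left/right hearing aid microphones
h_l = np.zeros(64); h_l[20] = 0.8                   # h_L(t): delayed, attenuated impulse
h_r = np.zeros(64); h_r[28] = 0.5                   # h_R(t)

v_l = 0.5 * rng.standard_normal(fs + len(h_l) - 1)  # interfering noise v_L(t) at the left ear
v_r = 0.5 * rng.standard_normal(fs + len(h_r) - 1)  # interfering noise v_R(t) at the right ear

s_l = np.convolve(h_l, x) + v_l                     # s_L(t) = h_L(t) ⊗ x(t) + v_L(t)
s_r = np.convolve(h_r, x) + v_r                     # s_R(t) = h_R(t) ⊗ x(t) + v_R(t)

# External microphone close to the talker: the target dominates, E{x^2} >> E{v_E^2}
s_e = x + 0.05 * rng.standard_normal(fs)            # s_E(t) = x(t) + v_E(t)
```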
  • The present embodiment of the methodology of deriving and superimposing spatial auditory cues onto the external microphone signal picked-up by the external microphone arrangement of the portable external microphone unit 105, 205 in each of the left and right ear hearing instruments preferably comprises steps of:
    1) Auditory spatial cue estimation;
    2) Auditory spatial cue synthesis; and, optionally
    3) Signal mixing.
  • According to the invention, the auditory spatial cue determination or estimation comprises a time delay estimator and a signal level estimator. The first step comprises cross-correlating the external microphone signal sE(t) with each of the first or the second hearing aid microphone signals according to:
    $r_L(t) = s_E(t) \star s_L(t)$
    $r_R(t) = s_E(t) \star s_R(t)$
    where $\star$ denotes the cross-correlation operator. The time delay for the left and right microphone signals sL(t), sR(t) is determined by:
    $\tau_L = \arg\max_t \, r_L(t)$
    $\tau_R = \arg\max_t \, r_R(t)$
    and the level differences AL, AR between the external microphone signal and each of the left and right microphone signals sL(t), sR(t) are determined according to:
    $A_L = \dfrac{E\{|r_L(t)|^2\}}{E\{|s_E(t) \star s_E(t)|^2\}}$
    $A_R = \dfrac{E\{|r_R(t)|^2\}}{E\{|s_E(t) \star s_E(t)|^2\}}$
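  • A minimal sketch of this estimation step for one ear, assuming discrete-time NumPy arrays (e.g. s_e and s_l from the signal model sketch above); the function name and the use of numpy.correlate are implementation choices, not prescribed by the patent.

```python
# Estimate the time delay and level difference between the external microphone
# signal and one hearing aid microphone signal by cross-correlation.
import numpy as np

def estimate_delay_and_level(s_e, s_hi):
    """Return (tau in samples, level difference A) between s_E and one hearing aid signal."""
    # Cross-correlation r(t) = s_E ⋆ s_HI; its peak location gives the time delay
    r = np.correlate(s_hi, s_e, mode="full")
    lags = np.arange(-(len(s_e) - 1), len(s_hi))
    tau = int(lags[np.argmax(r)])                     # tau = arg max_t r(t)

    # Level difference A = E{|r(t)|^2} / E{|s_E(t) ⋆ s_E(t)|^2}
    r_ee = np.correlate(s_e, s_e, mode="full")
    a = np.mean(np.abs(r) ** 2) / np.mean(np.abs(r_ee) ** 2)
    return tau, a

# e.g. tau_l, a_l = estimate_delay_and_level(s_e, s_l)
#      tau_r, a_r = estimate_delay_and_level(s_e, s_r)
```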
  • In the second step, the impulse response of a left spatial synthesis filter for application in the left hearing instrument and the impulse response of a right spatial synthesis filter for application in the right hearing instrument are derived as:
    $g_L(t) = A_L \, \delta(t - \tau_L)$
    $g_R(t) = A_R \, \delta(t - \tau_R)$
  • In the left hearing instrument, the computed impulse response gL(t) of the left spatial synthesis filter is used to produce a first synthesized microphone signal yL(t) with superimposed or added first spatial auditory cues according to:
    $y_L(t) = g_L(t) \otimes s_E(t)$ (9a)
  • In the right hearing instrument, the computed impulse response gR(t) of the right spatial synthesis filter is used in a corresponding manner to produce a second synthesized microphone signal yR(t) with superimposed or added second spatial auditory cues according to:
    $y_R(t) = g_R(t) \otimes s_E(t)$ (9b)
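  • A compact sketch of this synthesis step, reusing the estimates from the previous sketch; the restriction to a non-negative integer sample delay and the helper name are illustrative assumptions.

```python
# Build the spatial synthesis filter g(t) = A·δ(t − τ) as a short FIR filter and
# apply it to the received external microphone signal by convolution.
import numpy as np

def synthesize(s_e, tau, a):
    """Produce a synthesized microphone signal y(t) = g(t) ⊗ s_E(t)."""
    tau = max(int(tau), 0)            # clamp to a causal, integer sample delay
    g = np.zeros(tau + 1)
    g[tau] = a                        # scaled delta function at the estimated delay
    return np.convolve(g, s_e)        # y(t) = g(t) ⊗ s_E(t)

# e.g. y_l = synthesize(s_e, tau_l, a_l)   # left instrument
#      y_r = synthesize(s_e, tau_r, a_r)   # right instrument
```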
  • Consequently, the first synthesized microphone signal yL(t) is produced by convolving the impulse response gL(t) of the left spatial synthesis filter with the external microphone signal sE(t) received by the left hearing instrument via the wireless communication link 104. The above-mentioned computations of the functions rL(t), AL, gL(t) and yL(t) are performed by a first signal processor of the left hearing instrument. The first signal processor may comprise a microprocessor and/or dedicated digital computational hardware, for example comprising a hard-wired Digital Signal Processor (DSP). In the alternative, the first signal processor may comprise a software programmable DSP or a combination of dedicated digital computational hardware and the software programmable DSP. The software programmable DSP may be configured to perform the above-mentioned computations by suitable program routines or threads each comprising a set of executable program instructions stored in a non-volatile memory device of the hearing instrument. The second synthesized microphone signal yR(t) is produced in a corresponding manner by convolving the impulse response gR(t) of the right spatial synthesis filter with the external microphone signal sE(t) received by the right hearing instrument via the wireless communication link 104, with the signal processing proceeding in a manner corresponding to that in the left hearing instrument.
  • The skilled person will understand that each of the above-mentioned microphone signals and impulse responses in the left and right hearing instruments preferably are represented in the digital domain such that the computational operations to produce the functions rL (t), AL , gL (t) and yL (t) are executed numerically on digital signals by the previously discussed types of Digital Signal Processors. Each of the first synthesized microphone signal yL (t), the first hearing aid microphone signal sL(t) and the external microphone signal sE (t) may be a digital signal for example sampled at a sampling frequency between 16 kHz and 48 kHz.
  • The first synthesized microphone signal is preferably further processed by the first hearing aid signal processor to adapt characteristics of a hearing loss compensated output signal to the individual hearing loss profile of the hearing impaired user's left ear. The skilled person will appreciate that this further processing may include numerous types of ordinary and well-known signal processing functions such as multi-band dynamic range compression, noise reduction etc. After being subjected to this further processing, the first synthesized microphone signal is reproduced to the hearing impaired person's left ear as the hearing loss compensated output signal via the first output transducer. The first (and also second) output transducer may comprise a miniature speaker, receiver or possibly an implantable electrode array for cochlear implant hearing aids. The second synthesized microphone signal may be processed in a corresponding manner by the signal processor of the second hearing instrument to produce a second hearing loss compensated output signal and reproduce the same to the hearing impaired person's right ear.
  • Consequently, the external microphone signal picked-up by the remote microphone arrangement housed in the portable external microphone unit 105, 205 is presented to the hearing impaired person's left and right ears with appropriate spatial auditory cues corresponding to the spatial cues that would have existed in the hearing aid microphone signals if the target speech signal produced by the target speaker 103 at his or her actual position in the listening room had been conveyed acoustically to the left and right ear microphones of the hearing instruments. This feature solves the previously discussed problems associated with the artificial and internalized perception of the target sound source inside the hearing aid user's head in connection with reproduction of remotely picked-up microphone signals in prior art hearing aid systems.
  • According to one embodiment of the present methodology, the first hearing loss compensated output signal does not exclusively include the first synthesized microphone signal, but also comprises a component of the first hearing aid microphone signal recorded by the first hearing aid microphone 207 or microphones, such that a mixture of these different microphone signals is presented to the left ear of the hearing impaired individual. According to the latter embodiment, the step of processing the first synthesized microphone signal yL(t) comprises: mixing the first synthesized microphone signal yL(t) and the first hearing aid microphone signal sL(t) in a first ratio to produce the left hearing loss compensated output signal zL(t).
  • The mixing of the first synthesized microphone signal yL(t) and the first hearing aid microphone signal sL(t) may for example be implemented according to:
    $z_L(t) = b \, s_L(t) + (1 - b) \, y_L(t)$ (10)
    where b is a number between 0 and 1 which controls the mixing ratio.
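  • A minimal sketch of the mixing of equation (10), assuming the two signals are time-aligned NumPy arrays; the length alignment is an implementation convenience, not part of the patent.

```python
import numpy as np

def mix(s_l, y_l, b):
    """z_L(t) = b·s_L(t) + (1 − b)·y_L(t), with 0 <= b <= 1."""
    n = min(len(s_l), len(y_l))            # align the lengths of the two signals
    return b * np.asarray(s_l[:n]) + (1.0 - b) * np.asarray(y_l[:n])

# e.g. z_l = mix(s_l, y_l, b=0.5)
```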
  • The mixing feature may be exploited to adjust the relative level of the "raw" or unprocessed microphone signal and the external microphone signal such that the SNR of the left hearing loss compensated output signal can be adjusted. The inclusion of a certain component of the first hearing aid microphone signal sL(t) in the left hearing loss compensated output signal zL(t) is advantageous in many circumstances. The presence of a component or portion of the first hearing aid microphone signal sL(t) supplies the hearing impaired person with a beneficial amount of "environmental awareness" where other sound sources of potential interest than the target speaker become audible. The other sound sources of interest could for example comprise another person sitting next to the hearing impaired person or a portable communication device.
  • In a further advantageous embodiment, the ratio between the first synthesized microphone signal and the first hearing aid microphone signal sL(t) is varied in dependence on a signal-to-noise ratio of the first hearing aid microphone signal sL(t). The signal-to-noise ratio of the first hearing aid microphone signal sL(t) may for example be estimated based on certain target sound data derived from the external microphone signal sE(t). The latter microphone signal is assumed to be mainly or entirely dominated by the target sound source, e.g. the target speech discussed above, and may hence be used to detect the level of target speech present in the first hearing aid microphone signal sL(t). The mixing feature according to equation (10) above may be implemented such that b is close to 1 when the signal-to-noise ratio of the first hearing aid microphone signal sL(t) is high, and b approaches 0 when the signal-to-noise ratio of the first hearing aid microphone signal sL(t) is low. The value of b may for example be larger than 0.9 when the signal-to-noise ratio of the first hearing aid microphone signal sL(t) is larger than 10 dB. In the opposite sound situation, the value of b may for example be smaller than 0.1 when the signal-to-noise ratio of the first hearing aid microphone signal sL(t) is smaller than 3 dB or 0 dB.
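  • One possible SNR-to-b mapping consistent with the example thresholds above is sketched below; the linear interpolation between the 0 dB and 10 dB points is an assumption for illustration only and is not prescribed by the text.

```python
import numpy as np

def mixing_coefficient(snr_db, low_db=0.0, high_db=10.0):
    """Illustrative SNR-dependent mixing coefficient: b ≈ 0 at low SNR, b ≈ 1 at high SNR."""
    # Linear ramp between the example thresholds; the exact shape is an assumption.
    return float(np.clip((snr_db - low_db) / (high_db - low_db), 0.0, 1.0))

# e.g. mixing_coefficient(12.0) -> 1.0 (b > 0.9 above 10 dB)
#      mixing_coefficient(-2.0) -> 0.0 (b < 0.1 below 0 dB)
```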
  • According to an example outside the scope of the claims, the estimation or computation of the auditory spatial cues comprises a direct or on-line estimation of the impulse responses of the left and/or right spatial synthesis filter gL(t), gR(t) that describe or model the linear transfer functions between the target sound source and the left ear and right ear hearing aid microphones, respectively.
  • According to this on-line estimation procedure, the computation or estimation of the impulse response of the first or left ear spatial synthesis filter is preferably accomplished by solving the following optimization problem or equation:
    $g_L(t) = \arg\min_{g(t)} E\left\{ \left| g(t) \otimes s_E(t) - s_L(t) \right|^2 \right\}$ (11)
  • The skilled person will understand that the external microphone signal sE(t) can reasonably be assumed to be dominated by the target sound signal (because of the proximity between the external microphone arrangement and the target sound source). This assumption implies that the only way to minimize the error of equation (11) (and correspondingly the error of equation (12) below) is to completely remove the target sound signal or component from the first hearing aid microphone signal sL(t). This is accomplished by choosing the response of the filter g(t) to match the first linear transfer function hL(t) between the target sound source or speaker 103 and the first hearing instrument 200. This reasoning is based on the assumption that the target sound signal is uncorrelated with the interfering noise sound vL,R(t). Experience shows that this generally is a valid assumption in numerous real-life sound environments.
  • Hence, the computation or estimation of the impulse response of the second or right ear spatial synthesis filter is likewise preferably accomplished by solving the following optimization problem or equation:
    $g_R(t) = \arg\min_{g(t)} E\left\{ \left| g(t) \otimes s_E(t) - s_R(t) \right|^2 \right\}$ (12)
  • Each of these computations of gL(t) and gR(t) can be accomplished in real time by applying an efficient adaptive algorithm such as Least Mean Squares (LMS) or Recursive Least Squares (RLS). This solution is illustrated by FIG. 2, which shows a simplified schematic block diagram of how the above-mentioned optimization equation (11) can be solved in real-time in the signal processor of the schematically illustrated left hearing instrument 200 using an adaptive filter 209. A corresponding solution may of course be applied in a corresponding right ear hearing instrument (not shown).
  • The external microphone signal sE(t) is received by the previously discussed wireless receiver (not shown), decoded and possibly converted to a digital format if received in analog format. The digital external microphone signal sE(t) is applied to an input of the adaptive filter 209 and filtered by the current transfer function/impulse response of the adaptive filter 209 to produce a first synthesized microphone signal yL(t) at an output of the adaptive filter. The first hearing aid microphone signal sL(t) is substantially simultaneously applied to a first input of a subtractor 204 or subtraction function 204. The first, or left ear, synthesized microphone signal yL(t) is applied to a second input of the subtractor 204 such that the latter produces an error signal ε on signal line 206 which represents a difference between yL(t) and sL(t). The error signal ε is applied to an adaptive control input of the adaptive filter 209 via the signal line 206 in a conventional manner such that the filter coefficients of the adaptive filter are adjusted to minimize the error signal ε in accordance with the particular adaptive algorithm implemented by the adaptive filter 209. Hence, the first, or left ear, spatial synthesis filter is formed by the adaptive filter 209, which performs a real-time adaptive computation of the filter coefficients gL(t).
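  • A minimal sketch of this adaptive solution, using a normalized LMS (NLMS) update as one possible choice of adaptive algorithm; the filter length, step size and function name are illustrative assumptions, and the signals are assumed to be time-aligned NumPy arrays.

```python
# NLMS sketch of the adaptive filter 209: the error between the hearing aid
# microphone signal s_L and the synthesized signal y_L drives the coefficient update.
import numpy as np

def adapt_spatial_filter(s_e, s_l, n_taps=64, mu=0.1, eps=1e-8):
    """Adaptively estimate g_L(t) and the synthesized signal y_L(t) = g_L ⊗ s_E."""
    g = np.zeros(n_taps)                    # adaptive filter coefficients g_L(t)
    y = np.zeros(len(s_e))                  # synthesized microphone signal y_L(t)
    for n in range(n_taps, min(len(s_e), len(s_l))):
        x_vec = s_e[n - n_taps:n][::-1]     # most recent external microphone samples
        y[n] = g @ x_vec                    # filter output at time n
        err = s_l[n] - y[n]                 # error signal ε on signal line 206
        g += mu * err * x_vec / (x_vec @ x_vec + eps)   # NLMS coefficient update
    return g, y

# e.g. g_l, y_l = adapt_spatial_filter(s_e, s_l)
```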
  • Overall, the digital external microphone signal sE(t) is filtered by the adaptive transfer function of the adaptive filter 209, which in turn represents the left ear spatial synthesis filter, to produce the left ear synthesized microphone signal yL(t) comprising the first spatial auditory cues. The filtering of the digital external microphone signal sE(t) by the adaptive transfer function of the adaptive filter 209 may be carried out as a discrete time convolution between the adaptive filter coefficients gL(t) and samples of the digital external microphone signal sE(t), i.e. directly carrying out the convolution operation specified by equation (9a) above:
    $y_L(t) = g_L(t) \otimes s_E(t)$
  • The left hearing instrument 200 additionally comprises the previously discussed miniature receiver or loudspeaker 211 which converts the hearing loss compensated output signal produced by the signal processor 208 to audible sound for transmission to the hearing impaired person's ear drum. The signal processor 208 may comprise a suitable output amplifier, e.g. a class D amplifier, for driving the miniature receiver or loudspeaker 211.
  • The skilled person will understand that the features and functions of a right ear hearing instrument may be identical to the above-discussed features and functions of the left hearing instrument 200 to produce a binaural signal to the hearing aid user.
  • The optional mixing between the first synthesized microphone signal yL(t) and the first hearing aid microphone signal sL(t) in a first ratio, and the similar and optional mixing between the second synthesized microphone signal yR(t) and the second hearing aid microphone signal sR(t) in a second ratio, to produce the left and right hearing loss compensated output signals zL,R(t), respectively, is preferably carried out as discussed above, i.e. according to:
    $z_{L,R}(t) = b \, s_{L,R}(t) + (1 - b) \, y_{L,R}(t)$
  • The mixing coefficient b may either be a fixed value or may be user operated. The mixing coefficient b may alternatively be controlled by a separate algorithm which monitors the SNR by measuring the contribution of the target signal component, as captured by the external microphone, present in the hearing aid microphone signals and comparing the level of the target signal component to the noise component. When the SNR is high, b would go to 1, and when the SNR is low, b would approach 0.

Claims (7)

  1. A method of superimposing spatial auditory cues to an externally picked-up sound signal in a hearing instrument, comprising steps of:
    a) generating an external microphone signal by an external microphone arrangement (105, 205) placed in a sound field in response to impinging sound,
    b) transmitting the external microphone signal to a wireless receiver of a first hearing instrument (200) via a first wireless communication link (104),
    c) generating a first hearing aid microphone signal (SL(t)) by a microphone arrangement (107, 207) of the first hearing instrument (200) simultaneously with receiving the external microphone signal (SE(t)), wherein the first hearing instrument (200) is placed in the sound field at, or in, a user's (102) left or right ear,
    d) determining response characteristics of a first spatial synthesis filter (209) by:
    cross-correlating the external microphone signal (SE(t)) and the first hearing aid microphone signal (SL(t)) to determine a time delay between the signals,
    determining a level difference between the external microphone signal (SE(t)) and the first hearing aid microphone signal (SL(t)) based on the cross-correlation of the external microphone signal (SE(t)) and the first hearing aid microphone signal (SL(t)),
    determining the response characteristics of the first spatial synthesis filter (209) by multiplying a delta function at the determined time delay and the determined level difference,
    e) filtering, in the first hearing instrument (200), the received external microphone signal (SE(t)) by the first spatial synthesis filter (209) to produce a first synthesized microphone signal (yL(t)) comprising first spatial auditory cues.
  2. A method of superimposing spatial auditory cues to an externally picked-up sound signal according to claim 1, comprising further steps of:
    f) processing the first synthesized microphone signal (yL(t)) by a first hearing aid signal processor (208) according to individual hearing loss data of the user (102) to produce a first hearing loss compensated output signal of the first hearing instrument (200),
    g) reproducing the first hearing loss compensated output signal to the user's left or right ear through a first output transducer (211).
  3. A method of superimposing spatial auditory cues to an externally picked-up sound signal according to claim 1 or 2, comprising further steps of:
    b1) transmitting the external microphone signal (SE(t)) to a wireless receiver of a second hearing instrument (100) via a second wireless communication link,
    c1) generating a second hearing aid microphone signal by a microphone arrangement of the second hearing instrument simultaneously with receiving the external microphone signal, wherein the second hearing instrument is placed in the sound field at, or in, a user's other ear,
    d1) determining response characteristics of a second spatial synthesis filter by correlating the external microphone signal and the second hearing aid microphone signal,
    e1) filtering, in the second hearing instrument (100), the received external microphone signal with the second spatial synthesis filter to produce a second synthesized microphone signal comprising second spatial auditory cues; and
    optionally executing further steps of:
    f1) processing the second synthesized microphone signal by a second hearing aid signal processor of the second hearing instrument according to the individual hearing loss data of the user to produce a second hearing loss compensated output signal of the second hearing instrument,
    g1) reproducing the second hearing loss compensated output signal to the user's other ear through a second output transducer.
  4. A method of superimposing spatial auditory cues to an externally picked-up sound signal according to claim 1, comprising:
    cross-correlating the external microphone signal sE(t) and the first hearing aid microphone signal sL(t) according to: $r_L(t) = s_E(t) \star s_L(t)$;
    determining a time delay τL between the external microphone signal and the first hearing aid microphone signal from the cross-correlation: $\tau_L = \arg\max_t \, r_L(t)$;
    determining the level difference between the external microphone signal sE(t) and the first hearing aid microphone signal sL(t) according to: $A_L = \dfrac{E\{|r_L(t)|^2\}}{E\{|s_E(t) \star s_E(t)|^2\}}$;
    determining an impulse response gL(t) of the first spatial synthesis filter according to: $g_L(t) = A_L \, \delta(t - \tau_L)$.
  5. A method of superimposing spatial auditory cues to an externally picked-up sound signal according to claim 4, comprising a further step of:
    - convolving the external microphone signal (SE(t)) with the impulse response of the first spatial synthesis filter to generate the first synthesized microphone signal.
  6. A hearing aid system (101) comprising a first hearing instrument (100) and a portable external microphone unit (105, 205), wherein the portable external microphone unit comprises:
    a microphone arrangement for placement in a sound field and generation of an external microphone signal (SE(t)) in response to impinging sound,
    a first wireless transmitter configured to transmit the external microphone signal (SE(t)) via a first wireless communication link (104);
    the first hearing instrument (100) comprising:
    a hearing aid housing or shell configured for placement at, or in, a user's (102) left or right ear,
    a first wireless receiver configured for receiving the external microphone signal (SE(t)) via the first wireless communication link (104),
    a first hearing aid microphone (107, 207) configured for generating a first hearing aid microphone signal (SL(t)) in response to sound simultaneously with the receipt of the external microphone signal (SE(t)),
    a first signal processor (208) configured to determine response characteristics of a first spatial synthesis filter (209) by:
    cross-correlating the external microphone signal (SE(t)) and the first hearing aid microphone signal (SL(t)) to determine a time delay between the signals,
    determining a level difference between the external microphone signal (SE(t)) and the first hearing aid microphone signal (SL(t)) based on the cross-correlation of the external microphone signal (SE(t)) and the first hearing aid microphone signal (SL(t)),
    determining the response characteristics of the first spatial synthesis filter (209) by multiplying a delta function at the determined time delay and the determined level difference, wherein the first signal processor (208) is further configured to filter the received external microphone signal (SE(t)) by the first spatial synthesis filter (209) to produce a first synthesized microphone signal (yL(t)) comprising first spatial auditory cues.
  7. A hearing aid system according to claim 6, comprising a second hearing instrument (100), wherein said second hearing instrument (100) comprises:
    a second hearing aid housing or shell configured for placement at, or in, the user's other ear,
    a second wireless receiver configured for receiving the external microphone signal via a second wireless communication link,
    a second hearing aid microphone configured for generating a second hearing aid microphone signal in response to sound simultaneously with the receipt of the external microphone signal,
    a second signal processor configured to determine response characteristics of a second spatial synthesis filter by correlating the external microphone signal and the second hearing aid microphone signal, wherein the second signal processor is further configured to filter the received external microphone signal by the second spatial synthesis filter to produce a second synthesized microphone signal comprising second spatial auditory cues.
EP14200593.3A 2014-12-30 2014-12-30 A method of superimposing spatial auditory cues on externally picked-up microphone signals Active EP3041270B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP14200593.3A EP3041270B1 (en) 2014-12-30 2014-12-30 A method of superimposing spatial auditory cues on externally picked-up microphone signals
DK14200593.3T DK3041270T3 (en) 2014-12-30 2014-12-30 PROCEDURE FOR SUPERVISION OF SPACIOUS AUDITIVE MARKINGS ON EXTERNALLY RECEIVED MICROPHONE SIGNALS
US14/589,587 US9699574B2 (en) 2014-12-30 2015-01-05 Method of superimposing spatial auditory cues on externally picked-up microphone signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP14200593.3A EP3041270B1 (en) 2014-12-30 2014-12-30 A method of superimposing spatial auditory cues on externally picked-up microphone signals

Publications (2)

Publication Number Publication Date
EP3041270A1 EP3041270A1 (en) 2016-07-06
EP3041270B1 true EP3041270B1 (en) 2019-05-15

Family

ID=52282567

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14200593.3A Active EP3041270B1 (en) 2014-12-30 2014-12-30 A method of superimposing spatial auditory cues on externally picked-up microphone signals

Country Status (2)

Country Link
EP (1) EP3041270B1 (en)
DK (1) DK3041270T3 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3468228B1 (en) * 2017-10-05 2021-08-11 GN Hearing A/S Binaural hearing system with localization of sound sources
CN112911480A (en) * 2021-01-22 2021-06-04 成都市舒听医疗器械有限责任公司 Hearing aid sound amplification method and hearing aid

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030059076A1 (en) * 2001-09-24 2003-03-27 Raimund Martin Hearing aid device with automatic switching to hearing coil mode
US20080101636A1 (en) * 2006-10-02 2008-05-01 Siemens Audiologische Technik Gmbh Hearing apparatus with controlled input channels and corresponding method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9544698B2 (en) * 2009-05-18 2017-01-10 Oticon A/S Signal enhancement using wireless streaming
EP2584794A1 (en) * 2011-10-17 2013-04-24 Oticon A/S A listening system adapted for real-time communication providing spatial information in an audio stream

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030059076A1 (en) * 2001-09-24 2003-03-27 Raimund Martin Hearing aid device with automatic switching to hearing coil mode
US20080101636A1 (en) * 2006-10-02 2008-05-01 Siemens Audiologische Technik Gmbh Hearing apparatus with controlled input channels and corresponding method

Also Published As

Publication number Publication date
EP3041270A1 (en) 2016-07-06
DK3041270T3 (en) 2019-08-05

Similar Documents

Publication Publication Date Title
US10431239B2 (en) Hearing system
US9699574B2 (en) Method of superimposing spatial auditory cues on externally picked-up microphone signals
US10225669B2 (en) Hearing system comprising a binaural speech intelligibility predictor
EP2899996B1 (en) Signal enhancement using wireless streaming
EP3373603B1 (en) A hearing device comprising a wireless receiver of sound
US8542855B2 (en) System for reducing acoustic feedback in hearing aids using inter-aural signal transmission, method and use
US10349191B2 (en) Binaural gearing system and method
US20150256956A1 (en) Multi-microphone method for estimation of target and noise spectral variances for speech degraded by reverberation and optionally additive noise
US9432778B2 (en) Hearing aid with improved localization of a monaural signal source
US10070231B2 (en) Hearing device with input transducer and wireless receiver
US20170295436A1 (en) Hearing aid comprising a directional microphone system
US11330375B2 (en) Method of adaptive mixing of uncorrelated or correlated noisy signals, and a hearing device
CN105744455B (en) Method for superimposing spatial auditory cues on externally picked-up microphone signals
EP2928213B1 (en) A hearing aid with improved localization of a monaural signal source
EP2916320A1 (en) Multi-microphone method for estimation of target and noise spectral variances
EP3041270B1 (en) A method of superimposing spatial auditory cues on externally picked-up microphone signals
JP2022528579A (en) Bilateral hearing aid system with temporally uncorrelated beamformer
US20230080855A1 (en) Method for operating a hearing device, and hearing device
EP4325892A1 (en) Method of audio signal processing, hearing system and hearing device
EP4094685B1 (en) Spectro-temporal modulation detection test unit
US20230143325A1 (en) Hearing device or system comprising a noise control system
US11115761B2 (en) Methods and systems for hearing device signal enhancement using a remote microphone
US20230197094A1 (en) Electronic device and method for obtaining a user's speech in a first sound signal
CN115278494A (en) Hearing device comprising an in-ear input transducer

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20170105

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20170926

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: GN HEARING A/S

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20181211

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602014046737

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20190801

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20190515

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190815

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190915

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190815

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190816

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1134807

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602014046737

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

26N No opposition filed

Effective date: 20200218

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20191231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191230

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191230

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190915

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20141230

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20211221

Year of fee payment: 8

Ref country code: DK

Payment date: 20211217

Year of fee payment: 8

Ref country code: FR

Payment date: 20211215

Year of fee payment: 8

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: CH

Payment date: 20211217

Year of fee payment: 8

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190515

REG Reference to a national code

Ref country code: DK

Ref legal event code: EBP

Effective date: 20221231

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20221230

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221231

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221230

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221231

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20231218

Year of fee payment: 10