EP2887695A1 - Hearing device with selectable perceived spatial positioning of acoustic sources


Info

Publication number
EP2887695A1
EP2887695A1 (application EP13198545.9A)
Authority
EP
European Patent Office
Prior art keywords
user
binaural
signal
hearing device
hearing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP13198545.9A
Other languages
German (de)
English (en)
Other versions
EP2887695B1 (fr)
Inventor
Brian Dam Pedersen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GN Hearing AS
Original Assignee
GN Resound AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GN Resound AS filed Critical GN Resound AS
Priority to EP13198545.9A (EP2887695B1)
Priority to DK13198545.9T (DK2887695T3)
Priority to US14/138,021 (US9307331B2)
Publication of EP2887695A1
Application granted
Publication of EP2887695B1
Status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S1/00: Two-channel systems
    • H04S1/002: Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S1/005: For headphones
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/552: Binaural
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40: Arrangements for obtaining a desired directivity characteristic
    • H04R25/405: Arrangements for obtaining a desired directivity characteristic by combining a plurality of transducers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S2400/00: Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/11: Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S2420/00: Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01: Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • the hearing device system has a hearing device and a control device that allows a user to select perceived directions of arrival of selected sound signals transmitted to the hearing device.
  • the hearing device may be a headset, a headphone, an earphone, an ear defender, an earmuff, etc, e.g. of the following types: Ear-Hook, In-Ear, On-Ear, Over-the-Ear, Behind-the-Neck, Helmet, Headguard, etc.
  • the hearing device may be a binaural hearing aid.
  • the hearing aids of the binaural hearing aid may be of the types: BTE, RIE, ITE, ITC, CIC, etc.
  • the control device may be a computer, such as a PC, e.g. a stationary or portable PC, or a hand-held device, such as a tablet PC, e.g. an iPad, or a smartphone, e.g. an iPhone, an Android phone or a Windows phone.
  • today's digital hearing aids typically use multi-channel compression signal processing to restore audibility of sound for a hearing-impaired individual. In this way, the patient's hearing ability is improved by making previously inaudible speech cues audible.
  • One tool available to a hearing aid user in order to increase the signal to noise ratio of speech originating from a specific speaker is to equip the speaker in question with a microphone, often referred to as a spouse microphone, that picks up speech from the speaker in question with a high signal to noise ratio due to its proximity to the speaker.
  • the spouse microphone converts the speech into a corresponding audio signal with a high signal to noise ratio and transmits the signal, preferably wirelessly, to the hearing aid for hearing loss compensation.
  • a speech signal is provided to the user with a signal to noise ratio well above the speech reception threshold (SRT) of the user in question.
  • Another way of increasing the signal to noise ratio of speech from a speaker that a hearing aid user desires to listen to is to use a telecoil to magnetically pick up audio signals generated, e.g., by telephones, FM systems (with neck loops), and induction loop systems (also called "hearing loops").
  • sound may be transmitted to hearing aids with a high signal to noise ratio well above the SRT of the hearing aid users.
  • US 8,208,642 B2 discloses a method and an apparatus for a binaural hearing aid in which sound from a single monaural signal source is presented to both ears of a user wearing the binaural hearing aid in order to obtain benefits of binaural hearing when listening to the monaural signal source.
  • the sound presented to one ear is phase shifted relative to the sound presented to the other ear, and additionally, the sound presented to one ear may be set to a different level relative to the sound presented to the other ear. In this way, lateralization and volume of the monaural signal are controlled.
  • a telephone signal may be presented to both ears in order to benefit from binaural reception of a telephone call, e.g. by relaying of the caller's voice to the ear without the telephone against it, albeit at the proper phase and level to properly lateralize the sound of the caller's voice.
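The lateralization described above, i.e. presenting a monaural signal to both ears with a relative time shift and a level difference, can be sketched as follows. The function name and default values are our illustration, not taken from the patent:

```python
def lateralize(mono, fs, itd_s=0.0005, ild_db=6.0):
    """Derive a left/right pair from a mono signal using an
    interaural time difference (a pure delay) and an interaural
    level difference (a gain). With these sign conventions, positive
    itd_s and ild_db lateralize the perceived source towards the
    left ear: the right channel is delayed and attenuated."""
    delay = int(round(itd_s * fs))       # ITD rounded to whole samples
    gain = 10.0 ** (-ild_db / 20.0)      # ILD in dB -> linear factor
    left = list(mono) + [0.0] * delay    # pad so both channels match
    right = [0.0] * delay + [s * gain for s in mono]
    return left, right
```

A real device would use fractional-delay filters instead of whole-sample delays, but the cue structure is the same.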
  • Hearing devices typically reproduce sound in such a way that the user perceives sound sources to be localized inside the head.
  • the sound is said to be internalized rather than being externalized.
  • a common complaint for hearing aid users when referring to the "hearing speech in noise problem" is that it is very hard to follow anything that is being said even though the signal to noise ratio (SNR) should be sufficient to provide the required speech intelligibility.
  • a significant contributor to this fact is that the hearing aid reproduces an internalized sound field. This adds to the cognitive load of the hearing aid user and may result in listening fatigue and ultimately in the user removing the hearing aid(s).
  • a human with normal hearing will also experience benefits of improved externalization and localization of sound sources when using a hearing device thereby enjoying reproduced sound with externalized sound sources.
  • the new method makes use of the human auditory system's capability of distinguishing sound sources located in different spatial directions or positions in the sound environment, and capability of concentrating on a selected one or more of the spatially separated sound sources.
  • a new hearing device system using the new method is also disclosed.
  • signals from different sound sources are presented to the ears of a human in such a way that the human perceives the sound sources to be positioned in different spatial positions or directions in the sound environment of the human.
  • the human's auditory system's binaural signal processing is utilized to improve the user's capability of separating signals from different sound sources and of focussing his or her listening to a desired one of the sound sources, or simultaneously listen to and understand more than one of the sound sources.
  • Humans detect and localize sound sources in three-dimensional space by means of the human binaural sound localization capability.
  • the input to the auditory system consists of two signals, namely the sound pressures at each of the eardrums, in the following termed the binaural sound signals.
  • HRTF: Head-Related Transfer Function
  • the HRTF contains all information relating to the sound transmission to the ears of the listener, including diffraction around the head, reflections from shoulders, reflections in the ear canal, etc., and therefore, the HRTF varies from individual to individual.
  • the HRTF changes with direction and distance of the sound source in relation to the ears of the listener. It is possible to measure the HRTF for any direction and distance and to simulate the HRTF, e.g. electronically, e.g. by filters. If such filters are inserted in the signal path between an audio signal source, such as a microphone, and speakers worn by a listener for emission of sound towards the respective ears of the listener, the listener will perceive the sounds generated by the speakers as originating from a sound source positioned at the distance and in the direction defined by the transfer functions of the filters simulating the HRTF in question, because of the true reproduction of the sound pressures in the ears.
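As a minimal sketch of such filtering, a mono signal can be convolved with a left/right pair of head-related impulse responses (the time-domain form of an HRTF pair). The tiny impulse responses used below are placeholders; real HRIRs would be measured on the individual or taken from manikin recordings:

```python
def convolve(signal, impulse_response):
    """Direct-form FIR convolution (output length N + M - 1)."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by filtering it with a pair of
    head-related impulse responses, one per ear."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)
```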
  • Binaural processing by the brain, when interpreting the spatially encoded information, results in several positive effects, namely improved signal source separation, improved direction of arrival (DOA) estimation, and improved depth/distance perception.
  • It is not fully understood how the human auditory system extracts information about distance and direction to a sound source, but it is known that it uses a number of cues in this determination. Among the cues are spectral cues, reverberation cues, interaural time differences (ITD), interaural phase differences (IPD) and interaural level differences (ILD).
  • the level difference is a result of diffraction and is determined by the relative position of the ears compared to the source. This cue is dominant above 2 kHz but the auditory system is equally sensitive to changes in ILD over the entire spectrum.
  • a directional transfer function is a Head-Related Transfer Function or an approximation to a Head-Related Transfer Function that adds directional cues to an input signal so that a human listening to a binaural sound signal based on the output signal of a binaural filter with the directional transfer function perceives the sound to be emitted from a sound source residing in a direction defined by the cues.
  • a new method is provided of imparting perception of a direction or position of a sound source to a human, comprising the steps of displaying at least one movable symbol indicating a position of the sound source with relation to the user on a display, moving the at least one movable symbol into a desired position for selection of a perceived direction towards the sound source, selecting a binaural filter with a directional transfer function corresponding to the selected perceived direction towards the sound source, and emitting a binaural sound signal to ears of the user based on the selected binaural filter with the directional transfer function, whereby the user perceives that the sound signal is emitted from the sound source positioned in the selected perceived direction.
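One way to realize the filter-selection step is to keep directional transfer functions for a discrete set of azimuths and pick the one nearest the direction chosen on the display. The discrete filter bank is our assumption for this sketch; the patent only requires selecting a binaural filter corresponding to the chosen direction:

```python
def nearest_direction(selected_az, available_az):
    """From the azimuths (degrees) for which directional transfer
    functions are available, pick the one closest to the direction
    selected by moving the symbol on the display. Angular distance
    wraps at +/-180 degrees."""
    def angdist(a):
        return abs(((selected_az - a + 180.0) % 360.0) - 180.0)
    return min(available_az, key=angdist)
```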
  • a monaural audio signal emitted by a specific source such as a monaural audio signal from a spouse microphone, a media player, a hearing loop system, a teleconference system, a radio, a TV, a telephone, a device with an alarm, etc.
  • a binaural filter in such a way that the human perceives the received monaural audio signal to be emitted by the respective source positioned in the selected direction in space.
  • a new hearing device system comprising a hearing device with a first housing accommodating a first speaker and a second housing accommodating a second speaker, and wherein the first and second housings are configured to be worn at a user's respective ears for emission of sound from the speakers towards the respective ears of the user of the hearing device system, a binaural filter having an input signal and connected to the first and second speakers and having a directional transfer function for providing a binaural signal to the first and second speakers, whereby the input signal is perceived by the user to be emitted by a sound source positioned in a direction defined by the directional transfer function, and a control device configured for control of the hearing device system, the control device having a display, a user interface and a processor.
  • the first and second speakers may be parts of a binaural hearing aid, i.e. the receivers for the left ear and the right ear of the binaural hearing aid may constitute the first and second speakers, respectively.
  • the binaural filter may be configured for providing output signals that are equal to the input signal, but phase shifted by different respective amounts and thereby phase shifted with relation to each other.
  • the binaural filter may alternatively or additionally be configured for providing output signals that are equal to the input signal, but multiplied with different respective gains.
  • the binaural filter may have a Head-Related Transfer Function.
  • the hearing device system may have a plurality of binaural filters with different directional transfer functions applied to different input signals arriving from different signal sources, one of the binaural filters being the binaural filter.
  • a device with the signal source generating the input signal may be a spouse microphone, a media player, a hearing loop system, a teleconference system, a radio, a TV, a telephone, a public announcement system, a device with an alarm, etc.
  • One or more of the binaural filters may be accommodated in the first and/or second housings.
  • a device with the signal source may comprise the binaural filter.
  • the hearing device may comprise a data interface for transmission of data from the control device.
  • the data interface may be a wired interface, e.g. a USB interface, or a wireless interface, such as a Bluetooth interface, e.g. a Bluetooth Low Energy interface.
  • the hearing device may comprise an audio interface for reception of an audio signal from the control device or other devices with signal sources capable of transmitting audio signals to the hearing device for provision of the input signals.
  • the audio interface may be a wired interface or a wireless interface.
  • the data interface and the audio interface may be combined into a single interface, e.g. a USB interface, a Bluetooth interface, etc.
  • the hearing device may for example have a Bluetooth Low Energy data interface for exchange of control data between the hearing device and the control device, and a wired audio interface for exchange of audio signals between the hearing device and the control device and other devices with signal sources.
  • Each of the control device and some or all of the devices with signal sources may have binaural filters with directional transfer functions that can be controlled by the control device in a way similar to the control of the binaural filters of the hearing device.
  • the binaural audio signals output by the binaural filters of the control device and some or all of the devices with signal sources, are transmitted to the hearing device so that binaural filters are not required in the hearing device for these signals whereby power and signal processing resources are saved in the hearing device.
  • the perceived spatial separation of different signal sources assists the user of the hearing device system in understanding speech in the monaural audio signals emitted by the signal sources, and in focussing the user's listening to a desired one of the audio signals.
  • a first binaural filter may be configured to output signals intended for the right ear and left ear of the user of the hearing device system that are phase shifted with relation to each other in order to introduce a first interaural time difference whereby the perceived position of the corresponding first sound source is shifted outside the head and laterally with relation to the user of the hearing device system.
  • Further separation of sound sources may be obtained by provision of further binaural filters so that another monaural signal, such as a second monaural signal received from a second spouse microphone, a media player, a hearing loop system, a teleconference system, a radio, a TV, a telephone, a device with an alarm, etc., is filtered with a second binaural filter in such a way that the user perceives the received second monaural audio signal to be emitted by a sound source positioned in a second position and/or arriving from a second direction in space different from other selected perceived positions and directions.
  • the second binaural filter may be configured to output signals intended for the right ear and left ear of the user of the hearing device system that are phase shifted with relation to each other in order to introduce a second interaural time difference whereby the corresponding position of the second sound source is shifted laterally, preferably in the opposite direction of the first sound source, with relation to the user of the hearing device system.
  • the first binaural filter may be configured to output signals intended for the right ear and left ear of the user of the hearing device system that are equal to the first audio input signal multiplied with a first right gain and a first left gain, respectively, in order to obtain a first interaural level difference whereby the perceived position of the corresponding first sound source is shifted laterally with relation to the user of the hearing device system.
  • the second binaural filter may be configured to output signals intended for the right ear and left ear of the user of the hearing device system that are equal to the second audio input signal multiplied with a second right gain and a second left gain, respectively, in order to obtain a second interaural level difference whereby the perceived position of the corresponding second sound source is shifted laterally, preferably in the opposite direction of the first sound source, with relation to the user of the hearing device system.
  • the pair of first interaural time difference and first interaural level difference must be different from the pair of second interaural time difference and second interaural level difference, e.g. the first and second interaural level differences may be identical provided that the first and second interaural time differences are different and vice versa.
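The distinctness requirement above can be expressed as a simple check: the two (ITD, ILD) pairs must differ in at least one component. The just-noticeable-difference thresholds below are rough illustrative values, not taken from the patent:

```python
def cues_distinct(itd1_s, ild1_db, itd2_s, ild2_db,
                  itd_jnd_s=2e-5, ild_jnd_db=0.5):
    """True when two sources carry distinguishable spatial cues:
    the (ITD, ILD) pairs differ in at least one component by more
    than a just-noticeable difference. Identical ILDs are fine as
    long as the ITDs differ, and vice versa."""
    return (abs(itd1_s - itd2_s) > itd_jnd_s
            or abs(ild1_db - ild2_db) > ild_jnd_db)
```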
  • the perceived spatial separation of the perceived signal sources of different audio signals assists the user in understanding speech in the first and second monaural audio signals, and in focussing the user's listening to a desired one of the first and second monaural audio signals.
  • the directional transfer function may be a Head-Related Transfer Function; or, an approximation to a Head-Related Transfer Function.
  • Head-Related Transfer Functions may be determined using a manikin, such as KEMAR.
  • an approximation to the individual Head-Related Transfer Functions is provided that can be of sufficient accuracy for the user of the hearing device system to maintain sense of direction when wearing the hearing device.
  • Azimuth is the perceived angle of direction towards the sound source projected onto the horizontal plane with reference to the forward looking direction of the user.
  • the forward looking direction is defined by a virtual line drawn through the centre of the user's head and through a centre of the nose of the user.
  • a sound source located in the forward looking direction has an azimuth value of 0°
  • a sound source located directly in the opposite direction has an azimuth value of 180°.
  • a sound source located in the left side of a vertical plane perpendicular to the forward looking direction of the user has an azimuth value of −90°
  • a sound source located in the right side of the vertical plane perpendicular to the forward looking direction of the user has an azimuth value of +90°.
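Under this azimuth convention, a symbol's position on the display relative to the user symbol maps to an azimuth as sketched below. The assumption that the display shows the forward looking direction as +y and the user's right as +x is ours, not the patent's:

```python
import math

def display_to_azimuth(dx, dy):
    """Map a symbol's offset (dx, dy) from the user symbol to an
    azimuth: 0 deg straight ahead, +90 deg to the right, -90 deg to
    the left, 180 deg directly behind."""
    return math.degrees(math.atan2(dx, dy))
```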
  • one signal is said to represent another signal when the one signal is a function of the other signal, for example the one signal may be formed by analogue-to-digital conversion, or digital-to-analogue conversion of the other signal; or, the one signal may be formed by conversion of an acoustic signal into an electronic signal or vice versa; or the one signal may be formed by analogue or digital filtering or mixing of the other signal; or the one signal may be formed by transformation, such as frequency transformation, etc, of the other signal; etc.
  • signals that are processed by specific circuitry may be identified by a name that may be used to identify any analogue or digital signal forming part of the signal path of the signal in question from its input of the circuitry in question to its output of the circuitry.
  • for example, the name of the output signal of a microphone, i.e. the microphone audio signal, may be used to identify any analogue or digital signal forming part of the signal path from the output of the microphone to the input of the speaker, including any processed microphone audio signals.
  • the new hearing device system may comprise a binaural hearing aid comprising multi-channel first and/or second hearing aids in which the input signals are divided into a plurality of frequency channels for individual processing of at least some of the input signals in each of the frequency channels.
  • the plurality of frequency channels may include warped frequency channels, for example all of the frequency channels may be warped frequency channels.
  • the binaural hearing aid may additionally provide circuitry used in accordance with other conventional methods of hearing loss compensation so that the new circuitry or other conventional circuitry can be selected for operation as appropriate in different types of sound environment.
  • the different sound environments may include speech, babble speech, restaurant clatter, music, traffic noise, etc.
  • the binaural hearing aid may for example comprise a Digital Signal Processor (DSP), the processing of which is controlled by selectable signal processing algorithms, each of which having various parameters for adjustment of the actual signal processing performed.
  • the gains in each of the frequency channels of a multi-channel hearing aid are examples of such parameters.
  • One of the selectable signal processing algorithms operates in accordance with the new method.
  • various algorithms may be provided for conventional noise suppression, i.e. attenuation of undesired signals and amplification of desired signals.
  • Microphone output signals obtained from different sound environments may possess very different characteristics, e.g. average and maximum sound pressure levels (SPLs) and/or frequency content. Therefore, each type of sound environment may be associated with a particular program wherein a particular setting of algorithm parameters of a signal processing algorithm provides processed sound of optimum signal quality in a specific sound environment.
  • a set of such parameters may typically include parameters related to broadband gain, corner frequencies or slopes of frequency-selective filter algorithms and parameters controlling e.g. knee-points and compression ratios of Automatic Gain Control (AGC) algorithms.
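The knee-point and compression-ratio parameters mentioned above define a static gain curve, as in the following sketch. Parameter defaults are illustrative, not values from the patent:

```python
def agc_gain_db(level_db, knee_db=-40.0, ratio=3.0):
    """Static gain curve of a simple compressor: below the knee-point
    the signal passes with unity gain; above it, each dB of input
    level yields only 1/ratio dB of output level. Returns the gain
    in dB to apply (zero or negative)."""
    if level_db <= knee_db:
        return 0.0                          # below the knee: unity gain
    compressed = knee_db + (level_db - knee_db) / ratio
    return compressed - level_db            # attenuation in dB
```

A real hearing-aid AGC also has attack and release time constants; only the static curve is shown here.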
  • Signal processing characteristics of each of the algorithms may be determined during an initial fitting session in a dispenser's office and programmed into the binaural hearing aid in a non-volatile memory area.
  • the binaural hearing aid may have a user interface, e.g. buttons, toggle switches, etc, of the hearing aid housings, or a remote control, so that the user of the binaural hearing aid can select one of the available signal processing algorithms to obtain the desired hearing loss compensation in the sound environment in question.
  • One or both hearing aids may also comprise a telecoil that converts a magnetic field at the telecoil into a corresponding analogue audio signal in which the instantaneous voltage of the audio signal varies continuously with the magnetic field strength at the telecoil.
  • Telecoils may be used to increase the signal to noise ratio of speech from a speaker addressing a number of people in a public place, e.g. in a church, an auditorium, a theatre, a cinema, etc., or through a public address system, such as in a railway station, an airport, a shopping mall, etc. Speech from the speaker is converted to a magnetic field with an induction loop system (also called "hearing loop"), and the telecoil is used to magnetically pick up the magnetically transmitted speech signal.
  • the telecoil output audio signal may be input to a binaural filter with directional transfer functions selected by the control device, whereby the user may select a perceived direction of arrival of the telecoil signal so that the telecoil signal as reproduced in the ears of the user is perceived by the user to be emitted by a sound source positioned in a direction defined by the directional transfer function.
  • One or both hearing aids may comprise one or more microphones and a telecoil and a switch, e.g. for selection of an omni-directional microphone signal, or a directional microphone signal, or a telecoil signal, either alone or in any combination, as the audio signal.
  • the analogue audio signal is made suitable for digital signal processing by conversion into a corresponding digital audio signal in an analogue-to-digital converter whereby the amplitude of the analogue audio signal is represented by a binary number.
  • a discrete-time and discrete-amplitude digital audio signal in the form of a sequence of digital values represents the continuous-time and continuous-amplitude analogue audio signal.
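The analogue-to-digital conversion described above can be sketched as a uniform quantizer; the symmetric rounding used here is one of several common conventions:

```python
def quantize(sample, bits=16, full_scale=1.0):
    """Represent an analogue sample amplitude as a binary number,
    as an analogue-to-digital converter does: clip to full scale
    and round to the nearest of the available quantization levels.
    This symmetric quantizer spans -(2**(bits-1) - 1) .. 2**(bits-1) - 1."""
    levels = 2 ** (bits - 1)                          # e.g. 32768 for 16 bits
    x = max(-full_scale, min(full_scale, sample))     # clip to full scale
    return int(round(x / full_scale * (levels - 1)))
```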
  • the processor of the control device is configured to control the display of the control device to display distinguishable symbols representing various devices that are capable of transmitting an audio signal to the hearing device.
  • each device capable of transmitting an audio signal to the hearing device may be represented by a symbol that is different from the symbols representing other devices.
  • One symbol represents the user.
  • the user may move the symbols into desired positions on the display using the user interface of the control device.
  • the display may be a touch sensitive display allowing the user to move the symbols by touching and dragging the symbols as is well-known in the art of smartphones.
  • the processor controls the corresponding binaural filters connected to respective input signals from sound sources represented by the symbols, for selection of the directional transfer functions corresponding to the positions on the display of the symbols representing the sound sources with relation to the symbol representing the user, whereby the user perceives the sound sources to be positioned in the directions, or at the positions, indicated by the respective symbols' positions on the display.
  • the directions indicated by the respective symbols' positions on the display are indicated with reference to the user's forward looking direction.
  • the hearing device may include an orientation sensor unit for sensing the orientation of the head of the user, when the user wears the hearing device in its intended operational position on the user's head, and the processor may further be configured to adjust selection of the directional transfer function(s) of the binaural filter(s) based at least in part on the sensed orientation of the head of the user.
  • for example, when the user turns his or her head 30° to the left, the processor may be configured to select directional transfer function(s) with perceived direction(s) towards the corresponding at least one sound source turned 30° to the right, in such a way that the user perceives the at least one sound source to remain in fixed position(s) in the sound environment, i.e. the rate of change of the perceived direction(s) corresponds to the rate of change of the orientation of the head of the user.
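This head-tracking compensation amounts to subtracting the change in head yaw from the selected azimuth, as in the sketch below. The sign convention (positive yaw means the head has turned to the right since the direction was selected) is our assumption:

```python
def compensated_azimuth(selected_deg, head_yaw_deg):
    """Keep a virtual source fixed in the room: when the head has
    turned by head_yaw_deg since selection (positive = to the
    right), the azimuth fed to the binaural filter must rotate the
    opposite way. Result is wrapped to [-180, 180) degrees."""
    az = selected_deg - head_yaw_deg
    return ((az + 180.0) % 360.0) - 180.0
```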
  • the orientation of the head of the user may be defined as the orientation of a head coordinate system having a vertical axis and two horizontal axes at the current location of the user with relation to a reference coordinate system that is fixed with relation to the surroundings.
  • a head coordinate system is defined with its centre located at the centre of the user's head, which is defined as the midpoint of a line drawn between the respective centres of the eardrums of the left and right ears of the user.
  • the x-axis of the head coordinate system is pointing ahead through a centre of the nose of the user, its y-axis is pointing towards the left ear through the centre of the left eardrum, and its z-axis is pointing upwards.
  • Head yaw is the angle between the current x-axis' projection onto a horizontal plane at the location of the user and a horizontal reference direction, such as the forward looking direction when selection of the directional transfer function is made; or, Magnetic North or True North, head pitch is the angle between the current x-axis and the horizontal plane, and head roll is the angle between the y-axis and the horizontal plane.
  • the x-axis, y-axis, and z-axis of the head coordinate system are denoted the head x-axis, the head y-axis, and the head z-axis, respectively.
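The angle definitions above can be expressed compactly in code. The sketch below is illustrative only: it assumes a reference coordinate system with its x-axis along the horizontal reference direction, its y-axis horizontal, and its z-axis vertical, and unit-length head-axis vectors; the function name and sign conventions are not taken from the disclosure.

```python
import math

def head_angles(head_x, head_y):
    """Head yaw, pitch, and roll (radians) from the head x- and y-axis
    vectors expressed in a reference frame whose z-axis is vertical.

    Yaw:   angle between the x-axis' horizontal projection and the
           reference x-axis (positive towards the reference y-axis).
    Pitch: angle between the head x-axis and the horizontal plane.
    Roll:  angle between the head y-axis and the horizontal plane.
    """
    yaw = math.atan2(head_x[1], head_x[0])
    pitch = math.asin(head_x[2] / math.hypot(*head_x))
    roll = math.asin(head_y[2] / math.hypot(*head_y))
    return yaw, pitch, roll
```

With the head level and facing along the reference direction all three angles are zero; turning the head a quarter-turn changes only the yaw.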
  • the orientation sensor unit may comprise accelerometers for determination of orientation of the hearing device.
  • the orientation sensor unit may determine head yaw based on determinations of individual displacements of two accelerometers positioned with a mutual distance for sensing displacement in the same horizontal direction when the user wears the hearing device. Such a determination is accurate when head pitch and head roll do not change during change of the yaw value.
  • the orientation sensor unit may determine head yaw utilizing a first gyroscope, such as a solid-state or MEMS gyroscope positioned for sensing rotation of the head x-axis projected onto a horizontal plane at the user's location with respect to a horizontal reference direction.
  • the orientation sensor unit may have further accelerometers and/or further gyroscope(s) for determination of head pitch and/or head roll, when the user wears the hearing device in its intended operational position on the user's head.
  • the orientation sensor unit may further include a compass, such as a magnetometer.
  • the orientation sensor unit may have one, two or three axis sensors that provide information of head yaw; or, head yaw and head pitch; or, head yaw, head pitch, and head roll, respectively.
  • the hearing device may be equipped with a complete attitude heading reference system (AHRS) for determination of the orientation of the user's head that has either solid-state or MEMS gyroscopes, accelerometers and magnetometers on all three axes.
  • a processor of the AHRS provides digital values of the head yaw, head pitch, and head roll based on the sensor data.
  • the processor may be configured to select directional transfer function(s) with a changed yaw of the perceived direction(s) that compensates the changed yaw of the orientation of the head of the user so that the user perceives the at least one sound source to remain in fixed position(s) in the sound environment.
  • the processor may be configured to select directional transfer function(s) with a changed pitch of the perceived direction(s) that compensates the changed pitch of the orientation of the head of the user so that the user perceives the at least one sound source to remain in fixed position(s) in the sound environment.
  • the processor may be configured to select directional transfer function(s) with a changed roll of the perceived direction(s) that compensates the changed roll of the orientation of the head of the user so that the user perceives the at least one sound source to remain in fixed position(s) in the sound environment.
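Taken together, the three compensations above amount to subtracting the sensed change of head orientation from the desired world-fixed direction before the directional transfer function is selected. A minimal sketch for the yaw component alone (the shared sign convention and function name are assumptions for illustration):

```python
def compensated_azimuth(source_azimuth_deg, head_yaw_deg):
    """Azimuth, relative to the user's nose, with which a source must
    be rendered so that it is perceived to stay at source_azimuth_deg
    in the fixed reference frame while the head is turned by
    head_yaw_deg.  Both angles use the same sign convention; the
    result is wrapped to [-180, 180)."""
    # Turning the head one way makes a world-fixed source appear to
    # move by the same angle the other way.
    az = source_azimuth_deg - head_yaw_deg
    return (az + 180.0) % 360.0 - 180.0
```

A source straight ahead before a 30° head turn must afterwards be rendered at -30° to appear stationary.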
  • the selection of the directional transfer function(s) of the binaural filter(s) may be performed when the user inputs a specific user command with the user interface, e.g. by touching a selection symbol on the display; or by not moving any movable symbol for a certain time period, e.g. 5 seconds; or in another suitable way.
  • a symbol may be deleted from the display by dragging it to the edge of the display as is well-known in the art of smartphones.
  • a new symbol may be added to the display by dragging a palette of selectable symbols in from the edge of the display and dragging a selected symbol from the palette into the display, as is also well-known in the art of smartphones.
  • the term "audio signal" may be used to identify any analogue or digital signal forming part of the signal path from the output of the microphone(s), telecoil, or other input sources to an input of the processor.
  • the term "hearing loss compensated audio signal" may be used to identify any analogue or digital signal forming part of the signal path from the output of the signal processor to an input of the output transducer.
  • the binaural hearing aid may be capable of automatically classifying the user's sound environment into one of a number of sound environment categories, such as speech, babble speech, restaurant clatter, music, traffic noise, etc., and may automatically select an appropriate signal processing algorithm accordingly, as known in the art.
  • Signal processing in the new hearing device system may be performed by dedicated hardware or may be performed in one or more signal processors, or performed in a combination of dedicated hardware and one or more signal processors.
  • As used herein, the terms "processor", "signal processor", "controller", "system", etc., are intended to refer to CPU-related entities, either hardware, a combination of hardware and software, software, or software in execution.
  • a "processor”, “signal processor”, “controller”, “system”, etc. may be, but is not limited to being, a process running on a processor, a processor, an object, an executable file, a thread of execution, and/or a program.
  • the terms "processor", "signal processor", "controller", "system", etc., designate both an application running on a processor and a hardware processor.
  • one or more "processors", "signal processors", "controllers", "systems", etc., may reside within a process and/or thread of execution, and one or more "processors", "signal processors", "controllers", "systems", etc., or any combination hereof, may be localized on one hardware processor, possibly in combination with other hardware circuitry, and/or distributed between two or more hardware processors, possibly in combination with other hardware circuitry.
  • a processor may be any component or any combination of components that is capable of performing signal processing.
  • the signal processor may be an ASIC processor, a FPGA processor, a general purpose processor, a microprocessor, a circuit component, or an integrated circuit.
  • the new method and hearing device system will now be described more fully hereinafter with reference to the accompanying drawings, in which various examples of the new hearing device system are shown.
  • the new method and hearing device system may, however, be embodied in different forms and should not be construed as limited to the examples set forth herein.
  • Fig. 1 schematically illustrates a meeting in which one of the participants 100 uses a new hearing device system (not visible) comprising a binaural hearing aid (not visible) and his or her smartphone 110, which controls the binaural hearing aid and is illustrated in the lower part of Fig. 1.
  • the other meeting participants 102, 104, 106 wear spouse microphones (not visible) that transmit speech from the respective meeting participants to the binaural hearing aid of participant 100.
  • a participant in a remote location also participates in the meeting using a teleconference system (not shown).
  • the smartphone 110 of participant 100 has a display 120 controlled by a processor (not shown) for displaying movable symbols 100', 102', 104', 106', 108', 110'.
  • the positions on the display 120 of each of the symbols 100', 102', 104', 106', 108', 110' indicate the desired perceived positions, with relation to the user, i.e. participant 100, of the corresponding participants 102, 104, 106 and devices, i.e. the teleconference system 108 (not shown) and the smartphone 110.
  • the user 100 may move each of the symbols 100', 102', 104', 106', 108', 110' around the display 120 in a way well-known in the art of smartphones, by touching the symbol in question with a fingertip, moving the fingertip to the desired position on the display 120, and then moving the fingertip away from the display 120.
  • a symbol may be deleted from the display by dragging it to the edge of the display as is well-known in the art of smartphones.
  • a new symbol may be added to the display by dragging a palette of selectable symbols in from the edge of the display 120 and dragging a selected symbol from the palette into the display 120, as is also well-known in the art of smartphones.
  • the user 100 has positioned symbols 102', 104', 106' of the other participants 102, 104, 106 in positions relative to the symbol 100' of the user 100 that correspond to the relative positions of the participants around the table at the meeting so that the user 100 will perceive speech from the participants 102, 104, 106 as arriving from the true directions of the respective participants 102, 104, 106.
  • the user 100 has positioned a symbol 108' of the teleconference system 108 (not shown) to the right of the symbol 106' of participant 106 so that the user 100 will perceive speech from the remote participant (not shown) using the teleconference system as arriving from a person seated to the right (seen from the user) of participant 106.
  • the user 100 has positioned a symbol 110' of his or her smartphone 110 to the right so that the user may hear messages from his or her smartphone 110 as arriving from someone positioned to the right of the user 100 at the meeting table.
  • the smartphone 110 is connected to the hearing aids (not visible) of the binaural hearing aid of the user 100 through a Bluetooth Low Energy wireless data interface for transmission of control signals to the hearing aids. The control signals select binaural filters in the hearing aids having directional transfer functions corresponding to the positions of the movable symbols 102', 104', 106', 108', 110' with relation to the symbol 100' of the user on the display, so that each of the sound signals from the participants 102, 104, 106 and devices 108, 110 associated with the symbols 102', 104', 106', 108', 110' is filtered by a binaural filter having a directional transfer function corresponding to the relative position on the display 120 of the corresponding symbol in relation to the symbol 100' of the user.
  • the display 120 shows a map of participants and devices indicating the direction of arrival of sound from the shown participants and devices, perceived by the user 100.
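Such a map reduces, for each symbol, to a single bearing taken from the user's symbol 100'. A minimal sketch, assuming typical touch-screen coordinates (x increasing to the right, y increasing downwards) and assuming the user symbol faces the top of the display; the function name and sign convention (positive azimuth to the user's left) are illustrative only:

```python
import math

def symbol_azimuth(user_xy, symbol_xy):
    """Azimuth (degrees) of a symbol as seen from the user symbol:
    0 degrees straight ahead (top of the display), positive to the
    user's left, wrapped to [-180, 180)."""
    dx = symbol_xy[0] - user_xy[0]
    dy = user_xy[1] - symbol_xy[1]              # flip y so "up" is positive
    bearing = math.degrees(math.atan2(dy, dx))  # 0 deg = screen right
    az = bearing - 90.0                         # 0 deg = screen up (ahead)
    return (az + 180.0) % 360.0 - 180.0
```

A symbol directly above the user symbol maps to 0°, a symbol to its right to -90°, matching the seating map of Fig. 1.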
  • Fig. 2 schematically illustrates an example of the new hearing device system 10 with a binaural hearing aid with a first hearing aid 10A for the right ear and a second hearing aid 10B for the left ear, and a control device, namely a smartphone 110, interconnected with the binaural hearing aid 10A, 10B for control of the binaural hearing aid 10A, 10B through a data interface and for transmission of audio signals through an audio interface.
  • the illustrated hearing device system 10 may use speech synthesis to issue messages and instructions to the user, and speech recognition may be used to receive spoken commands from the user.
  • the first hearing aid 10A comprises a first microphone 12A for provision of first microphone audio signal 14A in response to sound received at the first microphone 12A.
  • the microphone audio signal 14A may be pre-filtered in a first pre-filter 16A well-known in the art, and input to a signal processor 18.
  • the first microphone 12A may include two or more microphones with signal processing circuitry for combining the microphone signals into the microphone audio signal 14A.
  • the first hearing aid 10A may have two microphones and a beamformer for combining the microphone signals into a microphone audio signal 14A with a desired directivity pattern as is well-known in the art of hearing aids.
  • the first hearing aid 10A also comprises a first input 20A for provision of a first audio input signal 24A representing sound output by a first sound source (not shown) that is not a part of the first hearing aid 10A, and received at the first input 20A.
  • the first sound source may be a spouse microphone (not shown) carried by a person 102, 104, 106 whom the hearing aid user desires to listen to.
  • the output signal of the spouse microphone is encoded for transmission to the first hearing aid 10A using wireless or wired data transmission.
  • the transmitted data representing the spouse microphone audio signal are received by a receiver and decoder 22A for decoding into the first audio input signal 24A.
  • the second hearing aid 10B comprises a second microphone 12B for provision of second microphone audio signal 14B in response to sound received at the second microphone 12B.
  • the microphone audio signal 14B may be pre-filtered in a second pre-filter 16B well-known in the art, and input to signal processor 18.
  • the second microphone 12B may include two or more microphones with signal processing circuitry for combining the microphone signals into the microphone audio signal 14B.
  • the second hearing aid 10B may have two microphones and a beamformer for combining the microphone signals into a microphone audio signal 14B with a desired directivity pattern as is well-known in the art of hearing aids.
  • the binaural hearing aid 10A, 10B also comprises a second input 26 for provision of a second audio input signal 30 representing sound output by a second sound source (not shown) and received at the second input 26.
  • the second sound source may be a second spouse microphone (not shown) carried by a second person 102, 104, 106 whom the hearing aid user desires to listen to.
  • the output signal of the second spouse microphone is encoded for transmission to the binaural hearing aid 10A, 10B using wireless or wired data transmission.
  • the transmitted data representing the spouse microphone audio signal are received by a receiver and decoder 28 for decoding into the second audio input signal 30.
  • the second input 26 and receiver and decoder 28 may be accommodated in the first hearing aid 10A or in the second hearing aid 10B.
  • the binaural hearing aid 10A, 10B also comprises further inputs (not shown) similar to the second input 26 for provision of further audio input signals representing sound output by further sound sources (not shown) that do not form part of the first and second hearing aids 10A, 10B.
  • the further inputs and receivers and decoders may be accommodated in the first hearing aid 10A or in the second hearing aid 10B.
  • the received signals 20A, 26 are compensated for hearing loss, as is well-known in the art of hearing aids, and perceived spatial separation of the signal sources is added to the received signals, i.e. the audio input signals 24A, 30 are filtered with binaural filters 32A-R, 32A-L; 34-R, 34-L in such a way that the user of the hearing device system 10 perceives the corresponding signal sources to be externalized, i.e. moved away from the centre of the head of the user, and positioned in different positions in his or her surroundings.
  • the resulting perceived spatial separation of the sound sources improves the capability of the user's binaural auditory system to separate the sound signals and to focus his or her attention on a desired one of the sound signals, or even to listen to and understand more than one sound signal simultaneously.
  • for each of the receivers and decoders 22A, 28, a set of two filters 32A-R, 32A-L; 34-R, 34-L is provided with inputs connected to the respective outputs 24A, 30 and with outputs 36A-R, 36A-L; 38-R, 38-L, of which one filter 36A-R, 38-R provides an output signal for the right ear and the other 36A-L, 38-L provides an output signal for the left ear.
  • the sets of two filters 32A-R, 32A-L; 34-R, 34-L have directional transfer functions, e.g. respective HRTFs 32A, 34, imparting selected perceived directions of arrival to the first and second sound sources. Only two audio inputs 20A, 26 with associated circuitry are shown in Fig. 2; however, further similar audio inputs (not shown) with similar associated circuitry (not shown) may be included in the hearing aids 10A, 10B.
  • the output of the filters 32A-R, 32A-L, 34-R, 34-L, are processed in signal processor 18 for hearing loss compensation and the processor output signal 40A intended to be transmitted towards the right ear is connected to a first receiver 42A of the first hearing aid 10A for conversion into an acoustic signal for transmission towards an eardrum of the right ear of a user of the binaural hearing aid 10A, 10B, and the processor output signal 40B intended to be transmitted towards the left ear is connected to a second receiver 42B of the second hearing aid 10B for conversion into an acoustic signal for transmission towards an eardrum of the left ear of the user of the binaural hearing aid 10A, 10B.
  • the directional transfer functions of the binaural filters may be individually determined for the user of the hearing device system 10, whereby the user's perceived externalization of and sense of direction towards the various sound sources 102, 104, 106, 108, 110 will be distinct since the HRTFs will contain all information relating to the sound transmission to the ears of the user, including diffraction around the head, reflections from shoulders, reflections in the ear canal, etc., which cause variations of HRTFs of different users.
  • approximations to individually determined HRTFs may also be used, such as HRTFs determined on a manikin, e.g. a KEMAR head.
  • approximations may be constituted by HRTFs determined as averages of individual HRTFs of humans in a selected group of humans with certain physical similarities leading to corresponding similarities of the individual HRTFs, e.g. humans of the same age or in the same age range, humans of the same race, humans with similar sizes of pinnas, etc.
  • a good sense of direction may also be obtained with approximations to individually determined HRTFs constituted by binaural filters with directional transfer functions that add only one or more directional cues to the input signal, such as an Interaural Time Difference and/or an Interaural Level Difference.
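Such ITD/ILD-only cues can be sketched as follows. The Woodworth spherical-head formula for the ITD and the crude sine-shaped broadband ILD used here are common textbook approximations, not values taken from this disclosure; the head radius and function names are likewise illustrative:

```python
import math

def itd_ild_cues(azimuth_deg, head_radius_m=0.0875, c_m_s=343.0):
    """Approximate interaural cues for a source at azimuth_deg
    (0 degrees straight ahead, positive towards the left ear,
    valid for azimuths up to about +/-90 degrees).
    ITD from the Woodworth spherical-head model, ILD from a simple
    sine rule of thumb; both are illustrative approximations."""
    theta = math.radians(azimuth_deg)
    itd_s = (head_radius_m / c_m_s) * (theta + math.sin(theta))
    ild_db = 10.0 * math.sin(theta)
    return itd_s, ild_db

def apply_cues(signal, fs_hz, azimuth_deg):
    """Return (left, right) sample lists: the ear away from the source
    receives the signal delayed by the ITD and attenuated by the ILD."""
    itd_s, ild_db = itd_ild_cues(azimuth_deg)
    delay = int(round(abs(itd_s) * fs_hz))
    gain = 10.0 ** (-abs(ild_db) / 20.0)
    near = list(signal)
    far = ([0.0] * delay + [gain * s for s in signal])[:len(near)]
    # positive azimuth: source on the left, so the left ear is near
    return (near, far) if azimuth_deg >= 0.0 else (far, near)
```

At 90° azimuth and 48 kHz this yields a delay of roughly 31 samples (about 0.66 ms) at the far ear, within the range of naturally occurring ITDs.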
  • the binaural hearing aid 10A, 10B shown in Fig. 2 may be substituted with another type of hearing device, including the binaural hearing aids shown in Figs. 3 - 6.
  • the binaural hearing aid 10A, 10B shown in Fig. 2 may be substituted with another type of hearing device, including an Ear-Hook, In-Ear, On-Ear, Over-the-Ear, Behind-the-Neck, Helmet, Headguard, etc., headset, headphone, earphone, ear defenders, earmuffs, etc.
  • the illustrated binaural hearing aid 10A, 10B may comprise any type of hearing aids, such as BTE, RIE, ITE, ITC, CIC, etc., hearing aids.
  • the illustrated binaural hearing aid may also be substituted by a single monaural hearing aid worn at one of the ears of the user, in which case sound at the other ear will be natural sound inherently containing the characteristics of the user's individual HRTFs.
  • the illustrated binaural hearing aid 10A, 10B has a user interface (not shown), e.g. with push buttons and dials as is well-known from conventional hearing aids, for user control and adjustment of the binaural hearing aid 10A, 10B and possibly the smartphone 110 interconnected with the binaural hearing aid 10A, 10B, e.g. for selection of media to be played back.
  • the microphones of binaural hearing aid 10A, 10B may be used for reception of spoken commands by the user transmitted (not shown) to the smartphone 110 for speech recognition in a processor 130 of the smartphone 110, for decoding of the spoken commands and for controlling the hearing device system 10 to perform actions defined by respective spoken commands.
  • the smartphone 110 has a touch screen 120 controlled by the processor 130 to display movable symbols as further explained above with reference to Fig. 1 .
  • the processor 130 transmits control signals 140 through a data interface to the binaural hearing aid 10 for selection of binaural filters 32A-R, 32A-L, 34-R, 34-L, etc, with directional characteristics corresponding to the relative positioning of the symbols on the touch screen 120.
  • the data interface may be a wired interface, e.g. a USB interface, or a wireless interface, such as a Bluetooth interface, e.g. a Bluetooth Low Energy interface.
  • All or some of the binaural filters 32A-R, 32A-L, 34-R, 34-L, etc, may reside in devices generating audio signals for transmission to the binaural hearing aid 10 so that the generated audio signal is transmitted as a binaural audio signal to the binaural hearing aid 10 through its audio interface, and the corresponding control signals from the processor are transmitted to the device with the binaural filter in question.
  • the processor 130 selects a binaural filter 150, i.e. a pair of filters 150-L, 150-R, accommodated in the smartphone 110 with a directional characteristic, preferably a Head-Related Transfer Function, corresponding to the relative positioning of the symbol of the smart phone 110' with relation to the user 100', see Fig. 1 , and transmits a binaural output signal 160-L for the left ear and 160-R for the right ear, of the binaural filter 150 through the audio interface to a processor 18 of the binaural hearing aid 10 for conversion into an acoustic binaural signal and emission towards the respective ears of the user.
  • the smartphone 110 may output audio signals representing any type of sound, such as speech, e.g. from an audio book, radio, etc, music, tone sequences, etc.
  • the user may for example decide to listen to a radio station while walking, and the smartphone 110 outputs binaural audio signals 160-L, 160-R reproducing the signals originating from the desired radio station filtered by the binaural filter 150, i.e. filter pair 150-L, 150-R, with the HRTF specified by the user using the touch screen 120, so that the user perceives the desired radio station to arrive from the direction corresponding to the selected HRTF.
  • the audio interface may be a wired interface or a wireless interface.
  • the data interface and the audio interface may be combined into a single interface, e.g. a USB interface, a Bluetooth interface, etc.
  • the binaural hearing aid may for example have a Bluetooth Low Energy data interface for exchange of control data between the hearing device and the device, and a wired audio interface for exchange of audio signals between the hearing device and the device.
  • the illustrated smartphone 110 may have a combined GPS-receiver, mobile-telephone, and WiFi interface 170.
  • Fig. 3 shows another example of the new hearing device system 10 similar to the hearing device system shown in Fig. 2 except for the fact that sufficient perceived spatial separation between the first and second sound sources is obtained by introducing a delay equal to the ITD of a desired azimuth direction of arrival in the signal path from the first receiver and decoder 22A to one of the ears of the user.
  • the filter 32A-R introduces a time delay between its input signal 24A and output signal 36A-R intended for the right ear of the user
  • the filter 32A-L shown in Fig. 2 is constituted by a direct connection in Fig. 3 between input 24A and output 36A-L.
  • Audio inputs similar to audio input 20A with similar associated circuitry (not shown) introducing different perceived azimuth directions of arrival to the received audio signals are provided in one or both of the hearing aids 10A, 10B.
  • the perceived azimuth of the direction of arrival of the first sound source is shifted, e.g. to -45°, while the signal from the second sound source is presented monaurally to the ears of the user, i.e. the output 30 of the receiver and decoder 28 is input as a monaural signal to the signal processor 18 and output to both ears of the user.
  • perceived spatial separation of the first and second sound sources is obtained, since the first sound source is perceived to be positioned in a direction determined by the delay 32A-R, e.g. at -45° azimuth, while the second sound source is perceived to be positioned at the centre inside the head of the user.
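The Fig. 3 arrangement, one source lateralized by a single interaural delay while the other is fed identically to both ears, can be sketched as follows; the ITD value (roughly that of a fully lateral source) and the plain summing mix are assumptions for illustration:

```python
def lateralize_and_mix(src1, src2, fs_hz, itd_s=0.00065):
    """Sketch of the Fig. 3 signal flow: the right-ear copy of src1 is
    delayed by one ITD so that src1 is perceived off to one side, while
    src2 is presented diotically (identically to both ears) and is thus
    perceived at the centre of the head.  itd_s is an assumed value."""
    d = int(round(itd_s * fs_hz))
    n = min(len(src1), len(src2))
    left = [src1[i] + src2[i] for i in range(n)]
    right = [(src1[i - d] if i >= d else 0.0) + src2[i] for i in range(n)]
    return left, right
```

At 48 kHz the assumed 0.65 ms ITD corresponds to a 31-sample delay at the right ear for the first source.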
  • Fig. 4 shows another example of the new hearing device system 10 similar to the example shown in Fig. 3 except for the fact that improved perceived spatial separation between the first and second sound sources is obtained by introducing an additional delay equal to the ITD of a desired second azimuth direction of arrival in the signal path from the second receiver and decoder 28 to one of the ears of the user.
  • the filter 34-L may introduce a time delay between its input signal 30 and output signal 38-L intended for the left ear of the user
  • the filter 34-R shown in Fig. 2 is constituted by a direct connection in Fig. 4 between input 30 and output 38-R.
  • the perceived azimuth of the direction of arrival of the second sound source is shifted, e.g. to +45°, while the perceived azimuth of the direction of arrival of the first sound source remains shifted, e.g. to -45°.
  • improved perceived spatial separation of the first and second sound sources is obtained, since the first sound source is perceived to be positioned in a direction determined by the delay 32A-R, e.g. at -45° azimuth, while the second sound source is perceived to be positioned in a direction determined by the delay 34-L, e.g. at +45° azimuth.
  • the dashed lines indicate the housings of the first and second hearing aids 10A, 10B accommodating the components of the binaural hearing aid 10A, 10B.
  • Each of the housings accommodates the one or more microphones 12A, 12B for reception of sound at the respective ear of the user for which the respective hearing aid 10A, 10B is intended for performing hearing loss compensation, and the respective receiver 42A, 42B for conversion of the respective output signal 40A, 40B of the signal processor 18 into acoustic signals for transmission towards eardrum of the respective one of the right and left ears of the user.
  • the remaining circuitry may be distributed in arbitrary ways between the two hearing aid housings in accordance with design choices made by the designer of the hearing device system 10.
  • Each of the signals in the binaural hearing aid shown in Figs. 2 , 3 and 4 may be transmitted by wired or wireless transmission between the hearing aids 10A, 10B in a way well-known in the art of signal transmission.
  • Fig. 5 shows another example of the new hearing device system 10 shown in Fig. 2 , wherein the second hearing aid 10B does not have a signal processor 18 and does not have inputs for provision of first and second audio input signals representing sound from respective first and second sound sources.
  • the second hearing aid 10B only has the one or more second microphones 12B and the second receiver 42B, together with the required encoder and transmitter (not shown) for transmission of the microphone audio signal 14B for signal processing in the first hearing aid 10A, and a receiver and decoder (not shown) for reception of the output signal 40B of the signal processor 18A.
  • the remaining circuitry shown in Fig. 2 is accommodated in the housing of the first hearing aid 10A.
  • Fig. 6 shows another example of the new hearing device system 10 shown in Fig. 2 , wherein the first and second hearing aids 10A, 10B both comprise a microphone, and a receiver, and a hearing aid processor.
  • the illustrated new binaural hearing aid 10A, 10B comprises,
  • a first hearing aid 10A comprising a first input 20A for provision of a first audio input signal 24A representing sound output by a first sound source and received at the first input 20A, a first binaural filter 32A-R, 32A-L for filtering the first audio input signal 24A and configured to output a first right ear signal 36A-R for the right ear and a first left ear signal 36A-L for the left ear that are equal to the first audio input signal multiplied with a first right gain and a different first left gain, respectively, and/or phase shifted differently with a resulting first phase shift with relation to each other, a first ear receiver 42A for conversion of a first ear receiver input signal 40A into an acoustic signal for transmission towards an eardrum of the first ear of a user of the binaural hearing aid 10A, 10B, and a second input 26B for provision of a second audio input signal 30B representing sound output by a second sound source and received at the second input 26B, a second

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Stereophonic System (AREA)
EP13198545.9A 2013-12-19 2013-12-19 Dispositif d'audition à positionnement spatial perçu sélectionnable de sources acoustiques Active EP2887695B1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP13198545.9A EP2887695B1 (fr) 2013-12-19 2013-12-19 Dispositif d'audition à positionnement spatial perçu sélectionnable de sources acoustiques
DK13198545.9T DK2887695T3 (en) 2013-12-19 2013-12-19 A hearing aid system with selectable perceived spatial location of audio sources
US14/138,021 US9307331B2 (en) 2013-12-19 2013-12-21 Hearing device with selectable perceived spatial positioning of sound sources

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP13198545.9A EP2887695B1 (fr) 2013-12-19 2013-12-19 Dispositif d'audition à positionnement spatial perçu sélectionnable de sources acoustiques

Publications (2)

Publication Number Publication Date
EP2887695A1 true EP2887695A1 (fr) 2015-06-24
EP2887695B1 EP2887695B1 (fr) 2018-02-14

Family

ID=49880478

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13198545.9A Active EP2887695B1 (fr) 2013-12-19 2013-12-19 Dispositif d'audition à positionnement spatial perçu sélectionnable de sources acoustiques

Country Status (2)

Country Link
EP (1) EP2887695B1 (fr)
DK (1) DK2887695T3 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3668110A1 (fr) * 2018-12-12 2020-06-17 GN Hearing A/S Dispositif de communication à génération de source spatiale dépendante de la position, système de communication et procédé associé
CN114363770A (zh) * 2021-12-17 2022-04-15 北京小米移动软件有限公司 通透模式下的滤波方法、装置、耳机以及可读存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2928210A1 (fr) 2014-04-03 2015-10-07 Oticon A/s Système d'assistance auditive biauriculaire comprenant une réduction de bruit biauriculaire

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020151996A1 (en) * 2001-01-29 2002-10-17 Lawrence Wilcock Audio user interface with audio cursor
US20050265535A1 (en) * 2004-05-26 2005-12-01 Yasusi Kanada Voice communication system
US20060018497A1 (en) * 2004-07-20 2006-01-26 Siemens Audiologische Technik Gmbh Hearing aid system
US8208642B2 (en) 2006-07-10 2012-06-26 Starkey Laboratories, Inc. Method and apparatus for a binaural hearing assistance system using monaural audio signals
US20130041648A1 (en) * 2008-10-27 2013-02-14 Sony Computer Entertainment Inc. Sound localization for user in motion

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2373154B (en) * 2001-01-29 2005-04-20 Hewlett Packard Co Audio user interface with mutable synthesised sound sources
US20090112589A1 (en) * 2007-10-30 2009-04-30 Per Olof Hiselius Electronic apparatus and system with multi-party communication enhancer and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RENATO PELLEGRINI ET AL: "Wave Field Synthesis: Mixing and Mastering Tools for Digital Audio Workstations", AUDIO ENGINEERING SOCIETY 116TH CONVENTION, CONVENTION PAPER 6117, 8 May 2004 (2004-05-08), XP055115185, Retrieved from the Internet <URL:www.aes.org> [retrieved on 20140424] *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3668110A1 (fr) * 2018-12-12 2020-06-17 GN Hearing A/S Communication device with position-dependent spatial source generation, communication system, and related method
CN111314824A (zh) * 2018-12-12 2020-06-19 GN Hearing A/S Communication device with position-dependent spatial source generation, communication system, and related method
US11057729B2 (en) 2018-12-12 2021-07-06 Gn Hearing A/S Communication device with position-dependent spatial source generation, communication system, and related method
CN114363770A (zh) * 2021-12-17 2022-04-15 Beijing Xiaomi Mobile Software Co., Ltd. Filtering method and apparatus in transparency mode, earphone, and readable storage medium
CN114363770B (zh) * 2021-12-17 2024-03-26 Beijing Xiaomi Mobile Software Co., Ltd. Filtering method and apparatus in transparency mode, earphone, and readable storage medium

Also Published As

Publication number Publication date
DK2887695T3 (en) 2018-05-07
EP2887695B1 (fr) 2018-02-14

Similar Documents

Publication Publication Date Title
US9307331B2 (en) Hearing device with selectable perceived spatial positioning of sound sources
US10869142B2 (en) Hearing aid with spatial signal enhancement
JP6193844B2 (ja) Hearing device with selectable perceived spatial positioning of sound sources
US11438713B2 (en) Binaural hearing system with localization of sound sources
US9930456B2 (en) Method and apparatus for localization of streaming sources in hearing assistance system
US9426589B2 (en) Determination of individual HRTFs
WO2010043223A1 (fr) Method for binaural stereo rendering in a hearing aid system, and hearing aid system
EP2806661B1 (fr) Prothèse auditive avec amélioration spatiale de signal
US11805364B2 (en) Hearing device providing virtual sound
US8666080B2 (en) Method for processing a multi-channel audio signal for a binaural hearing apparatus and a corresponding hearing apparatus
EP2887695B1 (fr) Hearing device with selectable perceived spatial positioning of sound sources
EP2928213A1 (fr) Hearing aid with improved localization of a monophonic signal source
US11856370B2 (en) System for audio rendering comprising a binaural hearing device and an external device
DK201370280A1 (en) A hearing aid with spatial signal enhancement
Ruscetta Addressing Hearing Aid User Needs with Binaural Strategies for Enhanced Hearing.

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20131219

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

R17P Request for examination filed (corrected)

Effective date: 20151222

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

17Q First examination report despatched

Effective date: 20161104

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20170712

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: GN HEARING A/S

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602013033056

Country of ref document: DE

Ref country code: AT

Ref legal event code: REF

Ref document number: 970610

Country of ref document: AT

Kind code of ref document: T

Effective date: 20180315

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20180503

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20180214

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 970610

Country of ref document: AT

Kind code of ref document: T

Effective date: 20180214

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180514

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180514

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180515

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602013033056

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20181115

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181219

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20181231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181219

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181219

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180214

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180214

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20131219

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180614

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20211221

Year of fee payment: 9

Ref country code: DK

Payment date: 20211217

Year of fee payment: 9

Ref country code: FR

Payment date: 20211215

Year of fee payment: 9

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: CH

Payment date: 20211217

Year of fee payment: 9

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230525

REG Reference to a national code

Ref country code: DK

Ref legal event code: EBP

Effective date: 20221231

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20221219

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221231

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221219

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221231

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20231218

Year of fee payment: 11