CN104980865A - Binaural hearing assistance system comprising binaural noise reduction - Google Patents

Binaural hearing assistance system comprising binaural noise reduction

Info

Publication number
CN104980865A
Authority
CN
China
Prior art keywords
signal
hearing assistance device
user
hearing assistance
assistance system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510156082.3A
Other languages
Chinese (zh)
Other versions
CN104980865B (en)
Inventor
J. Jensen
M. S. Pedersen
J. M. de Haan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oticon AS
Original Assignee
Oticon AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (Critical): https://patents.darts-ip.com/?family=50397047&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=CN104980865(A). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Oticon AS filed Critical Oticon AS
Publication of CN104980865A publication Critical patent/CN104980865A/en
Application granted granted Critical
Publication of CN104980865B publication Critical patent/CN104980865B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R25/405 Arrangements for obtaining a desired directivity characteristic by combining a plurality of transducers
    • H04R25/407 Circuits for combining signals of a plurality of transducers
    • H04R25/55 Deaf-aid sets using an external connection, either wireless or wired
    • H04R25/552 Binaural
    • H04R25/554 Deaf-aid sets using a wireless connection, e.g. between microphone and amplifier or using Tcoils
    • H04R25/558 Remote control, e.g. of amplification, frequency
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/43 Signal processing in hearing aids to enhance the speech intelligibility
    • H04R2225/61 Aspects relating to mechanical or electronic switches or control elements, e.g. functioning
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/20 Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78 Detection of presence or absence of voice signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Stereophonic System (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention discloses a binaural hearing assistance system comprising binaural noise reduction. The system comprises left and right hearing assistance devices and a user interface. The left and right hearing assistance devices comprise a) at least two input units for providing a time-frequency representation of an input signal in a number of frequency bands and at a number of time instances; and b) a multi-input-unit noise reduction system comprising a multi-channel beamformer filtering unit operationally coupled to said at least two input units and configured to provide a beamformed signal. The binaural hearing assistance system is configured to allow a user to indicate, via said user interface, a direction to or location of a target signal source relative to the user. The system has the advantage that interaural cues of the target signal can be maintained while ambient noise is reduced.

Description

Binaural hearing assistance system comprising binaural noise reduction
Technical field
The present application relates to hearing assistance devices, and in particular to noise reduction in a binaural hearing assistance system. The invention specifically relates to a binaural hearing assistance system comprising left and right hearing assistance devices and a user interface configured to communicate with the left and right hearing assistance devices and to allow a user to influence their function.
The application further relates to the use of a binaural hearing assistance system and to a method of operating a binaural hearing assistance system.
Embodiments of the invention may, for example, be useful in audio processing systems where the maintenance or creation of spatial cues is important, such as binaural systems in which a hearing assistance device is located at each ear of the user. The invention may, for example, be used in applications such as hearing aids, headphones, headsets and active ear-protection systems.
Background art
The prior art is described below in relation to one application of the present disclosure, namely hearing aids.
Traditionally, "spatial" or "directional" noise reduction systems in hearing aids operate under the assumption that the sound source of interest (the target) is located directly in front of the hearing aid user. A beamforming system is then used whose goal is to enhance the signal from the frontal source while suppressing signals from any other direction.
In several typical acoustic situations, the assumption that the target is located in front does not hold, for example in a car cabin, or at a dinner party where your conversation partner is seated next to you. In many such interference scenarios there is a need to be able to "listen to the side" while still suppressing the ambient noise.
EP2701145A1 relates to improving the signal quality of a target speech signal in a noisy environment, and in particular to estimating the inter-microphone spectral correlation matrix of the noise embedded in a multi-channel audio signal obtained from a plurality of microphones present in an acoustic environment comprising one or more target sound sources and a plurality of undesired noise sources.
Summary of the invention
The present invention proposes to use a user-controlled, binaurally synchronized multi-channel enhancement system, one part at/in each ear, to provide an improved noise reduction system in a binaural hearing assistance system. The idea is to allow the hearing aid user to "tell" the hearing assistance system (comprising the devices located on or in each ear) the position of the target sound source, e.g. as a direction and possibly a distance, either relative to the user's nose or in absolute coordinates. There are many ways in which the user can provide this information to the system. In a preferred embodiment, the system is configured to use an auxiliary device, e.g. in the form of a portable electronic device with a touch screen (such as a remote control or a mobile phone, e.g. a smartphone), through which the user indicates the listening direction and (possibly) distance. Alternatives for providing this user input include activation elements (e.g. program push-buttons) on the hearing assistance devices (e.g. different programs "listening" in different directions), a pointing device of any kind (a pen, a phone, a pointer, a streaming device, etc.) in wireless communication with the hearing assistance devices, head tilt/motion picked up by a gyroscope/accelerometer in a hearing assistance device, or even a brain interface, e.g. realized by EEG electrodes in or on the hearing assistance devices.
According to the invention, each hearing assistance device comprises a multi-microphone noise reduction system, and these systems are synchronized so that they focus on the same point or area in space (the target source position). In an embodiment, the information transmitted between and shared by the two hearing assistance devices comprises the direction to the target signal source and/or the distance (range) to the target signal source. In embodiments of the proposed system, information from the respective voice activity detectors (VAD) and the gain values applied by the respective single-channel noise reduction systems are shared (exchanged) between the two hearing assistance devices to improve performance.
In an embodiment, the binaural hearing assistance system comprises at least two microphones.
A further aspect of the beamformer/single-channel noise reduction systems of the respective hearing assistance devices is that they are designed so that the interaural cues of the target signal are retained, even in interference scenarios. Hence, the target source presented to the user sounds as if it originates from the correct direction, while the ambient noise is reduced.
An object of the present invention is to provide an improved binaural hearing assistance system. A further object of embodiments of the invention is to improve the signal processing (e.g. aiming at an improved speech intelligibility) in a binaural hearing assistance system, in particular in acoustic situations where the (typical) assumption that the target signal source is located in front of the user is not valid. A further object of embodiments of the invention is to simplify the processing of a multi-microphone beamforming unit.
The objects of the application are achieved by the invention defined in the appended claims and as described below.
A binaural hearing assistance system
In an aspect, an object of the application is achieved by a binaural hearing assistance system comprising left and right hearing assistance devices adapted to be located at or in the left and right ears of a user, or to be fully or partially implanted in the user's head. The binaural hearing assistance system further comprises a user interface configured to communicate with the left and right hearing assistance devices and to allow a user to influence their function. Each of the left and right hearing assistance devices comprises:
A) a multitude of input units IUi, i = 1, …, M, M being larger than or equal to 2, for providing a time-frequency representation Xi(k, m) of an input signal xi(n) at the i-th input unit in a number of frequency bands and a number of time instances, k being the frequency band index, m being the time index and n representing time, the time-frequency representation Xi(k, m) of the i-th input signal comprising a target signal component and a noise signal component, the target signal component originating from a target signal source;
B) a multi-input-unit noise reduction system comprising a multi-channel beamformer filtering unit operationally coupled to the multitude of input units IUi, i = 1, …, M, and configured to provide a beamformed signal Y(k, m), wherein signal components from directions other than the direction of the target signal source are attenuated, whereas signal components from the direction of the target signal source are left unattenuated, or are attenuated less than signal components from other directions;
The binaural hearing assistance system is configured to allow the user to indicate, via the user interface, the direction to or the location of the target signal source relative to the user.
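Purely as an illustrative, non-limiting sketch of units A) and B) above, the time-frequency signal model Xi(k, m) = target + noise and the beamforming operation Y(k, m) = w(k)ᴴ X(k, m) could be written out as follows; the array sizes and the simple averaging weights are arbitrary assumptions, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

M, K, T = 2, 4, 200  # input units (microphones), frequency bands, time frames

# Hypothetical time-frequency input X[i, k, m]: a target component that is
# coherent across the input units plus an independent noise component per unit.
target = np.broadcast_to(rng.standard_normal((1, K, T)), (M, K, T))
noise = 0.5 * rng.standard_normal((M, K, T))
X = target + noise

# A (here fixed) beamformer: one weight vector w[k] per band; the beamformed
# signal is Y(k, m) = w(k)^H X(:, k, m).  Simple averaging weights for brevity.
w = np.full((K, M), 1.0 / M)
Y = np.einsum('ki,ikt->kt', w.conj(), X)

assert Y.shape == (K, T)
```

Even these trivial averaging weights reduce the incoherent noise power while passing the coherent target component, which is the basic mechanism the claimed beamformer filtering unit exploits.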
This has the advantage that the interaural cues of the target signal are retained even in interference scenarios, so that the target source presented to the user sounds as if it originates from the correct direction while the ambient noise is reduced.
In the present context, the term "beamforming" ("beamformer") means (providing) a "spatial filtering" of a number of input sensor signals with the aim of attenuating signal components from certain angles relative to signal components from other angles in the resulting beamformed signal. "Beamforming" comprises forming a linear combination of a number of sensor input signals (e.g. microphone signals), e.g. on a time-frequency unit basis, e.g. according to a predefined or dynamic/adaptive scheme.
The term "allowing a user to indicate a direction to or a location of a target signal source relative to the user" is in the present context taken to include a direct indication by the user (e.g. pointing out the location of the sound source on a display, or entering data defining the location of the target sound source relative to the user) as well as an indirect indication, wherein the information is derived from the behaviour of the user (e.g. via motion sensors monitoring the user's movement or orientation, or via electric signals from the user's brain, e.g. via EEG electrodes).
If the signal components from the direction of the target signal source are not left unattenuated but are indeed attenuated, albeit less than signal components from directions other than the target direction, the present system is preferably adapted so that said attenuation is (substantially) the same in the left and right hearing assistance devices. This has the advantage that the interaural cues of the target signal are retained even in interference scenarios, so that the target source presented to the user sounds as if it originates from the correct direction while the ambient noise is reduced.
In an embodiment, the binaural hearing assistance system is adapted to synchronize the respective multi-channel beamformer filtering units of the left and right hearing assistance devices so that the two beamformer filtering units focus on the spatial location of the target signal source. Preferably, the beamformers of the respective left and right hearing assistance devices are synchronized so that they focus on the same spatial location, namely that of the target signal source. In the present context, the term "synchronized" means that relevant data are exchanged between the two devices, the data are compared, and a resulting data set is determined based on the comparison. In an embodiment, the information transmitted between and shared by the left and right hearing assistance devices comprises the direction to the target source and/or range information regarding the distance to the target source.
In an embodiment, the user interface forms part of the left and right hearing assistance devices. In an embodiment, the user interface is implemented in the left and/or right hearing assistance device. In an embodiment, at least one of the left and right hearing assistance devices comprises an activation element allowing a user to indicate the direction to or location of the target signal source. In an embodiment, each of the left and right hearing assistance devices comprises an activation element, so that e.g. a particular angle to the left or to the right relative to the user's frontal direction can be indicated by a corresponding number of activations of the activation element on the relevant one of the two devices.
In an embodiment, the user interface forms part of an auxiliary device. In an embodiment, the user interface is fully or partially implemented in or by an auxiliary device. In an embodiment, the auxiliary device is or comprises a remote control of the hearing assistance system, a mobile phone, a smartwatch, glasses comprising a computer, a tablet computer, a personal computer, a laptop or notebook computer, a phablet, etc., or any combination thereof. In an embodiment, the auxiliary device comprises a smartphone. In an embodiment, the display and activation elements of the smartphone form part of the user interface.
In an embodiment, the function of indicating the direction to or location of the target signal source relative to the user is implemented as an APP running on the auxiliary device (e.g. a smartphone) using an interactive display (e.g. a touch-sensitive display) of the auxiliary device.
In an embodiment, the function of indicating the direction to or location of the target signal source relative to the user is implemented by an auxiliary device comprising a pointing device (e.g. a pen, a phone, an audio gateway, etc.) adapted for wireless communication with the left and/or right hearing assistance device. In an embodiment, the function of indicating the direction to or location of the target signal source relative to the user is implemented by a unit for sensing head tilt/motion, e.g. using gyroscope/accelerometer elements, e.g. located in the left and/or right hearing assistance device, or even via a brain interface, e.g. implemented using EEG electrodes located on parts of the left and/or right hearing assistance device in contact with the user's head.
In an embodiment, the user interface comprises electrodes located on parts of the left and/or right hearing assistance device in contact with the user's head. In an embodiment, the present system is adapted to indicate the direction to or location of the target signal source based on brain-wave signals picked up by the electrodes. In an embodiment, the electrodes are EEG electrodes. In an embodiment, one or more electrodes are located on each of the left and right hearing assistance devices. In an embodiment, one or more electrodes are fully or partially implanted in the user's head. In an embodiment, the binaural hearing assistance system is configured to exchange brain-wave signals (or signals derived therefrom) between the left and right hearing assistance devices. In an embodiment, an estimate of the location of the target sound source is extracted from the brain-wave signals picked up by the EEG electrodes of the left and right hearing assistance devices.
In an embodiment, the binaural hearing assistance system is adapted to allow an interaural wireless communication link to be established between the left and right hearing assistance devices, enabling data to be exchanged between them. In an embodiment, the system is configured so that data related to the control of the respective multi-microphone noise reduction systems (e.g. comprising data related to the direction to or location of the target sound source) can be exchanged between the hearing assistance devices. In an embodiment, the interaural wireless communication link is based on near-field (e.g. inductive) communication. Alternatively, the interaural wireless communication link is based on far-field (e.g. radiated field) communication, e.g. according to Bluetooth, Bluetooth Low Energy or a similar standard.
In an embodiment, the binaural hearing assistance system is adapted to allow external wireless communication links to be established between the auxiliary device and the respective left and right hearing assistance devices, enabling data to be exchanged between them. In an embodiment, the system is configured to allow data related to the direction to or location of the target sound source to be forwarded to one (or each) of the left and right hearing assistance devices. In an embodiment, the external wireless communication link is based on near-field (e.g. inductive) communication. Alternatively, the external wireless communication link is based on far-field (e.g. radiated field) communication, e.g. according to Bluetooth, Bluetooth Low Energy or a similar standard.
In an embodiment, the binaural hearing assistance system is adapted to allow the external wireless communication links (e.g. based on radiated fields) and the interaural wireless link (e.g. based on near-field communication) to be established simultaneously. This has the advantage of improving the reliability and flexibility of the communication between the auxiliary device and the left and right hearing assistance devices.
In an embodiment, each of the left and right hearing assistance devices further comprises a single-channel post-processing filter unit operationally coupled to the multi-channel beamformer filtering unit and configured to provide an enhanced signal. The aim of the single-channel post-filtering process is to suppress noise components from the target direction which have not been suppressed by the spatial filtering process (e.g. an MVDR beamforming process). A further aim is to suppress noise components during time periods where the target signal is present or dominant (e.g. as determined by a voice activity detector) as well as during time periods where the target signal is absent. In an embodiment, the single-channel post-filtering process is based on an estimate of the target signal-to-noise ratio of each time-frequency tile (m, k). In an embodiment, the estimate of the target signal-to-noise ratio of each time-frequency tile (m, k) is determined from the beamformed signal and a target-cancelled signal. The enhanced signal thus represents a spatially filtered (beamformed) and noise-reduced version of the current input signals (noise and target). Intentionally, the enhanced signal represents an estimate of the target signal, the direction to which has been indicated by the user via the user interface.
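As a hedged illustration of how such a post-filter gain could be derived from a per-tile SNR estimate, the following sketch uses a standard Wiener-style gain with a gain floor; the function name and the floor parameter `g_min` are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def postfilter_gain(snr_est, g_min=0.1):
    """Single-channel Wiener-style gain per time-frequency tile.

    snr_est : estimated target-to-noise power ratio per (k, m) tile.
    g_min   : gain floor limiting the maximum suppression (assumed parameter).
    """
    g = snr_est / (1.0 + snr_est)  # Wiener gain: ~0 for noise-only, ~1 for strong target
    return np.maximum(g, g_min)

snr = np.array([[0.0, 1.0, 9.0]])  # three tiles: no / equal / strong target
g = postfilter_gain(snr)
# g -> [[0.1, 0.5, 0.9]]
```

The gain floor is a common design choice to avoid musical-noise artefacts from over-suppression; the patent itself does not specify the gain rule.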
Preferably, the beamformer (the multi-channel beamformer filtering unit) is designed to pass signals originating from a specific direction/distance (e.g. a specific (φ, d) pair) with a gain of 0 dB, while suppressing signal components originating from any other spatial location. Alternatively, the beamformer is designed to pass signals originating from the specific (target) direction/range with a larger gain (a smaller attenuation) than signal components originating from any other spatial location. Preferably, the beamformers of the left and right hearing assistance devices are configured to apply the same gain (or attenuation) to signal components from the target signal source (so that spatial cues in the target signal are not obscured by the beamformers). In an embodiment, the multi-channel beamformer filtering unit of each of the left and right hearing assistance devices comprises a linearly constrained minimum variance (LCMV) beamformer. In an embodiment, the beamformer is implemented as a minimum variance distortionless response (MVDR) beamformer.
In an embodiment, the multi-channel beamformer filtering unit of each of the left and right hearing assistance devices comprises an MVDR filter providing filter weights w_mvdr(k, m), the filter weights being based on a look vector d(k, m) and an inter-input-unit noise covariance matrix R_vv(k, m). MVDR is an abbreviation of minimum variance distortionless response; "distortionless" refers to the target direction being left unaffected, and "minimum variance" refers to the maximal suppression of signals from any direction other than the target direction.
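The MVDR weights referred to above have the well-known closed form w = R_vv⁻¹ d / (dᴴ R_vv⁻¹ d). A minimal numerical sketch for one frequency band follows; the microphone count, the look vector and the noise covariance are made-up values used only to make the example self-contained:

```python
import numpy as np

def mvdr_weights(d, R_vv):
    """MVDR weights for one frequency band: w = R^{-1} d / (d^H R^{-1} d)."""
    Rinv_d = np.linalg.solve(R_vv, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Two-microphone example with an illustrative look vector and
# a partially correlated (diffuse-like) noise covariance matrix.
d = np.array([1.0, 0.8 * np.exp(1j * 0.3)])
R_vv = np.array([[1.0, 0.3], [0.3, 1.0]], dtype=complex)
w = mvdr_weights(d, R_vv)

# Distortionless constraint: the target direction passes with gain exactly 1.
assert np.isclose(w.conj() @ d, 1.0)
```

The final assertion checks exactly the "distortionless" property described in the text: wᴴd = 1, i.e. the target component is passed unaltered while the noise output power is minimized.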
The look vector d is a representation of the (e.g. relative) acoustic transfer functions from the (target) sound source to each of the input units (e.g. microphones) when the hearing assistance device is in operation. The look vector is preferably determined when a target (e.g. speech) signal is present, or dominant (e.g. present with a high probability, e.g. ≥ 70%), in the input sound signals (e.g. before the hearing device is taken into use, or adaptively). The inter-input (e.g. inter-microphone) covariance matrix is determined on this basis, along with the eigenvector corresponding to the dominant eigenvalue of this covariance matrix. The eigenvector corresponding to the dominant eigenvalue of the covariance matrix is the look vector d. The look vector depends on the relative positions of the target signal source and the user's ears (assuming the hearing assistance devices are located at the ears). The look vector thus represents an estimate of the transfer functions from the target sound source to the inputs of the hearing device (e.g. to each of a number of microphones).
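The eigenvector-based look-vector estimate described above can be sketched as follows; the synthetic rank-one covariance matrix is an assumption used only to make the example self-contained:

```python
import numpy as np

# Synthetic target-dominant input covariance for M = 2 microphones:
# a rank-one target term d d^H plus a small noise floor on the diagonal.
d_true = np.array([1.0, 0.7 * np.exp(1j * 0.5)])
R_xx = np.outer(d_true, d_true.conj()) + 0.01 * np.eye(2)

# Look-vector estimate: the eigenvector of the dominant eigenvalue.
eigvals, eigvecs = np.linalg.eigh(R_xx)
d_est = eigvecs[:, np.argmax(eigvals)]

# The estimate matches d_true up to a scale/phase factor;
# compare the normalised directions.
cos_sim = abs(d_est.conj() @ d_true) / (np.linalg.norm(d_est) * np.linalg.norm(d_true))
assert cos_sim > 0.999
```

Because the estimate is only defined up to scale and phase, practical systems typically normalise it, e.g. relative to a reference microphone; that normalisation step is omitted here.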
In an embodiment, the multi-channel beamformer filtering unit and/or the single-channel post-processing filter unit is configured to retain the interaural spatial cues of the target signal. In an embodiment, the interaural spatial cues of the target source are retained even in interference scenarios. Hence, the target signal source presented to the user sounds as if it originates from the correct direction while the ambient noise is reduced. In other words, the target components arriving at each eardrum (or, equivalently, microphone) are retained in the beamformer outputs, resulting in the preservation of the interaural cues of the target components. In an embodiment, the output of the multi-channel beamformer filtering unit is processed by a single-channel post-processing filter unit (SC-NR) in each of the left and right hearing assistance devices. If these SC-NR systems operate independently and not in collaboration, they may distort the interaural cues of the target components, which may lead to a distortion of the perceived target source position. To avoid this situation, the SC-NR systems preferably exchange estimates of their (time-frequency dependent) gain values and decide to use the same gain value, e.g. the maximum of the two gain values in a given time-frequency unit (k, m). In this way, the suppression applied in a given time-frequency unit is the same at both ears, and no artificial interaural level differences are introduced.
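The gain-exchange rule described above (use the same post-filter gain at both ears, e.g. the per-tile maximum of the two locally estimated gains) might be sketched as:

```python
import numpy as np

def synchronise_gains(g_left, g_right):
    """Binaural SC-NR gain synchronisation: per time-frequency tile, take the
    maximum (i.e. the milder suppression) of the two locally estimated gains,
    so no artificial interaural level difference is introduced."""
    g = np.maximum(g_left, g_right)
    return g, g  # identical gain maps for the left and right devices

g_l = np.array([[0.2, 0.9], [0.5, 0.1]])
g_r = np.array([[0.4, 0.6], [0.5, 0.3]])
gl_sync, gr_sync = synchronise_gains(g_l, g_r)
# gl_sync == gr_sync == [[0.4, 0.9], [0.5, 0.3]]
```

Taking the maximum errs on the side of less suppression, which trades a little residual noise for intact interaural cues; the patent names the maximum only as one example of a shared-gain rule.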
In an embodiment, each of the left and right hearing assistance devices comprises a memory unit comprising a number of predetermined look vectors, each look vector corresponding to a beamformer pointing at and/or focused on a predetermined direction and/or location.
In an embodiment, the user provides, via the user interface, information about the direction (angle φ) to and the distance (range d) of the target signal source. In an embodiment, the number of (sets of) predetermined look vectors stored in the memory unit corresponds to the number of specific values (sets) of direction (φ) and distance (range d). As the beamformers of the left and right hearing assistance devices are synchronized (via the communication link between the devices), the two beamformers focus on the same point (or spatial location). This has the advantage that the user provides the direction to/location of the target source, whereby the corresponding (predetermined) look vector (or set of beamformer weights) to be applied in the current acoustic situation is selected.
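The selection of a predetermined look vector from the user-indicated direction and distance could be sketched as a nearest-key lookup; the stored vectors, the (φ, d) key grid and the distance metric below are purely hypothetical illustrations of the stored-look-vector idea:

```python
import numpy as np

# Hypothetical store of predetermined look vectors, indexed by
# (azimuth phi in degrees, range in metres) pairs measured in advance.
lookup = {
    (0.0, 1.0):   np.array([1.0, 1.0 + 0.0j]),
    (90.0, 1.0):  np.array([1.0, 0.6 * np.exp(-1j * 1.2)]),
    (-90.0, 1.0): np.array([1.0, 0.6 * np.exp(+1j * 1.2)]),
}

def select_look_vector(phi, rng_m):
    """Pick the stored look vector whose (phi, range) key is nearest to the
    direction/distance the user indicated via the user interface."""
    key = min(lookup, key=lambda k: (k[0] - phi) ** 2 + (k[1] - rng_m) ** 2)
    return key, lookup[key]

key, d = select_look_vector(80.0, 1.1)  # user points slightly right of 90 degrees
assert key == (90.0, 1.0)
```

Both devices performing the same lookup on the same shared (φ, d) input is one simple way the "synchronized, focused on the same point" behaviour could be realised.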
In an embodiment, each of the left and right hearing assistance devices comprises a voice activity detector (VAD) for determining the time periods where the input signal comprises human speech. In an embodiment, the hearing assistance system is arranged so that the information communicated between and shared by the left and right hearing assistance devices comprises voice activity detector (VAD) values or decisions and the gain values applied by the single-channel noise reduction systems, in order to improve performance. In the present context, a voice signal includes a speech signal from a human being. It may also include other forms of utterance generated by the human speech system (e.g. singing). In an embodiment, the voice detector unit is adapted to classify the user's current acoustic environment as a "voice" or "no-voice" environment. This has the advantage that time periods where the electric microphone signal comprises human utterances (e.g. speech) in the user's environment can be identified and thus separated from time periods comprising only other sound sources (e.g. artificially generated noise). In an embodiment, the voice detector is adapted to detect the user's own voice as "voice" as well. Alternatively, the voice detector is adapted to exclude the user's own voice from the detection of "voice". In an embodiment, the binaural hearing assistance system is adapted so that the determination of the time periods where the input signal comprises human speech is at least partially (e.g. solely) based on brain-wave signals. In an embodiment, the binaural hearing assistance system is adapted so that this determination is based on a combination of brain-wave signals and one or more signals from the input units, e.g. from one or more microphones. In an embodiment, the binaural hearing assistance system is adapted to pick up the brain-wave signals using electrodes located on parts of the left and/or right hearing assistance device in contact with the user's head (e.g. located in the ear canal).
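A voice activity detector as referred to above can take many forms; the following is only a toy energy-based sketch (the frame length and threshold are arbitrary assumptions), not the detector specified by the patent:

```python
import numpy as np

def frame_vad(x, frame_len=160, threshold_db=-30.0):
    """Toy energy-based voice activity detector: flag frames whose energy
    exceeds a threshold relative to the loudest frame.  A practical VAD
    (let alone an EEG-assisted one) would be considerably more elaborate."""
    n_frames = len(x) // frame_len
    frames = x[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1)
    with np.errstate(divide="ignore"):
        level_db = 10.0 * np.log10(energy / energy.max())
    return level_db > threshold_db

# Near-silence followed by a louder "speech" burst.
x = np.concatenate([0.001 * np.ones(320), np.ones(320)])
vad = frame_vad(x)
# vad -> [False, False, True, True]
```

Exchanging the resulting per-frame decisions between the two devices, as the text suggests, lets both noise reduction systems agree on when the noise statistics may safely be updated.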
In an embodiment, at least one (such as most or all) of the input units IUi of the left and right hearing assistance devices comprises a microphone for converting input sound into an electric input signal xi(n), and a time-to-time-frequency conversion unit for providing a time-frequency representation Xi(k, m) of the input signal xi(n) at the i-th input unit in a number of frequency bands k and a number of time instances m. Preferably, the binaural hearing assistance system comprises at least two microphones in total, e.g. at least one in each of the left and right hearing assistance devices. In an embodiment, each of the left and right hearing assistance devices comprises M input units IUi in the form of microphones, which are physically located in the respective left and right hearing assistance device (or at least at the respective left and right ear). In an embodiment, M equals 2. Alternatively, at least one of the input units providing a time-frequency representation of an input signal to one of the left and right hearing assistance devices receives its input signal from another physical device, e.g. from the respective other hearing assistance device, from an auxiliary device such as a mobile phone, from a remote control for controlling the hearing assistance devices, or from a dedicated extra microphone arrangement (e.g. positioned to specifically pick up the target signal or a noise signal).
In an embodiment, the binaural hearing assistance system is adapted to provide a frequency-dependent gain to compensate for a hearing loss of the user. In an embodiment, each of the left and right auditory prostheses comprises a signal processing unit for enhancing the input signals and providing a processed output signal.
In an embodiment, the auditory prosthesis comprises an output transducer for converting an electric signal into a stimulus perceived by the user as an acoustic signal. In an embodiment, the output transducer comprises a number of cochlear implant electrodes or the vibrator of a bone-conduction hearing device. In an embodiment, the output transducer comprises a receiver (loudspeaker) for providing the stimulus to the user as an acoustic signal.
In an embodiment, the left and right auditory prostheses are portable devices, e.g. devices comprising a local energy source, such as a battery, e.g. a rechargeable battery.
In an embodiment, each of the left and right auditory prostheses comprises a forward or signal path between an input transducer (a microphone system and/or a direct electric input, e.g. a wireless receiver) and an output transducer. In an embodiment, the signal processing unit is located in this forward path. In an embodiment, the signal processing unit is adapted to provide a frequency-dependent gain according to the particular needs of the user. In an embodiment, the left and right auditory prostheses comprise an analysis path with functionality for analyzing the input signal (e.g. determining a level, a modulation, a signal type, an acoustic feedback estimate, etc.). In an embodiment, some or all signal processing of the analysis path and/or the signal path is performed in the frequency domain. In an embodiment, some or all signal processing of the analysis path and/or the signal path is performed in the time domain.
In an embodiment, the left and right auditory prostheses comprise an analogue-to-digital (AD) converter to digitize an analogue input at a predetermined sampling rate, e.g. 20 kHz. In an embodiment, the auditory prostheses comprise a digital-to-analogue (DA) converter to convert a digital signal into an analogue output signal, e.g. for being presented to the user via the output transducer.
In an embodiment, the left and right auditory prostheses, e.g. an input unit such as a microphone unit and/or a transceiver unit, comprise a TF conversion unit for providing a time-frequency representation of an input signal. In an embodiment, the time-frequency representation comprises an array or map of corresponding complex or real values of the signal in question in a particular time and frequency range. In an embodiment, the TF conversion unit comprises a filter bank for filtering a (time-varying) input signal and providing a number of (time-varying) output signals, each comprising a distinct frequency range of the input signal. In an embodiment, the TF conversion unit comprises a Fourier transformation unit for converting the time-varying input signal into a (time-varying) signal in the frequency domain. In an embodiment, the frequency range considered by the auditory prosthesis, from a minimum frequency f_min to a maximum frequency f_max, comprises a part of the typical human audible frequency range from 20 Hz to 20 kHz, e.g. a part of the range from 20 Hz to 12 kHz. In an embodiment, the signals of the forward and/or analysis paths of the auditory prosthesis are split into NI frequency bands, where NI is e.g. larger than 5, such as larger than 10, such as larger than 50, such as larger than 100, such as larger than 500, at least some of which are processed individually.
In an embodiment, the left and right auditory prostheses comprise a level detector (LD) for determining the level of an input signal (e.g. on a band level and/or of the full (wideband) signal). The input level of the electric microphone signal picked up from the user's acoustic environment is a classification parameter of the acoustic environment. In an embodiment, the level detector is adapted to classify the user's current acoustic environment according to a number of different (e.g. average) signal levels, e.g. as a high-level or a low-level environment.
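A level detector of the kind described could be sketched as follows; the dB threshold and the two-class "high"/"low" labels are illustrative assumptions, not values from the source:

```python
import numpy as np

def band_levels_db(X):
    """Per-band level estimate from a time-frequency representation.

    X: complex array (n_frames, n_bands). Returns the average power
    per band in dB (relative to unit magnitude).
    """
    power = np.mean(np.abs(X) ** 2, axis=0) + 1e-12
    return 10.0 * np.log10(power)

def classify_environment(levels_db, high_thresh_db=-20.0):
    """Classify the acoustic environment as 'high' or 'low' level,
    here from the mean of the band levels (hypothetical rule)."""
    return "high" if np.mean(levels_db) > high_thresh_db else "low"
```

A wideband variant would simply average the power across all bands before converting to dB.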
In an embodiment, the left and right auditory prostheses comprise a correlation detector configured to estimate the autocorrelation of a signal of the forward path, e.g. of an electric input signal. In an embodiment, the correlation detector is configured to estimate the autocorrelation of a feedback-corrected electric input signal. In an embodiment, the correlation detector is configured to estimate the autocorrelation of the electric output signal.
In an embodiment, the correlation detector is configured to estimate the cross-correlation between two signals of the forward path, a first signal being tapped from the forward path before the signal processing unit (where a frequency-dependent gain may be applied) and a second signal being tapped from the forward path after the signal processing unit. In an embodiment, the first of the signals entering the cross-correlation calculation is the electric input signal or a feedback-corrected input signal. In an embodiment, the second of the signals entering the cross-correlation calculation is the processed output signal of the signal processing unit or the electric output signal (fed to the output transducer for presentation to the user).
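The cross-correlation estimate between the pre- and post-processing taps can be sketched as below (a straightforward normalized lag search; the normalization and lag range are my own choices):

```python
import numpy as np

def normalized_cross_correlation(u, v, max_lag):
    """Normalized cross-correlation between a first signal u, tapped
    before the signal processing unit, and a second signal v, tapped
    after it, for non-negative lags 0..max_lag.

    Returns an array c where c[l] estimates the correlation of u(n)
    with v(n + l). Illustrative sketch only.
    """
    u = u - np.mean(u)
    v = v - np.mean(v)
    denom = np.sqrt(np.sum(u ** 2) * np.sum(v ** 2)) + 1e-12
    return np.array(
        [np.sum(u[: len(u) - l] * v[l:]) for l in range(max_lag + 1)]
    ) / denom
```

The lag of the correlation peak would then correspond to the processing delay of the forward path, which is one piece of information such a detector could provide.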
In an embodiment, the left and right auditory prostheses comprise an acoustic (and/or mechanical) feedback detection and/or suppression system. In an embodiment, the auditory prostheses further comprise other functionality relevant for the application in question, e.g. compression, etc.
In an embodiment, the left and right auditory prostheses comprise a hearing device, such as a hearing aid, e.g. a hearing instrument, e.g. a hearing instrument adapted for being located at the ear or fully or partially in the ear canal of the user or fully or partially implanted in the head of the user, an earphone, a headset, an ear protection device, or a combination thereof.
use
Moreover, the invention provides use of a binaural hearing assistance system as described above, in the detailed description of "embodiments" and as defined in the claims. In an embodiment, use in a binaural hearing aid system is provided.
method
In a further aspect, the present application provides a method of operating a binaural hearing assistance system comprising left and right auditory prostheses adapted for being located at or in a left and right ear, respectively, of a user, or for being fully or partially implanted in the head of the user, the binaural hearing assistance system further comprising a user interface configured to communicate with the left and right auditory prostheses and to allow a user to influence the functionality of the left and right auditory prostheses. The method comprises, in each of the left and right auditory prostheses:
A) providing a time-frequency representation X_i(k,m) of an input signal x_i(n) at an i-th input unit in a number of frequency bands and a number of time instances, k being a frequency band index, m being a time index, n representing time, i = 1, ..., M, M being larger than or equal to 2, the time-frequency representation X_i(k,m) of the i-th input signal comprising a target signal component and a noise signal component, the target signal component originating from a target signal source;
B) providing a beamformed signal Y(k,m) from the time-frequency representations X_i(k,m) of the input signals, wherein signal components from directions other than the direction of the target signal source are attenuated, whereas signal components from the direction of the target signal source are left unattenuated or are attenuated less than the signal components from the other directions; and
configuring the binaural hearing assistance system to allow a user to indicate a direction to, or a location of, the target signal source relative to the user via the user interface.
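Step B above refers to beamforming; the detailed description later names the MVDR beamformer as one option. A minimal per-band sketch of MVDR weights and their application, assuming a known look vector d and noise covariance R (all names are my own, not the patent's):

```python
import numpy as np

def mvdr_weights(R_noise, d):
    """MVDR beamformer weights for one frequency band:
    w = R^{-1} d / (d^H R^{-1} d), distortionless toward the look
    vector d, minimizing noise power from other directions.
    """
    Rinv_d = np.linalg.solve(R_noise, d)
    return Rinv_d / (np.conj(d) @ Rinv_d)

def beamform(w, X):
    """Beamformed signal Y(m) = w^H x(m) per frame; X has shape
    (n_frames, M) with one microphone signal per column."""
    return X @ np.conj(w)
```

The distortionless constraint w^H d = 1 is what preserves the target component (and hence its cues) in the output Y(k,m), while components from other spatial locations are attenuated.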
It is intended that some or all of the structural features of the system described above, in the detailed description of "embodiments" and defined in the claims can be combined with embodiments of the method, when appropriately substituted by a corresponding process, and vice versa. Embodiments of the method have the same advantages as the corresponding systems.
computer-readable medium
The present invention further provides a tangible computer-readable medium storing a computer program comprising program code which, when the computer program is run on a data processing system, causes the data processing system to perform at least some (such as a majority or all) of the steps of the method described above, in the detailed description of "embodiments" and defined in the claims. In addition to being stored on a tangible medium such as a disk, CD-ROM, DVD, hard disk or any other machine-readable medium, the computer program can also be transmitted via a transmission medium, such as a wired or wireless link or a network, e.g. the Internet, and loaded into a data processing system for being executed at a location different from that of the tangible medium.
data processing system
The present invention further provides a data processing system comprising a processor and program code which causes the processor to perform at least some (such as a majority or all) of the steps of the method described above, in the detailed description of "embodiments" and defined in the claims.
definition
In the present context, an "auditory prosthesis" refers to a device, such as a hearing instrument or an active ear-protection device or another audio processing device, which is adapted to improve, augment and/or protect the hearing capability of a user by receiving acoustic signals from the user's surroundings, generating corresponding audio signals, possibly modifying the audio signals and providing the possibly modified audio signals as audible signals to at least one of the user's ears. An "auditory prosthesis" further refers to a device, such as a headphone or a headset, adapted to receive audio signals electronically, possibly modify the audio signals and provide the possibly modified audio signals as audible signals to at least one of the user's ears. Such audible signals may e.g. be provided in the form of acoustic signals radiated into the user's outer ears, acoustic signals transferred as mechanical vibrations to the user's inner ears through the bone structure of the user's head and/or through parts of the middle ear, as well as electric signals transferred directly or indirectly to the cochlear nerve of the user.
The auditory prosthesis may be configured to be worn in any known way, e.g. as a unit arranged behind the ear with a tube leading radiated acoustic signals into the ear canal or with a loudspeaker arranged close to or in the ear canal, as a unit entirely or partly arranged in the pinna and/or in the ear canal, as a unit attached to a fixture implanted into the skull bone, as an entirely or partly implanted unit, etc. The auditory prosthesis may comprise a single unit or several units communicating electronically with each other.
More generally, an auditory prosthesis comprises an input transducer for receiving an acoustic signal from the user's surroundings and providing a corresponding input audio signal, and/or a receiver for electronically (i.e. wired or wirelessly) receiving an input audio signal, a signal processing circuit for processing the input audio signal, and an output means for providing an audible signal to the user in dependence on the processed audio signal. In some auditory prostheses, an amplifier may constitute the signal processing circuit. In some auditory prostheses, the output means may comprise an output transducer, such as a loudspeaker for providing an airborne acoustic signal or a vibrator for providing a structure-borne or liquid-borne acoustic signal. In some auditory prostheses, the output means may comprise one or more output electrodes for providing electric signals.
In some auditory prostheses, the vibrator may be adapted to provide a structure-borne acoustic signal transcutaneously or percutaneously to the skull bone. In some auditory prostheses, the vibrator may be implanted in the middle ear and/or in the inner ear. In some auditory prostheses, the vibrator may be adapted to provide a structure-borne acoustic signal to a middle-ear bone and/or to the cochlea. In some auditory prostheses, the vibrator may be adapted to provide a liquid-borne acoustic signal to the cochlear liquid, e.g. through the oval window. In some auditory prostheses, the output electrodes may be implanted in the cochlea or on the inside of the skull bone and may be adapted to provide the electric signals to the hair cells of the cochlea, to one or more auditory nerves, to the auditory cortex and/or to other parts of the cerebral cortex.

A "hearing assistance system" refers to a system comprising one or two auditory prostheses, and a "binaural hearing assistance system" refers to a system comprising two auditory prostheses and being adapted to cooperatively provide audible signals to both of the user's ears. Hearing assistance systems or binaural hearing assistance systems may further comprise "auxiliary devices", which communicate with the auditory prostheses and affect and/or benefit from the function of the auditory prostheses. Auxiliary devices may be e.g. remote controls, audio gateway devices, mobile phones, public-address systems, car audio systems or music players. Auditory prostheses, hearing assistance systems or binaural hearing assistance systems may e.g. be used for compensating for a hearing-impaired person's loss of hearing capability, augmenting or protecting a normal-hearing person's hearing capability and/or conveying electronic audio signals to a person.
Further objects of the invention are achieved by the embodiments defined in the dependent claims and in the detailed description of the invention.
As used herein, the singular forms are intended to include the plural forms as well (i.e. to have the meaning "at least one"), unless expressly stated otherwise. It will be further understood that the terms "has", "includes", "comprises", "including" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or combinations thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present, unless expressly stated otherwise. The term "and/or", as used herein, includes any and all combinations of one or more of the associated listed items. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless expressly stated otherwise.
brief description of the drawings
The invention will be explained more fully below in connection with preferred embodiments and with reference to the drawings.
Figs. 1A-1D show four embodiments of a binaural hearing assistance system comprising left and right auditory prostheses, each device comprising a beamformer/noise reduction system that is binaurally synchronized via a user interface.
Fig. 2 A-2B shows the 5th embodiment of the ears hearing assistance system comprising the left and right auditory prosthesis with the synchronous Beam-former/noise reduction system of ears, wherein left and right auditory prosthesis comprises antenna for setting up communication link between ear between two and transceiver circuit, Fig. 2 A shows exemplary left and right auditory prosthesis, and Fig. 2 B shows corresponding block diagram.
Fig. 3 A-3D schematically shows the mutual locus of element that represent in sphere and orthogonal coordinate system, ears hearing assistance system and/or the sound source example relative to user.
Fig. 4 A-4B schematically shows two examples of target sound source relative to the position of user, and Fig. 4 A shows the dead ahead user, and Fig. 4 B shows in the quadrant (x>0, y>0) on the user left side.
Fig. 5 schematically shows a number of predetermined look vectors relative to the user.
Fig. 6A shows an embodiment of a binaural hearing aid system comprising left and right auditory prostheses in communication with an auxiliary device, and Fig. 6B shows the auxiliary device functioning as a user interface of the binaural hearing aid system.
The figures are schematic and simplified for clarity, and they just show details which are essential to the understanding of the invention, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts.
Further scope of applicability of the present invention will appear from the detailed description given below. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only. Other embodiments may become apparent to those skilled in the art from the detailed description below.
detailed description of embodiments
Figs. 1A-1D show four embodiments of a binaural hearing assistance system BHAS comprising a left auditory prosthesis HAD_L and a right auditory prosthesis HAD_R adapted for being located at or in a left and right ear, respectively, of a user, or for being fully or partially implanted in the user's head. The binaural hearing assistance system BHAS further comprises a user interface UI configured to communicate with the left and right auditory prostheses, thereby allowing a user to influence the functionality of the system and of the left and right auditory prostheses.
The solid-line boxes of the embodiment of Fig. 1A (input units IU_L, IU_R, noise reduction systems NRS_L, NRS_R, and user interface UI) constitute the basic elements of a hearing assistance system BHAS according to the present invention. Each of the left auditory prosthesis HAD_L and the right auditory prosthesis HAD_R comprises a multitude of input units IU_i, i = 1, ..., M, M being larger than or equal to 2 (represented in Fig. 1A by the left and right input units IU_L, IU_R, respectively). The respective input units IU_L, IU_R provide time-frequency representations X_i(k,m) of the input signals x_i(n) at the i-th input units (signals x_1l, ..., x_Ml and x_1r, ..., x_Mr in Fig. 1A) in a number of frequency bands and a number of time instances (signals X_L and X_R in Fig. 1A, each representing the M signals of the left and right auditory prosthesis, respectively), k being a frequency band index, m a time index, and n representing time. The number of input units of each of the left and right auditory prostheses is here assumed to be M. Alternatively, the numbers of input units of the two devices may be different. Further, as indicated in Fig. 1A by the optional left-to-right and right-to-left auditory prosthesis signals x_il, x_ir, sensor signals (x_il, x_ir, e.g. microphone signals) picked up by the device at one ear may be transmitted to the device at the other ear and used as inputs to the multi-input-unit noise reduction system NRS of the auditory prosthesis in question. Such signal transmission between the devices may be via a wired connection or, preferably, via a wireless link (see e.g. IA-WL in Figs. 2A-2B and 6A). Additionally, sensor signals (e.g. microphone signals) picked up at other communication devices (e.g. a wireless microphone or the microphone of a mobile phone) may be transmitted to the multi-input-unit noise reduction systems NRS of one or both auditory prostheses of the system and used as inputs thereto (see e.g. the antenna and transceiver circuitry ANT, RF-Rx/Tx in Fig. 2B and the communication link WL-RF in Fig. 6A).

The time-varying input signals x_i(n), and the time-frequency representations X_i(k,m) of the i-th (i = 1, ..., M) input signals, are assumed to comprise a target signal component and a noise signal component, the target signal component originating from a target signal source. Preferably, the time-varying input signals x_il(n) and x_ir(n) originate from the acoustic signals received at the respective left and right ears of the user (so as to include the spatial cues related to the user's head and body). Each of the left auditory prosthesis HAD_L and the right auditory prosthesis HAD_R comprises a multi-input-unit noise reduction system NRS_L, NRS_R, comprising a multi-channel beamformer filtering unit operationally coupled to the multitude of input units IU_i, i = 1, ..., M (IU_L and IU_R) and configured to provide a (resulting) beamformed signal (denoted Ŷ in Fig. 1A), wherein signal components from directions other than the direction of the target signal source are attenuated, whereas signal components from the direction of the target signal source are left unattenuated or are attenuated less than signal components from the other directions.

Furthermore, the binaural hearing assistance system BHAS is configured to allow a user to indicate a direction to, or a location of, the target signal source relative to the user via the user interface UI, see signals ds from the user interface to the multi-input-unit noise reduction systems NRS_L, NRS_R of the left and right auditory prostheses, respectively. The user interface may e.g. comprise corresponding activation elements on the left and right auditory prostheses. In an embodiment, the system is configured so that an activation on the left auditory prosthesis HAD_L represents a predefined angular step (e.g. 30°) in a first (e.g. counter-clockwise) direction of the direction from the user to the target signal source (relative to the current state, e.g. relative to a frontal direction, as in Fig. 4A and Fig. 5), and an activation on the right auditory prosthesis HAD_R represents a predefined angular step (e.g. 30°) in the second (opposite, e.g. clockwise) direction. For each predefined direction, corresponding predetermined filtering weights of the beamformer filtering units are stored in the system and applied according to the user's current indication (see the description in connection with Fig. 5). Other user interfaces are of course possible, e.g. implemented in a separate (auxiliary) device, such as a smartphone (see e.g. Figs. 6A-6B).
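The angular-step user interface described above reduces to a simple update rule; a sketch under the assumption of a 30° step and a direction stored as degrees counter-clockwise from the frontal direction (the function name and conventions are my own):

```python
def steer_direction(current_deg, press_left, step_deg=30):
    """Update the stored look direction on a user-interface activation.

    An activation on the left device steps the target direction
    counter-clockwise by step_deg; an activation on the right device
    steps it clockwise. The result wraps to [0, 360).
    """
    delta = step_deg if press_left else -step_deg
    return (current_deg + delta) % 360
```

The resulting quantized direction would then index the stored set of precomputed beamformer weights, one per predefined direction, in both prostheses simultaneously.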
The dashed boxes of Fig. 1A (signal processing units SP_L, SP_R and output units OU_L, OU_R) represent optional, additional functionality forming part of an embodiment of the hearing assistance system BHAS. The signal processing units SP_L, SP_R may e.g. further process the beamformed signal, e.g. apply a (time-, level- and) frequency-dependent gain according to the user's needs (to compensate for the user's hearing impairment), and provide processed output signals. The output units OU_L, OU_R are preferably adapted to provide the resulting electric signals of the forward paths of the left and right auditory prostheses (e.g. the respective processed output signals) to the user as stimuli perceivable as sound representing the resulting electric (audio) signals of the forward paths.
Fig. 1B shows a binaural hearing assistance system BHAS according to the invention comprising a left auditory prosthesis HAD_L and a right auditory prosthesis HAD_R. Compared to the embodiment of Fig. 1A, the embodiment of Fig. 1B does not include the optional (dashed) elements, and the input units IU_L and IU_R are detailed as the individual input units IU_1l, ..., IU_Ml and IU_1r, ..., IU_Mr of the left and right auditory prostheses, respectively. Each input unit IU_i (IU_il and IU_ir) comprises an input transducer or receiver IT_i for converting an acoustic signal x_i into an electric input signal x'_i, or for receiving an electric input signal representing a sound signal. Each input unit IU_i further comprises a time to time-frequency conversion unit, e.g. an analysis filter bank AFB, for splitting the electric input signal x'_i into a number of frequency bands k, thereby providing signal X_i (X_il, X_ir). Furthermore, each of the multi-input-unit noise reduction systems NRS_L, NRS_R of the left and right auditory prostheses comprises a multi-channel beamformer filtering unit ("Beamformer", e.g. an MVDR beamformer) providing a beamformed signal Y (Y_L, Y_R), and additionally comprises a single-channel post-filtering unit SC-NR providing an enhanced (beamformed and noise-reduced) signal Ŷ. The single-channel post-filtering unit SC-NR is operationally coupled to the multi-channel beamformer filtering unit and configured to provide the enhanced signal Ŷ. The purpose of the single-channel post-filtering unit SC-NR is to suppress noise components from the target direction, which have not been suppressed by the multi-channel beamformer filtering unit.
Fig. 1C shows a third embodiment of a binaural hearing assistance system comprising left and right auditory prostheses HAD_L, HAD_R with binaurally synchronized beamformer/noise reduction systems NRS_L, NRS_R. In the embodiment of Fig. 1C, each of the left and right auditory prostheses comprises two input units, IU_1l, IU_2l and IU_1r, IU_2r, respectively, here microphone units. The system is assumed to operate in several sub-bands in parallel, but the analysis/synthesis filter banks required for this are suppressed in Fig. 1C (they are shown in Fig. 1B). The user provides information about the target direction and distance (d = range) via the user interface (see "Target location fed in by user" in Fig. 1C, and e.g. Fig. 3 for definitions and Figs. 1A and 6A-6B for examples of a user interface UI for providing this information). The hearing assistance system uses this information to look up, in a database (memory) of precomputed look vectors and/or beamformer weights, the beamformer pointing at/focusing on the correct direction/range, see the exemplary predefined directions and ranges in Fig. 5. Since the left-ear and right-ear beamformers are synchronized, both beamformers focus on the same point (see e.g. Figs. 4A-4B). The beamformers are e.g. designed to pass signals originating from the specific (phi, d) pair with a gain of 0 dB and to suppress signal components originating from any other spatial location, i.e. they may be minimum variance distortionless response (MVDR) beamformers or, more generally, linearly constrained minimum variance (LCMV) beamformers. In other words, the target components arriving at each eardrum (or, to some extent, microphone) are preserved in the beamformer outputs Y_L(k,m) and Y_R(k,m), leading to preservation of the binaural cues of the target components. The beamformer outputs Y_L(k,m), Y_R(k,m) are fed to a single-channel post-filtering unit SC-NR in each auditory prosthesis for further processing. The task of the single-channel post-filtering units SC-NR is to suppress noise components both during time periods where the target signal is present or dominant (as determined by a voice activity detector VAD, see signals cnt_l, cnt_r) and when the target signal is absent (likewise indicated by the VAD, see signals cnt_l, cnt_r). Preferably, the VAD control signals cnt_l, cnt_r (e.g. binary voice/no-voice decisions, or soft, probability-based dominant/not-dominant decisions) are defined for each time-frequency tile (m,k). In an embodiment, the single-channel post-filtering units are based on an estimate of the target signal-to-noise ratio of each time-frequency tile (m,k). Such an SNR estimate may e.g. be based on the magnitude of the modulation (e.g. a modulation index) of the respective beamformed signals Y_L(k,m) and Y_R(k,m). The signals Y_L, Y_R from the beamformers of the left and right auditory prostheses are fed to the respective VADs, allowing the VAD to base its "voice/no-voice" decision on the beamformed output signals Y_L, Y_R, in addition to or as an alternative to the microphone signals X_1l (X_2l), X_1r (X_2r). In an embodiment, the beamformed signal is taken into account (e.g. weighted) at suitably low signal-to-noise ratios (SNR).
In an embodiment, each of the left and right auditory prostheses HAD_L, HAD_R comprises a target-cancelling beamformer TC-BF, as shown in Fig. 1D. In an embodiment, each of the left and right auditory prostheses HAD_L, HAD_R comprises a target-cancelling beamformer TC-BF receiving the input signals X_1, ..., X_M and providing gains G_sc to be applied to the respective time-frequency units of the beamformed signal Y in the respective single-channel post-filtering unit SC-NR, as shown in Fig. 1D. Compared to the embodiment of Fig. 1C, the embodiment of Fig. 1D additionally provides an optional exchange of (one or more) input unit signals x'_i,l and x'_i,r between the two auditory prostheses, as indicated by the leftmost arrows between the two devices. Preferably, the resulting signal Ŷ is determined from the beamformed signal Y and the target-cancelled signal via an estimate of the target signal-to-noise ratio of each time-frequency tile (m,k) (see the gains G_sc in Fig. 1D). If the single-channel post-filtering units SC-NR operate independently and not cooperatively, they may distort the binaural cues of the target components, which can lead to a distorted perceived location of the target source. To avoid this situation, the SC-NR systems may exchange estimates, e.g. their (time-frequency dependent) gain values (indicated by the rightmost arrows between the two devices, carrying the SC-NR gains, VAD decisions, etc. in Fig. 1C and G_sc,l, G_sc,r in Fig. 1D), and agree to use the same gain value, e.g. the maximum of the two gain values, in a given time-frequency unit. In this way, the suppression applied to a given time-frequency unit is identical at the two ears, so that no artificial interaural level differences are introduced. A user interface UI for providing information about the look vector is shown between the two hearing aid devices (at the middle arrow). The user interface may comprise or consist of sensors for extracting information from the user about the current target sound source (e.g. EEG electrodes and/or motion sensors, etc., and the associated signal processing).
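The cooperative gain rule described above, where both devices apply the same (e.g. the maximum) post-filter gain per time-frequency unit, is a one-line operation once the gains have been exchanged over the interaural link (function name my own):

```python
import numpy as np

def synchronize_gains(g_left, g_right):
    """Binaural synchronization of single-channel post-filter gains.

    Given the locally computed gain maps of the left and right devices
    (arrays over time-frequency units), both devices apply the same
    gain, here the element-wise maximum, so that identical suppression
    is applied at the two ears and no artificial interaural level
    difference is introduced.
    """
    g = np.maximum(g_left, g_right)
    return g, g
```

Choosing the maximum is the less aggressive of the two local decisions; a design could equally use the minimum or an average, trading noise suppression against cue preservation.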
Figs. 2A-2B show a fifth embodiment of a binaural hearing assistance system comprising left and right auditory prostheses with binaurally synchronized beamformer/noise reduction systems, wherein the left and right auditory prostheses comprise antenna and transceiver circuitry for establishing an interaural communication link between them, Fig. 2A showing exemplary left and right auditory prostheses and Fig. 2B showing corresponding block diagrams.
Fig. 2A shows an example of a binaural listening system comprising first and second auditory prostheses HAD_L, HAD_R. The auditory prostheses are adapted to exchange information via a wireless link IA-WL and antenna and transceiver circuitry RxTx. The information that can be exchanged between the two auditory prostheses comprises e.g. characteristics of a sound (e.g. target) source (such as its direction, and possibly its distance, e.g. (d_s, θ_s, φ_s), see Fig. 3C), beamformer weights, noise reduction gains (attenuations), detector signals (e.g. from a voice activity detector), control signals and/or audio signals (e.g. one or more (e.g. all) frequency bands of one or more audio signals). The first and second auditory prostheses HAD_L, HAD_R of Fig. 2A are shown as devices of the BTE type, each comprising a housing adapted for being located behind an ear (pinna) of the user, and each auditory prosthesis comprising one or more input transducers, e.g. microphones mic_1, mic_2, a signal processing unit SPU and an output unit SPK (e.g. an output transducer, such as a loudspeaker). In an embodiment, all of these components are located in the housing of the BTE part. In that case, sound from the output transducer may be propagated to the ear canal of the user through a tube connected to a loudspeaker outlet of the BTE part. The tube may be connected to an ear mould specifically adapted to the shape of the user's ear canal, allowing the sound signal from the loudspeaker to reach the eardrum of the ear in question. In an embodiment, the ear mould or another part located in or near the ear canal of the user comprises an input transducer, e.g. a microphone (e.g. located at the entrance of the ear canal), which forms part of an input unit of the corresponding auditory prosthesis or forwards its electric audio signal to an input unit, and may thus constitute one of the electric input signals used by the multi-microphone noise reduction system NRS. Alternatively, the output transducer may be located separately from the BTE part, e.g. in the ear canal or the outer ear of the user, and be electrically connected to the signal processing unit of the BTE part (e.g. via an electric conductor or a wireless link).
Fig. 2B shows an embodiment of a binaural hearing assistance system, e.g. a binaural hearing aid system, comprising left and right auditory prostheses HAD_L, HAD_R, in the following termed hearing instruments. The left and right hearing instruments are adapted to be located at or in the left and right ears of a user. Alternatively, the left and right hearing instruments may be adapted to be fully or partially implanted in the user's head, e.g. as bone-vibrating (e.g. bone-anchored) hearing instruments implementing vibration of bone in the user's head, or as cochlear-implant-type hearing instruments comprising electrodes for electrically stimulating the cochlear nerves on the left and right sides of the user's head. The hearing instruments are adapted to exchange information between them via a wireless communication link, here via a specific interaural (IA) wireless link IA-WL implemented by corresponding antenna and transceiver circuitry IA-Rx/Tx of the left and right hearing instruments. The two hearing instruments HAD_L, HAD_R are adapted to enable the exchange between them of control signals CNT_S comprising localization parameters loc_s (e.g. direction and/or distance, or absolute coordinates) of a corresponding sound source signal S_s; see the dashed arrows indicating the transmission of signal CNT_s,r from the right to the left instrument and of signal CNT_s,l from the left to the right instrument. Each hearing instrument HAD_L, HAD_R comprises a forward signal path with input units (e.g. microphones and/or wired or wireless receivers) operationally connected to a signal processing unit SPU and one or more output units (here a loudspeaker SPK). A time to time-frequency conversion unit T->TF and a multi-channel noise reduction system NRS are connected between the input units mic_1, mic_2 and the signal processing unit SPU. The time to time-frequency conversion unit T->TF provides a time-frequency representation X_i(k,m) (X_s,r and X_s,l in Fig. 2B) of the (time-varying) input signal x'_i at the i-th (i=1, 2) input unit, in a number of frequency bands k and at a number of time instants m (the outputs of mic_1, mic_2). The time-frequency representation X_i(k,m) of the i-th input signal is assumed to comprise a target signal component and a noise signal component, the target signal component originating from a target signal source S_s. In the embodiment of Fig. 2B, the time to time-frequency conversion units T->TF are combined with a selection/mixing unit SEL/MIX for selecting the input units that are currently connected to the multi-channel noise reduction system NRS. Different input units may be selected in different modes of operation of the binaural hearing assistance system. In the embodiment of Fig. 2B, each hearing instrument comprises a user interface UI allowing a user to control functions of the respective hearing instrument and/or of the binaural hearing assistance system (see dashed signal paths UC_r, UC_l, respectively). Preferably, the user interface UI allows the user to indicate the direction or position loc_s of the target signal source S_s relative to the user U. In the embodiment of Fig. 2B, each hearing instrument HAD_L, HAD_R further comprises antenna and transceiver circuitry ANT, RF-Rx/Tx for receiving data from an auxiliary device (see e.g. AD in Fig. 6), the auxiliary device e.g. comprising a user interface (or an alternative or supplementary user interface) for the binaural hearing assistance system. Alternatively or additionally, the antenna and transceiver circuitry ANT, RF-Rx/Tx may be configured to receive an audio signal comprising audio from another device, e.g. from a microphone located separately from the main part of the auditory prosthesis in question (but at or near the same ear). Such a received signal INw may (e.g. in a specific mode of operation, e.g. controlled via the signal UC from the user interface UI) constitute one of the input audio signals to the multi-channel noise reduction system NRS. Each of the left and right hearing instruments HAD_L, HAD_R comprises a control unit CONT for controlling the multi-channel noise reduction system NRS via signals cnt_NRS,l and cnt_NRS,r. The control signals cnt_NRS may e.g. comprise localization information about the currently present audio sources, received from the user interface UI (see the corresponding input signals loc_s,l, loc_s,r to the control units CONT). The respective multi-channel noise reduction systems NRS of the left and right hearing instruments are e.g. embodied as shown in Fig. 1C. The multi-channel noise reduction system NRS provides an enhanced (beamformed and noise-reduced) signal. The respective signal processing unit SPU receives the enhanced input signal and provides a further processed output signal, which is fed to the output transducer SPK for presentation to the user as an audible signal (OUT_l, OUT_r, respectively). The signal processing unit SPU may apply further algorithms to the input signal, e.g. comprising the application of a frequency-dependent gain to compensate for the user's particular hearing impairment. In an embodiment, the system is adapted such that a user interface of the auxiliary device (UI in Fig. 4) allows the user U to indicate the direction or position of the target signal source S_s relative to the user U (via the wireless receiver ANT, RF-Rx/Tx and signal INw; in Fig. 2B the signal loc_s is provided between the selection/mixing unit SEL/MIX and the control unit CONT (dashed arrow)). The hearing instruments HAD_L, HAD_R further comprise a memory (e.g. embodied in the respective control unit CONT) for storing a database comprising a number of predetermined look vectors and/or beamformer weights, each corresponding to a beamformer pointing at and/or focused on one of a number of predetermined directions and/or positions. In an embodiment, the user provides information about the target direction φ and the distance (d = range) of the target signal source via the user interface UI (see e.g. Fig. 5). In an embodiment, the number of (sets of) predetermined beamformer weights stored in the memory unit corresponds to the number of (sets of) specific values (φ, d) of target direction and distance. In the binaural hearing assistance system of Fig. 2B, the signals CNT_s,r and CNT_s,l are transmitted from the right to the left and from the left to the right hearing instrument, respectively, via the bidirectional wireless link IA-WL. These signals are received and extracted by the respective antennas ANT and transceiver circuitry IA-Rx/Tx and forwarded as signals CNT_lr and CNT_rl to the control unit CONT of the contralateral hearing instrument. The signals CNT_lr and CNT_rl comprise information enabling the synchronization of the multi-channel noise reduction systems NRS of the left and right hearing instruments (e.g. sound source localization data, gains of the respective single-channel noise reduction systems, sensor signals, e.g. from respective voice activity detectors, etc.). The corresponding data from the local and the contralateral hearing instrument can thus be combined and used to update the multi-channel noise reduction system NRS in each of the left and right hearing instruments, so that localization cues are preserved in the resulting signals of the forward paths. The manually operable and/or remotely operable user interfaces UI (producing control signals UC_r and UC_l, respectively) may e.g. provide user inputs to one or more of the signal processing unit SPU, the control unit CONT, the selector and mixer unit T->TF-SEL-MIX and the multi-channel noise reduction system NRS.
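The MVDR filter referred to in claim 10 provides filter weights based on a look vector d(k,m) and a noise covariance matrix R_vv(k,m). As a minimal numerical sketch of the standard MVDR formula w = R_vv⁻¹d / (dᴴR_vv⁻¹d) for a single frequency band (the concrete values, and the use of NumPy, are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def mvdr_weights(R_vv, d):
    """Standard MVDR weights for one frequency band:
    w = R_vv^{-1} d / (d^H R_vv^{-1} d).
    R_vv: (M, M) noise covariance matrix; d: (M,) look vector."""
    Rinv_d = np.linalg.solve(R_vv, d)      # R_vv^{-1} d without explicit inversion
    return Rinv_d / (np.conj(d) @ Rinv_d)  # normalize by d^H R_vv^{-1} d

# Two-microphone example: spatially white noise, unit-norm look vector
# (illustrative values only).
R_vv = np.eye(2, dtype=complex)           # noise covariance (white noise)
d = np.array([1.0, np.exp(-1j * 0.3)])    # look vector (relative transfer function)
w = mvdr_weights(R_vv, d)

# Distortionless constraint: a signal propagating exactly as d passes with unit gain.
assert np.isclose(np.conj(w) @ d, 1.0)
```

The assertion checks the defining MVDR property: the target direction is passed undistorted (wᴴd = 1), while the weights minimize the residual noise power wᴴR_vv w.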
Figs. 3A-3D show examples of the mutual spatial positions of elements of the binaural hearing assistance system and/or of a sound source relative to the user, expressed in spherical and Cartesian coordinate systems. Fig. 3A shows the spherical coordinates (d, θ, φ) in a Cartesian coordinate system (x, y, z). A given point in three-dimensional space, represented by the vector d_s from the centre (0, 0, 0) of the Cartesian coordinate system to the position (x_s, y_s, z_s) of a sound source S_s (the position of the sound source S_s being used as illustration here), is represented by the spherical coordinates (d_s, θ_s, φ_s), where d_s is the radial distance to the sound source S_s, θ_s is the (polar) angle from the z-axis of the Cartesian coordinate system (x, y, z) to the vector d_s, and φ_s is the (azimuthal) angle from the x-axis to the projection of the vector d_s onto the xy-plane of the Cartesian coordinate system.
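The coordinate conventions of Fig. 3A (polar angle θ from the z-axis, azimuth φ from the x-axis in the xy-plane) correspond to the standard conversion x = d·sinθ·cosφ, y = d·sinθ·sinφ, z = d·cosθ, which can be sketched as follows (a plain illustration of the geometry, not code from the patent):

```python
import math

def spherical_to_cartesian(d, theta, phi):
    """Convert spherical coordinates (d, θ, φ), with θ the polar angle
    from the z-axis and φ the azimuth from the x-axis in the xy-plane,
    to Cartesian coordinates (x, y, z)."""
    x = d * math.sin(theta) * math.cos(phi)
    y = d * math.sin(theta) * math.sin(phi)
    z = d * math.cos(theta)
    return x, y, z

# A source in the horizontal plane (θ = 90°), 1.5 m away, 30° off the x-axis
# (illustrative values).
x, y, z = spherical_to_cartesian(1.5, math.radians(90), math.radians(30))
```

With θ = 90° the source lies in the xy-plane (z ≈ 0), matching the planar setups assumed in Figs. 3D and 5.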
Fig. 3B shows the positions of the left and right auditory prostheses HAD_L, HAD_R (here, as in Figs. 3C, 3D, represented by the left and right microphones mic_L, mic_R) in Cartesian and spherical coordinates, respectively. The centre (0, 0, 0) of the coordinate system can in principle be located anywhere, but is here assumed to be located midway between the centres of the left and right microphones mic_L, mic_R (to exploit the symmetry of the setup), as shown in Figs. 3C, 3D. The positions of the left and right microphones mic_L, mic_R are determined by the respective vectors d_L and d_R, which can be represented by the respective sets of Cartesian and spherical coordinates (x_L, y_L, z_L), (d_L, θ_L, φ_L) and (x_R, y_R, z_R), (d_R, θ_R, φ_R).
Fig. 3C shows the positions of the left and right auditory prostheses HAD_L, HAD_R (here represented by the left and right microphones mic_L, mic_R) relative to a sound source S_s, in Cartesian and spherical coordinates, respectively. The centre (0, 0, 0) of the coordinate system is assumed to be located midway between the centres of the left and right microphones mic_L, mic_R. The positions of the left and right microphones mic_L, mic_R are determined by the vectors d_L and d_R, respectively. The position of the sound source S_s is determined by the vector d_s and the Cartesian and spherical coordinates (x_s, y_s, z_s) and (d_s, θ_s, φ_s). The sound source S_s can e.g. be a person speaking (or otherwise expressing him- or herself), or a loudspeaker playing sound (or a wireless transmitter passing an audio signal to a wireless receiver of one or both auditory prostheses).
Fig. 3D shows a setup similar to that of Fig. 3C. Fig. 3D shows a user U equipped with left and right auditory prostheses HAD_L, HAD_R, and a sound source S_s located to the front left of the user (e.g. a loudspeaker, as shown, or a person speaking). The left and right microphones mic_L, mic_R of the left and right auditory prostheses HAD_L, HAD_R receive a time-varying sound signal from the sound source S_s. The sound signal is received by the respective microphones, converted to electrical input signals, and provided in the left and right auditory prostheses HAD_L, HAD_R as time-frequency representations in the form of (complex) digital signals X_sl[m,k] and X_sr[m,k], where m is a time index and k is a frequency index (i.e. a time to time-frequency conversion unit is here included in the respective input units (e.g. microphone units); cf. the analysis filter bank AFB in Fig. 1B or the unit T->TF in Fig. 2B). The directions of propagation of the sound wavefront from the sound source S_s to the respective left and right microphone units mic_L, mic_R are indicated by the lines (vectors) d_sl and d_sr. The centre (0, 0, 0) of the Cartesian coordinate system (x, y, z) is located midway between the left and right auditory prostheses HAD_L, HAD_R, and the auditory prostheses are assumed to be located, together with the sound source S_s, in the xy-plane (z=0, θ=90°). The different distances d_sl and d_sr from the sound source S_s to the left and right auditory prostheses HAD_L, HAD_R, respectively, imply that a given sound wavefront arrives at the two microphones mic_L, mic_R at different times, thus giving rise to an interaural time difference ITD(d_s, θ_s, φ_s) (ITD = interaural time difference). Likewise, the different compositions of the propagation paths from the sound source to the left and right auditory prostheses give rise to a level difference between the signals received at the two microphones mic_L, mic_R: the path to the right auditory prosthesis HAD_R is subject to the shadowing effect of the head (indicated by the dotted line segment of the vector d_sr), whereas the path to the left auditory prosthesis HAD_L is unaffected. In other words, an interaural level difference ILD(d_s, θ_s, φ_s) is observed (ILD = interaural level difference). These differences (perceived as localization cues by normally hearing persons) are, to a certain extent (depending on the physical location of the microphones on the auditory prostheses), reflected in the signals X_sl[m,k] and X_sr[m,k], and can be used to extract the head-related transfer functions of the particular geometric scene with a point source located at (d_s, θ_s, φ_s) (or to preserve their effect in the received signals).
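Ignoring head shadowing, the ITD described above can be approximated from the path-length difference alone: ITD ≈ (|d_sl| − |d_sr|)/c, where c is the speed of sound. The following sketch assumes free-field propagation and an illustrative microphone spacing of 0.18 m; it is a geometric approximation, not the patent's processing:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at room temperature

def itd_free_field(source_xy, mic_l_xy=(0.0, 0.09), mic_r_xy=(0.0, -0.09)):
    """Free-field ITD estimate: difference of the path lengths |d_sl| and
    |d_sr| divided by the speed of sound. Head shadowing (which causes the
    ILD) is ignored; the 0.18 m microphone spacing is an assumption."""
    d_sl = math.dist(source_xy, mic_l_xy)  # path length to the left microphone
    d_sr = math.dist(source_xy, mic_r_xy)  # path length to the right microphone
    return (d_sl - d_sr) / SPEED_OF_SOUND

# Source straight ahead (x forward): equal paths, ITD = 0.
# Source front-left (positive y, cf. Fig. 4B): left path shorter, ITD < 0,
# i.e. the left microphone receives the wavefront first.
```

For a source straight ahead the two paths are equal and the ITD vanishes; as the source moves to one side, the sign of the ITD indicates which microphone the wavefront reaches first, which is one of the localization cues the binaural system is designed to preserve.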
Figs. 4A-4B show two examples of the position of a target sound source relative to the user. Fig. 4A shows a typical (default) example, in which the target sound source S_s is located at a distance |d_s| directly in front of the user U (it is further assumed that θ_s = 90°, i.e. that the sound source S_s lies in the same plane as the microphones of the left and right auditory prostheses; this is, however, not essential). The beams beam_sl and beam_sr of the respective multi-channel beamformer filter units of the multi-input-unit noise reduction systems of the left and right auditory prostheses are synchronized to focus on the target sound source S_s. Fig. 4B shows an example in which the target sound source S_s is located in the front-left quadrant (x>0, y>0) relative to the user U. The user is assumed to have indicated this sound source position via the user interface, again causing the beams beam_sl and beam_sr of the respective multi-channel beamformer filter units to be synchronized to focus on the target sound source S_s (e.g. based on predetermined filter weights of the beamformer corresponding to the selected sound source position; that position being e.g. selected among a number of predetermined positions).
Fig. 5 shows a number of predetermined orientations of look vectors relative to the user. Fig. 5 shows predetermined directions from the user U to target sources S_q, defined by vectors d_sq, q=1, 2, ..., N_s, or by angles φ_q together with distances d_q = |d_sq|. In Fig. 5 it is assumed that the sound sources S_q and the microphones of the left and right auditory prostheses HAD_L and HAD_R lie in the same plane. In an embodiment, predetermined look vectors and/or filter weights for the respective multi-channel beamformer filter units of the multi-input-unit noise reduction systems of the left and right auditory prostheses are stored in memories of the left and right auditory prostheses. Predetermined angles φ_q, q=1, 2, ..., 8, distributed in the frontal half-plane corresponding to x>=0 (the half-plane in front of the user's face) and in the rear half-plane corresponding to x<0, are illustrated in Fig. 5. The density of predetermined angles is larger in the frontal half-plane than in the rear half-plane. In the example of Fig. 5, seven of the angles are located in the frontal half-plane (e.g. uniformly spaced 30° apart), and one angle is located in the rear half-plane. For each predetermined angle φ_q, a number of distances d_q may be defined; two different distances, denoted a and b (d_sqb ≈ 2·d_sqa), are shown in Fig. 5. Any number of predetermined angles and distances can be defined in advance, e.g. with the corresponding look vectors and/or filter weights determined and stored in the memories of the respective left and right auditory prostheses (or accessible from a common database of the binaural hearing assistance system, which may be located in an auxiliary device such as a smartphone). In an embodiment, the user interface is implemented as an APP of a smartphone. By storing a number of predetermined look vectors (or beamformer weights) and allowing the user to select one of them (by indicating the direction or position of the target source via the user interface), the user effectively provides look vectors (beamformer weights) appropriate to the user's current acoustic environment. The predetermined look vectors (or beamformer weights) are e.g. determined by measurements at the different directions and distances on a model user, e.g. a Head and Torso Simulator (HATS) 4128C from Brüel & Kjær Sound & Vibration Measurement A/S, "equipped" with the first and second auditory prostheses.
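Selecting among stored predetermined look directions can be sketched as a nearest-neighbour lookup over the angle grid. The angle values below mirror the density pattern of Fig. 5 (seven frontal angles at 30° spacing plus one rear angle), but the concrete values and the lookup function are illustrative assumptions, not the patent's stored set:

```python
# Azimuths in degrees, measured from the x-axis (front); frontal half-plane
# covers -90°..+90°. Seven frontal angles at 30° spacing plus one rear angle
# (hypothetical grid for illustration).
PREDETERMINED_ANGLES_DEG = [-90, -60, -30, 0, 30, 60, 90, 180]  # q = 1..8

def nearest_predetermined_angle(phi_deg):
    """Return the stored angle closest to the user-indicated azimuth,
    accounting for 360-degree wrap-around."""
    def circular_distance(a, b):
        diff = abs(a - b) % 360
        return min(diff, 360 - diff)
    return min(PREDETERMINED_ANGLES_DEG,
               key=lambda a: circular_distance(a, phi_deg))

# A user indication of 100° selects the stored 90° beamformer setting;
# the corresponding predetermined look vector / weight set would then be
# loaded from memory in both instruments.
```

The same lookup would be applied independently to the distance grid (e.g. the two distances a and b of Fig. 5), after which both instruments load the weight set for the selected (angle, distance) pair, keeping their beamformers synchronized.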
Fig. 6A shows an embodiment of a binaural hearing aid system comprising left (second) and right (first) auditory prostheses HAD_L, HAD_R in communication with a portable (hand-held) auxiliary device AD, the auxiliary device serving as a user interface UI for the binaural hearing aid system. In an embodiment, the binaural hearing aid system comprises the auxiliary device AD (and the user interface UI). The user interface UI of the auxiliary device AD is shown in Fig. 6B. The user interface comprises a display (e.g. a touch-sensitive display) showing the user of the hearing assistance system and a number of predetermined positions of the target sound source relative to the user. The user U is encouraged to select the position of the current target sound source (if it deviates from the frontal direction and the default distance) by dragging a sound source symbol to the appropriate position of the target sound source. The "sound source localization" function is implemented as an APP of the auxiliary device (e.g. a smartphone). In an embodiment, the selected position is transmitted to the left and right auditory prostheses, either for selection of an appropriate corresponding set of predetermined filter weights or for calculation of such weights based on the received sound source position. Alternatively, appropriate filter weights determined or stored in the auxiliary device can be transmitted to the left and right auditory prostheses for use in the respective beamformer filter units. The auxiliary device AD comprising the user interface UI is adapted to be held in a hand of the user U, thereby facilitating the indication of the current position of the target sound source.
In an embodiment, the communication between an auditory prosthesis and the auxiliary device is at baseband (the audio frequency range, e.g. between 0 and 20 kHz). Preferably, however, the communication between an auditory prosthesis and the auxiliary device is based on modulation at a frequency above 100 kHz. Preferably, the frequencies used for establishing the communication between an auditory prosthesis and the auxiliary device are below 70 GHz, e.g. located in the range from 50 MHz to 70 GHz, e.g. above 300 MHz, e.g. in an ISM range above 300 MHz, e.g. in the 900 MHz range, in the 2.4 GHz range, in the 5.8 GHz range or in the 60 GHz range (ISM = industrial, scientific and medical; such standardized ranges are e.g. defined by the International Telecommunication Union, ITU). In an embodiment, the wireless link is based on a standardized or proprietary technology. In an embodiment, the wireless link is based on Bluetooth technology (e.g. Bluetooth Low Energy technology) or a related technology.
In the embodiment of Fig. 6A, the wireless links are denoted IA-WL (e.g. an inductive link between the left and right auditory prostheses) and WL-RF (e.g. RF links (e.g. Bluetooth) between the auxiliary device AD and the left auditory prosthesis HAD_L and between the auxiliary device AD and the right auditory prosthesis HAD_R), and are implemented in the devices by corresponding antenna and transceiver circuitry (denoted RF-IA-Rx/Tx-l and RF-IA-Rx/Tx-r in the left and right auditory prostheses of Fig. 6A, respectively).
In an embodiment, the auxiliary device AD is or comprises an audio gateway device adapted to receive a number of audio signals (e.g. from an entertainment device such as a TV or a music player, from a telephone apparatus such as a mobile phone, or from a computer such as a PC), and adapted to select and/or combine an appropriate one of the received audio signals (or a combination of signals) for transmission to the auditory prosthesis. In an embodiment, the auxiliary device is or comprises a remote control for controlling functions and operation of the auditory prostheses. In an embodiment, the functions of the remote control are implemented in a smartphone, the smartphone possibly running an APP allowing the functions of the audio processing device to be controlled via the smartphone (the auditory prostheses comprising an appropriate wireless interface to the smartphone, e.g. based on Bluetooth or some other standardized or proprietary scheme).
In the present context, a smartphone may comprise a combination of (A) a mobile phone and (B) a personal computer:
-(A) the mobile phone comprising a microphone, a loudspeaker and a (wireless) interface to the public switched telephone network (PSTN);
-(B) the personal computer comprising a processor, a memory, an operating system (OS), a user interface (e.g. a keyboard and a display, e.g. integrated in a touch-sensitive display) and a wireless data interface (including a web browser), allowing the user to download and execute application programs (APPs) implementing specific functional features (e.g. displaying information retrieved from the Internet, remotely controlling another device, combining information from various sensors of the smartphone (e.g. camera, scanner, GPS, microphone, etc.) and/or from external sensors to provide specific features, etc.).
The invention is defined by the features of the independent claims. The dependent claims define preferred embodiments. Any reference numerals in the claims are not intended to limit their scope.
Some preferred embodiments have been described in the foregoing, but it should be stressed that the invention is not limited to these embodiments and may be realized in other ways within the subject matter defined in the claims.
References
EP 2 701 145 A1 (OTICON)

Claims (16)

1. A binaural hearing assistance system comprising left and right auditory prostheses adapted to be located at or in the left and right ears of a user, or adapted to be fully or partially implanted in the user's head, the binaural hearing assistance system further comprising a user interface configured to communicate with the left and right auditory prostheses and to allow a user to influence the functions of the left and right auditory prostheses, each of the left and right auditory prostheses comprising:
A) a multitude of input units IU_i, i=1, ..., M, M being greater than or equal to 2, for providing a time-frequency representation X_i(k,m) of an input signal x_i(n) at the i-th input unit, in a number of frequency bands and at a number of time instants, k being a frequency band index, m being a time index, and n representing time, the time-frequency representation X_i(k,m) of the i-th input signal comprising a target signal component and a noise signal component, the target signal component originating from a target signal source;
B) a multi-input-unit noise reduction system comprising a multi-channel beamformer filter unit operationally connected to the multitude of input units IU_i, i=1, ..., M, and configured to provide a beamformed signal Y(k,m), wherein signal components from directions other than the direction of the target signal source are attenuated, whereas signal components from the direction of the target signal source are left unattenuated or are attenuated less than the signal components from other directions;
the binaural hearing assistance system being configured to allow a user to indicate the direction or position of the target signal source relative to the user via the user interface.
2. A binaural hearing assistance system according to claim 1, adapted to synchronize the respective multi-channel beamformer filter units of the left and right auditory prostheses so that both beamformer filter units focus on the position of the target signal source.
3. A binaural hearing assistance system according to claim 1, wherein the user interface forms part of the left and/or right auditory prosthesis.
4. A binaural hearing assistance system according to claim 1, wherein the user interface forms part of an auxiliary device.
5. A binaural hearing assistance system according to claim 1, wherein the user interface comprises electrodes located on parts of the left and/or right auditory prosthesis that are in contact with the user's head.
6. A binaural hearing assistance system according to claim 5, wherein the system is adapted to indicate the direction or position of the target signal source relative to the user based on EEG signals picked up by said electrodes.
7. A binaural hearing assistance system according to claim 1, adapted to enable the establishment of an interaural wireless communication link between the left and right auditory prostheses, thereby enabling the exchange of data between them.
8. A binaural hearing assistance system according to claim 4, adapted to enable the establishment of external wireless communication links between the auxiliary device and the respective left and right auditory prostheses, thereby enabling the exchange of data between them.
9. A binaural hearing assistance system according to claim 1, wherein each of the left and right auditory prostheses further comprises a single-channel post-processing filter unit operationally connected to the multi-channel beamformer filter unit and configured to provide an enhanced signal.
10. A binaural hearing assistance system according to claim 1, wherein the multi-channel beamformer filter unit of each of the left and right auditory prostheses comprises an MVDR filter providing filter weights w_mvdr(k,m), said filter weights w_mvdr(k,m) being based on a look vector d(k,m) and an inter-input-unit covariance matrix R_vv(k,m) of the noise signal.
11. A binaural hearing assistance system according to claim 1, wherein the multi-channel beamformer filter unit and/or the single-channel post-processing filter unit is configured to preserve the interaural spatial cues of the target signal.
12. A binaural hearing assistance system according to claim 1, wherein each of the left and right auditory prostheses comprises a memory unit comprising a number of predetermined look vectors, each look vector corresponding to a beamformer pointing at and/or focused on a predetermined direction and/or position.
13. A binaural hearing assistance system according to claim 1, wherein each of the left and right auditory prostheses comprises a voice activity detector for determining respective time periods during which the input signal comprises human speech.
14. A binaural hearing assistance system according to claim 13, wherein the system is adapted such that the determination of the respective time periods during which the input signal comprises human speech is at least partly based on EEG signals.
15. A binaural hearing assistance system according to claim 1, wherein at least one of the input units IU_i of the left and right auditory prostheses comprises a microphone for converting incoming sound to an electrical input signal x'_i(n), and a time to time-frequency conversion unit for providing the time-frequency representation X_i(k,m) of the input signal x_i(n) at the i-th input unit IU_i in a number of frequency bands k and at a number of time instants m.
16. A binaural hearing assistance system according to claim 1, wherein the left and right auditory prostheses comprise hearing instruments adapted to be located at the ear, to be located fully or partially in the ear canal of a user, or to be fully or partially implanted in the head of a user.
CN201510156082.3A 2014-04-03 2015-04-03 Binaural hearing aid system including binaural noise reduction Active CN104980865B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP14163333.9 2014-04-03
EP14163333.9A EP2928210A1 (en) 2014-04-03 2014-04-03 A binaural hearing assistance system comprising binaural noise reduction

Publications (2)

Publication Number Publication Date
CN104980865A true CN104980865A (en) 2015-10-14
CN104980865B CN104980865B (en) 2020-05-12

Family

ID=50397047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510156082.3A Active CN104980865B (en) 2014-04-03 2015-04-03 Binaural hearing aid system including binaural noise reduction

Country Status (4)

Country Link
US (2) US9516430B2 (en)
EP (2) EP2928210A1 (en)
CN (1) CN104980865B (en)
DK (1) DK2928214T3 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106454646A (en) * 2016-08-13 2017-02-22 厦门傅里叶电子有限公司 Method for synchronizing left and right channels in audio frequency amplifier
CN106911994A (en) * 2015-12-22 2017-06-30 Gn 瑞声达 A/S Hearing device
CN107046668A (en) * 2016-02-08 2017-08-15 奥迪康有限公司 Monaural intelligibility of speech predicting unit, audiphone and binaural hearing system
CN107426663A (en) * 2016-05-23 2017-12-01 奥迪康有限公司 Configurable audiphone including Beam-former filter unit and gain unit
CN108337605A (en) * 2017-03-19 2018-07-27 临境声学科技江苏有限公司 The hidden method for acoustic formed based on Difference Beam
CN108574922A (en) * 2017-03-09 2018-09-25 奥迪康有限公司 The hearing devices of wireless receiver including sound
CN108597532A (en) * 2017-03-19 2018-09-28 临境声学科技江苏有限公司 Hidden method for acoustic based on MVDR
CN108694956A (en) * 2017-03-29 2018-10-23 大北欧听力公司 Hearing device and correlation technique with adaptive sub-band beam forming
CN109891913A (en) * 2016-08-24 2019-06-14 领先仿生公司 For promoting the system and method that level difference perceives between ear by retaining between ear level difference
CN110636423A (en) * 2018-06-22 2019-12-31 西万拓私人有限公司 Method for enhancing signal directionality in a hearing device
WO2020029998A1 (en) * 2018-08-08 2020-02-13 斯达克实验室公司 Electroencephalogram-assisted beam former, beam forming method and ear-mounted hearing system
CN111512646A (en) * 2017-09-12 2020-08-07 维思博Ai公司 Low-delay audio enhancement
CN113763983A (en) * 2020-06-04 2021-12-07 中国科学院声学研究所 Robust speech enhancement method and system based on mouth-binaural room impulse response

Families Citing this family (39)

Publication number Priority date Publication date Assignee Title
USRE49527E1 (en) 2006-02-10 2023-05-16 Cochlear Limited Recognition of implantable medical device
US9888328B2 (en) * 2013-12-02 2018-02-06 Arizona Board Of Regents On Behalf Of Arizona State University Hearing assistive device
EP2928210A1 (en) 2014-04-03 2015-10-07 Oticon A/s A binaural hearing assistance system comprising binaural noise reduction
US9800981B2 (en) * 2014-09-05 2017-10-24 Bernafon Ag Hearing device comprising a directional system
US9911416B2 (en) * 2015-03-27 2018-03-06 Qualcomm Incorporated Controlling electronic device based on direction of speech
DE102015211747B4 (en) 2015-06-24 2017-05-18 Sivantos Pte. Ltd. Method for signal processing in a binaural hearing aid
US10027374B1 (en) * 2015-08-25 2018-07-17 Cellium Technologies, Ltd. Systems and methods for wireless communication using a wire-based medium
EP3139635A1 (en) 2015-08-28 2017-03-08 Alex Volkov Synchronization of audio streams and sampling rate for wireless communication
DE102015219572A1 (en) * 2015-10-09 2017-04-13 Sivantos Pte. Ltd. Method for operating a hearing device and hearing device
US9591427B1 (en) * 2016-02-20 2017-03-07 Philip Scott Lyren Capturing audio impulse responses of a person with a smartphone
DK3214620T3 (en) * 2016-03-01 2019-11-25 Oticon As MONAURAL DISTURBING VOICE UNDERSTANDING UNIT, A HEARING AND A BINAURAL HEARING SYSTEM
US20190070414A1 (en) * 2016-03-11 2019-03-07 Mayo Foundation For Medical Education And Research Cochlear stimulation system with surround sound and noise cancellation
US10149049B2 (en) * 2016-05-13 2018-12-04 Bose Corporation Processing speech from distributed microphones
US11086593B2 (en) * 2016-08-26 2021-08-10 Bragi GmbH Voice assistant for wireless earpieces
DK3300078T3 (en) * 2016-09-26 2021-02-15 Oticon As VOICE ACTIVITY DETECTION UNIT AND A HEARING DEVICE INCLUDING A VOICE ACTIVITY DETECTION UNIT
CN106714063B (en) * 2016-12-16 2019-05-17 深圳信息职业技术学院 Hearing-aid device microphone voice signal Beamforming Method, system and hearing-aid device
US10911877B2 (en) * 2016-12-23 2021-02-02 Gn Hearing A/S Hearing device with adaptive binaural auditory steering and related method
DE102017200597B4 (en) * 2017-01-16 2020-03-26 Sivantos Pte. Ltd. Method for operating a hearing system and hearing system
EP3657673A1 (en) 2017-02-02 2020-05-27 Oticon A/s An adaptive level estimator, a hearing device, a method and a binaural hearing system
US10555094B2 (en) * 2017-03-29 2020-02-04 Gn Hearing A/S Hearing device with adaptive sub-band beamforming and related method
DK3383067T3 (en) 2017-03-29 2020-07-20 Gn Hearing As HEARING DEVICE WITH ADAPTIVE SUB-BAND RADIATION AND ASSOCIATED PROCEDURE
EP3386216B1 (en) * 2017-04-06 2021-08-25 Oticon A/s A hearing system comprising a binaural level and/or gain estimator, and a corresponding method
US10251011B2 (en) * 2017-04-24 2019-04-02 Intel Corporation Augmented reality virtual reality ray tracing sensory enhancement system, apparatus and method
US9992585B1 (en) 2017-05-24 2018-06-05 Starkey Laboratories, Inc. Hearing assistance system incorporating directional microphone customization
EP3760115A1 (en) * 2017-06-22 2021-01-06 Oticon A/s A system for capturing electrooculography signals
KR102443637B1 (en) * 2017-10-23 2022-09-16 삼성전자주식회사 Electronic device for determining noise control parameter based on network connection information and operating method thereof
WO2019084214A1 (en) 2017-10-24 2019-05-02 Whisper.Ai, Inc. Separating and recombining audio for intelligibility and comfort
US10182299B1 (en) * 2017-12-05 2019-01-15 Gn Hearing A/S Hearing device and method with flexible control of beamforming
DE102018206979A1 (en) 2018-05-04 2019-11-07 Sivantos Pte. Ltd. Method for operating a hearing aid and hearing aid
EP3588982B1 (en) * 2018-06-25 2022-07-13 Oticon A/s A hearing device comprising a feedback reduction system
EP3664470B1 (en) * 2018-12-05 2021-02-17 Sonova AG Providing feedback of an own voice loudness of a user of a hearing device
EP3672282B1 (en) * 2018-12-21 2022-04-06 Sivantos Pte. Ltd. Method for beamforming in a binaural hearing aid
WO2020176414A1 (en) * 2019-02-25 2020-09-03 Starkey Laboratories, Inc. Detecting user's eye movement using sensors in hearing instruments
US11786694B2 (en) 2019-05-24 2023-10-17 NeuroLight, Inc. Device, method, and app for facilitating sleep
US11043201B2 (en) * 2019-09-13 2021-06-22 Bose Corporation Synchronization of instability mitigation in audio devices
DE102020204332B4 (en) 2020-04-02 2022-05-12 Sivantos Pte. Ltd. Method for operating a hearing system and hearing system
US11264017B2 (en) * 2020-06-12 2022-03-01 Synaptics Incorporated Robust speaker localization in presence of strong noise interference systems and methods
US11412332B2 (en) 2020-10-30 2022-08-09 Sonova Ag Systems and methods for data exchange between binaural hearing devices
EP4007308A1 (en) 2020-11-27 2022-06-01 Oticon A/s A hearing aid system comprising a database of acoustic transfer functions

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080101635A1 (en) * 2006-10-30 2008-05-01 Phonak Ag Hearing assistance system including data logging capability and method of operating the same
US20080260189A1 (en) * 2005-11-01 2008-10-23 Koninklijke Philips Electronics, N.V. Hearing Aid Comprising Sound Tracking Means
US20120008807A1 (en) * 2009-12-29 2012-01-12 Gran Karl-Fredrik Johan Beamforming in hearing aids
CN102421050A (en) * 2010-09-17 2012-04-18 三星电子株式会社 Apparatus and method for enhancing audio quality using non-uniform configuration of microphones
US20120250916A1 (en) * 2011-03-31 2012-10-04 Siemens Medical Instruments Pte. Ltd Hearing aid device with a directional microphone system and method for operating a hearing aid device having a directional microphone system
US20140056435A1 (en) * 2012-08-24 2014-02-27 Retune DSP ApS Noise estimation for use with noise reduction and echo cancellation in personal communication

Family Cites Families (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5757932A (en) * 1993-09-17 1998-05-26 Audiologic, Inc. Digital hearing aid system
US5511128A (en) * 1994-01-21 1996-04-23 Lindemann; Eric Dynamic intensity beamforming system for noise reduction in a binaural hearing aid
WO2004103023A1 (en) 1995-09-26 2004-11-25 Ikuichiro Kinoshita Method for preparing transfer function table for localizing virtual sound image, recording medium on which the table is recorded, and acoustic signal editing method using the medium
EP1326479B2 (en) 1997-04-16 2018-05-23 Emma Mixed Signal C.V. Method and apparatus for noise reduction, particularly in hearing aids
DE60125553T2 (en) * 2000-05-10 2007-10-04 The Board Of Trustees For The University Of Illinois, Urbana METHOD OF INTERFERENCE SUPPRESSION
US7206423B1 (en) * 2000-05-10 2007-04-17 Board Of Trustees Of University Of Illinois Intrabody communication for a hearing aid
DE60010457T2 (en) 2000-09-02 2006-03-02 Nokia Corp. Apparatus and method for processing a signal emitted from a target signal source in a noisy environment
US7076072B2 (en) * 2003-04-09 2006-07-11 Board Of Trustees For The University Of Illinois Systems and methods for interference-suppression with directional sensing patterns
US7945064B2 (en) * 2003-04-09 2011-05-17 Board Of Trustees Of The University Of Illinois Intrabody communication with ultrasound
DE102005032274B4 (en) 2005-07-11 2007-05-10 Siemens Audiologische Technik Gmbh Hearing apparatus and corresponding method for eigenvoice detection
GB0609248D0 (en) 2006-05-10 2006-06-21 Leuven K U Res & Dev Binaural noise reduction preserving interaural transfer functions
US8249284B2 (en) 2006-05-16 2012-08-21 Phonak Ag Hearing system and method for deriving information on an acoustic scene
NL2000510C1 (en) 2007-02-28 2008-09-01 Exsilent Res Bv Method and device for sound processing.
US20080259731A1 (en) 2007-04-17 2008-10-23 Happonen Aki P Methods and apparatuses for user controlled beamforming
US9191740B2 (en) 2007-05-04 2015-11-17 Personics Holdings, Llc Method and apparatus for in-ear canal sound suppression
DK2088802T3 (en) * 2008-02-07 2013-10-14 Oticon As Method for estimating the weighting function of audio signals in a hearing aid
CA2688328A1 (en) * 2008-12-12 2010-06-12 Simon Haykin Apparatus, systems and methods for binaural hearing enhancement in auditory processing systems
EP2200342B1 (en) * 2008-12-22 2013-09-18 Siemens Medical Instruments Pte. Ltd. Hearing aid controlled using a brain wave signal
US8660281B2 (en) * 2009-02-03 2014-02-25 University Of Ottawa Method and system for a multi-microphone noise reduction
WO2009144332A2 (en) 2009-09-21 2009-12-03 Phonak Ag A binaural hearing system
CA2781702C (en) 2009-11-30 2017-03-28 Nokia Corporation An apparatus for processing audio and speech signals in an audio device
EP2352312B1 (en) 2009-12-03 2013-07-31 Oticon A/S A method for dynamic suppression of surrounding acoustic noise when listening to electrical inputs
EP2537353B1 (en) 2010-02-19 2018-03-07 Sivantos Pte. Ltd. Device and method for direction dependent spatial noise reduction
US9025782B2 (en) 2010-07-26 2015-05-05 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for multi-microphone location-selective processing
US9552840B2 (en) 2010-10-25 2017-01-24 Qualcomm Incorporated Three-dimensional sound capturing and reproducing with multi-microphones
EP2463856B1 (en) 2010-12-09 2014-06-11 Oticon A/s Method to reduce artifacts in algorithms with fast-varying gain
US20120321112A1 (en) 2011-06-16 2012-12-20 Apple Inc. Selecting a digital stream based on an audio sample
DK2563045T3 (en) * 2011-08-23 2014-10-27 Oticon As Method and a binaural listening system to maximize better ear effect
EP2563044B1 (en) * 2011-08-23 2014-07-23 Oticon A/s A method, a listening device and a listening system for maximizing a better ear effect
EP2584794A1 (en) 2011-10-17 2013-04-24 Oticon A/S A listening system adapted for real-time communication providing spatial information in an audio stream
US8638960B2 (en) * 2011-12-29 2014-01-28 Gn Resound A/S Hearing aid with improved localization
US8891777B2 (en) * 2011-12-30 2014-11-18 Gn Resound A/S Hearing aid with signal enhancement
US9185499B2 (en) * 2012-07-06 2015-11-10 Gn Resound A/S Binaural hearing aid with frequency unmasking
US9439004B2 (en) 2012-02-22 2016-09-06 Sonova Ag Method for operating a binaural hearing system and a binaural hearing system
US9420386B2 (en) * 2012-04-05 2016-08-16 Sivantos Pte. Ltd. Method for adjusting a hearing device apparatus and hearing device apparatus
DE102012214081A1 (en) * 2012-06-06 2013-12-12 Siemens Medical Instruments Pte. Ltd. Method of focusing a hearing instrument beamformer
US9338561B2 (en) * 2012-12-28 2016-05-10 Gn Resound A/S Hearing aid with improved localization
US9167356B2 (en) * 2013-01-11 2015-10-20 Starkey Laboratories, Inc. Electrooculogram as a control in a hearing assistance device
US10425747B2 (en) * 2013-05-23 2019-09-24 Gn Hearing A/S Hearing aid with spatial signal enhancement
EP3917167A3 (en) * 2013-06-14 2022-03-09 Oticon A/s A hearing assistance device with brain computer interface
EP2840807A1 (en) * 2013-08-19 2015-02-25 Oticon A/s External microphone array and hearing aid using it
EP2876900A1 (en) * 2013-11-25 2015-05-27 Oticon A/S Spatial filter bank for hearing system
EP2882203A1 (en) * 2013-12-06 2015-06-10 Oticon A/s Hearing aid device for hands free communication
EP2887695B1 (en) 2013-12-19 2018-02-14 GN Hearing A/S A hearing device with selectable perceived spatial positioning of sound sources
US9307331B2 (en) * 2013-12-19 2016-04-05 Gn Resound A/S Hearing device with selectable perceived spatial positioning of sound sources
WO2015120475A1 (en) * 2014-02-10 2015-08-13 Bose Corporation Conversation assistance system
EP2908549A1 (en) * 2014-02-13 2015-08-19 Oticon A/s A hearing aid device comprising a sensor member
EP2928210A1 (en) 2014-04-03 2015-10-07 Oticon A/s A binaural hearing assistance system comprising binaural noise reduction
EP2928211A1 (en) * 2014-04-04 2015-10-07 Oticon A/s Self-calibration of multi-microphone noise reduction system for hearing assistance devices using an auxiliary device
US9961456B2 (en) * 2014-06-23 2018-05-01 Gn Hearing A/S Omni-directional perception in a binaural hearing aid system


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106911994A (en) * 2015-12-22 2017-06-30 Gn 瑞声达 A/S Hearing device
CN106911994B (en) * 2015-12-22 2021-07-09 Gn 瑞声达 A/S Hearing device
US10827286B2 (en) 2015-12-22 2020-11-03 Gn Hearing A/S Hearing device with spatial cue information processing capability
CN107046668A (en) * 2016-02-08 2017-08-15 奥迪康有限公司 Monaural speech intelligibility prediction unit, hearing aid and binaural hearing system
CN107046668B (en) * 2016-02-08 2021-01-05 奥迪康有限公司 Monaural speech intelligibility prediction unit, hearing aid and binaural hearing system
CN107426663B (en) * 2016-05-23 2021-08-24 奥迪康有限公司 Configurable hearing aid comprising a beamformer filtering unit and a gain unit
CN107426663A (en) * 2016-05-23 2017-12-01 奥迪康有限公司 Configurable hearing aid comprising a beamformer filtering unit and a gain unit
CN106454646A (en) * 2016-08-13 2017-02-22 厦门傅里叶电子有限公司 Method for synchronizing left and right channels in an audio amplifier
CN109891913A (en) * 2016-08-24 2019-06-14 领先仿生公司 Systems and methods for facilitating interaural level difference perception by preserving the interaural level difference
CN108574922B (en) * 2017-03-09 2021-08-24 奥迪康有限公司 Hearing device comprising a wireless receiver of sound
CN108574922A (en) * 2017-03-09 2018-09-25 奥迪康有限公司 Hearing device comprising a wireless receiver of sound
CN108597532A (en) * 2017-03-19 2018-09-28 临境声学科技江苏有限公司 Acoustic concealment method based on MVDR
CN108337605A (en) * 2017-03-19 2018-07-27 临境声学科技江苏有限公司 Acoustic concealment method based on differential beamforming
CN108694956B (en) * 2017-03-29 2023-08-22 大北欧听力公司 Hearing device with adaptive sub-band beamforming and related methods
CN108694956A (en) * 2017-03-29 2018-10-23 大北欧听力公司 Hearing device with adaptive sub-band beamforming and related method
CN111512646A (en) * 2017-09-12 2020-08-07 维思博Ai公司 Low-delay audio enhancement
CN110636423B (en) * 2018-06-22 2021-08-17 西万拓私人有限公司 Method for enhancing signal directionality in a hearing device
CN110636423A (en) * 2018-06-22 2019-12-31 西万拓私人有限公司 Method for enhancing signal directionality in a hearing device
WO2020029998A1 (en) * 2018-08-08 2020-02-13 斯达克实验室公司 EEG-assisted beamformer, beamforming method and ear-worn hearing system
US11617043B2 (en) 2018-08-08 2023-03-28 Starkey Laboratories, Inc. EEG-assisted beamformer, beamforming method and ear-worn hearing system
CN113763983A (en) * 2020-06-04 2021-12-07 中国科学院声学研究所 Robust speech enhancement method and system based on mouth-binaural room impulse response

Also Published As

Publication number Publication date
EP2928214A1 (en) 2015-10-07
EP2928214B1 (en) 2019-05-08
US20150289065A1 (en) 2015-10-08
US10123134B2 (en) 2018-11-06
DK2928214T3 (en) 2019-07-15
CN104980865B (en) 2020-05-12
EP2928210A1 (en) 2015-10-07
US9516430B2 (en) 2016-12-06
US20170048626A1 (en) 2017-02-16

Similar Documents

Publication Publication Date Title
US11671773B2 (en) Hearing aid device for hands free communication
US10123134B2 (en) Binaural hearing assistance system comprising binaural noise reduction
US10431239B2 (en) Hearing system
CN108600907B (en) A method of localizing a sound source, a hearing device and a hearing system
US9565502B2 (en) Binaural hearing assistance system comprising a database of head related transfer functions
EP3285501B1 (en) A hearing system comprising a hearing device and a microphone unit for picking up a user's own voice
CN107690119B (en) Binaural hearing system configured to localize sound source
US9949040B2 (en) Peer to peer hearing system
EP3057337B1 (en) A hearing system comprising a separate microphone unit for picking up a users own voice

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant