CA2095344A1 - Bimodal speech processor - Google Patents
Bimodal speech processor
Info
- Publication number
- CA2095344A1
- Authority
- CA
- Canada
- Prior art keywords
- aid
- speech
- bimodal
- processor
- acoustic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/55—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
- H04R25/552—Binaural
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N1/00—Electrotherapy; Circuits therefor
- A61N1/18—Applying electric currents by contact electrodes
- A61N1/32—Applying electric currents by contact electrodes alternating or intermittent currents
- A61N1/36—Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
- A61N1/36036—Applying electric currents by contact electrodes alternating or intermittent currents for stimulation of the outer, middle or inner ear
- A61N1/36038—Cochlear stimulation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N1/00—Electrotherapy; Circuits therefor
- A61N1/02—Details
- A61N1/04—Electrodes
- A61N1/05—Electrodes for implantation or insertion into the body, e.g. heart electrode
- A61N1/0526—Head electrodes
- A61N1/0541—Cochlear electrodes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/43—Signal processing in hearing aids to enhance the speech intelligibility
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/35—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using translation techniques
- H04R25/356—Amplitude, e.g. amplitude shift or compression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/60—Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles
- H04R25/604—Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles of acoustic or vibrational transducers
- H04R25/606—Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles of acoustic or vibrational transducers acting directly on the eardrum, the ossicles or the skull, e.g. mastoid, tooth, maxillary or mandibular bone, or mechanically stimulating the cochlea, e.g. at the oval window
Landscapes
- Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Radiology & Medical Imaging (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Computer Networks & Wireless Communication (AREA)
- Biomedical Technology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Neurosurgery (AREA)
- Life Sciences & Earth Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Prostheses (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
A bimodal aid comprising a speech processor (11) linked to an acoustic aid processor (12). Both processors derive audible information, particularly speech information, from a microphone (13). The speech processor processes the audio information according to patient-specific settings stored in a memory (23) in order to apply a control signal to an implant aid (15) in one ear of a patient.
The acoustic aid signal processor (12) further processes information derived from and by the speech processor (11) in accordance with patient-specific settings in memory (23) so as to supply a control signal to an acoustic aid (14) located in the other ear of the patient. The acoustic aid signal processor (12) incorporates a programmable filter device which allows for rapid, iterative adaptation of the bimodal aid to the subjective auditory requirements of the patient. The bimodal aid can be used to drive an implant aid (15) only or to drive an acoustic aid (14) only.
Description
BIMODAL SPEECH PROCESSOR
Technical Field
The present invention relates to improvements in the processing of sound for the purposes of supplying an information signal to either an acoustic hearing aid, a cochlear implant aid device or both, so as to improve the quality of hearing of a patient.
Background of the Invention
Throughout this specification, reference to an acoustic hearing aid is reference to an aid of the type adapted to fit in or adjacent an ear of a patient and which provides an acoustic output suitable to at least partially compensate for hearing deficiencies of the patient. Throughout this specification a cochlear implant aid will refer to a device which includes components which are fitted within the body of a patient and which are adapted to electrically stimulate the nervous system of a patient in order to at least partially compensate for usually profound hearing loss of the patient.
There is a trend towards fitting cochlear implants to patients with some residual hearing in the contralateral ear.
Many patients recognise speech better using conventional acoustic hearing aids together with the cochlear implant than they do using either device alone but find the use of the combination unacceptable. These patients opt to use either the acoustic hearing aid or the cochlear implant aid but not both devices together.
It is an object of the present invention to provide a bimodal aid device which can drive both an acoustic hearing aid and a cochlear implant aid, thereby improving the quality of binaural information received by a patient.
Two further problems experienced in the prior art of hearing aids are (1) the difficulty of quickly and easily measuring the nature and degree of hearing impairment of a client for the purposes of providing an appropriate hearing aid, and (2) the difficulty of matching appropriate hearing aid qualities and capabilities to the specific requirements of the user. Recently, a few hearing aid devices have appeared on the market which allow on-line control of the gain characteristics of the device at different frequencies.
However, these devices do not provide speech processing capability such as is provided by formant extraction and like feature extraction circuits.
It is a further object of particular embodiments of the present invention to provide a signal processing device for use in association with an acoustic hearing aid which addresses these problems.
Summary of the Invention
Accordingly, in one broad form of the invention there is provided a bimodal aid for the hearing impaired which includes processing means adapted to receive and process audio information received from a microphone; said processing means supplying processed information derived from said audio information to an implant aid adapted to be implanted in a first ear of a patient and to an acoustic aid adapted to be worn in or adjacent a second ear of said patient, whereby binaural information is provided to said patient.
In yet a further broad form of the invention there is provided a bimodal aid for the hearing impaired comprising a sound/speech processor electrically connected to a hearing aid transducer adapted to be worn adjacent to or in an ear of a patient and electrically connected to an electrical signal transducer adapted to be worn in an ear of a patient, said speech processor receiving and processing audio input information so as to produce an acoustic signal from said hearing aid transducer and an electrical signal from said electrical transducer, whereby coherent binaural information is provided to said patient.
In a further broad form there is provided an electronically configurable sound/speech processor for the hearing impaired; said processor including configuration means and signal processing means; said processor adapted to receive audio information and to process said audio information by said signal processing means in accordance with parameters set by said configuration means so as to produce an output signal adapted to stimulate a hearing aid transducer; said configuration means adapted to receive one or more of electronic signal input or software input for the purpose of modifying said parameters.
In a further broad form of the invention there is provided a method of and a means for control of a hearing aid and a cochlear implant by means of said sound/speech processor.
In a particular form said sound/speech processor utilises the speech features F0, F1, F2, A0, A1, A2, A3, A4, A5 and voiced/voiceless sound decisions to produce said output signal adapted to stimulate a hearing aid transducer, wherein said features are defined as follows:
F0 is the fundamental frequency,
F1 is the first formant frequency,
F2 is the second formant frequency,
A0 is the amplitude of F0,
A1 is the amplitude of F1,
A2 is the amplitude of F2,
A3 is the amplitude of band 3: 2000 to 2800 Hz,
A4 is the amplitude of band 4: 2800 to 4000 Hz,
A5 is the amplitude of band 5: 4000 Hz and above.
In a further particular form of said sound/speech processor, said signal processing means includes means for dynamically changing the amplitude of different frequency bands in the defined speech spectrum as a function of the speech features A1, A2, A3, A4 and A5 so that the loudness in these bands is appropriately scaled between the threshold and maximum comfortable levels of the hearing aid user at the F1 and F2 frequencies and in the higher frequency bands.
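As an illustration of the band-level scaling described above, the sketch below maps an extracted band amplitude onto a level between a user's threshold and maximum comfortable level for that band. It is not taken from the patent; the structure names and the linear-in-dB mapping are assumptions made purely for illustration.

```c
/* Hypothetical per-band fitting data for one hearing aid user (assumed layout). */
typedef struct {
    double threshold_db;   /* softest level the user can detect in this band */
    double comfort_db;     /* maximum comfortable level in this band         */
} BandFit;

/*
 * Map an extracted band amplitude (e.g. A1..A5, in dB) onto the user's
 * dynamic range for that band.  Input levels between in_min_db and
 * in_max_db are scaled linearly (in dB) between threshold and maximum
 * comfortable level; levels outside that range are clamped.
 */
static double scale_band_level(double amp_db, double in_min_db, double in_max_db,
                               const BandFit *fit)
{
    double t = (amp_db - in_min_db) / (in_max_db - in_min_db);
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return fit->threshold_db + t * (fit->comfort_db - fit->threshold_db);
}
```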
In a further particular form of said sound/speech processor, said signal processing means includes filter means whose settings are dynamically varied according to speech parameters extracted from said signal processing means, whereby said filter means dynamically adapts said output signal to overcome the effects of noise and/or particular deficiencies in the hearing of a user.
In yet a further particular form of said sound/speech processor, said signal processing means within said processor includes means for reconstructing signals in real time whereby the amplitude and/or frequency characteristics of said output signal can be controlled optimally so as to enhance speech recognition in a user.
In a first mode of operation of said sound/speech processor, the settings of said filter means provided by said configuration means are based on measurements made by an audiologist during a hearing aid fitting procedure. The filter settings remain fixed after completion of the procedure.
In an alternative mode of operation of said sound/speech processor, said processor includes filter means whose parameters are changed dynamically while said processor is in use providing said output signal to said hearing aid transducer, in accordance with information provided to said configuration means by speech parameter extraction means acting on said audio information.
In a further particular mode of operation of said sound/speech processor, said output signal is synthesised by said signal processing means utilising only speech parameters.
Brief Description of the Drawings
Embodiments of the invention will now be described with reference to the drawings wherein:
Fig. 1 is a schematic representation of a bimodal aid according to a first embodiment of the invention,
Fig. 2 is a schematic representation of the main functional components which comprise the entire bimodal aid,
Fig. 3 is a schematic block diagram of the implant processing circuitry together with the acoustic processor circuitry,
Fig. 4 is a chart showing an example of the pattern of electrical stimulation of the implant electrodes for various steady state phonemes using the multi-peak coding strategy,
Fig. 5 is a graph showing the standard loudness growth function for the speech processor portion when driving the implant,
Fig. 6 is a schematic block diagram of the functional components of the bimodal processor which drive the acoustic aid,
Fig. 7 is a schematic block diagram of the components comprising the acoustic processor which processes the speech signal in accordance with particular forms of the invention to drive the acoustic aid,
Fig. 8 is a component block diagram of the acoustic processor,
Fig. 9 is a component block diagram of a Biquad filter as utilised in the acoustic processor of Fig. 8,
Fig. 10 is a component block diagram of the output driver for the acoustic aid,
Fig. 11 is a schematic representation of preferred modes of operation of the acoustic processor for voiced vowel input,
Fig. 12 shows sound intensity against frequency for an acoustic aid operating according to mode 1,
Fig. 13 shows comparative plots of sound intensity against frequency in relation to an example of modes one and three operation of the acoustic processor,
Fig. 14 outlines a fitting strategy for the bimodal aid,
Fig. 15 is a flowchart of a fitting strategy for the acoustic aid portion of the device in mode 1, and
Fig. 16 is a schematic block diagram of the utilisation of the diagnostic and programming unit for use in association with the bimodal aid.
Description of Preferred Embodiments
With reference to Fig. 1, the bimodal aid is a hearing aid device which has the capability to provide information through a cochlear implant aid in one ear and a speech processing acoustic hearing aid in the other ear of a patient. Both the implant aid and the acoustic aid are
controlled by the same speech processor which derives its raw audio input from a single microphone.
The speech processor extracts certain parameters from the incoming acoustic waveform that are relevant to the perception of speech sounds. Some of the speech parameters extracted by the speech processor are translated into an electrical signal and delivered to a cochlear implant. These features can also be used as a basis for modification of the speech waveform, following which it is then amplified and presented to the acoustic aid.
There are some patients with a small amount of residual hearing who have already received an implant, and who have previously worn hearing aids in the non-implanted ear. These patients often report that the sounds produced by conventional acoustic hearing aids are incompatible with those produced by the implant. Such patients tend to resort to one or the other and thus do not make maximal use of their limited auditory capacities. These patients are candidates for the bimodal aid. Such a device incorporating a cochlear implant aid and a speech processing acoustic aid can provide information which will allow these patients to discriminate speech better than any currently available hearing device alone.
Generally, if patients have some residual hearing, it tends to be low frequency. The cochlear implant produces stimulation at positions in the cochlea that correspond to higher frequencies (usually above ~00 Hz). Thus, by combining the two channels it is possible to provide useful information over a much wider range of frequencies than either channel could provide alone. Furthermore, the frequency and temporal resolution of residual hearing can be better than that provided by the pulsatile electric signal of the cochlear implant aid portion of the bimodal aid.
In addition to the above "bimodal" uses, the acoustic aid driver part of the device can also be used as a speech processing hearing aid independent of the cochlear implant aid output. When used in this manner it has advantages over conventional acoustic hearing aids. Conventional hearing aids are limited in practice because the adjustments to the frequency/gain characteristics are restricted to a small number of options and there are many users who are not optimally aided. There is a need for a hearing aid with a more flexible frequency/gain characteristic and this can be achieved with this aid. In addition, the feature extraction circuits which are the basis of the cochlear implant aid allow the hardware to measure important characteristics of the speech signal in quiet conditions and in conditions of moderate amounts of background noise. These characteristics can then be amplified selectively and enhanced relative to the rest of the acoustic signal, or used to synthesize a new speech-like waveform that carries the same information exclusively. This is performed by the acoustic signal processor (12) which outputs to an acoustic aid.
The synthesized waveform is used to overcome special problems. For example, high frequency sounds above the limit of a user's hearing can be presented as lower frequencies within the user's hearing range. Broad peaks in the speech spectrum can be made narrower if this provides better frequency resolution and helps to reduce masking of one peak by other adjacent peaks. There is no other single, wearable device capable of implementing all these processes.
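The frequency-lowering idea mentioned above can be pictured with the simple transposition rule sketched below. This is only an illustration under assumed parameters (the user's upper frequency limit and the target range); it is not the patent's method, which re-synthesises the signal through the programmable filters described later.

```c
/*
 * If an extracted peak frequency lies above the highest frequency the user
 * can hear, map it into the top octave below that limit.  Peaks already
 * inside the audible range are left unchanged.  Illustrative only.
 */
static double transpose_peak_hz(double peak_hz, double user_limit_hz,
                                double source_max_hz)
{
    if (peak_hz <= user_limit_hz)
        return peak_hz;                           /* already audible */
    double frac = (peak_hz - user_limit_hz) / (source_max_hz - user_limit_hz);
    if (frac > 1.0) frac = 1.0;
    /* compress the out-of-range region into [limit/2, limit] */
    return user_limit_hz / 2.0 + frac * (user_limit_hz / 2.0);
}
```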
The sound/speech processor can take in a speech signal from a microphone, measure selected features of that signal (including the frequency and amplitude of formants for voiced speech) and control the outputs to both a cochlear implant aid and an acoustic hearing aid in the case of the first embodiment, or only the acoustic hearing aid in the case of the second embodiment.
1.0 FIRST EMBODIMENT - BIMODAL AID
Figure 2 shows a schematic diagram of the operation of the device. The cochlear implant aid portion of the device is covered by existing patents or patent applications to the same applicant, and the implant aid operates upon one ear of a patient using similar strategies to those already developed for implant users. In addition, the users of the bimodal device will receive an auditory signal via an acoustic aid in the non-implanted ear. The capabilities of the bimodal aid allow this signal to be specially tailored in order to convey information complementary to the implant and utilise the residual hearing of the patient maximally.
Specifically, Fig. 2 discloses that the body worn portion of the bimodal device comprises a speech processor 11 intimately connected to an acoustic aid processor 12, together with a microphone 13, an acoustic hearing aid 14 and an implant aid 15. The implant aid 15 comprises an electrode array 16 electrically connected by harness 17 to a receiver stimulator 18 which is in radio communication with speech processor 11 by way of internal coil 19 and external coil 20.
In addition, Fig. 2 shows auxiliary items being the diagnostic and programming unit 21 and the diagnostic programming interface 22.
Currently the diagnostic and programming unit 21 is implemented as a program running on a personal computer whilst the diagnostic programming interface 22 is a communications card connected to the PC bus. The diagnostic and programming unit 21 is utilised in a clinical situation to test for and control device parameters of operation for the speech processor 11 and/or acoustic aid processor 12 which optimise hearing performance for a patient according to defined criteria. These parameters are communicated via the diagnostic programming interface 22 to a MAP memory storage 23 in the speech processor 11. It is by reference to the parameters stored in the MAP memory storage 23 that the manner of processing of the audio signal received from microphone 13 is determined, both for the speech processor 11 when driving the implant aid 15 and the acoustic aid processor 12 when driving the acoustic aid 14.
The components illustrated in Fig. 2, other than the acoustic aid processor 12, the acoustic aid 14 and the computer program controlling the function of the diagnostic and programming unit 21, have been described elsewhere in earlier filed patents and patent applications and remain the same in so far as operation of the cochlear implant aid is concerned.
The speech processor 11 and the precise methodology for electrically exciting the implant aid have varied since the inception of these devices and can be expected to continue to vary. For example, excitation of the stimulating electrodes placed within the ear of a patient can be either digital or analogue in nature. To date, one of the present applicants, Cochlear Pty. Limited, has pursued a strategy of digital electronic stimulation using what have been termed pulsatile electrical signals applied to a pulsatile electrical signal transducer.
Particularly, the speech processor 11 has been commercially available in a number of forms since around 1982 from Cochlear Pty. Limited (one of the co-applicants for the present application). The early units and, indeed, even the most recent units are primarily aimed at improving speech perception in favour of all other sounds received from microphone 13. This is done by causing speech processor 11 to discern and process, from the raw audio input received from microphone 13, acoustic features of speech which have been determined to best characterise speech information as perceived by the human auditory system.
Early forms of the speech processor 11 presented three acoustic features of speech to implant users. These were amplitude, presented as current level of electrical stimulation; fundamental frequency (F0) or voice pitch, presented as rate of pulsatile stimulation; and the second formant frequency (F2), represented by the position of the stimulation electrode pair located within the ear of the patient. The F2 frequency is usually found within the frequency range 800 to 2500 Hz.
Later a second stimulating electrode pair was added representing the first formant (F1) of speech. The F1 signal is typically found within the frequency range 280 Hz to 1000 Hz. This scheme (known as the F0F1F2 scheme) provided improved performance in areas of speech perception as against the earlier F0F2 scheme. In most recent times the information provided to and processed by the speech processor 11 has been increased, with one particular purpose being to improve speech intelligibility under moderate levels of background noise.
This latest coding scheme provides all of the information available in the F0F1F2 scheme while providing additional information from three high frequency band pass filters. These filters cover the following frequency ranges: 2000 to 2800 Hz, 2800 to 4000 Hz and 4000 to 8000 Hz. The energy within these ranges controls the amplitude of electrical stimulation of three fixed electrode pairs in the basal end of the electrode array. Thus, additional information about high frequency sounds is presented at a tonotopically appropriate place within the cochlea.
The overall stimulation rate for voiced sounds remains as F0 (fundamental frequency or voice pitch) but in the new scheme four electrical stimulation pulses occur for each glottal pulse. This compares with the F0F1F2 strategy in which only two pulses occur per voice pitch period. In the new coding scheme, for voiced speech sounds, the two pulses representing the first and second formant are still provided and additional stimulation pulses occur representing energy in the 2000 to 2800 Hz and the 2800 to 4000 Hz ranges.
For unvoiced phonemes, yet another pulse representing energy above 4000 Hz is provided while no stimulation for the first formant is provided, since there is usually little energy in this frequency range. Stimulation occurs at a random pulse rate of approximately 260 Hz, which is about double that used in the earlier strategy.
The latest noise suppression algorithm operates in a continuous manner, rather than as a voice activated switch as has previously been used. This removes the perceptually annoying switching on and off of the earlier system. In the new algorithm the noise floor is continuously assessed in each frequency band over a period of ten seconds. The lowest level over this period is assumed to be background noise and is subtracted from the amplitude relevant to that frequency band. Thus any increase in signal amplitude above the noise level is presented to the patient while the ambient noise level itself is reduced to near threshold.
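A minimal software sketch of this style of continuous noise estimation is given below. The five-band layout and the ten-second window come from the description; the frame rate, the reset behaviour at the end of each window and all names are assumptions for illustration only.

```c
#include <string.h>

#define NUM_BANDS          5
#define FRAMES_PER_WINDOW  1000   /* e.g. 10 s of 10 ms analysis frames (assumed) */

typedef struct {
    double running_min[NUM_BANDS];   /* lowest level seen in the current window */
    double noise_floor[NUM_BANDS];   /* estimate subtracted from each band      */
    int    frames;
} NoiseTracker;

static void noise_tracker_init(NoiseTracker *nt)
{
    memset(nt, 0, sizeof *nt);
    for (int b = 0; b < NUM_BANDS; b++)
        nt->running_min[b] = 1e9;
}

/* Update the estimate with one frame of band amplitudes and return the
 * noise-suppressed amplitudes: any increase above the estimated floor is
 * passed on, the floor itself is removed. */
static void noise_suppress(NoiseTracker *nt, const double in[NUM_BANDS],
                           double out[NUM_BANDS])
{
    for (int b = 0; b < NUM_BANDS; b++) {
        if (in[b] < nt->running_min[b])
            nt->running_min[b] = in[b];
        double a = in[b] - nt->noise_floor[b];
        out[b] = (a > 0.0) ? a : 0.0;
    }
    if (++nt->frames >= FRAMES_PER_WINDOW) {
        for (int b = 0; b < NUM_BANDS; b++) {
            nt->noise_floor[b] = nt->running_min[b];  /* window minimum = noise */
            nt->running_min[b] = in[b];
        }
        nt->frames = 0;
    }
}
```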
Fig. 3 illustrates the basic filter and processing structure of a bimodal aid incorporating means to implement any of the above described latest processing scheme.
International Patent Application PCT/AU90/0040~ to the present applicant, entitled Multi-peak Speech Processor, describes in detail the operation of these components. The entire text and drawings of the specification of that application are incorporated herein by cross reference. The most pertinent portions of that specification are included immediately below.
The nature of the electrode array that is utilised in conjunction with the latest coding strategy, and the manner and nature of its implantation, is described in the literature, for example Cochlear Prostheses, editors Clark G.M., Tong and Patrick, published by Churchill Livingstone 1990. Chapter 9 of that book, entitled "The Surgery of Cochlear Implantation" by Webb R.L., Pyman B.C., Franz B. K-H. and Clark G.M., is particularly pertinent. The text and drawings of that chapter are incorporated herein by cross reference.
The coding strategy extracts and codes the F1 and F2 spectral peaks from the microphone audio signal, using the extracted frequency estimates to select a more apical and a more basal pair of electrodes for stimulation. Each selected electrode is stimulated at a pulse rate equal to the fundamental frequency F0. In addition to F1 and F2, three high frequency bands of spectral information are extracted.
The amplitude estimates from band three (2000-2800 Hz), band four (2800-4000 Hz), and band five (above 4000 Hz) are presented to fixed electrodes, for example the seventh, fourth and first electrodes, respectively, of the electrode array 16 (Fig. 2 and Fig. 4).
The first, fourth and seventh electrodes are selected as the default electrodes for the high-frequency bands because they are spaced far enough apart so that most patients will be able to discriminate between stimulation at these three locations. Note that these default assignments may be reprogrammed as required. If the three high frequency bands were assigned only to the three most basal electrodes in the MAP, many patients might not find the additional high frequency information as useful, since patients often do not demonstrate good place-pitch discrimination between adjacent basal electrodes. Additionally, the overall pitch percept resulting from the electrical stimulation might be too high.
Table I below indicates the frequency ranges of the various formants employed in the speech coding scheme for the present invention.

TABLE I
Frequency Range        Formant or Band
280 - 1000 Hz          F1
800 - 4000 Hz          F2
2000 - 2800 Hz         Band 3 - Electrode 7
2800 - 4000 Hz         Band 4 - Electrode 4
4000 Hz and above      Band 5 - Electrode 1

If the input signal is voiced, it has a periodic fundamental frequency. The electrode pairs selected from the estimates of F1, F2 and bands 3 and 4 are stimulated sequentially at a rate equal to F0. The most basal electrode pair is stimulated first, followed by progressively more apical electrode pairs, as shown in Fig. 4. Band 5 is not presented in Fig. 4 because negligible information is contained in this frequency band for most voiced sounds.
If the input signal is unvoiced, energy in the F1 band (280-1000 Hz) is usually zero. Consequently it is replaced with the frequency band that extracts information above 4000 Hz. In this situation, the electrode pairs selected from the estimates of F2 and bands 3, 4 and 5 receive the pulsatile stimulation. The rate of stimulation is aperiodic and varies between 200-300 Hz. The coding strategy thus may be seen to extract and code five spectral peaks, but only four spectral peaks are encoded for any one stimulus sequence.
Fig. 4 illustrates the pattern of electrical stimulation for various steady state phonemes when using this coding strategy. A primary function of the MAP is to translate the frequency of the dominant spectral peaks (F1 and F2) to electrode selection. To perform this function, the electrodes are numbered sequentially starting at the round window of the cochlea. Electrode 1 is the most basal electrode and electrode 22 is the most apical in the electrode array. Stimulation of different electrodes normally results in pitch perceptions that reflect the tonotopic organization of the cochlea. Electrode 22 elicits the lowest place-pitch percept, or the "dullest" sound. Electrode 1 elicits the highest place-pitch percept, or "sharpest" sound.
To allocate the frequency range for the F1 and F2 spectral peaks to the total number of electrodes, a default mapping algorithm splits up the total number of electrodes available for use into a ratio of approximately 1:2, as shown in Fig. 4.
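A toy version of such a default allocation is sketched below. The 22-electrode array, the F1 and F2 frequency ranges and the approximate 1:2 split come from the text; the linear frequency spacing and every name here are assumptions, since the real MAP tables are prepared per patient by the audiologist.

```c
#define NUM_ELECTRODES 22   /* electrode 1 = most basal, 22 = most apical */

#define F1_ELECTRODES (NUM_ELECTRODES / 3)              /* apical block, ~1/3 */
#define F2_ELECTRODES (NUM_ELECTRODES - F1_ELECTRODES)  /* basal block,  ~2/3 */

/* Map a frequency onto an electrode between apical_el and basal_el, with the
 * lowest frequency on the most apical (highest-numbered) electrode. */
static int map_to_electrode(double hz, double lo_hz, double hi_hz,
                            int apical_el, int basal_el)
{
    if (hz < lo_hz) hz = lo_hz;
    if (hz > hi_hz) hz = hi_hz;
    double frac = (hz - lo_hz) / (hi_hz - lo_hz);   /* 0 = lowest frequency */
    return apical_el - (int)(frac * (apical_el - basal_el) + 0.5);
}

static int f1_electrode(double f1_hz)   /* F1 uses the apical block */
{
    return map_to_electrode(f1_hz, 280.0, 1000.0,
                            NUM_ELECTRODES, NUM_ELECTRODES - F1_ELECTRODES + 1);
}

static int f2_electrode(double f2_hz)   /* F2 uses the remaining basal block */
{
    return map_to_electrode(f2_hz, 800.0, 4000.0,
                            NUM_ELECTRODES - F1_ELECTRODES, 1);
}
```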
Inside the speech processor a random access memory stores a set of number tables, referred to collectively as the MAP memory storage 23. The MAP determines both stimulus parameters for F1, F2 and bands 3-5, and the amplitude estimates. The encoding of the stimulus parameters follows a sequence of distinct steps. The steps may be summarized as follows:
1. The first formant frequency (F1) is converted to a number based on the dominant spectral peak in the region between 280-1000 Hz.
2. The F1 number is used, in conjunction with one of the MAP tables, to determine the electrode to be stimulated to represent the first formant. The indifferent electrode is determined by the mode.
3. The second formant frequency (F2) is converted to a number based on the dominant spectral peak in the region between 800-4000 Hz.
4. The F2 number is used, in conjunction with one of the MAP tables, to determine the electrode to be stimulated to represent the second formant. The indifferent electrode is determined by the mode.
5. The amplitude estimates for bands 3, 4 and 5 are assigned to the three default electrodes 7, 4 and 1 for bands 3, 4 and 5, respectively, or such other electrodes that may be selected when the MAP is being prepared.
6. The amplitude of the acoustic signal in each of the frequency bands is converted to a number ranging from 0-150. The level of stimulation that will be delivered is determined by referring to set MAP tables that relate acoustic amplitude (in the range of 0-150) to stimulation level for the specific electrodes selected in steps 2, 4 and 5, above.
7. The data are further encoded in the speech processor and transmitted to the receiver/stimulator 18. It, in turn, decodes the data and sends the stimuli to the appropriate electrodes. Stimulus pulses are presented at a rate equal to F0 during voiced periods and at a random aperiodic rate within the range of the F0 and F1 formants (typically 200 to 300 Hz) during unvoiced periods.
The speech processor 11 additionally includes a non-linear loudness growth algorithm that converts acoustic signal amplitude to electrical stimulation parameters. The speech processor 11 converts the amplitude of the acoustic signal into a digital linear scale with values from 0-150 as shown in Fig. 5. That digital scale (in combination with the information stored in the patient's MAP) determines the actual charge delivered to the electrodes in the electrode array 16.
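The curve in Fig. 5 is not reproduced here, but the two-stage idea in the text can be sketched as follows: the acoustic amplitude is first compressed onto the 0-150 digital scale and that value is then converted to a stimulation level between the threshold (T) and comfortable (C) levels recorded in the patient's MAP. The logarithmic front end, the power-law exponent and the MAP entry layout are assumptions.

```c
#include <math.h>

/* Hypothetical per-electrode entry from the patient's MAP. */
typedef struct {
    int t_level;   /* threshold stimulation level */
    int c_level;   /* maximum comfortable level   */
} MapEntry;

/* Stage 1: compress the acoustic amplitude onto the 0-150 scale.
 * A logarithmic mapping over an assumed 30 dB input range is used here. */
static int amplitude_to_digital(double amp, double amp_ref)
{
    double db = 20.0 * log10(amp / amp_ref + 1e-9);
    int v = (int)(150.0 * db / 30.0);
    if (v < 0)   v = 0;
    if (v > 150) v = 150;
    return v;
}

/* Stage 2: convert the 0-150 value to a stimulation level between T and C
 * using a non-linear (power-law) loudness growth function. */
static int digital_to_stim_level(int v, const MapEntry *e, double growth_exp)
{
    double frac = pow((double)v / 150.0, growth_exp);
    return e->t_level + (int)(frac * (e->c_level - e->t_level) + 0.5);
}
```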
Improvements on this assembly are disclosed in co-pending applications to Cochlear Pty. Limited. Specifically, International Application PCT/AU90/00406 discloses an improved connection system between microphone 13 and speech processor 11 and between the external coil assembly 20 and the speech processor 11. The text and drawings of the specification of that application are incorporated herein by cross reference.
A noise suppression circuit is disclosed in International Patent Application PCT/AU90/00404. The text and drawings of the specification which accompanied that application are incorporated herein by cross reference.
Fig. 6 is a block diagram of the processing circuitry showing the functional interconnection of components for driving the acoustic hearing aid 14. The main components comprise microphone 13, automatic gain control 24, speech parameter extractor 25, encoder 26, patient MAP memory storage 23, noise generator 27 and acoustic aid signal processor 12.
The heart of the bimodal aid, as far as allowing the acoustic aid 14 to be driven from the speech processor 11 is concerned, is the acoustic aid signal processor 12.
The acoustic aid signal processor is software configurable and contains three two-pole filters, each of which can be used in either bandpass, lowpass or highpass configuration. The centre frequency, bandwidth and output amplitude of these filters are controlled by the processor.
The filters can be used in series or in parallel and the input waveform can be the speech waveform, pulses, a noise signal or an external signal. The external signal can be from another microphone, other acoustic output or another acoustic signal processor. This results in a particularly flexible aid that can operate either in a manner similar to a conventional acoustic hearing aid (though with more accurate gain fitting than most currently available aids can provide) or as an aid providing different types of processed speech information. The acoustic aid signal processor including the three programmable filters has been implemented on a single silicon chip. Each filter is usable as a high-pass, band-pass, or low-pass filter. 128 centre frequencies between 100 Hz and 16000 Hz, 128 Q values between 0.53 and 120, and 128 amplitude values between 0 and 64 dB are available for each filter (Q = centre frequency/B.W.). This chip has the flexibility to cover a wide range of frequencies, amplitudes and spectral shapes.
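To make the 128-step parameter ranges concrete, the sketch below quantises a requested centre frequency, Q and gain onto 7-bit codes. The ranges (100 Hz to 16 kHz, Q from 0.53 to 120, 0 to 64 dB) and the Q = centre frequency / bandwidth relation come from the text; logarithmic spacing for frequency and Q is an assumption, since the chip's actual code tables are not given here.

```c
#include <math.h>

/* 7-bit parameter codes for one programmable filter (assumed encodings). */
typedef struct {
    unsigned char freq_code;   /* 0..127 over 100 Hz .. 16 kHz (log spaced, assumed) */
    unsigned char q_code;      /* 0..127 over Q = 0.53 .. 120  (log spaced, assumed) */
    unsigned char gain_code;   /* 0..127 over 0 .. 64 dB                             */
} FilterCodes;

static unsigned char quantise_log(double value, double lo, double hi)
{
    double t = log(value / lo) / log(hi / lo);   /* 0..1 across the range */
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return (unsigned char)(t * 127.0 + 0.5);
}

static FilterCodes make_filter_codes(double centre_hz, double bandwidth_hz,
                                     double gain_db)
{
    FilterCodes c;
    double q = centre_hz / bandwidth_hz;         /* Q = centre frequency / B.W. */
    c.freq_code = quantise_log(centre_hz, 100.0, 16000.0);
    c.q_code    = quantise_log(q, 0.53, 120.0);
    if (gain_db < 0.0)  gain_db = 0.0;
    if (gain_db > 64.0) gain_db = 64.0;
    c.gain_code = (unsigned char)(gain_db / 64.0 * 127.0 + 0.5);
    return c;
}
```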
It also includes a digital-to-analog converter (DAC) that is used to produce the excitation waveform for the filters. The DAC can produce waveforms of arbitrary shape (such as sinusoidal or pulsatile) controlled directly by the processor, or can be switched to provide excitation by the speech waveform or a white noise generator.
A schematic diagram of the three-filter circuit is shown in Fig. 7.
A functional specification for a single chip implementation of the acoustic aid signal processor 12 is provided by Figs. 8, 9 and 10. Details of the specification are as follows:

Topology
Fig. 8 shows the overall topology of the chip. Three programmable filters in which centre frequency and band width can be independently controlled are provided. The outputs of these filters can be independently attenuated or amplified and then mixed. The output of one of the three filters can be inverted if necessary by setting an INV bit.
Fig. 9 shows details of one of the Biquad filters forming the three filter array, together with the frequency latches and Q latches which determine the parameters of the Biquad filter.
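The chip realises its Biquad sections in hardware under latch control. As a software analogue only, the sketch below implements a standard digital band-pass biquad whose coefficients are derived from a centre frequency and Q; the particular coefficient formulas are a common textbook form and are not taken from the patent.

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* State and coefficients of one second-order (biquad) section. */
typedef struct {
    double b0, b1, b2, a1, a2;   /* coefficients normalised so that a0 = 1 */
    double z1, z2;               /* delay elements                         */
} Biquad;

/* Configure the section as a band-pass filter with the given centre
 * frequency and Q at sample rate fs. */
static void biquad_set_bandpass(Biquad *f, double fs, double centre_hz, double q)
{
    double w0    = 2.0 * M_PI * centre_hz / fs;
    double alpha = sin(w0) / (2.0 * q);
    double a0    = 1.0 + alpha;

    f->b0 =  alpha / a0;
    f->b1 =  0.0;
    f->b2 = -alpha / a0;
    f->a1 = -2.0 * cos(w0) / a0;
    f->a2 = (1.0 - alpha) / a0;
    f->z1 = f->z2 = 0.0;
}

/* Process one sample (transposed direct form II). */
static double biquad_process(Biquad *f, double x)
{
    double y = f->b0 * x + f->z1;
    f->z1 = f->b1 * x - f->a1 * y + f->z2;
    f->z2 = f->b2 * x - f->a2 * y;
    return y;
}
```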
The topology of the chip can be altered from serial to parallel or a mixed structure by three PARn bits.
The signal source for this structure can be selected by a four channel multiplexer (MUX). This selects +5 volts, a buffered output of the audio signal, an internally generated noise source, or an external signal. This signal source is fed to a 7 bit digital to analog converter (DAC) as a reference voltage.
The multiplying DAC can convert the DC level into a pulse generator, or provide a fine gain control on the audio or external signal, or noise source. The most significant bit (MSB) is used to invert the output.
All filter outputs are summed and passed to a push pull earphone driver which can provide effectively 10 volts peak-to-peak across a 270 ohm (nominal) earphone. The chip uses a single supply of 5 volts. Note that the earphone has a DC resistance of 88 ohm with the impedance rising gradually to 270 ohm at 1 kHz. The output stage consists of a bridge of P and N transistor switches as shown in Fig. 10. The switches are pulse width modulated by a signal derived from a comparator driven from a triangle wave on one input and the audio signal on the other. The on resistance of the switches should be less than 5 ohms (lower if possible).
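The comparator-against-triangle-wave arrangement for the class D stage can be pictured in software terms as below; the real circuit does this with the analogue bridge shown in Fig. 10, so the sketch is illustrative only and the carrier handling is assumed.

```c
/*
 * Decide the instantaneous state of the output bridge by comparing the
 * audio sample (expected in -1..+1) against a triangle carrier; over one
 * carrier period the average on-time tracks the audio level, which is the
 * essence of pulse-width modulation.
 */
static int class_d_output(double audio_sample, double carrier_phase /* 0..1 */)
{
    double tri = (carrier_phase < 0.5)
                     ? 4.0 * carrier_phase - 1.0     /* rising half  */
                     : 3.0 - 4.0 * carrier_phase;    /* falling half */
    return (audio_sample > tri) ? 1 : 0;
}
```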
Apart from the class D output, there is a single ended linear output. This should be capable of sourcing or sinking 5 mA with less than 1 volt drop.
The chip is programmed by writing to the MAP of the speech processor. To distinguish between chip and MAP writes, bits A8 - A12 are decoded. Any write to the block 1800 - 18FF in the MAP will also write to the chip. Two addresses, one odd and one even (Y13 and Y14), are decoded and ORed, and the output of these (R/W new) can be used to write to the filter chip in a more selective manner.
Odd writes are used to select the MUX and even writes set the auto sensitivity control (ASC) latch on that chip.
The four lowest address bits are used to write to 14 registers which contain the programming information for the acoustic processor chip. Registers Y0 to Y11 program frequency, Q, gain (attenuation or amplification) and configuration in turn for each of the 3 filters. Register Y12 sets the chip topology. Register Y15 is used to write to the DAC.
Referring to Fig. 8, the topology latch Y12 is as follows:
D1, D2, D3    Topology bits
D4, D5        DAC source

D4  D5  DAC source
0   0   +5 volt
0   1   Audio
1   0   Noise
1   1   External source

Topology bits
D0 - INV inverts output of Filter 1
D1 - PAR1 sends Filter 1 output to summer
D2 - PAR2 sends Filter 2 output to summer
D3 - PAR3 sends Filter 3 output to summer
D6 - DIR sends the DAC output direct to summer

When a filter's PAR bit is set, the cascaded filter and attenuator/amplifier are sent to the summer but the filter output itself is sent to a bus so that it can be made available to other filters. When the PAR bit is not set, the filter-attenuator/amplifier combination is sent to the bus.
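Read literally, the Y12 latch packs the routing flags and the DAC source selection into a single byte. One way such a byte could be assembled is sketched below; the bit positions follow the listing above, while the bit order within the D4/D5 pair, the types and the function name are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* DAC source codes carried in bits D4 and D5 (ordering assumed). */
enum dac_source { DAC_5V = 0, DAC_AUDIO = 1, DAC_NOISE = 2, DAC_EXTERNAL = 3 };

/* Assemble the topology latch (register Y12) from individual flags. */
static uint8_t make_topology_byte(bool inv_filter1,    /* D0: INV                 */
                                  bool par1,           /* D1: filter 1 to summer  */
                                  bool par2,           /* D2: filter 2 to summer  */
                                  bool par3,           /* D3: filter 3 to summer  */
                                  enum dac_source src, /* D4-D5                   */
                                  bool dac_direct)     /* D6: DIR, DAC to summer  */
{
    uint8_t b = 0;
    b |= (uint8_t)(inv_filter1 ? 1u : 0u) << 0;
    b |= (uint8_t)(par1        ? 1u : 0u) << 1;
    b |= (uint8_t)(par2        ? 1u : 0u) << 2;
    b |= (uint8_t)(par3        ? 1u : 0u) << 3;
    b |= (uint8_t)(src & 0x3u)            << 4;
    b |= (uint8_t)(dac_direct  ? 1u : 0u) << 6;
    return b;
}
```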
"
.
, W092/08330 ~ ~;}~ PCT/AU9t/00506 ., .
In this way the filter gains may be scaled if the filters are cascaded.
Filter programming bits
Configuration latches Y3, Y7, Y11:
D0, D1, D2 - filter type select
D3, D4 - clock select
D5, D6 - filter input

Filter input selection
The filter inputs selected by D6 and D5 vary as shown below:

Input       0      1        2        3
Filter 1    DAC    AUDIO    FILT2    FILT3 <INVERTER
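Pulling the addressing notes together, a host-side sketch of programming one filter might look like the following. The register order (frequency, Q, gain, configuration, so that the configuration latches land at Y3, Y7 and Y11) and the 1800-18FF block follow the text; the write primitive, the exact address arithmetic and the data values are assumptions for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Writes into this MAP block are mirrored to the acoustic processor chip. */
#define CHIP_BLOCK_BASE 0x1800u

enum {
    REG_FREQ = 0, REG_Q = 1, REG_GAIN = 2, REG_CONFIG = 3,  /* per filter */
    REG_TOPOLOGY = 12,                                      /* Y12        */
    REG_DAC      = 15                                       /* Y15        */
};

/* Hypothetical low-level write into the speech processor's MAP memory; a
 * real implementation would go through the diagnostic programming interface. */
static void write_map(uint16_t address, uint8_t value)
{
    printf("MAP[0x%04X] <- 0x%02X\n", address, value);
}

/* Program one of the three filters (index 0..2).  The four lowest address
 * bits select the register, so register Yn is written at CHIP_BLOCK_BASE + n. */
static void program_filter(int filter, uint8_t freq_code, uint8_t q_code,
                           uint8_t gain_code, uint8_t config_byte)
{
    uint16_t base = (uint16_t)(CHIP_BLOCK_BASE + 4u * (unsigned)filter);
    write_map(base + REG_FREQ,   freq_code);
    write_map(base + REG_Q,      q_code);
    write_map(base + REG_GAIN,   gain_code);
    write_map(base + REG_CONFIG, config_byte);
}
```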
The speech processor extracts certain parameters from the incoming acoustic wavefor~ that a~e relevant to the perception of speech sounds. Some of the! speech parameters extracted by the speech processor are translated in~o an electrical signal and delivered to a cochlear implant. These ; features can also be u~ed as a basis for modification of the speech waveform following which it is then amplified and presented to-the acoustic aid. - ~
There are some patlents with a small amount of residual hearing who have already received an implant, and who have previously worn hearlng aids in the non-implanted ear. These patients often report that the sounds produced by conventional acoustic hearlng alds are incompa~lble with those produced by the implant. Such pa~ients tend ~o resort to one or the other and thu~ do not make maximal use of their limited auditory capacities. These patients are candidates for the bimodal aid. Such a device incorporating a cochlear implant aid and a speech proce~slng acous~ic aid can provide information which will allow these patients to discrimlnate speech better than any currently available hearing dev.tce alone.
; Generally, if pa~ients have ~ome residual hearing, it tends to be low frequency. The cochlear implant produces stimulation at positions ln the cochlear that correspond to higher frequencies ~u~ually above ~OOHz). Thus, by combining the two channels it is possible to provide useful information ,_ W092/08330 ~ Q~9 ~ 3 ~ ~ PCT/AU9t/00506 _g_ over a much wider range of frequencies than either channel could provide alone. Furthermore, the frequency and temporal resolution of residual hearing can be better than that provided by the pulsatile electric signal of the cochlear implant aid portion o~ the bimodal aid.
In addition to the above "bimodal" uses the acoustic aid driver part of the device can also be used as a speech processing hearing aid independent of the cochlear implant aid output. When used in thls manner it has advantages over conventional acoustic hearing aids. Conventional hearing aids are limited in practice because the adjustments to the frequency/gain characterlstics are restricted to a small ,~ number of options and there are many users who are not , optimally alded. There is a need for a hearing aid with a ;, 15 more ~lexlble frequency/gain characteristlc and this can be achieved with this aid. In addition, the feature extraction clrcuits which are the basis of the cochlear implant aid allow the hardware tu measure important characteristics of th0 speech signal in quiet conditions and in conditions of moderate amounts of background noise. These characteristics can then be amplified selectively and enhanced relative to the rest of the acoustic signal, or used to synthesize a new speech-like wa~eform that carries the sa~e information exclusively. This is performed by the acoustic signal processor (12) which outputs to an acoustic aid.
The synthesized waveform is used to overcome special problems. For example, high frequency sounds above the limit of a user' 5 hearing can be presented as lower frequencies ':
.
' ' ' W092/08330 ~ 0 3 ~ 3 ~ 4 - 1 o- PCT/AU91/00506 within the user's hearing range. Broad peaks in the speech spectrum can be made narrower if this provides better frequency resolution and helps to reduce masking of one peak by other adjacent peaks. There is no other single, wearable device capable of implementing all these processes.
The sound/speech proces-~or can take in a speech ~gnal from a microphone, measure selected features of that signal (including the frequency and amplitude of formants for voiced speech) and control the outputs to both a cochlear implant aid and an acoustlc hearing aid in the ca~e of the first ~ ~~~~
embodiment or only the acoustic hearing aid in the case of the second embodiment.
l.O FIRST ~MBODIMENT - BIMODAL AID
Figure 2 shows a schematic diagram of the op~ratlon of the device. The cochlear implant aid portlon of the dev1ce i~ covered by existing patents or patent applications to the same applicant and the implant aid operates upon one ear of a patient using similar strategies to tho~e already developed for implant users. In addition, the users of the bimodal device will receive an auditory signal via an acoustlc aid in the non-implanted ear. The capabilities of the bimodal aid allow this signal to be specially tailored in order to convey information co~pl0mentary to the implant and utilise the residual hearing of the patient maximally.
Specifically, Fig. 2 discloses the body worn portion of the bimodal device comprises a speech proce~or ll intimately connected to an acoustic aid processor 12 together with a microphone 13, an acoustic hearing aid 14 and an implant aid , W092/08330 2 ~ 9 ~ 3 ~ '1 PCT/AU91/00506 15. The implant aid ;5 comprises an electrode array 16 electrically connected by harness 1~ to a receiver stimulator 18 which is in radio communication with speech processor 11 by way of internal coil 19 and external coil 20.
In addition Fig. 2 shows auxiliary ite~s being the diagnostic and programming unit 21 and the diagno~tlc programming interface 22.
Currently the diagnostic and programming u~it 21 is implemented as a program running on a personal computer ; 10 .whilst the diagnostic programming inter~ace 22 is a - ---~~ ~ ~~
communlcatlons card connected to the PC bus. The diagnostic and progra~ming unit 21 is utilised in a clinical situation to test ~or and control device paramet~rs of operatton ~or the ~peech processor 11 and/or acoustic aid proces~or 12 whlch optlmi~e hearlng performance for a patient according to de~lned crlteria. The~e parameters are communicated via the dia~nostlc program~ing lnterface 22 to a ~ap memory st`orage 23 in the ~peech processor 11. It i5 by reference to ~he parameters stored in the map memory ~torage 23 that the manner of proce~sing of the audio signal received from microphone 13 is deter~ined both for the speech processor 11 when driving the implant aid 15 and the acoustic aid processor 12 when driving the ~coustic ald 14.
The components illustrated in Fig. 2, other than the acoustic aid processor 12, the acoustic aid 14 and the computer program controlling the function of the diagnostic and programming unit 21, have been described elsewhere in earlier filed patents and patent applications and remain the same in so far as operation of the cochlear implant aid is concerned.
The speech processor 11 and the precise methodology for exciting electrically the implant aid has varied since the inception of these devices and can be expected to continue to vary. For example excitation of the stimulating electrodes placed within the ear of a patient can be either digital or analogue in nature. To date, one of the present applicants, Cochlear Pty. Limited, has pursued a strategy of digital electronic stimulation using what have been termed pulsatile electrical signals applied to a pulsatile electrical signal transducer.
Particularly, the speech processor 11 has been commercially available in a number of forms since around 1982 from Cochlear Pty. Limited (one of the co-applicants for the present application). The early units and, indeed, even the most recent units are primarily aimed at improving speech perception in favour of all other sounds received from microphone 13. This is done by causing speech processor 11 to discern and process from the raw audio input received from microphone 13 acoustic features of speech which have been determined to best characterise speech information as perceived by the human auditory system.

Early forms of the speech processor 11 presented three acoustic features of speech to implant users. These were amplitude, presented as current level of electrical stimulation; fundamental frequency (F0) or voice pitch, presented as rate of pulsatile stimulation; and the second formant frequency (F2), represented by the position of the stimulation electrode pair located within the ear of the patient. The F2 frequency is usually found within the frequency range 800 to 2500 Hz.
Later a second stimulating electrode pair was added representing the first formant (F1) of speech. The F1 signal is typically found within the frequency range 280 Hz to 1000 Hz. This scheme (known as the F0F1F2 scheme) provided improved performance in areas of speech perception as against the earlier F0F2 scheme. In most recent times the information provided to and processed by the speech processor 11 has been increased, with one particular purpose being to improve speech intelligibility under moderate levels of background noise.
This latest coding scheme provides all of the information available in the F0F1F2 scheme while providing additional information from three high frequency band pass filters. These filters cover the following frequency ranges: 2000 to 2800 Hz, 2800 to 4000 Hz and 4000 to 8000 Hz. The energy within these ranges controls the amplitude of electrical stimulation of three fixed electrode pairs in the basal end of the electrode array. Thus, additional information about high frequency sounds is presented at a tonotopically appropriate place within the cochlea.
The overall stimulation rate for voiced sounds remains as F0 (fundamental frequency or voice pitch) but in the new scheme four electrical stimulation pulses occur for each glottal pulse. This compares with the F0F1F2 strategy in which only two pulses occur per voice pitch period. In the new coding scheme, for voiced speech sounds, the two pulses representing the first and second formant are still provided and additional stimulation pulses occur representing energy in the 2000 to 2800 Hz and the 2800 to 4000 Hz ranges.

For unvoiced phonemes, yet another pulse representing energy above 4000 Hz is provided while no stimulation for the first formant is provided, since there is usually little energy in this frequency range. Stimulation occurs at a random pulse rate of approximately 260 Hz, which is about double that used in the earlier strategy.
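By way of illustration only, the following sketch (in Python) shows one way in which a frame of extracted parameters might be assembled into a stimulation sequence of the kind described above. The frame fields, amplitude handling and pulse ordering are simplifying assumptions made here for clarity; the fixed electrodes 7, 4 and 1 for bands 3, 4 and 5 are those given later in this specification, and the sketch is not the actual implementation.

```python
# Illustrative sketch only: one frame of extracted speech parameters is turned
# into a basal-to-apical sequence of (electrode, amplitude) pulses.
# Field names, amplitude units and electrode choices are assumptions.

def stimulation_sequence(frame, f1_electrode, f2_electrode):
    """frame: dict with keys 'voiced', 'a1', 'a2', 'band3', 'band4', 'band5'.
    f1_electrode/f2_electrode: electrodes chosen from the F1/F2 estimates via
    the patient's MAP (F1 more apical, F2 more basal)."""
    if frame["voiced"]:
        # Voiced: four pulses per glottal period, most basal electrode first.
        return [
            (4, frame["band4"]),          # 2800-4000 Hz band -> fixed electrode 4
            (7, frame["band3"]),          # 2000-2800 Hz band -> fixed electrode 7
            (f2_electrode, frame["a2"]),  # second formant
            (f1_electrode, frame["a1"]),  # first formant (most apical)
        ]
    # Unvoiced: F1 is dropped and the >4000 Hz band (electrode 1) is added;
    # pulses are delivered at a random rate of roughly 200-300 Hz.
    return [
        (1, frame["band5"]),
        (4, frame["band4"]),
        (7, frame["band3"]),
        (f2_electrode, frame["a2"]),
    ]
```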
The latest noise suppression algorithm operates in a continuous manner, rather than as a voice activated switch as has previously been used. This removes the perceptually annoying switching on and off of the earlier system. In the new algorithm the noise floor is continuously assessed in each frequency band over a period of ten seconds. The lowest level over this period is assumed to be background noise and is subtracted from the amplitude relevant to that frequency band. Thus any increase in signal amplitude above the noise level is presented to the patient while the ambient noise level itself is reduced to near threshold.
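A minimal sketch of this continuous noise suppression idea, assuming a fixed analysis frame rate and treating the per-band amplitude as a simple numeric level, is given below; it is illustrative only and not the circuit referred to in this specification.

```python
from collections import deque

class BandNoiseSuppressor:
    """Per-band continuous noise suppression as described above: the lowest
    amplitude seen over the last ten seconds is taken as the noise floor and
    subtracted from the current amplitude.  The frame rate is an assumption."""

    def __init__(self, frame_rate_hz=100, window_seconds=10.0):
        self.history = deque(maxlen=int(frame_rate_hz * window_seconds))

    def process(self, amplitude):
        self.history.append(amplitude)
        noise_floor = min(self.history)
        # Only the increase in level above the estimated noise floor is kept.
        return max(amplitude - noise_floor, 0.0)
```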
Fig. 3 illustrates the basic filter and processing structure of a bimodal aid incorporating means to implement any of the above described latest processing schemes.

International Patent Application PCT/AU90/0040~ to the present applicant entitled Multi-peak Speech Processor describes in detail the operation of these components. The entire text and drawings of the specification of that application are incorporated herein by cross reference. The most pertinent portions of that specification are included immediately below.
The nature of the electrode array that is utilised in conjunction with the latest coding strategy and the manner and nature of its implantation is described in the literature, for example Cochlear Prostheses, editors Clark G.M., Tong Y.C. and Patrick J.F., published by Churchill Livingstone 1990. Chapter 9 of that book entitled "The Surgery of Cochlear Implantation" by Webb R.L., Pyman B.C., Franz B. K-H and Clark G.M. is particularly pertinent. The text and drawings of that chapter are incorporated herein by cross reference.
The coding strategy extracts and codes the F1 and F2 spectral peaks from the microphone audio signal, using the extracted frequency estimates to select a more apical and a more basal pair of electrodes for stimulation. Each selected electrode is stimulated at a pulse rate equal to the fundamental frequency F0. In addition to F1 and F2, three high frequency bands of spectral information are extracted.
The amplitude estimates from band three (2000-2800 Hz), band four (2800-4000 Hz), and band five (above 4000 Hz) are presented to fixed electrodes, for example the seventh, fourth and first electrodes, respectively, of the electrode array 16 (Fig. 2 and Fig. 4).
The first, fourth and seventh electrodes are selected as the default electrodes for the high-frequency bands because they are spaced far enough apart so that most patients will be able to discriminate between stimulation at these three locations. Note that these default assignments may be reprogrammed as required. If the three high frequency bands were assigned only to the three most basal electrodes in the MAP, many patients might not find the additional high frequency information as useful since patients often do not demonstrate good place-pitch discrimination between adjacent basal electrodes. Additionally, the overall pitch percept resulting from the electrical stimulation might be too high.

Table I below indicates the frequency ranges of the various formants employed in the speech coding scheme for the present invention.
TABLE I

Frequency Range       Formant or Band
280 - 1000 Hz         F1
800 - 4000 Hz         F2
2000 - 2800 Hz        Band 3 - Electrode 7
2800 - 4000 Hz        Band 4 - Electrode 4
4000 Hz and above     Band 5 - Electrode 1

If the input signal is voiced, it has a periodic fundamental frequency. The electrode pairs selected from the estimates of F1, F2 and bands 3 and 4 are stimulated sequentially at the rate equal to F0. The most basal electrode pair is stimulated first, followed by progressively more apical electrode pairs, as shown in Fig. 4. Band 5 is not presented in Fig. 4 because negligible information is contained in this frequency band for most voiced sounds.
If the input signal is unvoiced, energy in the F1 band (280-1000 Hz) is usually zero. Consequently it is replaced with the frequency band that extracts information above 4000 Hz. In this situation, the electrode pairs selected from the estimates of F2, and bands 3, 4 and 5 receive the pulsatile stimulation. The rate of stimulation is aperiodic and varies between 200-300 Hz. The coding strategy thus may be seen to extract and code five spectral peaks, but only four spectral peaks are encoded for any one stimulus sequence.
FIG. 4 illustrates the pattern of electrical stimulation for various steady state phonemes when using this coding strategy. A primary function of the MAP is to translate the frequency of the dominant spectral peaks (F1 and F2) to electrode selection. To perform this function, the electrodes are numbered sequentially starting at the round window of the cochlea. Electrode 1 is the most basal electrode and electrode 22 is the most apical in the electrode array. Stimulation of different electrodes normally results in pitch perceptions that reflect the tonotopic organization of the cochlea. Electrode 22 elicits the lowest place-pitch percept, or the "dullest" sound. Electrode 1 elicits the highest place-pitch percept, or "sharpest" sound.
To allocate the frequency range for the F1 and F2 spectral peaks to the total number of electrodes, a default mapping algorithm splits up the total number of electrodes available to use into a ratio of approximately 1:2, as shown in FIG. 4.
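As an illustration, a default allocation in the stated 1:2 ratio might be computed as in the following sketch; the rounding and the assignment of the more apical electrodes to F1 are assumptions made here for clarity rather than the actual default MAP.

```python
# Illustrative sketch of the default 1:2 split of electrodes between the
# F1 (280-1000 Hz) and F2 (800-4000 Hz) ranges.  Rounding and boundary
# handling are assumptions.

def default_allocation(n_electrodes=22):
    electrodes = list(range(n_electrodes, 0, -1))   # apical 22 ... basal 1
    n_f1 = round(n_electrodes / 3)                   # roughly one third for F1
    f1_electrodes = electrodes[:n_f1]                # most apical group -> F1
    f2_electrodes = electrodes[n_f1:]                # remaining two thirds -> F2
    return f1_electrodes, f2_electrodes
```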
Inside the speech processor a random access memory stores a set of number tables, referred to collectively as the MAP memory storage 23. The MAP determines both stimulus parameters for F1, F2 and bands 3-5, and the amplitude estimates. The encoding of the stimulus parameters follows a sequence of distinct steps. The steps may be summarized as follows:

1. The first formant frequency (F1) is converted to a number based on the dominant spectral peak in the region between 280-1000 Hz.

2. The F1 number is used, in conjunction with one of the MAP tables, to determine the electrode to be stimulated to represent the first formant. The indifferent electrode is determined by the mode.

3. The second formant frequency (F2) is converted to a number based on the dominant spectral peak in the region between 800-4000 Hz.

4. The F2 number is used, in conjunction with one of the MAP tables, to determine the electrode to be stimulated to represent the second formant. The indifferent electrode is determined by the mode.

5. The amplitude estimates for bands 3, 4 and 5 are assigned to the three default electrodes 7, 4 and 1 for bands 3, 4 and 5, respectively, or such other electrodes that may be selected when the MAP is being prepared.

6. The amplitude of the acoustic signal in each of the frequency bands is converted to a number ranging from 0 - 150. The level of stimulation that will be delivered is determined by referring to set MAP tables that relate acoustic amplitude (in the range of 0-150) to stimulation level for the specific electrodes selected in steps 2, 4 and 5, above.

7. The data are further encoded in the speech processor and transmitted to the receiver/stimulator 18. It, in turn, decodes the data and sends the stimuli to the appropriate electrodes. Stimulus pulses are presented at a rate equal to F0 during voiced periods and at a random aperiodic rate within the range of the F0 and F1 formants (typically 200 to 300 Hz) during unvoiced periods.

The speech processor 11 additionally includes a non-linear loudness growth algorithm that converts acoustic signal amplitude to electrical stimulation parameters. The speech processor 11 converts the amplitude of the acoustic signal into a digital linear scale with values from 0 - 150 as shown in FIG. 5. That digital scale (in combination with the information stored in the patient's MAP) determines the actual charge delivered to the electrodes in the electrode array 16.
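The following sketch illustrates step 6 and the loudness growth conversion in simplified form: an acoustic amplitude is mapped onto the 0-150 scale and then looked up in a per-electrode MAP table. The logarithmic shape of the conversion and the table contents are assumptions made here; the actual curve is that of Fig. 5 and the actual levels come from the patient's MAP.

```python
import math

def amplitude_to_digital(amplitude, floor=1e-4, ceiling=1.0):
    """Map an acoustic amplitude onto the 0-150 digital scale.  The
    logarithmic shape used here is an assumption standing in for Fig. 5."""
    amplitude = min(max(amplitude, floor), ceiling)
    return round(150 * math.log10(amplitude / floor) / math.log10(ceiling / floor))

def stimulation_level(amplitude, electrode, patient_map):
    """patient_map[electrode] is assumed to be a 151-entry table relating the
    0-150 value to a stimulation level between the patient's threshold and
    comfortable level for that electrode (step 6 above)."""
    return patient_map[electrode][amplitude_to_digital(amplitude)]
```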
Improvements on this assembly are disclosed in co-pending applications to Cochlear Pty. Limited. Specifically International Application PCT/AU90/00406 discloses an improved connection system between microphone 13 and speech processor 11 and between the external coil assembly 20 and the speech processor 11. The text and drawings of the specification of that application are incorporated herein by cross reference.
A noise suppression circuit is disclosed in International Patent Application PCT/AU90/00404. The text and drawings of the specification which accompanied that application are incorporated herein by cross reference.

FIG. 6 is a block diagram of the processing circuitry showing the functional interconnection of components for driving the acoustic hearing aid 14. The main components comprise microphone 13, automatic gain control 24, speech parameter extractor 25, encoder 26, patient MAP memory storage 23, noise generator 27 and acoustic aid signal processor 12.
The heart of the bimodal aid, as far as allowing the acoustic aid 14 to be driven from the speech processor 11 is concerned, is the acoustic aid signal processor 12.
The acoustic aid signal processor is software configurable and contains three two-pole filters, each of which can be used in either bandpass, lowpass or highpass configuration. The centre frequency, bandwidth and output amplitude of these filters are controlled by the processor.

The filters can be used in series or in parallel and the input waveform can be the speech waveform, pulses, a noise signal or an external signal. The external signal can be from another microphone, other acoustic output or another acoustic signal processor. This results in a particularly flexible aid that can operate either in a manner similar to a conventional acoustic hearing aid (though with more accurate gain fitting than most currently available aids can provide) or as an aid providing different types of processed speech information. The acoustic aid signal processor including the three programmable filters has been implemented on a single silicon chip. Each filter is usable as a high-pass, band-pass, or low-pass filter. 128 centre frequencies between 100 Hz and 16000 Hz, 128 Q values between 0.53 and 120, and 128 amplitude values between 0 and 64 dB are available for each filter (Q = centre frequency/B.W.). This chip has the flexibility to cover a wide range of frequencies, amplitudes and spectral shapes.
It also includes a digital-to-analog converter (DAC) that is used to produce the excitation waveform for the filters. The DAC can produce waveforms of arbitrary shape (such as sinusoidal or pulsatile) controlled directly by the processor, or can be switched to provide excitation by the speech waveform, or a white noise generator.

A schematic diagram of the three-filter circuit is shown in Fig. 7.
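To illustrate the granularity quoted above, the following sketch quantises a requested centre frequency, Q and gain onto 128-step grids; the logarithmic spacing of the frequency and Q grids and the 0.5 dB amplitude step are assumptions, only the end points and step counts being taken from the specification.

```python
import math

def _nearest_log_step(value, lo, hi, steps=128):
    """Snap `value` to the nearest of `steps` logarithmically spaced points
    between `lo` and `hi` (spacing assumed, end points from the text)."""
    value = min(max(value, lo), hi)
    ratio = math.log(value / lo) / math.log(hi / lo)
    index = round(ratio * (steps - 1))
    return lo * (hi / lo) ** (index / (steps - 1))

def quantise_filter(centre_hz, q, gain_db):
    return (
        _nearest_log_step(centre_hz, 100.0, 16000.0),   # 128 centre frequencies
        _nearest_log_step(q, 0.53, 120.0),              # 128 Q values
        min(max(round(gain_db * 2) / 2, 0.0), 64.0),    # 0-64 dB, 0.5 dB step assumed
    )
```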
A functional specification for a single chip implementation of the acoustic aid signal processor 12 is provided by FIGS. 8, 9 and 10. Details of the specification are as follows:

Topology

Fig. 8 shows the overall topology of the chip. Three programmable filters in which centre frequency and band width can be independently controlled are provided. The outputs of these filters can be independently attenuated or amplified and then mixed. The output of one of the three filters can be inverted if necessary by setting an INV bit.
Fig. 9 shows details of one of the Biquad filters forming the three filter array together with the frequency latches and Q latches which determine the parameters of the Biquad filter.

The topology of the chip can be altered from serial to parallel or a mixed structure by three PARn bits.

The signal source for this structure can be selected by a four channel multiplexer (MUX). This selects +5 volts, a buffered output of the audio signal, an internally generated noise source, or an external signal. This signal source is fed to a 7 bit digital to analog converter (DAC) as a reference voltage.

The multiplying DAC can convert the DC level into a pulse generator, or provide a fine gain control on the audio or external signal, or noise source. The most significant bit (MSB) is used to invert the output.
All filter outputs are summed and passed to a push pull earphone driver which can provide effectively 10 volts peak-to-peak across a 270 ohm (nominal) earphone. The chip uses a single supply of 5 volts.

Note that the earphone has a DC resistance of 88 ohm with the impedance rising gradually to 270 ohm at 1 kHz. The output stage consists of a bridge of P and N transistor switches as shown in Fig. 10. The switches are pulse width modulated by a signal derived from a comparator driven from a triangle wave on one input and the audio signal on the other.

The on resistance of the switches should be less than 5 ohms (lower if possible).

Apart from the class D output, there is a single ended linear output. This should be capable of sourcing or sinking 5 mA with less than 1 volt drop.
The chip is programmed by writing to the MAP of the speech processor. To distinguish between chip and MAP writes, bits A8 - A12 are decoded. Any write to the block 1800 - 18FF in the MAP will also write to the chip.

Two addresses, one odd and one even (Y13 and Y14) are decoded and ORed, and the output of these (R/W new) can be used to write to the filter chip in a more selective manner. Odd writes are used to select the MUX and even writes set the auto sensitivity control (ASC) latch on that chip.
The four lowest address bits are used to write to 14 registers which contain the programming information for the acoustic processor chip. Registers Y0 to Y11 program frequency, Q, gain (attenuation or amplification) and configuration in turn for each of the three filters. Register Y12 sets the chip topology. Register Y15 is used to write to the DAC.

Referring to Fig. 8, the topology latch Y12 is as follows:
D1, D2, D3    Topology bits
D4, D5        DAC source

D4 D5   DAC source
0  0    +5 volt
0  1    Audio
1  0    Noise
1  1    External source

Topology bits
D0 - INV inverts output of Filter 1
D1 - PAR1 sends Filter 1 output to summer
D2 - PAR2 sends Filter 2 output to summer
D3 - PAR3 sends Filter 3 output to summer
D6 - DIR sends the DAC output direct to summer

When a filter's PAR bit is set, the cascaded filter and attenuator/amplifier are sent to the summer but the filter output itself is sent to a bus so that it can be made available to other filters. When the PAR bit is not set, the filter-attenuator/amplifier combination is sent to the bus. In this way the filter gains may be scaled if the filters are cascaded.
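A sketch of how the Y12 topology latch might be assembled from the bit assignments listed above follows; the register width and the ordering of the two DAC-source bits are assumptions, since the listing above is only partly legible in the source.

```python
# Illustrative packing of the Y12 topology latch.  DAC source codes follow the
# table above (0 = +5 V, 1 = audio, 2 = noise, 3 = external); which of D4/D5
# is the more significant source bit is an assumption.

def pack_topology(inv=False, par1=False, par2=False, par3=False,
                  dac_source=1, dac_direct=False):
    return (int(inv)                    # D0 - INV: invert output of filter 1
            | int(par1) << 1            # D1 - PAR1: filter 1 output to summer
            | int(par2) << 2            # D2 - PAR2: filter 2 output to summer
            | int(par3) << 3            # D3 - PAR3: filter 3 output to summer
            | (dac_source & 0b11) << 4  # D4, D5 - DAC source select
            | int(dac_direct) << 6)     # D6 - DIR: DAC output direct to summer
```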
Filter programming bits

Configuration latches Y3, Y7, Y11
D0, D1, D2    filter type select
D3, D4        clock select
D5, D6        filter input

Filter input selection

The filter inputs selected by D6 and D5 vary as shown below:

Filter    Input 0    Input 1    Input 2    Input 3
1         DAC        AUDIO      FILT2      FILT3 <INVERTER
3         DAC        NOISE      FILT1      FILT2

A clock pre-scaler is provided to extend the frequency ranges of the filters. This is done by dividing the clock by 2 or 4 before feeding it to the filter's own divider.
Decoding is as follows:

D4  D3  DIVR  Divider input
0   0   1     5 MHz
0   1   2     2.5 MHz
1   0   4     1.25 MHz
1   1   1     5 MHz, but 2 switches (HF) are opened in the filter to give double the centre frequency.
Filter type
With reference to Fig. 9, the filters consist of two integrators in a loop with a variable gain feedback path.
The input may be a switched or an unswitched capacitor; it may be applied to either the first or second input, and the output may be taken from either the first or second integrator. This produces various different transfer functions as given below.
Mode  D2 (capacitor)  D1 (input)  D0 (output)  Function
0     SW              1st         1st          Power down
1     SW              1st         2nd          Lowpass
2     SW              2nd         1st          Lowpass
3     SW              2nd         2nd          Bandpass
4     US              1st         1st          Highpass
5     US              1st         2nd          Bandpass
6     US              2nd         1st          Bandpass
7     US              2nd         2nd          Highpass

In this table a zero bit selects the particular condition shown. Thus mode 0, i.e. D0, D1 and D2 all zero, powers down the filter and switches its output off.
In some cases it is desirable to shut all other functions off. This is done by an external pin, PDB, to power down bimodal operation. Only the RAM cell monitor is still operational.
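Pulling the preceding fields together, a per-filter configuration latch (Y3, Y7 or Y11) could be decoded as in the sketch below; the field layout follows the description above and the symbolic names are assumptions.

```python
# Illustrative decoding of a per-filter configuration latch (Y3, Y7 or Y11):
# D0-D2 filter type, D3-D4 clock select, D5-D6 input select (names assumed).

FILTER_TYPES = {0: "power down", 1: "lowpass", 2: "lowpass", 3: "bandpass",
                4: "highpass", 5: "bandpass", 6: "bandpass", 7: "highpass"}
CLOCK_SELECT = {0: (1, 5.0e6), 1: (2, 2.5e6), 2: (4, 1.25e6), 3: (1, 5.0e6)}

def decode_config_latch(value):
    mode = value & 0b111
    clock = (value >> 3) & 0b11
    divr, divider_input_hz = CLOCK_SELECT[clock]
    return {
        "filter_type": FILTER_TYPES[mode],
        "switched_capacitor_input": mode in (0, 1, 2, 3),   # SW modes 0-3
        "clock_divider": divr,
        "divider_input_hz": divider_input_hz,
        "hf_centre_doubling": clock == 3,     # D4 = D3 = 1 opens the HF switches
        "input_select": (value >> 5) & 0b11,  # interpretation depends on the filter
    }
```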
The programmable filter clock divider comprises a 7 bit D ripple counter which is compared with the contents of the frequency latch. The first match produces a counter reset.
The frequency latch must be fed with the complement of the count required. If for example a reset is required after a count of 2 then the latch is all high except for bit D1 which is low. The outputs of all the NOR gates connected to the latch will be low except for the one connected to D1. Now if the counter counts up from zero, on the first occasion of Q2 going high, this NOR will also go low and the 7 input NOR at the output will reset the counter.
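A small sketch of this complement encoding follows; the 7-bit width is taken from the description above.

```python
# Illustrative computation of the frequency latch value: the latch holds the
# bitwise complement (7-bit) of the count at which the divider must reset.

def frequency_latch_value(count):
    if not 1 <= count <= 127:
        raise ValueError("count must fit in the 7-bit divider")
    return (~count) & 0x7F

# Example from the text: a reset after a count of 2 leaves all latch bits high
# except D1.
assert frequency_latch_value(2) == 0b1111101
```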
The filter Q is programmed by using a capacitor which has a 4 bit binary sequence, together with a 3 bit programmable resistive divider. The resistors are programmed by a number n which is represented by bits D4-D6 in the Y1 latch (Y5 and Y9 for filters 2 and 3). The capacitors are programmed by m, represented by bits D0-D3.

The Q is given by:

Q = 8(1 + 1.875n)/m

By using switches, the feedback resistor of an amplifier can be configured to be like a buffered attenuator, i.e. the output can drive another attenuator or inverter.
Two separate sections are used, one to give seven 4 dB steps giving a total of +/- 28 dB and another to give eight 0.5 dB steps with a maximum of +/- 3.5 dB. The total range is therefore +/- 31.5 dB. Two 8 channel MUXes are used to select the required taps on the potential dividers.

The attenuator/amplifiers can be selected by addressing latches Y2, Y6 and Y10. Bit D6 = 1 gives attenuation, 0 amplification.
To set the gain: Y2 = gain in dB x 2.

To set the attenuation: Y2 = 64 - (atten in dB x 2).

If Y2 is set to 0 (all bits low) then the attenuator/amplifier is powered down and switched off.

Note that setting HF adds 6 dB to the gain.
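By way of illustration, register values for a desired Q and gain could be derived as in the sketch below. The Q relation and the "x 2" scaling are as read from the partly garbled text above and should be treated as assumptions, as should the way the attenuation value combines with the D6 flag.

```python
# Illustrative register calculations for the programmable filters.

def q_codes(target_q):
    """Choose the resistor code n (0-7, bits D4-D6) and capacitor code m
    (1-15, bits D0-D3) that best approximate Q = 8(1 + 1.875n)/m."""
    return min(((n, m) for n in range(8) for m in range(1, 16)),
               key=lambda nm: abs(8 * (1 + 1.875 * nm[0]) / nm[1] - target_q))

def gain_register(gain_db):
    """Y2/Y6/Y10 value for amplification, per 'Y2 = gain in dB x 2'."""
    return round(gain_db * 2)

def attenuation_register(atten_db):
    """Y2/Y6/Y10 value for attenuation, per 'Y2 = 64 - (atten in dB x 2)';
    how this interacts with the D6 attenuation flag is not fully clear from
    the source, so this is an assumption."""
    return 64 - round(atten_db * 2)
```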
The attenuator is arranged so that its gain is not changed except when the signal passes through zero. This is to prevent clicks and pops during gain changes. A zero crossing detector, which produces a pulse on either a positive or negative going zero crossing, is used to strobe a latch which transfers the required gain to the attenuator MUXes.
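A behavioural sketch of this click-free gain change is given below; sample-by-sample processing is a simplification of the hardware latch-and-MUX arrangement and the details are assumptions.

```python
# Illustrative behaviour of the zero-crossing-gated gain change: a pending
# gain only takes effect when the signal passes through zero.

class ClickFreeGain:
    def __init__(self, gain=1.0):
        self.active_gain = gain
        self.pending_gain = gain
        self._last_sample = 0.0

    def set_gain(self, gain):
        self.pending_gain = gain            # applied at the next zero crossing

    def process(self, sample):
        if (sample >= 0.0) != (self._last_sample >= 0.0):
            self.active_gain = self.pending_gain
        self._last_sample = sample
        return sample * self.active_gain
```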
The multiplying DAC is a 6 bit resistive ladder type and multiplies the input REF, selected by the SOURCE SELECT (Fig. 8), by the digital quantity in latch Y15. Bit D6 inverts the output. The settling time is 50 microseconds.
The acoustic aid processor 12, by its flexible and programmable construction, allows many signal processing strategies to be tried and ultimately settled upon so as to best adapt the acoustic aid to provide information to the wearer which complements information received from an implant aid worn in the other ear of the wearer. This same flexibility and programmability also can be used to tailor the bimodal processor for operation of an acoustic hearing aid only.
In both cases it is the combination of a single microphone, the preprocessing capabilities of the speech processor 11 and the flexibility and programmability of the acoustic aid processor 12 which provides features and advantages not found in hearing aid devices to date.
The bimodal aid will now be described when used to drive an acoustic hearing aid only. However, the modes of operation to be described in relation thereto are equally usable to help obtain the complementary behaviour mentioned above in the first embodiment in relation to the use of both an acoustic aid and an implant aid by a single wearer. The following description, therefore and to that extent, should be taken as applying equally to the first embodiment.
It should be understood that the nature of the complementary behaviour between the two aids is subjective and is determined by a combination of iterative testing and wearing experience. The structure of the bimodal aid described herein allows this complementarity to be achieved.

The testing procedures and methods for storing desired patient parameters in the MAP memory storage 23 will be described later in the specification.
2.0 SECOND EMBODIMENT - BIMODAL AID USED AS ACOUSTIC AID ONLY
The inherent flexibility of the acoustic aid signal processor 12 incorporating the three software configurable filters provides for an almost unlimited degree of flexibility of processing signals in the frequency domain received from the speech processor 11 and destined for the acoustic aid 14. Four particular modes of operation have been identified as desirable and achievable by the acoustic aid signal processor 12.
Four basic modes of operation of the acoustic output to the acoustic hearing aid are available and these are shown schematically in Figure 11. Each mode encompasses a large number of variations.
In mode 1, the filter parameters are set by the audiologist during the iterative fitting procedure and remain fixed thereafter. In modes 2-4, the speech parameter extraction circuits provide instantaneous information about the speech signal that is used to change the filter parameters dynamically while the aid is in use. In modes 2 and 3 the input signal is the speech waveform. The output signal is manipulated in different ways by controlling the filters to emphasize the chosen parts of the waveform (such as the formants) and to attenuate other parts (such as the background noise). In mode 4 the speech waveform is used only by the speech parameter extractor, and the output waveform is synthesized completely using the speech parameters. The differences between the original speech waveform and the output of the hearing aid become greater as one progresses from mode 1 to 4, and the control over the frequency spectrum and intensity of the output signal also increases.
2.1 Mode 1 - Frequency Response Tailoring

In this mode the acoustic output is tailored to match the patient's hearing loss. The 6 poles of filtering enable this to be done accurately (usually within 2 dB of the ideal gain specified by the audiologist at all frequencies) and the Automatic Gain Control allows the limited dynamic range of the residual hearing to be used.
The acoustic signal processor of the second embodiment, configured in mode 1, provides both operational and practical advantages over conventional hearing aids. These advantages can best be appreciated by considering the steps involved in setting up both types of hearing aid for operation:
a) The conventional aid: The majority of commercially available hearing aids merely amplify, and sometimes compress, the incoming sound. To fit one of these aids the audiologist would normally measure the user's thresholds using an audiometer and calculate the appropriate ideal gain by hand using a prescribed fitting rule (e.g. the National Acoustic Laboratories (NAL) rule, Byrne and Dillon, 1986). The audiologist would then search through the specifications of the aids stocked in the clinic to find one with a gain that most closely resembled the ideal gain. On all aids some changes can be made by the audiologist, although the amount of control depends on the type of aid. The features that can be varied may include any combination of the overall gain, the maximum output and the level at which compression begins. Frequency specific variation of gain, if available at all, is usually only in two frequency bands corresponding to 'high' frequencies and 'low' frequencies respectively. Behind-the-ear aids and body-worn aids, though less cosmetically acceptable, usually offer greater scope for change by the audiologist than in-the-ear aids. This is because, with these types of aid, the acoustic properties of the tube and earmould can be varied in addition to the controls on the aid itself. In-the-ear aids also require an earmould to be made specifically for that aid by the manufacturer. This is a costly and time consuming business. This makes testing and comparing in-the-ear aids difficult and expensive and many clinics avoid using them. In-the-ear aids are also usually more limited in their maximum output and are therefore not often suitable for more severe hearing losses.
When the aid is configured it is then tested on the client. If it proves unacceptable the audiologist must choose and reconfigure another sort of aid. This is repeated until an aid is found that the client considers acceptable.
b) The speech-processing hearing aid of the second embodiment (mode 1): the audiologist measures the client's hearing thresholds and any other hearing levels that might be needed for the strategies to be tested, e.g. maximum comfortable level (MCL). The measurements are made using the hearing aid and diagnostic and programming unit with associated configuring software rather than a separate audiometer. These values are then stored in a data file automatically. A strategy is chosen and the aid is configured accordingly, taking a maximum time of about five minutes. Calculation and fitting of ideal gain is done automatically and can be quickly accessed in a graphical form at any time by the audiologist. The configured aid is then presented to the subject for evaluation. Different fittings can be tried in quick succession until an appropriate one is found.
Hence, the advantages are:

1) The actual device, earmould and transducer are used for measurement of client thresholds, allowing more accurate assessment of the ideal gain required for the device. For conventional hearing aids these measures are usually made using headphones and the effect of the earmould acoustics is estimated separately. For in-the-ear aids it is not possible to measure earmould acoustics before fitting because the mould and the aid are manufactured together.
2) Different fitting procedures (e.g. the NAL formulation, Byrne and Dillon, 1986) can be implemented for testing very quickly without requiring a change of aid because the changes are programmed in software rather than by hardware adjustment.

3) The gain fitting can often be more accurate than is possible on many commercially available aids, and different fittings modelling various available aids can be tried in quick succession, because of the flexible programming of frequency responses.
4) The audiologist can change the ideal gain function at will if he/she believes that the ideal gain based on the client's threshold measurements is not optimal for that client. With many conventional hearing aids this can only be done grossly by changing the gain in "high" or "low" frequency bands or by choosing a different aid with quoted specifications closer to the new requirement for the client.
5) Information about the fitting is available to the audiologist at any stage, thus giving them more "on-line" control over the fitting than with any aid on the market.
6) The calculation of ideal gain is done automatically for most fitting procedures (with the exception of those used on insertion gain bridges, where this has to be done by hand) and thus the new device saves time and removes a possible source of error.
In summary, the device of the second embodiment can be configured exactly as many conventional aids, often more accurately. Setting up and testing the device are quicker, more efficient, and less prone to sources of error.
Fig. 12 provides an example fitting in mode 1 which is achievable utilising the acoustic aid signal processor 12 of the bimodal speech processor.
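By way of illustration only, the sketch below shows how filter settings approximating an ideal frequency/gain curve might be chosen by a coarse search; the biquad response model, the parallel power summation and the search grid are assumptions, and the sketch is not the fitting software referred to later in this specification.

```python
import itertools
import math

def bandpass_gain_db(f, fc, q, gain_db):
    """Idealised band-pass magnitude response in dB (an assumed model)."""
    x = f / fc - fc / f
    return gain_db - 10 * math.log10(1 + (q * x) ** 2)

def fit_three_filters(target):
    """target: dict {frequency_hz: desired_gain_db}.  Returns one (fc, Q, gain)
    triple per filter, found on a coarse grid."""
    grid = list(itertools.product([250, 500, 1000, 2000, 4000],   # fc (Hz)
                                  [0.7, 2.0],                      # Q
                                  range(0, 61, 10)))               # gain (dB)

    def worst_error(settings):
        err = 0.0
        for f, want in target.items():
            # Parallel filter outputs combined on a power basis (phase ignored).
            total = sum(10 ** (bandpass_gain_db(f, fc, q, g) / 10)
                        for fc, q, g in settings)
            err = max(err, abs(10 * math.log10(total) - want))
        return err

    return min(itertools.combinations(grid, 3), key=worst_error)

# Example target curve (hypothetical): rising gain towards high frequencies.
# fit_three_filters({250: 20.0, 500: 25.0, 1000: 30.0, 2000: 40.0, 4000: 45.0})
```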
2.2 Mode 2 - Loudness Mapping

This is similar to mode 1 except that the level output at any specific frequency is mapped non-linearly on a frequency specific basis by dynamically changing the gain parameters of the three filters in response to amplitude and frequency variations measured by the processor. This requires that the audiologist measure the maximum comfort levels to which maximum amplitude can be mapped, in addition to client thresholds. The advantage of this mode is that it makes a more accurate mapping of dynamic range possible. Hence, if used appropriately, the relative loudness of the spectral components is preserved. This may be better than mode 1 for users whose dynamic range changes a lot as a function of frequency. This method of loudness control avoids many of the undesirable spectral distortions that accompany more commonly used schemes such as peak limiting and non-linear compression.
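A minimal sketch of such a frequency-specific loudness mapping is given below, assuming levels expressed in dB and a simple linear-in-dB compression of the normal range onto the residual range between threshold and MCL; the mapping shape is an assumption.

```python
# Illustrative loudness mapping for one band: the normal hearing range is
# compressed onto the patient's residual range between threshold and MCL.

def loudness_map(input_db, band, thresholds_db, mcls_db,
                 normal_floor_db=0.0, normal_ceiling_db=100.0):
    """thresholds_db[band] and mcls_db[band] are the measured threshold and
    maximum comfortable level for that band."""
    fraction = (input_db - normal_floor_db) / (normal_ceiling_db - normal_floor_db)
    fraction = min(max(fraction, 0.0), 1.0)
    return thresholds_db[band] + fraction * (mcls_db[band] - thresholds_db[band])
```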
2.3 Mode 3 - Dynamic Enhancement of Spectral Features

In this mode the frequency parameters of the three filters are changed dynamically (unlike modes 1 and 2 where they are fixed). When the values are made to change in a manner depending on the speech parameters measured by the processor, then salient speech features can be enhanced. This gives rise to a wide range of possible speech-processing strategies in this mode. For example the centre frequencies of two bandpass filters can be used to track the F1 and F2 (first and second formant) peaks in the signal. This acts as both a form of noise cancellation and also a removal of parts of the signal that might mask the information in the peaks to be traced. The resulting signal after filtering is amplified to the appropriate loudness for the user on a frequency-specific basis as in mode 2. This may be most useful for users with impaired frequency resolution as well as raised thresholds. The device used in this mode can also be used to amplitude modulate the signal at the fundamental frequency (F0), which can be another way of enhancing this parameter.
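As a simple illustration of such a strategy, the sketch below slews the centre frequencies of two band-pass filters towards the formant estimates measured each frame; the smoothing constant and fixed Q are assumptions.

```python
# Illustrative formant-tracking filter control: each frame, the two band-pass
# centre frequencies move towards the measured F1 and F2 estimates.

class FormantTracker:
    def __init__(self, smoothing=0.5, q=8.0):
        self.f1_centre = 500.0
        self.f2_centre = 1500.0
        self.q = q
        self.smoothing = smoothing

    def update(self, f1_estimate_hz, f2_estimate_hz):
        """Return the new (centre_hz, Q) settings for the two filters."""
        a = self.smoothing
        self.f1_centre += a * (f1_estimate_hz - self.f1_centre)
        self.f2_centre += a * (f2_estimate_hz - self.f2_centre)
        return (self.f1_centre, self.q), (self.f2_centre, self.q)
```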
The most similar commercially available devices are the "noise-cancelling" hearing aids, such as those containing the "Zeta-noise-blocker" chip. These devices calculate an average long-term spectrum that represents the background noise and this noise is then filtered out of the signal along with any speech that happens to be at the same frequencies. This mode 3 scheme is based on enhancement of the speech signal at the measured formant frequencies rather than cancellation of noise. This means that speech information which is close in frequency to the noise will not be lost, although the noise further from the formants will be reduced. The scheme will also enhance the selected speech features in quiet conditions as well as in noisy conditions.
Fig. 13 provides an example of mode 3 wherein selective peak sharpening is performed by the acoustic aid signal processor 12.
2.4 Mode 4 - Speech Reconstruction

This mode differs from the other modes of operation of the second embodiment in that the user does not receive a modified version of the input signal, but a completely synthesized signal constructed using parameters extracted by the speech processor. The signal can be reconstructed in many different ways depending on the user's hearing loss.
This reconstruction provides very tight control over the signals presented and hence allows very accurate mapping onto the user's residual hearing abilities. It may be most useful for users with very limited hearing for whom normal amplification would provide no open set speech recognition.
A second example of the use of this mode is for frequency transposition. Sounds normally occurring at frequencies that are inaudible to the user can be represented by synthesized signals within the audible range for that user. Such schemes have been attempted in the past, but not using a completely re-synthesized waveform as in the present case. The re-synthesis scheme has been shown to work for electrical stimulation with cochlear implant users and may be of benefit to severely-to-profoundly impaired hearing aid users as well.
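By way of illustration, the sketch below re-synthesises a frame entirely from extracted parameters (here F0 and two formant frequencies with their amplitudes), transposing any formant above an assumed audible limit down by octaves; the sinusoidal synthesis model and the transposition rule are assumptions, not the scheme referred to above.

```python
import math

def synthesise_frame(f0_hz, formants_hz, amplitudes, audible_limit_hz,
                     n_samples=160, sample_rate=16000):
    """Return one frame of a synthetic speech-like waveform built only from
    the extracted parameters (mode 4).  All modelling choices are assumptions."""
    out = []
    for n in range(n_samples):
        t = n / sample_rate
        sample = 0.0
        for freq, amp in zip(formants_hz, amplitudes):
            while freq > audible_limit_hz:              # simple octave transposition
                freq /= 2.0
            harmonic = max(round(freq / f0_hz), 1) * f0_hz   # keep on a harmonic of F0
            sample += amp * math.sin(2 * math.pi * harmonic * t)
        out.append(sample)
    return out

# Example: a voiced frame with F1 = 500 Hz and F2 = 2500 Hz for a user who
# hears nothing above 1 kHz; F2 is presented at 625 Hz.
frame = synthesise_frame(125.0, [500.0, 2500.0], [0.6, 0.3], audible_limit_hz=1000.0)
```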
Each mode of operation allows a wide range of potential strategies. The modes are not discrete, and some strategies that combine elements from different modes can be implemented. For example, a reconstructed signal representing F0 information (mode 4) can be added to a filtered speech signal (mode 3).
3.0 USE OF THE BIMODAL AID

With reference to Fig. 2, the bimodal aid is programmed by use of a diagnostic and programming unit 21 which communicates with the speech processor 11 and, in turn, with the acoustic aid processor 12 by way of a diagnostic programming interface 22. The diagnostic and programming unit 21 is implemented as a program on a personal computer. The interface 22 is a communications card connected on the PC bus.
Software has been written to find the optimum filter settings to produce the frequency/gain characteristic specified by the audiologist, for use in the frequency response tailoring mode of operation described above. Software for the other modes of operation has also been written and tested.
With reference to Figs. 14, 15 and 16 the basic procedure for use of the bimodal device is as follows.
In bimodal use the fitting procedure is as outlined in flow chart form in Fig. 14. The bimodal MAP is produced on the personal computer following an iterative testing procedure of the subjective performance of the bimodal aid for a multiplicity of trial settings of the MAP.
Fig. 15 outlines the procedure in flow chart form with particular reference to obtaining an optimum setting for the acoustic aid 14 in mode 1.

Fig. 16 outlines the basic interaction between the control program in the diagnostic and programming unit and the fitting and mapping procedures performed by the audiologist.

The above describes only some embodiments of the present invention and modifications obvious to those skilled in the art can be made without departing from the scope and spirit of the present invention.
.~ ' ' ' ~ :: ' : .
', , , W092/08330 2 ~ 9 ~ ~ ~ 4 PCT/AU91/00506 exception of those used on insertion gain bridges, this has to be done by hand) and thus the new device saves time and removes a possible source of error.
In summary, the device of thP second embodiment can be configured exactly as many conventional a:ids, often more accurately. Setting up and testing the device are quicker, more efficient, and less prone to sources of error.
Fig. 12 provides an example f~tting in mode l which is achievable utilising the acoustic aid signal processor 12 of . lQ the bimodal speech proce~sor. - - ~
2.2 Mode 2 - Loudnes~ MaPpinq This is similar to mode l except that the level output at any ~pecl~ic frequency ~s mapped non-linearly on a ~requency speclfic baqis by dynamlcally changing the gain parameters of the three fllters in response to amplitude and ; frequency variations measured by the processor. This requires that the audiologist measure the maximum comfort levels to which maximum amplltude can be mapped in addition to client thresholds. The advantage of this mode is that it makes a more accurate mapping of dynamic range possible.
Hence, if used appropriately, the relative loudness of the spectral components i9 preserved. This may be better than mode l for users whose dynamic range changes a lot as a function of frequency. This method of loudness control avoids many of the undesirable spectral distortlons that accompany more commonly used schemes such as peak limiting and non-11near corpr~ssion.
.
W092/08330 ~ ~ PCT/AU91/00506 2.3 Mode 3 - Dynamic Enhancement of S~ectral Features In this mode the frequency paramet:ers of the three filters are changed dynamically (unli~e modes 1 and 2 where they are fixed). When the values are made to chanse in a manner depending on the speech parametets measured by the processor then salient speech features can be enhanced. This gives rise to a wide range o~ possible speech-processing strategies in this mode. For example the centre frequencies of two bandpass filters can be used to track the F1 and F2 (first and second formant) peaks~~in-the signal. This acts as both a form of noise cancellation and also a removal of parts of the signal that migh~ mask the information in the peaks to be traced. The resulting signal a~ter flltering :Ls amplified to the appropriate loudness for the user on a frequency-specific basis as in mode 2. Thlq may be most u~e~ul ~orusers with impaired frequency resolution as well as raised thresholds. The device used in thls mode can also be used to amplitude modulate the signal at the fundamental frequency (FO) which can be another way of enhancing this parameter.
The most similar commercially available device~ are the "noise-cancelling" hearing aids such as those containing the "Zeta-noise-blocker" chip. These devices calculate an average long-term spectrum that represents the background noise and this noise is then filtered out of the signal along with any speech that happens to be at the same frequencies.
This mode 3 scheme i~ based on enhancement of the speech .
signal at the measured formant frequencies rather than : ' : .
W092/08330 ~ 9 ~ 3 ~ 4 PCT/AU91/00506 cancellation of noise. This means tha~ speech information which is close in frequency to the noise will not be lost although the noise further from the formants will be reduced.
The schemP will also enhance the seiected speech features in quiet conditions as well as in noisy conditions.
Fig. 13 provides an example of mode 3 wherein selective peak sharpening is performed by the acoustic aid signal processor 12.
2.4 Mode 4 - SPeech Reconstruction - This mode differs from the-other-modes of operation of the second embodiment in that the user does not receive a modified version of the input signal, but a completely synthesized signal constructed using parameters extracted by the speech processor. The signal can be reconstructed in many different ways depending on the u~er's hearing loss.
This reconstruction provides very tight control over the signals presented and hence allows very accurate mapping onto the user's residual hearing abilities. It may be most useful for users with very limlted hearing for whom normal amplification would provide no open set speech recognition.
A second example of the use of this mode is for frequency transposition. Sounds normally occurring at frequencies that are inaudible ~o the user can be represehted by synthesized signals within the audible range ~or that user. Such schemes have been attempted ln the past, but not using a completely re-synthesized waveform as in the present case. The re-synthesis scheme has been shown to work for :'' :' .
... .
W092/08330 2 Q ' ~ '~ PCT/AU91/OOS06 -3~- ~
electrical stimulation with cochlear implant users and may be of benefit to severely-to-profoundly impaired hearing aid users as well.
Each mode of operation allows a wide range of potential strategies. The modes are not discrete ,and some strategies that combine elements from different modes can be implemented. For example, a reconstructed signal representing FO information (mode 4) can be added to a filtered speech signal ~mode 3).
3.0 USE OF THE BIMODAL AID
With reference to Fig. 2, the bimodal aid is programmed by use of a diagnostic and programming unit 21 which communicates with the speech processor 11 and, in turn, with the acoustic aid processor 12 by way of a diagnostic programming interface 22. The diagnostic and programming unit 21 is implemented as a program on a personal computer.
The interface 22 is a communications card connected to the PC bus.
Software has been written to find the optimum filter settings to produce the frequency-gain characteristic specified by the audiologist, for use in the frequency response tailoring mode of operation described above.
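The patent does not describe how that software works; the sketch below shows one plausible approach, fitting the centre frequency, bandwidth and gain of a single band-pass stage to a target frequency-gain curve by least squares. All numbers are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import butter, freqz

FS = 16000  # assumed sample rate (Hz)

def filter_response_db(params, freqs_hz, fs=FS):
    """Magnitude response (dB) of one band-pass stage described by
    (centre Hz, bandwidth Hz, gain dB)."""
    centre, bw, gain_db = params
    low = np.clip(centre - abs(bw) / 2, 50.0, fs / 2 - 100.0)
    high = np.clip(centre + abs(bw) / 2, low + 10.0, fs / 2 - 50.0)
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    _, h = freqz(b, a, worN=2 * np.pi * freqs_hz / fs)
    return gain_db + 20 * np.log10(np.abs(h) + 1e-9)

def fit_filter(target_db, freqs_hz):
    """Search for the centre/bandwidth/gain that minimise the squared error
    against the audiologist's target frequency-gain curve."""
    cost = lambda p: np.sum((filter_response_db(p, freqs_hz) - target_db) ** 2)
    start = np.array([1000.0, 1000.0, 0.0])
    return minimize(cost, start, method="Nelder-Mead").x

# Toy target: roughly 20 dB of gain between 500 Hz and 2 kHz.
freqs = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])
target = np.array([0.0, 20.0, 20.0, 20.0, 0.0])
best_centre, best_bw, best_gain = fit_filter(target, freqs)
```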
Software for programming the other modes of operation has also been written and tested.
With reference to Figs. 14, 15 and 16, the basic procedure for use of the bimodal device is as follows.
In bimodal use the fitting procedure is as outlined in flow chart form in Fig. 14. The bimodal MAP is produced on the personal computer following an iterative testing procedure of the subjective performance of the bimodal aid for a multiplicity of trial settings of the MAP.
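A schematic rendering of that iterative procedure, using hypothetical data structures and a stand-in scoring function; in practice the score would come from the subjective testing of the user described above.

```python
def fit_bimodal_map(trial_maps, score_fn):
    """Present each candidate MAP, score the user's subjective performance,
    and keep the best-scoring candidate (cf. the flow chart of Fig. 14)."""
    best_map, best_score = None, float("-inf")
    for candidate in trial_maps:
        score = score_fn(candidate)
        if score > best_score:
            best_map, best_score = candidate, score
    return best_map

# Usage with placeholder candidates and an arbitrary stand-in score.
candidates = [{"gain_db": g, "f0_modulation": m}
              for g in (10, 20, 30) for m in (False, True)]
chosen = fit_bimodal_map(candidates,
                         score_fn=lambda m: m["gain_db"] - 5 * m["f0_modulation"])
```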
Fig. 15 outlines the procedure in flow chart form with particular reference to obtaining an optimum setting for the acoustic aid 14 in mode 1.
Fig. 16 outlines the basic interaction between the control program in the diagnostic and programming unit and the fitting and mapping procedures performed by the audiologist.
The above describes only some embodiments of the present invention, and modifications obvious to those skilled in the art can be made without departing from the scope and spirit of the present invention.
Claims (25)
1. A bimodal aid for the hearing impaired which includes processing means adapted to receive and process audio information received from a microphone; said processing means supplying processed information derived from said audio information to an implant aid adapted to be implanted in a first ear of a patient and to an acoustic aid adapted to be worn in or adjacent a second ear of said patient whereby binaural information is provided to said patient.
2. The bimodal aid of claim 1 wherein said processing means comprises an implant aid speech processor and an acoustic aid signal processor; said implant aid speech processor adapted to operate on said audio information so as to electrically stimulate said implant aid; said acoustic aid signal processor operating on said audio information and said processed information received from said implant aid speech processor so as to stimulate said acoustic aid.
3. The bimodal aid of claim 2 wherein said implant aid includes a plurality of electrodes which, when stimulated by said implant aid speech processor, apply electrical stimuli directly to the cochlea of said patient.
4. The bimodal aid of claim 2 wherein said implant aid speech processor processes said audio information according to a multi-peak strategy.
5. The bimodal aid of claim 2 wherein said acoustic aid signal processor includes an electronically configurable sound/speech processor which includes filter means whose parameters can be electronically varied according to information stored in said bimodal aid.
6. The bimodal aid of claim 5 wherein said filter means comprises an array of three filters whose parameters and interconnection can be varied according to said information stored in said bimodal aid.
7. The bimodal aid of claim 5 or claim 6 wherein said bimodal aid includes configuration means and signal processing means; said implant aid speech processor adapted to receive audio information and to process said audio information by said signal processing means in accordance with parameters set by said configuration means so as to produce an output signal adapted to stimulate said acoustic aid; said configuration means adapted to receive one or more of electronic signal input and/or said information stored in said acoustic aid for the purpose of modifying said parameters.
8. The bimodal aid of claim 7, wherein said electronically configurable sound/speech processor utilises speech features and voiced/voiceless sound decisions to produce said output signal adapted to stimulate a hearing aid transducer.
9. The bimodal aid of claim 8 wherein said signal processing means includes means for dynamically changing the gain applied to different frequency bands in the defined speech spectrum as a function of selected ones of said speech features so that the loudness in these bands is appropriately scaled between the threshold and maximum comfortable levels of the hearing aid user.
10. The bimodal aid of claim 9 wherein said signal processing means includes said filter means whose settings may further be dynamically varied according to speech parameters extracted by said signal processing means whereby said filter means dynamically adapts said output signal to overcome the effects of noise and/or particular deficiencies in the hearing of a user.
11. The bimodal aid of claim 10, wherein said signal processing means includes means for reconstructing speech signals in real time whereby the amplitude and/or frequency characteristics of said output signal can be controlled so as to enhance speech recognition in a user.
12. The bimodal aid of claim 11 wherein in a first mode of operation of said electronically configurable sound/speech processor, said filter means is set by said configuration means based on measurements made by an audiologist during a hearing aid fitting procedure and remains fixed thereafter.
13. The bimodal aid of any one of claims 7 to 12 wherein said output signal is synthesised by said signal processing means utilising only speech parameters.
14. A method of control of a hearing aid and a cochlear implant by means of the aid of any one of claims 1 to 13.
15. A bimodal aid for the hearing impaired comprising a sound/speech processor electrically connected to an acoustic aid adapted to be worn adjacent to or in the first ear of a patient and electrically connected to a cochlear implant adapted to be located in the second ear of said patient to directly stimulate the auditory nerve of said patient; said speech processor receiving and processing audio input information so as to produce an acoustic signal from said acoustic aid and an electrical signal from said cochlear implant whereby coherent binaural information is provided to said patient.
16. An electronically configurable sound/speech processor for the hearing impaired, said sound/speech processor including configuration means and signal processing means;
said sound/speech processor adapted to receive audio information and to process said audio information by said signal processing means in accordance with parameters set by said configuration means so as to produce an output signal adapted to stimulate a hearing aid transducer; said configuration means adapted to receive one or more of electronic signal input or stored information for the purpose of modifying said parameters.
17. The electronically configurable sound/speech processor of claim 16, wherein said sound/speech processor utilises speech features and voiced/voiceless sound decisions to produce said output signal adapted to stimulate a hearing aid transducer.
18. The electronically configurable sound/speech processor of claim 17 wherein said signal processing means includes means for dynamically changing the gain applied to different frequency bands in the defined speech spectrum as a function of selected ones of speech features so that the loudness in these bands is appropriately scaled between the threshold and maximum comfortable levels of the hearing aid user.
19. The electronically configurable sound/speech processor of claim 16 wherein said signal processing means includes filter means whose settings are dynamically varied according to speech parameters extracted by said signal processing means whereby said filter means dynamically adapts said output signal to overcome the effects of noise and/or particular deficiencies in the hearing of a user.
20. The electronically configurable sound/speech processor of claim 19, wherein said signal processing means includes means for reconstructing speech signals in real time whereby the amplitude and/or frequency characteristics of said output signal can be controlled to enhance speech recognition in a user.
21. The electronically configurable sound/speech processor of claim 20 wherein in a first mode of operation of said electronically configurable sound/speech processor, said filter means is set by said configuration means based on measurements made by an audiologist during a hearing aid fitting procedure and remains fixed thereafter.
22. The electronically configurable sound/speech processor of claim 20 wherein said sound/speech processor includes said filter means whose parameters may further be changed dynamically while said processor is in use providing said output signal to said hearing aid transducer in accordance with information provided to said configuration means by speech parameter extraction means acting on said audio information.
23. The electronically configurable sound/speech processor of any one of the claims 16 to 22 wherein said output signal is synthesised by said signal processing means utilising only speech parameters.
24. A method of control of both a hearing aid and a cochlear implant by means including the sound/speech processor of any one of claims 16 to 23.
25. A bimodal aid including the sound/speech processor of any one of claims 16 to 23.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AUPK314490 | 1990-11-01 | ||
AUPK3144 | 1990-11-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2095344A1 true CA2095344A1 (en) | 1992-05-02 |
Family
ID=3775048
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002095344A Abandoned CA2095344A1 (en) | 1990-11-01 | 1991-11-01 | Bimodal speech processor |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP0555278A4 (en) |
JP (1) | JPH06506322A (en) |
CA (1) | CA2095344A1 (en) |
WO (1) | WO1992008330A1 (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ATE225590T1 (en) | 1993-07-01 | 2002-10-15 | Univ Melbourne | COCHLEAR IMPLANT DEVICES |
US5651071A (en) * | 1993-09-17 | 1997-07-22 | Audiologic, Inc. | Noise reduction system for binaural hearing aid |
US5511128A (en) * | 1994-01-21 | 1996-04-23 | Lindemann; Eric | Dynamic intensity beamforming system for noise reduction in a binaural hearing aid |
US5626629A (en) * | 1995-05-31 | 1997-05-06 | Advanced Bionics Corporation | Programming of a speech processor for an implantable cochlear stimulator |
US6005955A (en) * | 1996-08-07 | 1999-12-21 | St. Croix Medical, Inc. | Middle ear transducer |
US5836863A (en) * | 1996-08-07 | 1998-11-17 | St. Croix Medical, Inc. | Hearing aid transducer support |
US5814095A (en) * | 1996-09-18 | 1998-09-29 | Implex Gmbh Spezialhorgerate | Implantable microphone and implantable hearing aids utilizing same |
US5935166A (en) | 1996-11-25 | 1999-08-10 | St. Croix Medical, Inc. | Implantable hearing assistance device with remote electronics unit |
CA2323983A1 (en) * | 2000-10-19 | 2002-04-19 | Universite De Sherbrooke | Programmable neurostimulator |
US6730015B2 (en) | 2001-06-01 | 2004-05-04 | Mike Schugt | Flexible transducer supports |
US7529587B2 (en) | 2003-10-13 | 2009-05-05 | Cochlear Limited | External speech processor unit for an auditory prosthesis |
WO2005097255A1 (en) | 2004-04-02 | 2005-10-20 | Advanced Bionics Corporation | Electric and acoustic stimulation fitting systems and methods |
US8244365B2 (en) * | 2004-05-10 | 2012-08-14 | Cochlear Limited | Simultaneous delivery of electrical and acoustical stimulation in a hearing prosthesis |
US8280087B1 (en) | 2008-04-30 | 2012-10-02 | Arizona Board Of Regents For And On Behalf Of Arizona State University | Delivering fundamental frequency and amplitude envelope cues to enhance speech understanding |
DE102008060056B4 (en) * | 2008-12-02 | 2011-12-15 | Siemens Medical Instruments Pte. Ltd. | Method and hearing aid system for adapting a bimodal supply |
US10721574B2 (en) | 2011-11-04 | 2020-07-21 | Med-El Elektromedizinische Geraete Gmbh | Fitting unilateral electric acoustic stimulation for binaural hearing |
WO2014123890A1 (en) * | 2013-02-05 | 2014-08-14 | Med-El Elektromedizinische Geraete Gmbh | Fitting unilateral electric acoustic stimulation for binaural hearing |
EP2948214A1 (en) * | 2013-01-24 | 2015-12-02 | Advanced Bionics AG | Hearing system comprising an auditory prosthesis device and a hearing aid |
US20160151629A1 (en) * | 2013-07-05 | 2016-06-02 | Advanced Bionics Ag | Cochlear implant system |
WO2015130318A1 (en) * | 2014-02-28 | 2015-09-03 | Advanced Bionics Ag | Systems and methods for facilitating post-implant acoustic-only operation of an electro-acoustic stimulation ("eas") sound processor |
WO2015170140A1 (en) * | 2014-05-06 | 2015-11-12 | Advanced Bionics Ag | Systems and methods for cancelling tonal noise in a cochlear implant system |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3629521A (en) * | 1970-01-08 | 1971-12-21 | Intelectron Corp | Hearing systems |
US3818149A (en) * | 1973-04-12 | 1974-06-18 | Shalako Int | Prosthetic device for providing corrections of auditory deficiencies in aurally handicapped persons |
US3894196A (en) * | 1974-05-28 | 1975-07-08 | Zenith Radio Corp | Binaural hearing aid system |
DE2716336B1 (en) * | 1977-04-13 | 1978-07-06 | Siemens Ag | Procedure and hearing aid for the compensation of hearing defects |
AU541258B2 (en) * | 1980-12-12 | 1985-01-03 | Commonwealth Of Australia, The | Speech processor |
US4596902A (en) * | 1985-07-16 | 1986-06-24 | Samuel Gilman | Processor controlled ear responsive hearing aid and method |
EP0349599B2 (en) * | 1987-05-11 | 1995-12-06 | Jay Management Trust | Paradoxical hearing aid |
US4852175A (en) * | 1988-02-03 | 1989-07-25 | Siemens Hearing Instr Inc | Hearing aid signal-processing system |
US5027410A (en) * | 1988-11-10 | 1991-06-25 | Wisconsin Alumni Research Foundation | Adaptive, programmable signal processing and filtering for hearing aids |
AU6339290A (en) * | 1989-09-08 | 1991-04-08 | Cochlear Pty. Limited | Multi-peak speech processor |
-
1991
- 1991-11-01 JP JP3517611A patent/JPH06506322A/en active Pending
- 1991-11-01 EP EP19910918663 patent/EP0555278A4/en not_active Withdrawn
- 1991-11-01 WO PCT/AU1991/000506 patent/WO1992008330A1/en not_active Application Discontinuation
- 1991-11-01 CA CA002095344A patent/CA2095344A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
EP0555278A1 (en) | 1993-08-18 |
EP0555278A4 (en) | 1994-08-10 |
JPH06506322A (en) | 1994-07-14 |
WO1992008330A1 (en) | 1992-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2095344A1 (en) | Bimodal speech processor | |
US4532930A (en) | Cochlear implant system for an auditory prosthesis | |
US5271397A (en) | Multi-peak speech processor | |
US5095904A (en) | Multi-peak speech procession | |
US7603176B2 (en) | Stimulation channel selection methods | |
US20030167077A1 (en) | Sound-processing strategy for cochlear implants | |
US8260429B2 (en) | Incremental stimulation sound processor | |
US8843205B2 (en) | Stimulation channel selection for a stimulating medical device | |
Geurts et al. | Enhancing the speech envelope of continuous interleaved sampling processors for cochlear implants | |
AU2016285966B2 (en) | Selective stimulation with cochlear implants | |
US9623242B2 (en) | Methods of frequency-modulated phase coding (FMPC) for cochlear implants and cochlear implants applying same | |
US11077302B2 (en) | Fast objective fitting measurements for cochlear implants | |
US9597502B2 (en) | Systems and methods for controlling a width of an excitation field created by current applied by a cochlear implant system | |
US20110081033A1 (en) | Method for adjusting an audio transducing processor | |
Kaiser et al. | Using a personal computer to perform real-time signal processing in cochlear implant research | |
US20240292162A1 (en) | Method for improving effective dynamic range of neural stimulation for artificial cochlear system, and reconfigurable current dac-based neural stimulation ic chip therefor | |
Hortmann et al. | Sound signal processing | |
AU6339290A (en) | Multi-peak speech processor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
FZDE | Dead |