EP4335117A1 - Hearing aid and method for use of same - Google Patents
Hearing aid and method for use of same
- Publication number
- EP4335117A1 (application EP21939968.0A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- sound
- range
- processor
- hearing aid
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04R25/353 — Deaf-aid sets using translation techniques; frequency, e.g. frequency shift or compression
- H04R25/405 — Obtaining a desired directivity characteristic by combining a plurality of transducers
- H04R25/407 — Circuits for combining signals of a plurality of transducers
- H04R25/75 — Electric tinnitus maskers providing an auditory perception
- H04R1/1083 — Earpieces; reduction of ambient noise
- H04R2225/61 — Aspects relating to mechanical or electronic switches or control elements
- H04R2430/20 — Processing of the output signals of an acoustic transducer array for obtaining a desired directivity characteristic
- H04R2430/25 — Array processing for suppression of unwanted side-lobes in directivity characteristics, e.g. a blocking matrix
- H04R25/558 — Remote control, e.g. of amplification, frequency
- H04R25/603 — Mounting or interconnection of hearing aid parts; mechanical or electronic switches or control elements
- H04R3/04 — Circuits for transducers for correcting frequency response
Definitions
- This invention relates, in general, to hearing aids and, in particular, to hearing aids and methods for use of the same that provide signal processing and feature sets to enhance speech and sound intelligibility.
- Tinnitus, with or without additional hearing loss, can affect anyone at any age, although elderly adults experience hearing loss more frequently. Untreated tinnitus is associated with a lower quality of life and can have far-reaching implications both for the individual experiencing it and for those close to the individual. As a result, there is a continuing need for improved hearing aids, and methods for use of the same, that enable patients to better hear conversations and the like.
- the hearing aid includes left and right bodies connected by a band member, each of which at least partially conforms to the contours of the external ear and is sized to engage therewith.
- an electronic signal processor that is programmed with a respective left ear qualified sound range and a right ear qualified sound range.
- Each of the left ear qualified sound range and the right ear qualified sound range may be a range of sound corresponding to a preferred hearing range of an ear of the patient.
- the electronic signal processor is also programmed with a tinnitus frequency which is a range of sound corresponding to a sensation of tinnitus in the ear of the patient. Sound received at the hearing aid is converted to the qualified sound range prior to output with the output amplified at 0 dB at the tinnitus frequency.
- the hearing aid may create a pairing via a transceiver with a proximate smart device, such as a smart phone, smart watch, or tablet computer.
- the hearing aid may use distributed computing between the hearing aid and the proximate smart device for execution of various processes.
- a user may send a control signal from the proximate smart device to effect control.
- a hearing aid includes various electronic components contained within a body, including an electronic signal processor that is programmed with a respective left ear qualified sound range and a right ear qualified sound range.
- Each of the left ear qualified sound range and the right ear qualified sound range may be a range of sound corresponding to a preferred hearing range of an ear of the patient.
- the electronic signal processor is also programmed with a tinnitus frequency which is a range of sound corresponding to a sensation of tinnitus in the ear of the patient. Sound received at the hearing aid is converted to the qualified sound range prior to output with an inverse amplitude signal applied at the tinnitus frequency to mitigate the tinnitus experienced by the patient.
- the hearing aid has a dominant sound mode of operation, an immediate background mode of operation, and a background mode of operation working together while being selectively and independently adjustable by the patient.
- In the dominant sound mode of operation, the hearing aid identifies the loudest sound in the processed signal and increases the volume of that sound in the signal being processed.
- In the immediate background mode of operation, the hearing aid identifies sound in the immediate surroundings of the hearing aid and suppresses that sound in the signal being processed.
- In the background mode of operation, the hearing aid identifies extraneous ambient sound received at the hearing aid and suppresses that sound in the signal being processed.
- the hearing aid may create a pairing via a transceiver with a proximate smart device, such as a smart phone, smart watch, or tablet computer.
- the hearing aid may use distributed computing between the hearing aid and the proximate smart device for execution of various processes.
- a user may send a control signal from the proximate smart device to activate one of the dominant sound mode of operation, the immediate background mode of operation, and the background mode of operation.
- Figure 1A is a front perspective schematic diagram depicting one embodiment of a hearing aid being utilized according to the teachings presented herein;
- Figure 1B is a top plan view depicting the hearing aid of figure 1A being utilized according to the teachings presented herein;
- Figure 2 is a front perspective view of one embodiment of the hearing aid depicted in figure 1;
- Figure 3A is a front-left perspective view of another embodiment of the hearing aid depicted in figure 1;
- Figure 3B is a front-right perspective view of the embodiment of the hearing aid depicted in figure 3A;
- Figure 4 is a front perspective view of another embodiment of a hearing aid according to the teachings presented herein;
- Figure 5 is a functional block diagram depicting one embodiment of the hearing aid shown herein;
- Figure 6 is a functional block diagram depicting another embodiment of the hearing aid shown herein;
- Figure 7 is a functional block diagram depicting a further embodiment of the hearing aid shown herein;
- Figure 8 is a functional block diagram depicting a still further embodiment of the hearing aid shown herein;
- Figure 9 is a functional block diagram depicting one embodiment of a smart device shown in figure 1, which may form a pairing with the hearing aid;
- Figure 10 is a functional block diagram depicting one embodiment of sampling rate processing, according to the teachings presented herein;
- Figure 11 is a functional block diagram depicting one embodiment of harmonics processing, according to the teachings presented herein;
- Figure 12 is a functional block diagram depicting one embodiment of frequency shift, signal amplification, and harmonics enhancement, according to the teachings presented herein;
- Figure 13 is a functional block diagram depicting one embodiment of headset operational process flow, according to the teachings presented herein;
- Figure 14 is a graph depicting one operational embodiment of the hearing aid presented herein.
DETAILED DESCRIPTION OF THE INVENTION
- a hearing aid which is schematically illustrated and designated 10.
- a user U, who may be considered a patient requiring a hearing aid, is wearing the hearing aid 10 and sitting at a table T at a restaurant or cafe, for example, and engaged in a conversation with an individual I1 and an individual I2.
- the user U is also suffering from tinnitus TS.
- the user U is speaking sound S1
- the individual I1 is speaking sound S2
- the individual I2 is speaking sound S3.
- a bystander B1 is engaged in a conversation with a bystander B2.
- the bystander B1 is speaking sound S4 and the bystander B2 is speaking sound S5.
- An ambulance A is driving by the table T and emitting sound S6.
- the sounds S1, S2, and S3 may be described as the immediate background sounds.
- the sounds S4, S5, and S6 may be described as the background sounds.
- the sound S6 may be described as the dominant sound as it is the loudest sound at the table T.
- the hearing aid 10 is programmed with a qualified sound range for each ear in a two-ear embodiment and for one ear in a one-ear embodiment.
- the qualified sound range may be a range of sound corresponding to a preferred hearing range for each ear of the user modified with a subjective assessment of sound quality according to the user.
- the preferred hearing range may be a range of sound corresponding to the highest hearing capacity of an ear of the user U within a range which, by way of example, may be between 50 Hz and 10,000 Hz.
- the preferred hearing range for each ear may be multiple ranges of sound corresponding to the highest hearing capacity ranges of an ear of the user U between 50 Hz and 10,000 Hz.
- the various sounds S1 through S6 received may be transformed and divided into the multiple ranges of sound.
- the preferred hearing range for each ear may be a range of sound from about 300 Hz to about 500 Hz corresponding to the highest hearing capacity of a patient.
- the subjective assessment according to the user may include a completed assessment of a degree of annoyance caused to the user by an impairment of wanted sound.
- the subjective assessment according to the user may also include a completed assessment of a degree of pleasantness caused to the patient by an enablement of wanted sound. That is, the subjective assessment according to the user may include a completed assessment to determine the best sound quality for the user. Sound received at the hearing aid 10 is converted to the qualified sound range prior to output, which the user U hears.
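The conversion of received sound into the qualified sound range can be pictured as a frequency remapping. Below is a minimal sketch assuming a simple linear compression of FFT bins from the wider audible band into the narrower preferred range; the function name, band edges, and mapping rule are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def to_qualified_range(signal, fs, src_band=(50.0, 10000.0), dst_band=(300.0, 500.0)):
    """Sketch: compress the spectrum of `signal` (sampled at `fs` Hz) from
    `src_band` into the narrower preferred hearing range `dst_band` by
    linearly remapping FFT bin frequencies."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    out = np.zeros_like(spectrum)
    lo_s, hi_s = src_band
    lo_d, hi_d = dst_band
    in_band = (freqs >= lo_s) & (freqs <= hi_s)
    # Map each in-band frequency to its target frequency in the preferred range.
    target = lo_d + (freqs[in_band] - lo_s) * (hi_d - lo_d) / (hi_s - lo_s)
    bin_width = fs / len(signal)
    idx = np.clip(np.round(target / bin_width).astype(int), 0, len(out) - 1)
    # Accumulate energy: several source bins may land in the same target bin.
    np.add.at(out, idx, spectrum[in_band])
    return np.fft.irfft(out, n=len(signal))
```

For example, a 4,000 Hz tone fed through this mapping reappears as a tone inside the 300 to 500 Hz preferred range.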
- the hearing aid 10 has a dominant sound mode of operation 26, an immediate background mode of operation 28, and a background mode of operation 30 under the selective adjustment of the user U.
- the hearing aid 10 identifies a loudest sound, such as the sound S6, in the processed signal and increases a volume of the loudest sound in the signal being processed.
- the hearing aid 10 identifies sound in an immediate surrounding, such as the sounds S1, S2, and S3 at the table T, to the hearing aid 10 and suppresses these sounds in the signal being processed.
- the hearing aid 10 identifies extraneous ambient sound, such as the sounds S4, S5, and S6, received at the hearing aid 10 and suppresses the extraneous ambient sounds in the signal being processed. Additionally, in the various modes of operation, the hearing aid 10 may identify the direction from which a particular sound originates and express this direction in the two-ear embodiment, with appropriate sound distribution. By way of example, the ambulance A and the sound S6 originate on the left side of the user U, and the sound is appropriately distributed at the hearing aid 10 to reflect this occurrence as indicated by an arrow L.
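The three modes above amount to per-source gain adjustments. The sketch below assumes the sounds have already been separated and labeled, which is a strong assumption since the patent does not specify the classifier; the gain values are also purely illustrative:

```python
import numpy as np

# Illustrative gains; a real device would use per-ear, user-adjustable settings.
MODE_GAINS = {
    "dominant": 2.0,              # boost the loudest source
    "immediate_background": 0.3,  # suppress sounds at the listener's table
    "background": 0.1,            # suppress extraneous ambient sound
}

def mix_modes(sources, active_modes):
    """Sketch: `sources` maps a label ("dominant", "immediate_background",
    "background") to that source's signal.  Enabled modes apply their gain;
    disabled modes pass the source through unchanged."""
    out = np.zeros_like(next(iter(sources.values())), dtype=float)
    for label, sig in sources.items():
        gain = MODE_GAINS[label] if label in active_modes else 1.0
        out += gain * np.asarray(sig, dtype=float)
    return out
```

Because each mode is independently switchable, this mirrors the patent's point that the modes work together while remaining selectively adjustable.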
- the hearing aid 10 is also programmed with a tinnitus frequency which is a range of sound corresponding to a sensation of tinnitus in the ear of the patient. Sound received at the hearing aid 10 is converted to the qualified sound range, which was previously discussed, prior to output with the output amplified at 0 dB at the tinnitus frequency. In this manner, the hearing aid 10 mitigates or eliminates the problems the user U experiences from the tinnitus TS.
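The 0 dB treatment at the tinnitus frequency can be sketched as a gain curve that amplifies the whole signal except the tinnitus band, which is passed at unity gain so no extra energy is added where the patient perceives tinnitus. The band edges, gain value, and function name below are illustrative assumptions:

```python
import numpy as np

def apply_gain_with_tinnitus_notch(signal, fs, gain_db=20.0,
                                   tinnitus_band=(3800.0, 4200.0)):
    """Sketch: amplify `signal` (sampled at `fs` Hz) by `gain_db` everywhere
    except the tinnitus band, which stays at 0 dB (unity gain)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    gains = np.full(len(freqs), 10 ** (gain_db / 20.0))
    in_band = (freqs >= tinnitus_band[0]) & (freqs <= tinnitus_band[1])
    gains[in_band] = 1.0  # 0 dB at the tinnitus frequency
    return np.fft.irfft(spectrum * gains, n=len(signal))
```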
- the hearing aid 10 may be programmed with a tinnitus frequency, which, as previously mentioned, is a range of sound corresponding to a sensation of tinnitus in the ear of the patient. Sound received at the hearing aid 10 is converted to the qualified sound range prior to output with an inverse amplitude signal applied at the tinnitus frequency to mitigate the tinnitus TS experienced by the patient.
- This application may alleviate the tinnitus TS in patients having impaired hearing and in patients without hearing impairment other than the tinnitus TS.
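The inverse amplitude approach can be sketched as adding an equal-amplitude, opposite-phase copy of the signal's energy inside the tinnitus band, which cancels that band. The band edges and function name are illustrative assumptions:

```python
import numpy as np

def apply_inverse_at_tinnitus(signal, fs, tinnitus_band=(3800.0, 4200.0)):
    """Sketch: synthesize an inverse-amplitude (antiphase) copy of the
    signal's energy inside the tinnitus band and add it back, which is
    equivalent to zeroing that band's FFT bins."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= tinnitus_band[0]) & (freqs <= tinnitus_band[1])
    inverse = np.zeros_like(spectrum)
    inverse[in_band] = -spectrum[in_band]   # equal amplitude, opposite phase
    return np.fft.irfft(spectrum + inverse, n=len(signal))
```

A tone inside the band is cancelled to near silence, while sound outside the band passes through unchanged.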
- the hearing aid 10 may create a pairing with a proximate smart device 12, such as a smart phone (depicted), smart watch, or tablet computer.
- the proximate smart device 12 includes a display 14 having an interface 16 having controls, such as an ON/OFF switch or volume controls 18 and mode of operation controls 20.
- a user may send a control signal wirelessly from the proximate smart device 12 to the hearing aid 10 to control a function, like volume controls 18, or to activate mode ON 22 or mode OFF 24 relative to one of the dominant sound modes of operation 26, the immediate background mode of operation 28, or the background mode of operation 30.
- the user U may activate other controls wirelessly from the proximate smart device 12.
- other controls may include microphone input sensitivity adjusted per ear, speaker volume input adjusted per ear, the aforementioned background suppression for both ears, dominant sound amplification per ear, and ON/OFF.
- as indicated by a processor symbol P, after the hearing aid 10 creates the pairing with a proximate smart device 12, the hearing aid 10 and the proximate smart device 12 may leverage the wireless communication link therebetween, using processing distributed between the hearing aid 10 and the proximate smart device 12 to process the signals and perform other analysis.
- the hearing aid 10 includes a left body 32 and a right body 34 connected to a band member 36 that is configured to partially circumscribe the user U. Each of the left body 32 and the right body 34 cover an external ear of the user U and are sized to engage therewith.
- microphones 38, 40, 42 which gather sound directionally and convert the gathered sound into an electrical signal, are located on the left body 32. With respect to gathering sound, the microphone 38 may be positioned to gather forward sound, the microphone 40 may be positioned to gather lateral sound, and the microphone 42 may be positioned to gather rear sound. Microphones may be similarly positioned on the right body 34.
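A common way to combine signals from directionally placed microphones like these is delay-and-sum beamforming, in which each channel is delayed so that sound from a chosen direction adds coherently. The patent does not specify the combining method, so the sketch below, with illustrative integer delays, is only one plausible realization:

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """Sketch: steer a microphone array toward a direction by advancing each
    channel by its known arrival delay so the target sound adds coherently.
    `delays_samples` holds one integer delay per microphone."""
    n = min(len(s) for s in mic_signals)
    out = np.zeros(n, dtype=float)
    for sig, d in zip(mic_signals, delays_samples):
        shifted = np.roll(np.asarray(sig, dtype=float), -d)  # advance by d samples
        out += shifted
    return out / len(mic_signals)
```

Sound arriving from the steered direction is reinforced, while sound from other directions adds incoherently and is attenuated.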
- Various internal compartments 44 provide space for housing electronics, which will be discussed in further detail hereinbelow.
- Various controls 46 provide a patient interface with the hearing aid 10.
- having each of the left body 32 and the right body 34 cover an external ear of the user U, sized to engage therewith, confers certain benefits.
- Sound waves enter through the outer ear and reach the middle ear to vibrate the eardrum.
- the eardrum then vibrates the ossicles, which are small bones in the middle ear.
- the sound vibrations travel through the ossicles to the inner ear.
- When the sound vibrations reach the cochlea, they push against specialized cells known as hair cells.
- the hair cells turn the vibrations into electrical nerve impulses.
- the auditory nerve connects the cochlea to the auditory centers of the brain. When these electrical nerve impulses reach the brain, they are experienced as sound.
- the outer ear serves a variety of functions.
- the various air-filled cavities composing the outer ear have a natural or resonant frequency to which they respond best. This is true of all air-filled cavities.
- the resonance of each of these cavities is such that each structure increases the sound pressure at its resonant frequency by approximately 10 to 12 dB.
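On the standard 20·log10 sound-pressure scale, a 10 to 12 dB boost corresponds to roughly a 3.2x to 4x increase in sound pressure, as a quick calculation shows:

```python
import math

def db_to_pressure_ratio(db):
    """Sound-pressure ratio corresponding to a level change in dB
    (pressure uses the 20*log10 scale)."""
    return 10 ** (db / 20.0)
```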
- Headsets are used in hearing testing in medical and associated facilities for a reason: tests have shown that completely closing the ear canal, in order to prevent any form of outside noise, plays a direct role in acoustic matching.
- the more severe the hearing problem, the closer the hearing aid speaker must be to the eardrum.
- the closer the speaker is to the eardrum, the more the device plugs the canal and negatively impacts the ear's pressure system. That is, the various chambers of the ear have a defined operational pressure determined, in part, by the ear's structure. By plugging the ear canal, the pressure system in the ear is distorted and the operational pressure of the ear is negatively impacted.
- "plug size" hearing aids therefore have limitations with respect to distorting the defined operational pressure within the ear.
- the hearing aid of figure 2, and of other figures, creates a closed chamber around the ear, increasing the pressure within the chamber.
- the hearing aid 10 includes a left body 52 having an ear hook 54 extending from the left body 52 to an ear mold 56.
- the left body 52 and the ear mold 56 may each at least partially conform to the contours of the external ear and sized to engage therewith.
- the left body 52 may be sized to engage with the contours of the ear in a behind-the-ear-fit.
- the ear mold 56 may be sized to be fitted for the physical shape of a patient’s ear.
- the ear hook 54 may include a flexible tubular material that propagates sound from the left body 52 to the ear mold 56.
- Microphones 58 which gather sound and convert the gathered sound into an electrical signal, are located on the left body 52.
- An opening 60 within the ear mold 56 permits sound traveling through the ear hook 54 to exit into the patient’s ear.
- An internal compartment 62 provides space for housing electronics, which will be discussed in further detail hereinbelow.
- Various controls 64 provide a patient interface with the hearing aid 10 on the left body 52 of the hearing aid 10.
- the hearing aid 10 includes a right body 72 having an ear hook 74 extending from the right body 72 to an ear mold 76.
- the right body 72 and the ear mold 76 may each at least partially conform to the contours of the external ear and sized to engage therewith.
- the right body 72 may be sized to engage with the contours of the ear in a behind-the-ear-fit.
- the ear mold 76 may be sized to be fitted for the physical shape of a patient’s ear.
- the ear hook 74 may include a flexible tubular material that propagates sound from the right body 72 to the ear mold 76.
- Microphones 78 which gather sound and convert the gathered sound into an electrical signal, are located on the right body 72.
- An opening 80 within the ear mold 76 permits sound traveling through the ear hook 74 to exit into the patient’s ear.
- An internal compartment 82 provides space for housing electronics, which will be discussed in further detail hereinbelow.
- Various controls 84 provide a patient interface with the hearing aid 10 on the right body 72 of the hearing aid 10. It should be appreciated that the various controls 64, 84 and other components of the left and right bodies 52, 72 may be at least partially integrated and consolidated. Further, it should be appreciated that the hearing aid 10 may have one or more microphones on each of the left and right bodies 52, 72 to improve directional hearing in certain implementations and provide, in some implementations, 360-degree directional sound input.
- the left and right bodies 52, 72 are connected at the respective ear hooks 54, 74 by a band member 90 which is configured to partially circumscribe a head or a neck of the patient.
- a compartment 92 within the band member 90 may provide space for electronics and the like.
- the hearing aid 10 may include left and right earpiece covers 94, 96 respectively positioned exteriorly to the left and right bodies 52, 72.
- Each of the left and right earpiece covers 94, 96 isolate noise to block out interfering outside noises.
- the microphones 58 in the left body 52 and the microphones 78 in the right body 72 may cooperate to provide directional hearing.
- the hearing aid 10 includes a body 112 having an ear hook 114 extending from the body 112 to an ear mold 116.
- the body 112 and the ear mold 116 may each at least partially conform to the contours of the external ear and sized to engage therewith.
- the body 112 may be sized to engage with the contours of the ear in a behind-the-ear-fit.
- the ear mold 116 may be sized to be fitted for the physical shape of a patient’s ear.
- the ear hook 114 may include a flexible tubular material that propagates sound from the body 112 to the ear mold 116.
- a microphone 118 which gathers sound and converts the gathered sound into an electrical signal, is located on the body 112.
- An opening 120 within the ear mold 116 permits sound traveling through the ear hook 114 to exit into the patient’s ear.
- An internal compartment 122 provides space for housing electronics, which will be discussed in further detail hereinbelow.
- Various controls 124 provide a patient interface with the hearing aid 10 on the body 112 of the hearing aid 10.
- an illustrative embodiment of the internal components of the hearing aid 10 is depicted.
- the hearing aid 10 depicted in the embodiment of figure 2 and figures 3A, 3B is presented. It should be appreciated, however, that the teachings of figure 5 equally apply to the embodiment of figure 4.
- an electronic signal processor 130 may be housed within the internal compartments 62, 82.
- the hearing aid 10 may include an electronic signal processor 130 for each ear, or the electronic signal processors 130 for the two ears may be partially or fully integrated.
- the electronic signal processor 130 is housed.
- the electronic signal processor 130 may include an analog-to-digital converter (ADC) 132, a digital signal processor (DSP) 134, a digital-to-analog converter (DAC) 136, and a signal generator 137.
- the electronic signal processor 130, including the digital signal processor embodiment, may have memory accessible to the processor.
- One or more microphone inputs 138 corresponding to one or more respective microphones, a speaker output 140, various controls, such as a programming connector 142 and hearing aid controls 144, an induction coil 146, a battery 148, and a transceiver 150 are also housed within the hearing aid 10.
- a signaling architecture communicatively interconnects the microphone inputs 138 to the electronic signal processor 130 and the electronic signal processor 130 to the speaker output 140.
- the various hearing aid controls 144, the induction coil 146, the battery 148, and the transceiver 150 are also communicatively interconnected to the electronic signal processor 130 by the signaling architecture.
- the speaker output 140 sends the sound output to a speaker or speakers to project sound and in particular, acoustic signals in the audio frequency band as processed by the hearing aid 10.
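The signaling architecture described above is, in effect, an ordered processing chain from microphone input through the electronic signal processor to the speaker output. The sketch below uses placeholder stage functions, since the patent does not prescribe a particular software structure:

```python
def hearing_aid_chain(samples, stages):
    """Sketch of the signaling architecture: microphone samples pass through
    an ordered list of processing stages (e.g. ADC conversion, DSP steps such
    as qualified-range conversion and tinnitus handling, then DAC) before
    reaching the speaker output.  Each stage takes and returns a sample list."""
    for stage in stages:
        samples = stage(samples)
    return samples
```

For example, a two-stage chain of "double the gain" then "add a DC offset" applied to `[1, 2]` yields `[3, 5]`.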
- the programming connector 142 may provide an interface to a computer or other device.
- the hearing aid controls 144 may include an ON/OFF switch as well as volume controls, for example.
- the induction coil 146 may receive magnetic field signals in the audio frequency band from a telephone receiver or a transmitting induction loop, for example, to provide a telecoil functionality.
- the induction coil 146 may also be utilized to receive remote control signals encoded on a transmitted or radiated electromagnetic carrier, with a frequency above the audio band.
- Various programming signals from a transmitter may also be received via the induction coil 146 or via the transceiver 150, as will be discussed.
- the battery 148 provides power to the hearing aid 10 and may be rechargeable or accessed through a battery compartment door (not shown), for example.
- the transceiver 150 may be internal, external, or a combination thereof to the housing.
- the transceiver 150 may be a transmitter/receiver, receiver, or an antenna, for example. Communication between various smart devices and the hearing aid 10 may be enabled by a variety of wireless methodologies employed by the transceiver 150, including 802.11, 3G, 4G, EDGE, Wi-Fi, ZigBee, near field communications (NFC), Bluetooth Low Energy, and Bluetooth, for example.
- the various controls and inputs and outputs presented above are exemplary and it should be appreciated that other types of controls may be incorporated in the hearing aid 10.
- the electronics and form of the hearing aid 10 may vary.
- the hearing aid 10 and associated electronics may include any type of headphone configuration, a behind-the-ear configuration, or an in-the-ear configuration, for example.
- electronic configurations with multiple microphones for directional hearing are within the teachings presented herein.
- the hearing aid has an over-the-ear configuration where the entire ear is covered, which not only provides the hearing aid functionality but hearing protection functionality as well.
- the electronic signal processor 130 may be programmed with a tinnitus frequency, which is a range of sound corresponding to a sensation of tinnitus in the ear of the patient.
- the electronic signal processor 130 may then convert sound received at the hearing aid to the qualified sound range prior to output with the output amplified at 0 dB at the tinnitus frequency or an inverse amplitude signal applied at the tinnitus frequency.
- the inverse amplitude signal is provided by the signal generator 137.
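- the inverse-amplitude approach described above can be sketched in the frequency domain: zeroing the spectral bins around the tinnitus frequency is equivalent to summing the signal with an exact inverse-amplitude component at that frequency. This is a minimal illustration under assumed parameters (the bandwidth value is not from the patent), not the patented implementation:

```python
import numpy as np

def cancel_tinnitus_band(samples, sample_rate, tinnitus_hz, bandwidth_hz=10.0):
    """Apply an inverse-amplitude signal at the tinnitus frequency by
    zeroing the FFT bins in a narrow band around it."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    band = np.abs(freqs - tinnitus_hz) <= bandwidth_hz / 2.0
    spectrum[band] = 0.0  # equivalent to adding an exact inverse component
    return np.fft.irfft(spectrum, n=len(samples))
```

In practice a hearing aid would do this per frame with overlap-add; the single-shot FFT here only illustrates the cancellation idea.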
- the electronic signal processor 130 may be programmed with a preferred hearing range which, in one embodiment, is the preferred hearing sound range corresponding to highest hearing capacity of a patient.
- the left ear preferred hearing range and the right ear preferred hearing range are each a range of sound corresponding to highest hearing capacity of an ear of a patient within a variable range, such as between 50Hz and 10,000Hz, by way of example.
- the preferred hearing range for each of the left ear and the right ear may be an about 300Hz frequency to an about 500Hz frequency range of sound.
- Existing audiogram hearing aid industry testing equipment measures hearing capacity at defined frequencies, such as 60Hz, 125Hz, 250Hz, 500Hz, 1,000Hz, 2,000Hz, 4,000Hz, and 8,000Hz, and existing hearing aids work on a ratio-based frequency scheme.
- the present teachings however measure hearing capacity at a small step, such as 5Hz, 10Hz, or 20Hz. Thereafter, one or a few, such as three, frequency ranges are defined to serve as the preferred hearing range or preferred hearing ranges. As discussed herein, in some embodiments of the present approach, a two-step process is utilized.
- hearing is tested in an ear within a range, such as between 50Hz and 5,000Hz, for example, at a variable increment, such as a 50Hz increment or other increment, and between 5,000Hz and 10,000Hz at a variable increment, such as a 200Hz increment or other increment, to identify potential hearing ranges.
- the testing may be switched to a 5Hz, 10Hz, or 20Hz increment to precisely identify the preferred hearing range.
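- the two-step testing procedure described above can be sketched as follows. `test_ear` is a hypothetical callback returning a hearing-sensitivity score for a probe frequency; the increments mirror the examples in the text:

```python
def find_preferred_range(test_ear, fine_step=10):
    """Two-step search for the preferred hearing frequency of one ear.
    test_ear(freq_hz) -> sensitivity score (higher = better hearing)."""
    # Step 1: coarse sweep -- 50Hz increments up to 5,000Hz, then 200Hz
    # increments up to 10,000Hz -- to identify potential hearing ranges.
    coarse = list(range(50, 5000, 50)) + list(range(5000, 10001, 200))
    best = max(coarse, key=test_ear)
    # Step 2: fine sweep (5, 10, or 20Hz steps) around the best coarse hit
    # to precisely identify the preferred hearing range.
    fine = range(max(50, best - 50), min(10000, best + 50) + 1, fine_step)
    return max(fine, key=test_ear)
```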
- various controls 144 may include an adjustment that widens the frequency range of about 200Hz to a frequency range of 100Hz to 700Hz or even wider, for example. Further, the preferred hearing sound range may be shifted by use of the various controls 144.
- Directional microphone systems and associated processing may be included at each microphone position to boost sounds coming from the front of the patient and reduce sounds from other directions. Such a directional microphone system and processing may improve speech understanding in situations with excessive background noise. Digital noise reduction, impulse noise reduction, and wind noise reduction may also be incorporated. As alluded to, system compatibility features, such as FM compatibility and Bluetooth compatibility, may be included in the hearing aid 10.
- the processor may process instructions for execution within the electronic signal processor 130 as a computing device, including instructions stored in the memory.
- the memory stores information within the computing device.
- the memory is a volatile memory unit or units.
- the memory is a non-volatile memory unit or units.
- the memory is accessible to the processor and includes processor- executable instructions that, when executed, cause the processor to execute a series of operations.
- the processor-executable instructions cause the processor to receive an input analog signal from the microphone inputs 138 and convert the input analog signal to a digital signal.
- the input analog signal is modified with a subjective assessment of sound quality according to the patient at a converter 131.
- the processor-executable instructions then cause the processor to transform through compression, for example, the digital signal into a processed digital signal having the subjective assessment of sound quality according to the patient.
- the digital signal may be modified with a subjective assessment of sound quality according to the patient, if such a modification has not already occurred.
- the processed digital signal is then transformed into the preferred hearing range.
- the transformation may be a frequency transformation where the input frequency is frequency transformed into the preferred hearing range. Such a transformation yields a toned-down, narrower articulation that is clearly understandable, as it is customized for the user.
- the processor is then caused by the processor-executable instructions to convert the processed digital signal to an output analog signal, which may be amplified as required, and drive the output analog signal to the speaker output 140.
- an analog sound is converted by way of the subjective assessment of sound quality according to the user.
- the signal is then transferred into the preferred hearing range prior to a digital-to-analog conversion and amplification.
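- the chain above hinges on the frequency transfer step. A simplified spectral remapping into an assumed 300Hz to 500Hz preferred range might look like the sketch below; the linear mapping, input range, and preferred range are illustrative assumptions, as the patent does not fix the transfer function:

```python
import numpy as np

def transform_to_preferred_range(samples, sample_rate, pref_lo=300.0,
                                 pref_hi=500.0, in_lo=50.0, in_hi=10000.0):
    """Linearly remap spectral energy from [in_lo, in_hi] into the preferred
    hearing range [pref_lo, pref_hi] (a hypothetical frequency transfer)."""
    n = len(samples)
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    out = np.zeros_like(spectrum)
    ratio = (pref_hi - pref_lo) / (in_hi - in_lo)
    for i, f in enumerate(freqs):
        if in_lo <= f <= in_hi:
            target = pref_lo + (f - in_lo) * ratio  # compressed target frequency
            j = int(round(target * n / sample_rate))
            out[j] += spectrum[i]
    return np.fft.irfft(out, n=n)
```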
- the memory that is accessible to the processor may include additional processor-executable instructions that, when executed, cause the processor to execute a series of operations.
- the processor-executable instructions may cause the processor to receive a control signal to control volume or another functionality.
- the processor-executable instructions may also cause the processor to receive a control signal and activate one of a dominant sound mode of operation 26, an immediate background mode of operation 28, and a background mode of operation 30.
- the various modes of operation, including the dominant sound mode of operation 26, the immediate background mode of operation 28, and the background mode of operation 30, may be implemented on a per ear basis or for both ears.
- processor-executable instructions may also cause the processor to create a pairing via the transceiver 150 with a proximate smart device 12.
- the processor-executable instructions may then cause the processor to receive a control signal from the proximate smart device to control volume or another functionality.
- the processor-executable instructions may then cause the processor to receive a control signal and activate one of a dominant sound mode of operation 26, an immediate background mode of operation 28, and a background mode of operation 30.
- the processor-executable instructions may cause the processor to receive an input analog signal from the microphone inputs 138 and convert the input analog signal to a digital signal modified with a subjective assessment of sound quality according to the user.
- the processor then transforms through compression the digital signal into a processed digital signal having the preferred hearing range.
- the processor is caused to identify a loudest sound in the processed digital signal and increase a volume of the loudest sound in the processed digital signal.
- the processor is then caused, in the immediate background mode of operation 28, to identify sound in an immediate surrounding to the hearing aid 10 and suppress the sound in the processed digital signal.
- the processor is caused to identify extraneous ambient sound received at the hearing aid 10 and suppress the extraneous ambient sound in the processed digital signal.
- the processor may be caused to convert the processed digital signal to an output analog signal and drive the output analog signal to the speaker.
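- the three modes described above amount to gain decisions over classified sound sources. The labels and gain values in this sketch are illustrative assumptions, not values from the patent:

```python
def apply_mode(band_levels, mode, boost_db=6.0, cut_db=12.0):
    """Adjust per-source levels (dB) for one mode of operation.
    band_levels keys 'dominant', 'immediate', 'ambient' are hypothetical
    classifier outputs for the loudest sound, immediate surroundings,
    and extraneous ambient sound."""
    out = dict(band_levels)
    if mode == "dominant":
        # dominant sound mode: boost the loudest classified source
        loudest = max(out, key=out.get)
        out[loudest] += boost_db
    elif mode == "immediate_background":
        # immediate background mode: suppress immediate surrounding sound
        out["immediate"] -= cut_db
    elif mode == "background":
        # background mode: suppress extraneous ambient sound
        out["ambient"] -= cut_db
    return out
```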
- the processor-executable instructions may cause the hearing aid to receive an input analog signal from the microphone.
- the processor-executable instructions then cause the processor to convert the input analog signal to a digital signal, which is then transformed into a processed digital signal having the qualified sound range.
- the processor-executable instructions cause the processor to convert the processed digital signal to an output analog signal, with an amplification of the output analog signal at 0 dB at the tinnitus frequency.
- the output analog signal is then caused to be driven to the speaker.
- the processor-executable instructions cause the processor to receive an input analog signal from the microphone and then convert the input analog signal to a digital signal.
- the digital signal is then caused to be transformed into a processed digital signal having the qualified sound range with an inverse amplitude signal at the tinnitus frequency.
- the processed digital signal is then converted to an output analog signal prior to the output analog signal being driven to the speaker.
- the processor-executable instructions may cause the processor to create a pairing via the transceiver 150 with the proximate smart device 12. Then, the processor-executable instructions may cause the processor to receive an input analog signal from the microphone and convert the input analog signal to a digital signal. The processor may then be caused to transform through compression with distributed computing between the processor and the proximate smart device 12, the digital signal into a processed digital signal having the preferred hearing range modified with a subjective assessment of sound quality according to the user to provide the qualified sound range. At the processor within the hearing aid, the processor-executable instructions cause the processor to convert the processed digital signal to an output analog signal and drive the output analog signal to the speaker.
- the left ear preferred hearing range and the right ear preferred hearing range may comprise a frequency transfer component, a sampling rate component, a cut-off harmonics component, an additional harmonics component, and/or a harmonics transfer component.
- the processor-executable instructions may cause the processor to process a frequency transfer component, a sampling rate component, a cut-off harmonics component, an additional harmonics component, and/or a harmonics transfer component.
- the processor-executable instructions may cause the processor to receive an input analog signal from the microphone inputs and convert the input analog signal to a digital signal modified with a subjective assessment of sound quality according to the user. The processor then transforms the digital signal into a processed digital signal having a preferred hearing range.
- the preferred hearing range may be one or more ranges of sound corresponding to the highest hearing capacity of an ear of the patient.
- the preferred hearing range may be modified with a subjective assessment of sound quality according to the patient.
- the subjective assessment of sound quality according to the patient may be a completed assessment of a degree of annoyance caused to the patient by an impairment of wanted sound.
- the preferred hearing range may be modified with enhanced harmonics, including a cut-off harmonics component, an additional harmonics component, or a harmonics transfer component, for example.
- the processor-executable instructions may also cause the processor to convert the processed digital signal to an output analog signal and drive the output analog signal to the speaker. It should be appreciated that the processor-executable instructions may cause the processor to utilize the transceiver to utilize distributed processing between the hearing aid and the proximate smart device to transform through compression the digital signal into a processed digital signal having the preferred hearing range with harmonics enhancement.
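- as one possible reading of the additional-harmonics component named above, harmonics of a known base frequency can be synthesized and mixed in at reduced gain. The gain and harmonic count here are assumptions for illustration:

```python
import numpy as np

def add_harmonics(samples, sample_rate, base_hz, n_harmonics=3, gain=0.3):
    """Additional-harmonics sketch: synthesize integer multiples of the
    base frequency and mix them into the signal at reduced gain."""
    t = np.arange(len(samples)) / sample_rate
    out = samples.astype(float)
    for k in range(2, n_harmonics + 2):  # 2nd, 3rd, ... harmonics
        out = out + gain * np.sin(2 * np.pi * k * base_hz * t)
    return out
```

A cut-off harmonics component would instead remove multiples above some order; a harmonics transfer component would remap them, analogously to the frequency transfer step.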
- processor-executable instructions presented hereinabove include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
- Processor-executable instructions also include program modules that are executed by computers in stand-alone or network environments.
- program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, or the like, that perform particular tasks or implement particular abstract data types.
- Processor-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the systems and methods disclosed herein.
- the particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps and variations in the combinations of processor-executable instructions and sequencing are within the teachings presented herein.
- the electronic signal processor 130 receives a signal from the one or more microphone inputs 138 and outputs a signal to the speaker output 140.
- the electronic signal processor 130 includes a gain stage 160 that receives the electronic signal from the microphone inputs 138 and amplifies the signal.
- the gain stage 160 forwards the signal to an analog-to-digital converter (ADC) 162, which converts the amplified analogue electronic signal to a digital electronic signal.
- the gain stage 160, in one embodiment, is a point during the audio signal flow at which adjustments may be made to the audio signal prior to conversion by the analog-to-digital converter (ADC) 162.
- the gain stage 160 may include a modification of the signal to accommodate a subjective assessment of sound quality according to the user or patient.
- a digital signal processor (DSP) 164 receives the digital electronic signal from the ADC 162 and is configured to process the digital electronic signal with the desired compensation based on the qualified sound range, which includes the preferred hearing range, which is stored therein, and may include the subjective assessment of sound quality according to the user.
- the DSP 164 may cancel or reduce - or augment or increase - the ambient noise to support the desired dominant sound mode of operation 26, immediate background mode of operation 28, or background mode of operation 30 by utilizing an algorithm.
- Such an algorithm may examine modulation characteristics of the speech envelope, such as harmonic structure, modulation depth, and modulation count. Based on these characteristics, various triggers may be defined that describe wanted versus unwanted background noise as well as immediate noise. The sound may then be altered digitally. It should be appreciated that other digital noise reduction and gain techniques may be utilized, including algorithms incorporating adaptive beamforming and adaptive optimal filtering processing.
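- one common way to realize the modulation-characteristic triggers described above is to measure envelope modulation depth: speech carries deep, slow envelope modulations, while steady noise has a comparatively flat envelope. The frame size and threshold in this sketch are assumptions:

```python
import numpy as np

def modulation_depth(samples, sample_rate, frame_ms=20):
    """Estimate the depth of the speech-envelope modulation by framing the
    signal, taking per-frame RMS, and normalizing the envelope swing."""
    frame = int(sample_rate * frame_ms / 1000)
    n = len(samples) // frame
    env = np.array([np.sqrt(np.mean(samples[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n)])
    return (env.max() - env.min()) / (env.max() + env.min() + 1e-12)

def is_speech_like(samples, sample_rate, threshold=0.5):
    """Trigger: deep modulation suggests wanted speech rather than noise."""
    return modulation_depth(samples, sample_rate) > threshold
```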
- the DSP 164, alone or in combination with other electronic components of the electronic signal processor 130, provides compensation to patients experiencing tinnitus.
- the DSP 164 may cause the output to be modified with the output amplified at 0 dB at the tinnitus frequency.
- the DSP 164, alone or in combination with other electronic components of the electronic signal processor 130, may apply an inverse amplitude signal at the tinnitus frequency to provide compensation for tinnitus.
- the processed digital electronic signal is then driven to a digital-to-analog converter (DAC) 166, which converts the processed digital electronic signal to a processed analog electronic signal that is then driven to a multiplexer 168 and onto a low output impedance output driver 170 prior to output, at the speaker output 140.
- a gain stage 172 receives the electronic signal from the microphone inputs 138 and amplifies the analog electronic signal prior to driving the signal to an active noise modulation (ANM) unit 174, which is configured to perform active noise suppression or active noise augmentation by way of various amplifiers and filters.
- a signal path includes the DSP 164 providing the processed digital electronic signal to a DAC 176 and a filter 178.
- the ANM-driven signal and the filter-driven signal are combined at the combiner unit 180 and provided to a pulse width modulator (PWM) 182 prior to the signal being driven to the multiplexer 168.
- the ANM-driven signal may cancel or reduce - or augment or increase - the ambient noise to provide the desired dominant sound mode of operation 26, immediate background mode of operation 28, or background mode of operation 30 while the DSP-driven signal corrects the input signal to compensate for hearing loss according to the qualified sound range.
- a signal controller 200 is centrally located in communication with a signal analyzer and controller 202 serving the left side of the hearing aid 10 and with a signal analyzer and controller 204 serving the right side of the hearing aid 10.
- the signal analyzer and controller 202 may include signal generator functionality.
- a Bluetooth interface unit 206 is also in communication with the signal analyzer and controller 202 and with the signal analyzer and controller 204, which may also include signal generator functionality.
- the Bluetooth interface unit 206 is located in communication with a smart device application 208 that may be installed on a smart device, such as a smart phone or smart watch.
- a battery pack and charger 210 serves the hearing aid 10 with power.
- a forward microphone 212, a sideways-facing microphone 214, and a back microphone 216 are respectively connected in series to by-pass filters 218, 220, 222, which in turn are respectively connected in series to pre-amplifiers 224, 226, 228 connected to the signal analyzer and controller 202.
- a forward microphone 242, a sideways-facing microphone 244, and a back microphone 246 are respectively connected in series to by-pass filters 248, 250, 252, which in turn are respectively connected in series to pre-amplifiers 254, 256, 258 connected to the signal analyzer and controller 204.
- the signal analyzer and controller 202 is connected in parallel to a noise filter 230 and an amplifier 232, which also receives a signal from the noise filter 230.
- the amplifier 232 drives a signal to the left speaker 234.
- the signal analyzer and controller 204 is connected in parallel to a noise filter 260 and an amplifier 262, which also receives a signal from the noise filter 260.
- the amplifier 262 drives a signal to the right speaker 264.
- each of the signal analyzer and controllers 202, 204 transfers the live sound frequency into a qualified sound range, including a frequency range or frequency ranges that the person using the hearing aid 10 hears, through, in some embodiments, a combination of frequency transfer, sampling rate, cut-off harmonics, additive harmonics, and harmonics transfer.
- the qualified sound range also includes a modification of the sound based on a subjective assessment of sound quality.
- each of the signal analyzer and controllers 202, 204 may determine a direction of the sound source. Further, as mentioned, each of the signal analyzer and controllers 202, 204 may modify the output sound to make accommodations for the tinnitus TS by amplifying the output at 0 dB at the tinnitus frequency or applying an inverse amplitude signal at the tinnitus frequency to provide compensation for the tinnitus TS.
- a smart device input 280, an adjustable background noise filter 282, a voice directional analysis module 284, and a control unit 286 are interconnected.
- a front microphone 288, a side microphone 290, and a rear microphone 292 are connected to a microphone input sensitivity module 294.
- a processor 296, an amplifier 298, volume control 300, and a speaker 302 are also provided.
- a front microphone 308, a side microphone 310, and a rear microphone 312 are connected to a microphone input sensitivity module 314.
- a processor 316, an amplifier 318, volume control 320, and a speaker 322 are also provided.
- the front microphone 288, the side microphone 290, and the rear microphone 292 provide a direct signal 330 to the microphone input sensitivity module 294, which provides a feedback signal 332.
- the direct signal 330 and the feedback signal 332 provide for the regulation of the input volume at the front microphone 288, the side microphone 290, and the rear microphone 292.
- the microphone input sensitivity module 294, in turn, provides a direct signal 334 to the adjustable background noise filter 282.
- a direct signal 336 is provided to the voice directional analysis module 284.
- the front microphone 308, the side microphone 310, and the rear microphone 312 provide a direct signal 340 to the microphone input sensitivity module 314, which provides a feedback signal 342.
- the direct signal 340 and the feedback signal 342 provide for the regulation of the input volume at the front microphone 308, the side microphone 310, and the rear microphone 312.
- the microphone input sensitivity module 314, in turn, provides a direct signal 344 to the adjustable background noise filter 282.
- the voice directional analysis module 284, which determines the direction of origin of sound received by the front microphone 288, the side microphone 290, the rear microphone 292, the front microphone 308, the side microphone 310, and the rear microphone 312, provides a direct signal 346 to the processor 296 and a direct signal 348 to the processor 316.
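- direction of origin can be inferred from inter-microphone arrival delays. A cross-correlation sketch for one front/rear microphone pair follows; the microphone geometry is not specified in the text, so this is a generic illustration rather than the module 284 implementation:

```python
import numpy as np

def arrival_delay(front, rear):
    """Estimate the sample delay between front and rear microphone signals
    via cross-correlation; a positive delay means the sound reached the
    front microphone first (i.e., it came from ahead of the wearer)."""
    corr = np.correlate(rear, front, mode="full")
    return int(np.argmax(corr)) - (len(front) - 1)
```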
- the processor 296 is associated with the speaker 302 and provides a direct signal 350 to the amplifier 298, which provides a direct signal 352 to the volume control 300.
- the processor 296 may modify the output sound to make accommodations for the tinnitus TS by amplifying the output at 0 dB at the tinnitus frequency or applying an inverse amplitude signal at the tinnitus frequency to provide compensation for the tinnitus TS.
- a direct signal 354 is then provided to the speaker 302.
- the speaker 302 is physically positioned on the same ear as the front microphone 288, the side microphone 290, and the rear microphone 292.
- the processor 316 is associated with the speaker 322 and provides a direct signal 360 to the amplifier 318, which provides a direct signal 362 to the volume control 320.
- the processor 316 may modify the output sound to make accommodations for the tinnitus TS by amplifying the output at 0 dB at the tinnitus frequency or applying an inverse amplitude signal at the tinnitus frequency to provide compensation for the tinnitus TS.
- a direct signal 364 is then provided to the speaker 322.
- the speaker 322 is physically positioned on the same ear as the front microphone 308, the side microphone 310, and the rear microphone 312.
- the smart device input 280 provides a direct signal 370 to each of the processors 296, 316.
- a direct signal 372 is also provided by the smart device input 280 to the smart device by way of connection 374, which is under the direct control of the control unit 286 by way of a direct control signal 376.
- a bi-directional interface 378 operates between the control unit 286 and the microphone input sensitivity module 294.
- a bi-directional interface 380 operates between the control unit 286 and the adjustable background noise filter 282.
- a bi-directional interface 382 operates between the control unit 286 and the microphone input sensitivity module 314 that services the front microphone 308, the side microphone 310, and the rear microphone 312.
- the control unit 286 and the processor 296 share a bi-directional interface 384 and the control unit 286 and the processor 316 share a bi-directional interface 386.
- the control unit 286 provides direct control over the volume control 300 associated with the speaker 302 and the volume control 320 associated with the speaker 322 via respective direct control signals 388, 390.
- the proximate smart device 12 may be a wireless communication device of the type including various fixed, mobile, and/or portable devices. To expand rather than limit the discussion of the proximate smart device 12, such devices may include, but are not limited to, cellular or mobile smart phones, tablet computers, smartwatches, and so forth.
- the proximate smart device 12 may include a processor 400, memory 402, storage 404, a transceiver 406, and a cellular antenna 408 interconnected by a busing architecture 410 that also supports the display 14, I/O panel 414, and a camera 416. It should be appreciated that although a particular architecture is explained, other designs and layouts are within the teachings presented herein.
- the teachings presented herein permit the proximate smart device 12 such as a smart phone to form a pairing with the hearing aid 10 and operate the hearing aid 10.
- the proximate smart device 12 includes the memory 402 accessible to the processor 400 and the memory 402 includes processor-executable instructions that, when executed, cause the processor 400 to provide an interface for an operator that includes an interactive application for viewing the status of the hearing aid 10.
- the processor 400 is caused to present a menu for controlling the hearing aid 10.
- the processor 400 is then caused to receive an interactive instruction from the user and forward a control signal via the transceiver 406, for example, to implement the instruction at the hearing aid 10.
- the processor 400 may also be caused to generate various reports about the operation of the hearing aid 10.
- the processor 400 may also be caused to translate or access a translation service for the audio.
- the processor-executable instructions cause the processor 400 to provide an interface for the user U of the hearing aid 10 to select a mode of operation.
- the hearing aid 10 has the dominant sound mode of operation 26, the immediate background mode of operation 28, and the background mode of operation 30.
- in the dominant sound mode of operation 26, the hearing aid 10 identifies a loudest sound in the processed digital signal and increases a volume of the loudest sound in the signal being processed.
- in the immediate background mode of operation 28, the hearing aid 10 identifies sound in an immediate surrounding to the hearing aid 10 and suppresses the sound in the signal being processed.
- in the background mode of operation 30, the hearing aid 10 identifies extraneous ambient sound received at the hearing aid 10 and suppresses the extraneous ambient sound in the signal being processed.
- the processor-executable instructions cause the processor 400 to create a pairing via the transceiver 406 with the hearing aid 10. Then, the processor-executable instructions may cause the processor 400 to transform through compression with distributed computing between the processor 400 and the hearing aid 10, the digital signal into a processed digital signal having the qualified sound range, which includes the preferred hearing range as well as the subjective assessment of sound quality.
- the left ear preferred hearing range and the right ear preferred hearing range may comprise a frequency transfer component, a sampling rate component, a cut-off harmonics component, an additional harmonics component, and/or a harmonics transfer component.
- the processor-executable instructions may cause the processor 400 to process a frequency transfer component, a sampling rate component, a cut-off harmonics component, an additional harmonics component, and/or a harmonics transfer component.
- the subjective assessment according to the user may include a completed assessment of a degree of annoyance caused to the user by an impairment of wanted sound.
- the subjective assessment according to the user may also include a completed assessment of a degree of pleasantness caused to the patient by an enablement of wanted sound. That is, the subjective assessment according to the user may include a completed assessment to determine the best sound quality for the user.
- the processor-executable instructions cause the processor 400 to create the pairing via the transceiver 406 with the hearing aid 10 and cause the processor 400 to transform through compression with distributed computing between the processor 400 and the hearing aid 10, the digital signal into a processed digital signal having the qualified sound range including the preferred hearing range and subjective assessment of sound quality.
- the preferred hearing range may be a range or ranges of sound corresponding to highest hearing capacity of an ear of a patient modified with a subjective assessment of sound quality according to the patient.
- the preferred hearing range may further include harmonics, such as a cut-off harmonics component, an additional harmonics component, or a harmonics transfer component, for example.
- the preferred hearing range may also include a frequency transfer component, a sampling rate component, and a signal amplification component.
- the subjective assessment according to the user may include a completed assessment of a degree of annoyance caused to the user by an impairment of wanted sound.
- the subjective assessment according to the user may also include a completed assessment of a degree of pleasantness caused to the patient by an enablement of wanted sound. That is, the subjective assessment according to the user may include a completed assessment to determine the best sound quality for the user.
- the processor-executable instructions cause the processor 400 to create the pairing via the transceiver 406 with the hearing aid 10 and cause the processor 400 to implement one of two solutions for addressing the tinnitus TS.
- the processor 400 may modify the output sound to make accommodations for the tinnitus TS by causing the output to be modified with the output amplified at 0 dB at the tinnitus frequency.
- the processor 400 may apply an inverse amplitude signal at the tinnitus frequency to provide compensation for, including elimination of, the tinnitus TS.
- a sampling rate circuit 430, which may form a portion of the hearing aid 10, may have an analog signal 432 as an input and a digital signal 434 as an output. More particularly, an analog-to-digital converter (ADC) 436 receives the analog signal 432 and a signal from a frequency spectrum analyzer 438 as inputs. The ADC 436 provides outputs including the digital signal 434 and a signal to the frequency spectrum analyzer 438.
- the frequency spectrum analyzer 438 forms a feedback loop with a sampling rate controller 442 and a sampling rate generator 444. As shown, the frequency spectrum analyzer 438 analyzes the frequency range of the received analog signal 432 and, through the feedback loop using the sampling rate controller 442 and the sampling rate generator 444, the sampling rate at the ADC 436 is optimized.
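The feedback loop described above can be sketched as follows: the spectrum analyzer reports the highest significant frequency in the current frame, and the controller/generator pair selects the lowest ADC rate satisfying the Nyquist criterion. The set of supported rates is an illustrative assumption, not a value from the patent.

```python
def choose_sampling_rate(peak_signal_hz, supported_rates=(8000, 16000, 32000, 48000, 96000)):
    """Pick the lowest supported ADC sampling rate that satisfies Nyquist
    (rate >= 2x the highest significant frequency reported by the
    spectrum analyzer). The supported rates are illustrative values."""
    required = 2.0 * peak_signal_hz
    for rate in sorted(supported_rates):
        if rate >= required:
            return rate
    return max(supported_rates)  # clamp when the peak exceeds all rates

def feedback_loop(peak_frequencies):
    """One iteration per analysis frame: analyze the frame, then retune
    the ADC's sampling rate for the next frame."""
    return [choose_sampling_rate(peak_hz) for peak_hz in peak_frequencies]
```

A signal whose energy peaks at 3 kHz thus needs only an 8 kHz rate, while a 10 kHz peak forces the generator up to 32 kHz.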
- total sound (S_T) may be defined as follows:
- H_1 = 1st harmonic
- H_2 = 2nd harmonic
- H_N = Nth harmonic, where each harmonic H_N is a mathematical multiple of the base frequency (F_B).
- total sound (S_T) is the sum of cardinal sound (CS) and an N-stage background noise (BN), such that the following applies:
- S_T = CS + BN_G + BN_I, wherein:
- CS = highest amplitude sound within a defined timeframe.
- sampling rate SR
- the hearing aid sampling rate (SR) may be designed to be between
- the sampling rate (SR) change may be controlled by the ratio between the cardinal sound (CS) and background noise (BN) received in the analog signal 432.
- the sampling rate circuit 430 provides a high accuracy of optimization of the base frequency (F_B) and harmonics (H_1, H_2, ..., H_N) components of the cardinal sound (CS) as well as the base frequency (F_B) and harmonics (H_1, H_2, ..., H_N) components of the background noise (BN).
- this ensures that the higher the background noise (BN), the higher the sampling rate (SR), in order to properly serve the two-stage background noise (BN) control.
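A minimal sketch of this control rule follows, assuming a simple linear mapping from the background-noise share of the input to the sampling rate; the mapping and the rate limits are illustrative choices, not values from the patent.

```python
def sampling_rate_from_ratio(cs_amplitude, bn_amplitude, base_rate=16000, max_rate=96000):
    """Scale the sampling rate with the background-noise share of the
    input: the higher the background noise (BN) relative to the cardinal
    sound (CS), the higher the rate, clamped to [base_rate, max_rate]."""
    total = cs_amplitude + bn_amplitude
    if total <= 0:
        return base_rate
    noise_share = bn_amplitude / total            # 0.0 (clean) .. 1.0 (all noise)
    return int(base_rate + noise_share * (max_rate - base_rate))
```

A clean signal stays at the base rate; a purely noisy one drives the rate to the maximum, matching the "higher BN, higher SR" rule above.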
- the ADC 436 receives total sound (S_T) as an input.
- the ADC 436 then performs the frequency spectrum analysis 452 under the control of the frequency spectrum analyzer 438, the sampling rate controller 442, and the sampling rate generator 444 presented in figure 10.
- the ADC 436 outputs a digital total sound (S_T) signal that undergoes the frequency spectrum analysis 452, which is subject to calculation 454.
- the base frequency (F_B) and harmonics (H_1, H_2, ..., H_N) components are separated.
- the harmonics processing 450 calculates, at block 454, a converted actual frequency (CF_A) and differential converted harmonics (DCH_N) to create, at block 458, a converted total sound (CS_T), which is the output of the harmonics processing 450.
- CF_A = converted actual frequency
- DCH_N = differential converted harmonics
- S_T = total sound.
- F_B = base frequency, ranging between F_BL and F_BH, with F_BL being the lowest frequency value in the base frequency and F_BH being the highest frequency value in the base frequency;
- H_N = harmonics of F_B, with H_N being a mathematical multiple of F_B;
- F_A = an actual frequency value being examined;
- HA_1 = 1st harmonic of F_A
- the hearing aid 10 presented herein may transfer the base frequency range (F_B), along with several of the harmonics (H_N), into the actual hearing range (AHR) by converting the base frequency range (F_B) and several chosen harmonics (H_N) into the actual hearing range (AHR) as one coherent converted total sound (CS_T) using an algorithm defined by the following equations:
- CHA_1 = 1st converted harmonic
- CHA_2 = 2nd converted harmonic
- a high-pass filter may cut all differential converted harmonics (DCH) above a predetermined frequency.
- DCH = differential converted harmonics
- the frequency of 5,000Hz may be used as a benchmark.
- CS_T = converted total sound
- the harmonics processing 450 may provide the conversion for each participating frequency in the total sound (S_T), distributing all participating converted actual frequencies (CF_A) and differential converted harmonics (DCH_N) in the converted total sound (CS_T) in the same ratio as they participated in the original total sound (S_T). In some implementations, should more than seventy-five percent (75%) of all the differential converted harmonics (DCH_N) fall outside the high-pass filter range, the harmonics processing 450 may apply an adequate multiplier (between 0.1 and 0.9) and add the newly created differential converted harmonics (DCH_N) to the converted total sound (CS_T).
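The conversion and the seventy-five percent rule can be sketched as follows. The patent names the quantities (CF_A, DCH_N, CS_T) but this excerpt does not reproduce the underlying equations, so the linear frequency mapping, the source and target ranges, and the 0.5 multiplier below are assumptions for illustration only.

```python
def convert_to_hearing_range(freq_hz, src_lo, src_hi, ahr_lo, ahr_hi):
    """Linearly map an actual frequency F_A from the source range
    [src_lo, src_hi] into the actual hearing range [ahr_lo, ahr_hi].
    The linear mapping is an assumption, not the patent's equation."""
    return ahr_lo + (freq_hz - src_lo) / (src_hi - src_lo) * (ahr_hi - ahr_lo)

def converted_total_sound(f_a, n_harmonics, src=(50.0, 10000.0),
                          ahr=(300.0, 500.0), cutoff_hz=5000.0):
    """Build CF_A plus the differential converted harmonics (DCH_N),
    dropping harmonics above the 5,000 Hz benchmark before conversion.
    If more than 75% of the harmonics were dropped, rescale the
    remainder with a 0.5 multiplier (inside the patent's 0.1-0.9 range)."""
    harmonics = [f_a * k for k in range(2, n_harmonics + 2)]
    kept = [h for h in harmonics if h <= cutoff_hz]
    cf_a = convert_to_hearing_range(f_a, *src, *ahr)
    dch = [convert_to_hearing_range(h, *src, *ahr) - cf_a for h in kept]
    if harmonics and len(kept) < 0.25 * len(harmonics):
        dch = [d * 0.5 for d in dch]
    return cf_a, dch
```

For a 1,000 Hz tone with three harmonics, all harmonics survive the 5,000 Hz cut and yield positive differential terms; for a 3,000 Hz tone, every harmonic exceeds the cut and the differential list empties.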
- an initial analog signal 472 is received.
- the initial analog signal 472 is converted by an ADC 474, before undergoing signal preparation by a signal preparation circuit 474.
- signal preparation may include the operations presented in figure 10.
- the processed signal may be modified based on a subjective assessment of sound quality before undergoing a frequency shift and signal amplification at circuit blocks 474, 480.
- Harmonics enhancement circuitry 482 processes the signal as presented in figure 11, for example, before the signal is converted from digital to analog at a DAC 484. The signal is then outputted as an analog signal 486.
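The figure-12 flow can be sketched as a chain of stages in the order the text gives them (prepare, subjective modification, frequency shift, amplification, harmonics enhancement). The per-stage operations here are simple placeholders; the patent's actual algorithms are not reproduced in this excerpt.

```python
def process_chain(samples, subjective_gain=1.2, shift_fn=None, gain=2.0):
    """Run a frame of samples through the figure-12 stage order. Each
    stage body is an illustrative stand-in for the real processing."""
    def prepare(x):                 # placeholder preparation: DC-offset removal
        mean = sum(x) / len(x)
        return [s - mean for s in x]
    def subjective(x):              # patient-preference weighting (placeholder)
        return [s * subjective_gain for s in x]
    def shift(x):                   # frequency shift; identity unless supplied
        return shift_fn(x) if shift_fn else x
    def amplify(x):                 # signal amplification
        return [s * gain for s in x]
    def enhance(x):                 # harmonics enhancement (placeholder)
        return x
    out = samples
    for stage in (prepare, subjective, shift, amplify, enhance):
        out = stage(out)
    return out
```

The value of the sketch is the ordering: each stage consumes the previous stage's output, mirroring the serial circuit blocks in the figure.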
- left sound input is received at a preamplifier 502 for processing prior to the processed signal being driven to a digital signal processor 504, which performs an analog-to-digital conversion 530 prior to adjusting background noise according to a filter at block 532.
- Various filtering may occur, including general 534, immediate 536, and cardinal sound 538.
- the filtered signal is then driven to the digital signal processor 520 for directional control, which compares the left and right signals and the time delays between them. The result is a distributed left and right signal based on the established left and right hearing capacity of the patient.
- the signal is then driven back to the digital signal processor 504 for left ear algorithm processing, which may include transforming the digital signal into a processed digital signal having the qualified sound range, including the preferred hearing range, with optional harmonics enhancement and optional modification with a subjective assessment of sound quality according to the patient to provide the best signal quality possible.
- the left ear algorithm processing may also include processing to address tinnitus, as discussed above.
- a memory module 542 provides the instructions for the transformation, which may be uploaded by the algorithm upload module 522.
- An amplifier 506 receives the processed digital signal and delivers an amplified processed digital signal to a speaker 508 for left output sound.
- right sound input is received at a preamplifier 512 for processing prior to the processed signal being driven to a digital signal processor 514, which performs an analog-to-digital conversion 550 prior to adjusting background noise according to a filter at block 552.
- Various filtering may occur, including general 554, immediate 556, and cardinal sound 558.
- the filtered signal is then driven to the digital signal processor 520 for directional control, which compares the left and right signals and the time delays between them. The result is a distributed left and right signal based on the established left and right hearing capacity of the patient.
- the right portion of the signal is then driven back to the digital signal processor 514 for right ear algorithm processing, which may include transforming the digital signal into a processed digital signal having the qualified sound range, including the preferred hearing range, with optional harmonics enhancement and optional modification with a subjective assessment of sound quality according to the patient to provide the best signal quality possible.
- the right ear algorithm processing may also include processing to address tinnitus, as discussed hereinabove.
- a memory module 562 provides the instructions for the transformation, which may be uploaded by the algorithm upload module 522.
- An amplifier 516 receives the processed digital signal and delivers an amplified processed digital signal to a speaker 518 for right output sound.
- the hearing aid may apply an inverse amplitude signal applied at a tinnitus frequency to provide compensation, including elimination, of the tinnitus TS in patients.
- normal sound is a multitude of sinusoidal signals. While several characteristics, such as frequency, amplitude, and signal-to-noise ratio, for example, describe sound, an applied phase difference between two equal-frequency and equal-amplitude signals may eliminate tinnitus.
- Utilization of the inverse amplitude signal as discussed above may partially or fully eliminate tinnitus.
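The phase-difference idea can be verified numerically: an equal-frequency, equal-amplitude tone shifted by pi radians sums with the original to zero, sample by sample. This is a minimal sketch of the cancellation principle, not the hearing aid's implementation.

```python
import math

def tinnitus_tone(freq_hz, fs, n, amplitude=1.0, phase=0.0):
    """Generate n samples of a sinusoid at freq_hz, sampled at fs Hz."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / fs + phase)
            for i in range(n)]

def cancel(perceived, freq_hz, fs, amplitude=1.0):
    """Add the inverse amplitude signal: an equal-frequency,
    equal-amplitude tone shifted by pi radians. Summed with the
    tinnitus tone, the result nulls out sample by sample."""
    inverse = tinnitus_tone(freq_hz, fs, len(perceived), amplitude, math.pi)
    return [p + q for p, q in zip(perceived, inverse)]
```

Applied to a pure 1 kHz tone, the residual after cancellation is zero to within floating-point rounding.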
- the order of execution or performance of the methods and data flows illustrated and described herein is not essential, unless otherwise specified. That is, elements of the methods and data flows may be performed in any order, unless otherwise specified, and the methods may include more or fewer elements than those disclosed herein. For example, it is contemplated that executing or performing a particular element before, contemporaneously with, or after another element are all possible sequences of execution.
Abstract
A hearing aid (10) and method for use of the same are disclosed. In one embodiment, the hearing aid (10) includes a body (112) having various electronic components contained therein, including an electronic signal processor (130) that is programmed with a respective left ear qualified sound range and a right ear qualified sound range. Each of the left ear qualified sound range and the right ear qualified sound range may be a range of sound corresponding to a preferred hearing range of an ear of the patient. The electronic signal processor (130) is also programmed with a tinnitus frequency which is a range of sound corresponding to a sensation of tinnitus in the ear of the patient. Sound received at the hearing aid (10) is converted to the qualified sound range prior to output with the output amplified at 0 dB at the tinnitus frequency or an inverse amplitude signal applied at the tinnitus frequency.
Description
HEARING AID AND METHOD FOR USE OF SAME
TECHNICAL FIELD OF THE INVENTION
This invention relates, in general, to hearing aids and, in particular, to hearing aids and methods for use of the same that provide signal processing and feature sets to enhance speech and sound intelligibility.
BACKGROUND OF THE INVENTION
Tinnitus, with or without additional hearing loss, can affect anyone at any age, although elderly adults more frequently experience hearing loss. Untreated tinnitus is associated with lower quality of life and can have far-reaching implications for the individual experiencing hearing loss as well as those close to the individual. As a result, there is a continuing need for improved hearing aids and methods for use of the same that enable patients to better hear conversations and the like.
SUMMARY OF THE INVENTION
It would be advantageous to achieve a hearing aid and method for use of the same that would significantly change the course of existing hearing aids by adding features to correct existing limitations in functionality. It would also be desirable to enable a mechanical and electronics-based solution that would provide enhanced performance and improved usability with an enhanced feature set. It would be further desirable to enable a mechanical and electronics-based solution that would address - through mitigation or elimination - tinnitus.
To better address one or more of these concerns, a hearing aid and method for use of the same are disclosed. In one embodiment, the hearing aid includes left and right bodies, connected by a band member, that each at least partially conform to the contours of the external ear and are sized to engage therewith. Various electronic components are contained within the body, including an electronic signal processor that is programmed with a respective left ear qualified sound range and a right ear qualified sound range. Each of the left ear
qualified sound range and the right ear qualified sound range may be a range of sound corresponding to a preferred hearing range of an ear of the patient. The electronic signal processor is also programmed with a tinnitus frequency which is a range of sound corresponding to a sensation of tinnitus in the ear of the patient. Sound received at the hearing aid is converted to the qualified sound range prior to output with the output amplified at 0 dB at the tinnitus frequency. In another embodiment, the hearing aid may create a pairing via a transceiver with a proximate smart device, such as a smart phone, smart watch, or tablet computer. The hearing aid may use distributed computing between the hearing aid and the proximate smart device for execution of various processes. Also, a user may send a control signal from the proximate smart device to effect control.
In a further embodiment, a hearing aid includes various electronic components contained within a body, including an electronic signal processor that is programmed with a respective left ear qualified sound range and a right ear qualified sound range. Each of the left ear qualified sound range and the right ear qualified sound range may be a range of sound corresponding to a preferred hearing range of an ear of the patient. The electronic signal processor is also programmed with a tinnitus frequency which is a range of sound corresponding to a sensation of tinnitus in the ear of the patient. Sound received at the hearing aid is converted to the qualified sound range prior to output with an inverse amplitude signal applied at the tinnitus frequency to mitigate the tinnitus experienced by the patient. In a still further embodiment, the hearing aid has a dominant sound mode of operation, an immediate background mode of operation, and a background mode of operation working together while being selectively and independently adjustable by the patient. In the dominant sound mode of operation, the hearing aid is able to identify a loudest sound in the processed signal and increases a volume of the loudest sound in the signal being processed. In the immediate background mode of operation, the hearing aid is able to identify sound in an immediate surrounding to the hearing aid and suppresses the sound in the signal being processed. In the background mode of operation, the hearing aid is able to identify extraneous ambient sound received at the hearing aid and suppress the extraneous ambient sound in the signal being processed. In a further embodiment, the hearing aid may create a pairing via a transceiver with a proximate smart device, such as a smart phone, smart watch, or tablet computer. The hearing aid may use distributed computing between the hearing aid and the proximate smart device for execution of various processes. 
Also, a user may send a control signal from the proximate smart device to activate one of the dominant sound mode of operation, the immediate background mode of operation, or the background mode of operation. These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the features and advantages of the present invention, reference is now made to the detailed description of the invention along with the accompanying figures in which corresponding numerals in the different figures refer to corresponding parts and in which:
Figure 1A is a front perspective schematic diagram depicting one embodiment of a hearing aid being utilized according to the teachings presented herein;
Figure 1B is a top plan view depicting the hearing aid of figure 1A being utilized according to the teachings presented herein;
Figure 2 is a front perspective view of one embodiment of the hearing aid depicted in figure 1;
Figure 3A is a front-left perspective view of another embodiment of the hearing aid depicted in figure 1;
Figure 3B is a front-right perspective view of the embodiment of the hearing aid depicted in figure 3A;
Figure 4 is a front perspective view of another embodiment of a hearing aid according to the teachings presented herein;
Figure 5 is a functional block diagram depicting one embodiment of the hearing aid shown herein;
Figure 6 is a functional block diagram depicting another embodiment of the hearing aid shown herein;
Figure 7 is a functional block diagram depicting a further embodiment of the hearing aid shown herein;
Figure 8 is a functional block diagram depicting a still further embodiment of the hearing aid shown herein;
Figure 9 is a functional block diagram depicting one embodiment of a smart device shown in figure 1, which may form a pairing with the hearing aid;
Figure 10 is a functional block diagram depicting one embodiment of sampling rate processing, according to the teachings presented herein;
Figure 11 is a functional block diagram depicting one embodiment of harmonics processing, according to the teachings presented herein;
Figure 12 is a functional block diagram depicting one embodiment of frequency shift, signal amplification, and harmonics enhancement, according to the teachings presented herein;
Figure 13 is a functional block diagram depicting one embodiment of headset operational process flow, according to the teachings presented herein; and
Figure 14 is a graph depicting one operational embodiment of the hearing aid presented herein.
DETAILED DESCRIPTION OF THE INVENTION
While the making and using of various embodiments of the present invention are discussed in detail below, it should be appreciated that the present invention provides many applicable inventive concepts, which can be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention, and do not delimit the scope of the present invention.
Referring initially to figure 1A and figure 1B, therein is depicted one embodiment of a hearing aid, which is schematically illustrated and designated 10. As shown, a user U, who may be considered a patient requiring a hearing aid, is wearing the hearing aid 10 and sitting at a table T at a restaurant or cafe, for example, and engaged in a conversation with an individual I1 and an individual I2. The user U is also suffering from tinnitus TS. As part of a conversation at the table T, the user U is speaking sound S1, the individual I1 is speaking sound S2, and the individual I2 is speaking sound S3. Nearby, in the background, a bystander B1 is engaged in a conversation with a bystander B2. The bystander B1 is speaking sound S4 and the bystander B2 is speaking sound S5. An ambulance A is driving by the table T and emitting sound S6. The sounds S1, S2, and S3 may be described as the immediate background sounds. The sounds S4, S5, and S6 may be described as the background sounds. The sound S6 may be described as the dominant sound as it is the loudest sound at table T.
As will be described in further detail hereinbelow, the hearing aid 10 is programmed with a qualified sound range for each ear in a two-ear embodiment and for one ear in a one-ear embodiment. As shown, in the two-ear embodiment, the qualified sound range may be a range of sound corresponding to a preferred hearing range for each ear of the user modified with a subjective assessment of sound quality according to the user. The preferred hearing range may be a range of sound corresponding to the highest hearing capacity of an ear of the
user U between a range, which, by way of example, may be between 50 Hz and 10,000 Hz. Further, as shown, in the two-ear embodiment, the preferred hearing range for each ear may be multiple ranges of sound corresponding to the highest hearing capacity ranges of an ear of the user U between 50 Hz and 10,000 Hz. In some embodiments of this multiple range of sound implementation, the various sounds S1 through S6 received may be transformed and divided into the multiple ranges of sound. In particular, the preferred hearing range for each ear may be a range of sound from about 300 Hz to about 500 Hz corresponding to the highest hearing capacity of a patient.
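The multiple-range implementation described above can be sketched as a routing step that assigns each received sound component to the first preferred hearing range containing it, with everything outside every range collected for later conversion. The (frequency, amplitude) component representation is an illustrative simplification.

```python
def split_into_ranges(components, preferred_ranges):
    """Divide received sound components among the patient's preferred
    hearing ranges. Each (frequency_hz, amplitude) component is assigned
    to the first preferred range whose bounds contain it; anything
    outside every range is returned separately."""
    buckets = {r: [] for r in preferred_ranges}
    outside = []
    for freq, amp in components:
        for lo, hi in preferred_ranges:
            if lo <= freq <= hi:
                buckets[(lo, hi)].append((freq, amp))
                break
        else:
            outside.append((freq, amp))
    return buckets, outside
```

With preferred ranges of 300-500 Hz and 1,000-2,000 Hz, a 6 kHz component lands in the outside list, which is the material the later frequency-conversion stages would bring into the hearing range.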
The subjective assessment according to the user may include a completed assessment of a degree of annoyance caused to the user by an impairment of wanted sound. The subjective assessment according to the user may also include a completed assessment of a degree of pleasantness caused to the patient by an enablement of wanted sound. That is, the subjective assessment according to the user may include a completed assessment to determine the best sound quality to the user. Sound received at the hearing aid 10 is converted to the qualified sound range prior to output, which the user U hears.
In one embodiment, the hearing aid 10 has a dominant sound mode of operation 26, an immediate background mode of operation 28, and a background mode of operation 30 under the selective adjustment of the user U. In the dominant sound mode of operation 26, the hearing aid 10 identifies a loudest sound, such as the sound S6, in the processed signal and increases a volume of the loudest sound in the signal being processed. In the immediate background mode of operation, the hearing aid 10 identifies sound in an immediate surrounding, such as the sounds Si, S2, and S3 at the table T, to the hearing aid 10 and suppresses these sounds in the signal being processed. In the background mode of operation, the hearing aid 10 identifies extraneous ambient sound, such as the sounds S4, S5, and S6, received at the hearing aid 10 and suppresses the extraneous ambient sounds in the signal being processed. Additionally, in the various modes of operation, the hearing aid 10 may identify the direction a particular sound is originating and express this direction in the two-ear embodiment, with appropriate sound distribution. By way of example, the ambulance A and the sound S6 are originating on the left side of the user U and the sound is appropriately distributed at the hearing aid 10 to reflect this occurrence as indicated by an arrow L.
In one embodiment, the hearing aid 10 is also programmed with a tinnitus frequency which is a range of sound corresponding to a sensation of tinnitus in the ear of the patient. Sound received at the hearing aid 10 is converted to the qualified sound range, which was
previously discussed, prior to output with the output amplified at 0 dB at the tinnitus frequency. In this manner, the hearing aid 10 mitigates or eliminates the problems the user U experiences from the tinnitus TS.
In a further embodiment that addresses the user U experiencing the tinnitus TS, the hearing aid 10 may be programmed with a tinnitus frequency, which, as previously mentioned, is a range of sound corresponding to a sensation of tinnitus in the ear of the patient. Sound received at the hearing aid 10 is converted to the qualified sound range prior to output with an inverse amplitude signal applied at the tinnitus frequency to mitigate the tinnitus TS experienced by the patient. This application may alleviate the tinnitus TS in patients having impaired hearing and in patients without hearing impairment other than the tinnitus TS.
In one embodiment, the hearing aid 10 may create a pairing with a proximate smart device 12, such as a smart phone (depicted), smart watch, or tablet computer. The proximate smart device 12 includes a display 14 having an interface 16 with controls, such as an ON/OFF switch or volume controls 18 and mode of operation controls 20. A user may send a control signal wirelessly from the proximate smart device 12 to the hearing aid 10 to control a function, like volume controls 18, or to activate mode ON 22 or mode OFF 24 relative to one of the dominant sound mode of operation 26, the immediate background mode of operation 28, or the background mode of operation 30. It should be appreciated that the user U may activate other controls wirelessly from the proximate smart device 12. By way of example and not by way of limitation, other controls may include microphone input sensitivity adjusted per ear, speaker volume input adjusted per ear, the aforementioned background suppression for both ears, dominant sound amplification per ear, and ON/OFF. Further, in one embodiment, as shown by processor symbol P, after the hearing aid 10 creates the pairing with the proximate smart device 12, the hearing aid 10 and the proximate smart device 12 may leverage the wireless communication link therebetween and use processing distributed between the hearing aid 10 and the proximate smart device 12 to process the signals and perform other analysis.
Referring to figure 2, as shown, in the illustrated embodiment, the hearing aid 10 includes a left body 32 and a right body 34 connected to a band member 36 that is configured to partially circumscribe the user U. Each of the left body 32 and the right body 34 cover an external ear of the user U and are sized to engage therewith. In some embodiments, microphones 38, 40, 42, which gather sound directionally and convert the gathered sound into an electrical signal, are located on the left body 32. With respect to gathering sound, the microphone 38 may be positioned to gather forward sound, the microphone 40 may be
positioned to gather lateral sound, and the microphone 42 may be positioned to gather rear sound. Microphones may be similarly positioned on the right body 34. Various internal compartments 44 provide space for housing electronics, which will be discussed in further detail hereinbelow. Various controls 46 provide a patient interface with the hearing aid 10.
Having each of the left body 32 and the right body 34 cover an external ear of the user U and being sized to engage therewith confers certain benefits. Sound waves enter through the outer ear and reach the middle ear to vibrate the eardrum. The eardrum then vibrates the ossicles, which are small bones in the middle ear. The sound vibrations travel through the ossicles to the inner ear. When the sound vibrations reach the cochlea, they push against specialized cells known as hair cells. The hair cells turn the vibrations into electrical nerve impulses. The auditory nerve connects the cochlea to the auditory centers of the brain. When these electrical nerve impulses reach the brain, they are experienced as sound. The outer ear serves a variety of functions. The various air-filled cavities composing the outer ear, the two most prominent being the concha and the ear canal, have a natural or resonant frequency to which they respond best. This is true of all air-filled cavities. The resonance of each of these cavities is such that each structure increases the sound pressure at its resonant frequency by approximately 10 to 12 dB. In summary, among the functions of the outer ear: a) boost or amplify high-frequency sounds; b) provide the primary cue for the determination of the elevation of a sound's source; c) assist in distinguishing sounds that arise from in front of the listener from those that arise from behind the listener. Headsets are used in hearing testing in medical and associated facilities for a reason: tests have shown that completely closing the ear canal in order to prevent any form of outside noise plays a direct role in acoustic matching. The more severe the hearing problem, the closer the hearing aid speaker must be to the eardrum. However, the closer the speaker is to the eardrum, the more the device plugs the canal and negatively impacts the ear's pressure system.
That is, the various chambers of the ear have a defined operational pressure determined, in part, by the ear’s structure. By plugging the ear canal, the pressure system in the ear is distorted and the operational pressure of the ear is negatively impacted.
As alluded to above, "plug size" hearing aids have limitations with respect to distorting the defined operational pressure within the ear. Considering the function of the outer ear's air-filled cavities in increasing the sound pressure at resonant frequencies, the hearing aid of figure 2 - and other figures - creates a closed chamber around the ear, increasing the pressure within the chamber. This higher pressure plus the utilization of a more powerful speaker within the
headset at qualified sound range, e.g., the frequency range the user hears best with the best quality sound, provide the ideal set of parameters for a powerful hearing aid.
Referring to figure 3A and figure 3B, as shown, in the illustrated embodiment, the hearing aid 10 includes a left body 52 having an ear hook 54 extending from the left body 52 to an ear mold 56. The left body 52 and the ear mold 56 may each at least partially conform to the contours of the external ear and sized to engage therewith. By way of example, the left body 52 may be sized to engage with the contours of the ear in a behind-the-ear-fit. The ear mold 56 may be sized to be fitted for the physical shape of a patient’s ear. The ear hook 54 may include a flexible tubular material that propagates sound from the left body 52 to the ear mold 56. Microphones 58, which gather sound and convert the gathered sound into an electrical signal, are located on the left body 52. An opening 60 within the ear mold 56 permits sound traveling through the ear hook 54 to exit into the patient’s ear. An internal compartment 62 provides space for housing electronics, which will be discussed in further detail hereinbelow. Various controls 64 provide a patient interface with the hearing aid 10 on the left body 52 of the hearing aid 10.
As also shown, the hearing aid 10 includes a right body 72 having an ear hook 74 extending from the right body 72 to an ear mold 76. The right body 72 and the ear mold 76 may each at least partially conform to the contours of the external ear and sized to engage therewith. By way of example, the right body 72 may be sized to engage with the contours of the ear in a behind-the-ear-fit. The ear mold 76 may be sized to be fitted for the physical shape of a patient’s ear. The ear hook 74 may include a flexible tubular material that propagates sound from the right body 72 to the ear mold 76. Microphones 78, which gather sound and convert the gathered sound into an electrical signal, are located on the right body 72. An opening 80 within the ear mold 76 permits sound traveling through the ear hook 74 to exit into the patient’s ear. An internal compartment 82 provides space for housing electronics, which will be discussed in further detail hereinbelow. Various controls 84 provide a patient interface with the hearing aid 10 on the right body 72 of the hearing aid 10. It should be appreciated that the various controls 64, 84 and other components of the left and right bodies 52, 72 may be at least partially integrated and consolidated. Further, it should be appreciated that the hearing aid 10 may have one or more microphones on each of the left and right bodies 52, 72 to improve directional hearing in certain implementations and provide, in some implementations, 360-degree directional sound input.
In one embodiment, the left and right bodies 52, 72 are connected at the respective ear hooks 54, 74 by a band member 90 which is configured to partially circumscribe a head or a neck of the patient. A compartment 92 within the band member 90 may provide space for electronics and the like. Additionally, the hearing aid 10 may include left and right earpiece covers 94, 96 respectively positioned exteriorly to the left and right bodies 52, 72. Each of the left and right earpiece covers 94, 96 isolate noise to block out interfering outside noises. To add further benefit, in one embodiment, the microphones 58 in the left body 52 and the microphones 78 in the right body 72 may cooperate to provide directional hearing.
Referring to figure 4, therein is depicted another embodiment of the hearing aid 10. As shown, in the illustrated embodiment, the hearing aid 10 includes a body 112 having an ear hook 114 extending from the body 112 to an ear mold 116. The body 112 and the ear mold 116 may each at least partially conform to the contours of the external ear and sized to engage therewith. By way of example, the body 112 may be sized to engage with the contours of the ear in a behind-the-ear-fit. The ear mold 116 may be sized to be fitted for the physical shape of a patient’s ear. The ear hook 114 may include a flexible tubular material that propagates sound from the body 112 to the ear mold 116. A microphone 118, which gathers sound and converts the gathered sound into an electrical signal, is located on the body 112. An opening 120 within the ear mold 116 permits sound traveling through the ear hook 114 to exit into the patient’s ear. An internal compartment 122 provides space for housing electronics, which will be discussed in further detail hereinbelow. Various controls 124 provide a patient interface with the hearing aid 10 on the body 112 of the hearing aid 10.
Referring now to figure 5, an illustrative embodiment of the internal components of the hearing aid 10 is depicted. By way of illustration and not by way of limitation, the hearing aid 10 depicted in the embodiment of figure 2 and figures 3A and 3B is presented. It should be appreciated, however, that the teachings of figure 5 equally apply to the embodiment of figure 4. As shown, with respect to figures 3A and 3B, in one embodiment, within the internal compartments 62, 82, an electronic signal processor 130 may be housed. The hearing aid 10 may include an electronic signal processor 130 for each ear, or the electronic signal processor 130 for each ear may be at least partially integrated or fully integrated. In another embodiment, with respect to figure 4, within the internal compartment 122 of the body 112, the electronic signal processor 130 is housed. In order to measure, filter, compress, and generate, for example, continuous real-world analog signals in the form of sound, the electronic signal processor 130 may include an analog-to-digital converter (ADC) 132, a digital signal
processor (DSP) 134, a digital-to-analog converter (DAC) 136, and a signal generator 137. The electronic signal processor 130, including the digital signal processor embodiment, may have memory accessible to a processor. One or more microphone inputs 138 corresponding to one or more respective microphones, a speaker output 140, various controls, such as a programming connector 142 and hearing aid controls 144, an induction coil 146, a battery 148, and a transceiver 150 are also housed within the hearing aid 10.
As shown, a signaling architecture communicatively interconnects the microphone inputs 138 to the electronic signal processor 130 and the electronic signal processor 130 to the speaker output 140. The various hearing aid controls 144, the induction coil 146, the battery 148, and the transceiver 150 are also communicatively interconnected to the electronic signal processor 130 by the signaling architecture. The speaker output 140 sends the sound output to a speaker or speakers to project sound and, in particular, acoustic signals in the audio frequency band as processed by the hearing aid 10. By way of example, the programming connector 142 may provide an interface to a computer or other device. The hearing aid controls 144 may include an ON/OFF switch as well as volume controls, for example. The induction coil 146 may receive magnetic field signals in the audio frequency band from a telephone receiver or a transmitting induction loop, for example, to provide a telecoil functionality. The induction coil 146 may also be utilized to receive remote control signals encoded on a transmitted or radiated electromagnetic carrier with a frequency above the audio band. Various programming signals from a transmitter may also be received via the induction coil 146 or via the transceiver 150, as will be discussed. The battery 148 provides power to the hearing aid 10 and may be rechargeable or accessed through a battery compartment door (not shown), for example. The transceiver 150 may be internal or external to the housing, or a combination thereof. Further, the transceiver 150 may be a transmitter/receiver, receiver, or an antenna, for example. Communication between various smart devices and the hearing aid 10 may be enabled by a variety of wireless methodologies employed by the transceiver 150, including 802.11, 3G, 4G, EDGE, WiFi, ZigBee, near field communications (NFC), Bluetooth low energy, and Bluetooth, for example.
The various controls and inputs and outputs presented above are exemplary and it should be appreciated that other types of controls may be incorporated in the hearing aid 10. Moreover, the electronics and form of the hearing aid 10 may vary. The hearing aid 10 and associated electronics may include any type of headphone configuration, a behind-the-ear configuration, or an in-the-ear configuration, for example. Further,
as alluded to, electronic configurations with multiple microphones for directional hearing are within the teachings presented herein. In some embodiments, the hearing aid has an over-the-ear configuration where the entire ear is covered, which provides not only the hearing aid functionality but hearing protection functionality as well.
Continuing to refer to figure 5, in one embodiment, the electronic signal processor 130 may be programmed with a tinnitus frequency, which is a range of sound corresponding to a sensation of tinnitus in the ear of the patient. The electronic signal processor 130 may then convert sound received at the hearing aid to the qualified sound range prior to output, with the output amplified at 0 dB at the tinnitus frequency or with an inverse amplitude signal applied at the tinnitus frequency. In one implementation, the inverse amplitude signal is provided by the signal generator 137.
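One non-limiting way to picture the first option above is a per-band gain table in which the band containing the programmed tinnitus frequency is forced to 0 dB, reading "amplified at 0 dB" as unity gain. The band edges, gain value, and tinnitus frequency below are illustrative assumptions, not values from this disclosure.

```python
# Hypothetical sketch: per-band gains, with the band containing the
# programmed tinnitus frequency overridden to 0 dB (unity gain).
# Band edges and gain values are illustrative assumptions.

def band_gains_db(band_edges_hz, gain_db, tinnitus_hz):
    """Return per-band gains in dB, overriding the tinnitus band to 0 dB."""
    gains = []
    for lo, hi in band_edges_hz:
        if lo <= tinnitus_hz < hi:
            gains.append(0.0)        # no amplification at the tinnitus frequency
        else:
            gains.append(gain_db)    # normal hearing-loss amplification elsewhere
    return gains

bands = [(0, 1000), (1000, 4000), (4000, 8000)]
print(band_gains_db(bands, 20.0, 4500.0))  # tinnitus falls in the 4-8 kHz band
```

In this reading, all bands receive the fitted amplification except the tinnitus band, which passes through unchanged so that the tinnitus frequency is not reinforced.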
Still continuing to refer to figure 5, in one embodiment, the electronic signal processor 130 may be programmed with a preferred hearing range which, in one embodiment, is the preferred hearing sound range corresponding to the highest hearing capacity of a patient. In one embodiment, the left ear preferred hearing range and the right ear preferred hearing range are each a range of sound corresponding to the highest hearing capacity of an ear of a patient within, by way of example, a variable range, such as between 50Hz and 10,000Hz. The preferred hearing range for each of the left ear and the right ear may be a range of sound from about 300Hz to about 500Hz.
With this approach, the hearing capacity of the patient is enhanced. Existing audiogram hearing aid industry testing equipment measures hearing capacity at defined frequencies, such as 60Hz; 125Hz; 250Hz; 500Hz; 1,000Hz; 2,000Hz; 4,000Hz; 8,000Hz, and existing hearing aids work on a ratio-based frequency scheme. The present teachings, however, measure hearing capacity at a small step, such as 5Hz, 10Hz, or 20Hz. Thereafter, one or a few, such as three, frequency ranges are defined to serve as the preferred hearing range or preferred hearing ranges. As discussed herein, in some embodiments of the present approach, a two-step process is utilized. First, hearing is tested in an ear within a range, such as between 50Hz and 5,000Hz, for example, at a variable increment, such as a 50Hz increment or other increment, and between 5,000Hz and 10,000Hz at a variable increment, such as a 200Hz increment or other increment, to identify potential hearing ranges. Then, in the second step, the testing may be switched to a 5Hz, 10Hz, or 20Hz increment to precisely identify the preferred hearing range.
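The two-step process above can be sketched as a coarse sweep followed by a fine sweep. The scoring function standing in for the patient's measured response, and the fixed 10Hz fine increment, are assumptions for illustration only.

```python
# Hedged sketch of the two-step test: a coarse sweep (50Hz steps up to
# 5,000Hz, then 200Hz steps to 10,000Hz) finds a candidate frequency,
# and a fine sweep at a 10Hz increment refines it.

def coarse_sweep(score):
    """Coarse pass over the full test range; returns the best-scoring frequency."""
    freqs = list(range(50, 5001, 50)) + list(range(5200, 10001, 200))
    return max(freqs, key=score)

def fine_sweep(score, center, half_width=100, step=10):
    """Fine pass at a small increment around the coarse candidate."""
    freqs = range(center - half_width, center + half_width + 1, step)
    return max(freqs, key=score)

# Stand-in response curve peaking near 420Hz (hypothetical patient data).
score = lambda f: -abs(f - 420)
candidate = coarse_sweep(score)      # coarse pass lands on 400Hz
best = fine_sweep(score, candidate)  # fine pass refines to 420Hz
print(candidate, best)
```

The coarse pass only narrows the search; the fine pass supplies the precision that, per the text above, conventional audiogram equipment measuring at a handful of fixed frequencies cannot.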
Further, in one embodiment, the various controls 144 may include an adjustment that widens the frequency range of about 200Hz, for example, to a frequency range of 100Hz to 700Hz or even wider, for example. Further, the preferred hearing sound range may be shifted by use of the various controls 144. Directional microphone systems and processing may be included at each microphone position to provide a boost to sounds coming from the front of the patient and reduce sounds from other directions. Such a directional microphone system and processing may improve speech understanding in situations with excessive background noise. Digital noise reduction, impulse noise reduction, and wind noise reduction may also be incorporated. As alluded to, system compatibility features, such as FM compatibility and Bluetooth compatibility, may be included in the hearing aid 10.
The processor may process instructions for execution within the electronic signal processor 130 as a computing device, including instructions stored in the memory. The memory stores information within the computing device. In one implementation, the memory is a volatile memory unit or units. In another implementation, the memory is a non-volatile memory unit or units. The memory is accessible to the processor and includes processor-executable instructions that, when executed, cause the processor to execute a series of operations. The processor-executable instructions cause the processor to receive an input analog signal from the microphone inputs 138 and convert the input analog signal to a digital signal. In one implementation, as part of the conversion from the input analog signal to a digital signal, the input analog signal is modified with a subjective assessment of sound quality according to the patient at a converter 131. The processor-executable instructions then cause the processor to transform, through compression, for example, the digital signal into a processed digital signal having the subjective assessment of sound quality according to the patient. It should be appreciated that at this step, in one embodiment, the digital signal may be modified with a subjective assessment of sound quality according to the patient, if such a modification has not already occurred. The processed digital signal is then transformed into the preferred hearing range. The transformation may be a frequency transformation where the input frequency is frequency transformed into the preferred hearing range. Such a transformation produces a toned-down, narrower articulation that is clearly understandable, as it is customized for the user. The processor is then caused by the processor-executable instructions to convert the processed digital signal to an output analog signal, which may be amplified as required, and drive the output analog signal to the speaker output 140.
Essentially, in one embodiment, utilizing a single algorithm, an analog sound is converted by way of the subjective assessment of sound quality according to the user. The signal is then transferred into the preferred hearing range prior to digital-to-analog conversion and amplification.
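The disclosure does not specify how an input frequency is mapped into the preferred hearing range. As one hedged illustration only, a linear mapping from the example input span (50Hz to 10,000Hz) onto the example preferred range (about 300Hz to about 500Hz) might look as follows; the linear form of the mapping is an assumption, not the patented transformation.

```python
# Illustrative (assumed) frequency transformation: linearly map an input
# frequency from the 50Hz-10,000Hz span into the 300Hz-500Hz preferred
# hearing range described in the text.

def map_to_preferred(freq_hz, preferred_lo=300.0, preferred_hi=500.0,
                     input_lo=50.0, input_hi=10000.0):
    """Linearly rescale an input frequency into the preferred hearing range."""
    span_in = input_hi - input_lo
    span_out = preferred_hi - preferred_lo
    return preferred_lo + (freq_hz - input_lo) * span_out / span_in

# The endpoints of the input span land on the endpoints of the preferred range.
print(map_to_preferred(50.0), map_to_preferred(10000.0))  # 300.0 500.0
```

A linear map is only one possibility; a logarithmic or piecewise mapping would preserve perceived pitch relationships differently, and the choice would be part of the fitting process.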
The memory that is accessible to the processor may include additional processor-executable instructions that, when executed, cause the processor to execute a series of operations. The processor-executable instructions may cause the processor to receive a control signal to control volume or another functionality. The processor-executable instructions may also receive a control signal and cause the activation of one of a dominant sound mode of operation 26, an immediate background mode of operation 28, and a background mode of operation 30. The various modes of operation, including the dominant sound mode of operation 26, the immediate background mode of operation 28, and the background mode of operation 30, may be implemented on a per ear basis or for both ears.
These processor-executable instructions may also cause the processor to create a pairing via the transceiver 150 with a proximate smart device 12. The processor-executable instructions may then cause the processor to receive a control signal from the proximate smart device to control volume or another functionality. The processor-executable instructions may then receive a control signal and cause the activation of one of a dominant sound mode of operation 26, an immediate background mode of operation 28, and a background mode of operation 30.
In another implementation, the processor-executable instructions may cause the processor to receive an input analog signal from the microphone inputs 138 and convert the input analog signal to a digital signal modified with a subjective assessment of sound quality according to the user. The processor then transforms through compression the digital signal into a processed digital signal having the preferred hearing range. In the dominant sound mode of operation 26, the processor is caused to identify a loudest sound in the processed digital signal and increase a volume of the loudest sound in the processed digital signal. The processor is then caused, in the immediate background mode of operation 28, to identify sound in an immediate surrounding to the hearing aid 10 and suppress the sound in the processed digital signal. In the background mode of operation 30, the processor is caused to identify extraneous ambient sound received at the hearing aid 10 and suppress the extraneous ambient sound in the processed digital signal. Further, the processor may be caused to convert the processed digital signal to an output analog signal and drive the output analog signal to the speaker.
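The behavior of the three modes described above can be sketched as follows. The mode behaviors (boost loudest, suppress immediate surroundings, suppress ambient) come from the text; the segmentation of the processed signal into named sources and the scale factors are illustrative assumptions, since in practice the separation would be performed upstream by directional and noise analysis.

```python
# Hedged sketch of the three modes of operation. Input is a dict of
# source name -> level, assumed to come from upstream source separation.

def apply_mode(sources, mode):
    """Return adjusted source levels for the selected mode of operation."""
    out = dict(sources)
    if mode == "dominant":               # dominant sound mode 26
        loudest = max(out, key=out.get)
        out[loudest] *= 2.0              # increase volume of the loudest sound
    elif mode == "immediate_background": # immediate background mode 28
        out["immediate"] *= 0.25         # suppress sound in immediate surroundings
    elif mode == "background":           # background mode 30
        out["ambient"] *= 0.25           # suppress extraneous ambient sound
    return out

levels = {"speech": 0.8, "immediate": 0.5, "ambient": 0.3}
print(apply_mode(levels, "dominant"))
```

The same structure also shows why the modes can be applied on a per ear basis: each ear's processor simply applies `apply_mode` to its own processed block.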
In some implementations, the processor-executable instructions may cause the processor to receive an input analog signal from the microphone. The processor-executable instructions then cause the processor to convert the input analog signal to a digital signal, which is then transformed into a processed digital signal having the qualified sound range. Next, the processor-executable instructions cause the processor to convert the processed digital signal to an output analog signal, with an amplification of the output analog signal at 0 dB at the tinnitus frequency. The output analog signal is then caused to be driven to the speaker.
In some other embodiments, the processor-executable instructions cause the processor to receive an input analog signal from the microphone and then convert the input analog signal to a digital signal. The digital signal is then caused to be transformed into a processed digital signal having the qualified sound range with an inverse amplitude signal at the tinnitus frequency. By way of example, the inverse amplitude signal may include a signal shift along the x-axis according to the formula sin(x) + sin(x - π) = 0, where the tinnitus signal f(x) = sin(x) corresponds to the tinnitus frequency. The processed digital signal is then converted to an output analog signal prior to the output analog signal being driven to the speaker.
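The cancellation identity above can be checked numerically: a tinnitus tone modeled as sin(x), summed with its half-period-shifted copy sin(x - π), is zero at every sample (up to floating-point error).

```python
import math

# Numerical check of the inverse amplitude identity sin(x) + sin(x - pi) = 0:
# the shifted copy is the anti-phase of the tone, so the sum cancels.

def residual(x):
    """Tinnitus tone plus its inverse amplitude (half-period-shifted) signal."""
    return math.sin(x) + math.sin(x - math.pi)

samples = [residual(0.1 * k) for k in range(100)]
print(max(abs(s) for s in samples) < 1e-12)  # True: complete cancellation
```

This is the textbook anti-phase relationship; in the hearing aid the inverse amplitude signal would be generated at the programmed tinnitus frequency, for example by the signal generator 137.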
In other implementations, the processor-executable instructions may cause the processor to create a pairing via the transceiver 150 with the proximate smart device 12. Then, the processor-executable instructions may cause the processor to receive an input analog signal from the microphone and convert the input analog signal to a digital signal. The processor may then be caused to transform, through compression with distributed computing between the processor and the proximate smart device 12, the digital signal into a processed digital signal having the preferred hearing range modified with a subjective assessment of sound quality according to the user to provide the qualified sound range. At the processor within the hearing aid, the processor-executable instructions cause the processor to convert the processed digital signal to an output analog signal and drive the output analog signal to the speaker. The left ear preferred hearing range and the right ear preferred hearing range may comprise a frequency transfer component, a sampling rate component, a cut-off harmonics component, an additional harmonics component, and/or a harmonics transfer component. Further, the processor-executable instructions may cause the processor to process a frequency transfer component, a sampling rate component, a cut-off harmonics component, an additional harmonics component, and/or a harmonics transfer component.
In another implementation, the processor-executable instructions may cause the processor to receive an input analog signal from the microphone inputs and convert the input analog signal to a digital signal modified with a subjective assessment of sound quality according to the user. The processor then transforms the digital signal into a processed digital signal having a preferred hearing range. The preferred hearing range may be one or more ranges of sound corresponding to the highest hearing capacity of an ear of the patient. As mentioned, to provide the qualified sound range, the preferred hearing range may be modified with a subjective assessment of sound quality according to the patient. The subjective assessment of sound quality according to the patient may be a completed assessment of a degree of annoyance caused to the patient by an impairment of wanted sound. The preferred hearing range may be modified with enhanced harmonics, including a cut-off harmonics component, an additional harmonics component, or a harmonics transfer component, for example. The processor-executable instructions may also cause the processor to convert the processed digital signal to an output analog signal and drive the output analog signal to the speaker. It should be appreciated that the processor-executable instructions may cause the processor to utilize the transceiver to utilize distributed processing between the hearing aid and the proximate smart device to transform through compression the digital signal into a processed digital signal having the preferred hearing range with harmonics enhancement.
The processor-executable instructions presented hereinabove include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Processor-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, or the like, that perform particular tasks or implement particular abstract data types. Processor-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the systems and methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps and variations in the combinations of processor-executable instructions and sequencing are within the teachings presented herein.
Referring now to figure 6, in one embodiment, the electronic signal processor 130 receives a signal from the one or more microphone inputs 138 and outputs a signal to the
speaker output 140. The electronic signal processor 130 includes a gain stage 160 that receives the electronic signal from the microphone inputs 138 and amplifies the signal. The gain stage 160 forwards the signal to an analog-to-digital converter (ADC) 162, which converts the amplified analog electronic signal to a digital electronic signal. The gain stage 160, in one embodiment, is a point during the audio signal flow at which adjustments may be made to the audio signal prior to conversion by the analog-to-digital converter (ADC) 162. The gain stage 160 may include a modification of the signal to accommodate a subjective assessment of sound quality according to the user or patient. A digital signal processor (DSP) 164 receives the digital electronic signal from the ADC 162 and is configured to process the digital electronic signal with the desired compensation based on the qualified sound range, which includes the preferred hearing range, which is stored therein, and may include the subjective assessment of sound quality according to the user.
The DSP 164 may cancel or reduce - or augment or increase - the ambient noise to support the desired dominant sound mode of operation 26, immediate background mode of operation 28, or background mode of operation 30 by utilizing an algorithm. Such an algorithm may examine modulation characteristics of the speech envelope, such as harmonic structure, modulation depth, and modulation count. Based on these characteristics, various triggers may be defined that describe wanted versus unwanted background noise as well as immediate noise. The sound may then be altered digitally. It should be appreciated that other digital noise reduction and gain techniques may be utilized, including algorithms incorporating adaptive beamforming and adaptive optimal filtering processing.
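One of the modulation characteristics named above, modulation depth of the signal envelope, can be sketched as a simple trigger: speech envelopes are deeply modulated while steady background noise is not. The envelope values and threshold logic below are illustrative assumptions, not the patented algorithm.

```python
# Illustrative (assumed) trigger: modulation depth of a positive envelope,
# used to separate speech-like modulation from steady background noise.

def modulation_depth(envelope):
    """(max - min) / (max + min) of a positive envelope; near 0 = steady noise."""
    hi, lo = max(envelope), min(envelope)
    return (hi - lo) / (hi + lo) if (hi + lo) else 0.0

speech_env = [0.2, 0.9, 0.1, 0.8, 0.15]   # deep, speech-like modulation
noise_env = [0.5, 0.52, 0.49, 0.51, 0.5]  # shallow, noise-like modulation
print(modulation_depth(speech_env) > modulation_depth(noise_env))  # True
```

In a full implementation such a depth measure would be combined with harmonic structure and modulation count, as the text describes, before the sound is altered digitally.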
In some embodiments, the DSP 164, alone or in combination with other electronic components of the electronic signal processor 130, provides compensation to patients experiencing tinnitus. As part of the electronic signal processor 130 processing the sound received at the hearing aid 10, the DSP 164 may cause the output to be modified with the output amplified at 0 dB at the tinnitus frequency. Alternatively, the DSP 164, alone or in combination with other electronic components of the electronic signal processor 130, may apply an inverse amplitude signal at the tinnitus frequency to provide compensation for tinnitus.
The processed digital electronic signal is then driven to a digital-to-analog converter (DAC) 166, which converts the processed digital electronic signal to a processed analog electronic signal that is then driven to a multiplexer 168 and on to a low output impedance output driver 170 prior to output at the speaker output 140. A gain stage 172 receives the
electronic signal from the microphone inputs 138 and amplifies the analog electronic signal prior to driving the signal to an active noise modulation (ANM) unit 174, which is configured to perform active noise suppression or active noise augmentation by way of various amplifiers and filters. Another signal path includes the DSP 164 providing the processed digital electronic signal to a DAC 176 and a filter 178. The ANM-driven signal and filter-driven signal are combined at the combiner unit 180 prior to being provided to a pulse width modulator (PWM) 182, after which the signal is driven to the multiplexer 168. In this manner, the ANM-driven signal may cancel or reduce - or augment or increase - the ambient noise to provide the desired dominant sound mode of operation 26, immediate background mode of operation 28, or background mode of operation 30 while the DSP-driven signal corrects the input signal to compensate for hearing loss according to the qualified sound range.
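The combiner stage above can be illustrated with toy sample lists: the DSP path carries the corrected signal plus residual noise, the ANM path carries an anti-phase noise estimate, and summing them removes the estimated noise. Modeling the ANM output as an exact anti-phase copy is an assumption for illustration; the ANM unit is described in this disclosure only at block level.

```python
# Hedged sketch of the combiner unit: summing the DSP-driven path with an
# anti-phase noise estimate from the ANM path cancels the estimated noise.

def combine(dsp_path, anm_path):
    """Sum the two signal paths sample by sample, as at combiner unit 180."""
    return [d + a for d, a in zip(dsp_path, anm_path)]

signal = [1.0, 2.0, 3.0]                            # desired corrected signal
noise = [0.5, -0.5, 0.25]                           # ambient noise samples
dsp_path = [s + n for s, n in zip(signal, noise)]   # corrected signal + noise
anm_path = [-n for n in noise]                      # anti-phase noise estimate
print(combine(dsp_path, anm_path))                  # recovers [1.0, 2.0, 3.0]
```

For augmentation rather than suppression, the ANM path would instead add an in-phase copy of the wanted ambient component.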
Referring now to figure 7, in one embodiment of the hearing aid 10, a signal controller 200 is centrally located in communication with a signal analyzer and controller 202 serving the left side of the hearing aid 10 and with a signal analyzer and controller 204 serving the right side of the hearing aid 10. As shown, the signal analyzer and controller 202 may include signal generator functionality. A Bluetooth interface unit 206 is also in communication with the signal analyzer and controller 202 and with the signal analyzer and controller 204, which may also include signal generator functionality. The Bluetooth interface unit 206 is in communication with a smart device application 208 that may be installed on a smart device, such as a smart phone or smart watch. A battery pack and charger 210 serves the hearing aid 10 with power.
With respect to the left microphones, a forward microphone 212, a sideways-facing microphone 214, and a back microphone 216 are respectively connected in series to by-pass filters 218, 220, 222, which in turn are respectively connected in series to pre-amplifiers 224, 226, 228 connected to the signal analyzer and controller 202. Similarly, with respect to the right microphones, a forward microphone 242, a sideways-facing microphone 244, and a back microphone 246 are respectively connected in series to by-pass filters 248, 250, 252, which in turn are respectively connected in series to pre-amplifiers 254, 256, 258 connected to the signal analyzer and controller 204.
The signal analyzer and controller 202 is connected in parallel to a noise filter 230 and an amplifier 232, which also receives a signal from the noise filter 230. The amplifier 232 drives a signal to the left speaker 234. Similarly, the signal analyzer and controller 204 is connected in parallel to a noise filter 260 and an amplifier 262, which also receives a signal
from the noise filter 260. The amplifier 262 drives a signal to the right speaker 264. As previously alluded to, each of the signal analyzer and controllers 202, 204 transfers the live sound frequency into a qualified sound range including a frequency range or frequency ranges that the person using the hearing aid 10 hears through, in some embodiments, a combination of frequency transfer, sampling rate, cut-off harmonics, additional harmonics, and harmonics transfer. The qualified sound range also includes a modification of the sound based on a subjective assessment of sound quality. Also, each of the signal analyzer and controllers 202, 204 may determine a direction of the sound source. Further, as mentioned, each of the signal analyzer and controllers 202, 204 may modify the output sound to make accommodations for the tinnitus TS by causing the output to be amplified at 0 dB at the tinnitus frequency or by applying an inverse amplitude signal at the tinnitus frequency to provide compensation for the tinnitus TS.
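Two of the harmonics components named above, cut-off harmonics and additional harmonics, can be sketched on a list of (frequency, amplitude) partials. The partial values, cut-off limit, and added-harmonic amplitude below are hypothetical and for illustration only.

```python
# Hypothetical sketch of two harmonics components: cutting partials above a
# limit (cut-off harmonics) and synthesizing an extra partial at an integer
# multiple of the fundamental (additional harmonics).

def cut_off_harmonics(partials, max_hz):
    """Drop partials above the cut-off frequency."""
    return [(f, a) for f, a in partials if f <= max_hz]

def add_harmonic(partials, fundamental_hz, order, amplitude):
    """Append a synthesized harmonic at order * fundamental."""
    return partials + [(fundamental_hz * order, amplitude)]

tone = [(440.0, 1.0), (880.0, 0.5), (1320.0, 0.25), (8800.0, 0.1)]
tone = cut_off_harmonics(tone, 5000.0)    # remove the high partial
tone = add_harmonic(tone, 440.0, 2, 0.6)  # reinforce the 2nd harmonic
print(tone)
```

A harmonics transfer component would, by analogy, move energy from one partial to another rather than simply cutting or adding, shifting timbre into the wearer's preferred hearing range.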
Referring now to figure 8, in one embodiment of the hearing aid 10, a smart device input 280, an adjustable background noise filter 282, a voice directional analysis module 284, and a control unit 286 are interconnected. A front microphone 288, a side microphone 290, and a rear microphone 292 are connected to a microphone input sensitivity module 294. A processor 296, an amplifier 298, volume control 300, and a speaker 302 are also provided. On the other side, a front microphone 308, a side microphone 310, and a rear microphone 312 are connected to a microphone input sensitivity module 314. A processor 316, an amplifier 318, volume control 320, and a speaker 322 are also provided.
With respect to signaling, on a first side of the hearing aid 10, the front microphone 288, the side microphone 290, and the rear microphone 292 provide a direct signal 330 to the microphone input sensitivity module 294, which provides a feedback signal 332. The direct signal 330 and the feedback signal 332 provide for the regulation of the input volume at the front microphone 288, the side microphone 290, and the rear microphone 292. The microphone input sensitivity module 294, in turn, provides a direct signal 334 to the adjustable background noise filter 282. A direct signal 336 is provided to the voice directional analysis module 284.
On a second side of the hearing aid 10, the front microphone 308, the side microphone 310, and the rear microphone 312 provide a direct signal 340 to the microphone input sensitivity module 314, which provides a feedback signal 342. The direct signal 340 and the feedback signal 342 provide for the regulation of the input volume at the front microphone
308, the side microphone 310, and the rear microphone 312. The microphone input sensitivity module 314, in turn, provides a direct signal 344 to the adjustable background noise filter 282.
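The direct/feedback regulation of input volume described above can be sketched as a feedback loop in which the sensitivity module measures the block level and returns a gain correction steering the input toward a target level. The geometric update rule and the numeric values are assumptions for illustration; the disclosure specifies only that the direct and feedback signals regulate input volume.

```python
# Assumed sketch of the direct/feedback input-volume regulation: each
# iteration, the sensitivity module feeds back a gain correction that
# moves the measured input level toward the target.

def regulate_gain(gain, measured_level, target_level):
    """One feedback iteration: geometric step toward the target level."""
    return gain * (target_level / measured_level) ** 0.5

gain = 1.0
source_level = 2.0                 # hypothetical microphone level, too loud
for _ in range(10):                # feedback loop converges toward target
    gain = regulate_gain(gain, source_level * gain, 1.0)
print(abs(source_level * gain - 1.0) < 0.01)  # True: level near target
```

The square-root step halves the level error (in log terms) each iteration, which is one simple way to keep such a loop stable.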
The voice directional analysis module 284, which determines the direction of origin of sound received by the front microphone 288, the side microphone 290, the rear microphone 292, the front microphone 308, the side microphone 310, and the rear microphone 312, provides a direct signal 346 to the processor 296 and a direct signal 348 to the processor 316. The processor 296 is associated with the speaker 302 and provides a direct signal 350 to the amplifier 298, which provides a direct signal 352 to the volume control 300. The processor 296 may modify the output sound to make accommodations for the tinnitus TS by causing the output to be amplified at 0 dB at the tinnitus frequency or by applying an inverse amplitude signal at the tinnitus frequency to provide compensation for the tinnitus TS. A direct signal 354 is then provided to the speaker 302. The speaker 302 is physically positioned on the same ear as the front microphone 288, the side microphone 290, and the rear microphone 292.
On the other hand, the processor 316 is associated with the speaker 322 and provides a direct signal 360 to the amplifier 318, which provides a direct signal 362 to the volume control 320. The processor 316 may modify the output sound to make accommodations for the tinnitus TS by causing the output to be amplified at 0 dB at the tinnitus frequency or by applying an inverse amplitude signal at the tinnitus frequency to provide compensation for the tinnitus TS. A direct signal 364 is then provided to the speaker 322. The speaker 322 is physically positioned on the same ear as the front microphone 308, the side microphone 310, and the rear microphone 312.
In applications where the smart device input 280 is utilized, the smart device input 280 provides a direct signal 370 to each of the processors 296, 316. A direct signal 372 is also provided by the smart device input 280 to the smart device by way of connection 374, which is under the direct control of the control unit 286 by way of a direct control signal 376. Continuing with the discussion of the control unit 286, a bi-directional interface 378 operates between the control unit 286 and the microphone input sensitivity module 294. Similarly, a bi-directional interface 380 operates between the control unit 286 and the adjustable background noise filter 282. A bi-directional interface 382 operates between the control unit 286 and the microphone input sensitivity module 314 that services the front microphone 308, the side microphone 310, and the rear microphone 312.
The control unit 286 and the processor 296 share a bi-directional interface 384 and the control unit 286 and the processor 316 share a bi-directional interface 386. The control unit 286 provides direct control over the volume control 300 associated with the speaker 302 and the volume control 320 associated with the speaker 322 via respective direct control signals 388, 390.
Referring now to figure 9, the proximate smart device 12 may be a wireless communication device of the type including various fixed, mobile, and/or portable devices. To expand rather than limit the discussion of the proximate smart device 12, such devices may include, but are not limited to, cellular or mobile smart phones, tablet computers, smartwatches, and so forth. The proximate smart device 12 may include a processor 400, memory 402, storage 404, a transceiver 406, and a cellular antenna 408 interconnected by a busing architecture 410 that also supports the display 14, I/O panel 414, and a camera 416. It should be appreciated that although a particular architecture is explained, other designs and layouts are within the teachings presented herein.
In operation, the teachings presented herein permit the proximate smart device 12 such as a smart phone to form a pairing with the hearing aid 10 and operate the hearing aid 10. As shown, the proximate smart device 12 includes the memory 402 accessible to the processor 400 and the memory 402 includes processor-executable instructions that, when executed, cause the processor 400 to provide an interface for an operator that includes an interactive application for viewing the status of the hearing aid 10. The processor 400 is caused to present a menu for controlling the hearing aid 10. The processor 400 is then caused to receive an interactive instruction from the user and forward a control signal via the transceiver 406, for example, to implement the instruction at the hearing aid 10. The processor 400 may also be caused to generate various reports about the operation of the hearing aid 10. The processor 400 may also be caused to translate or access a translation service for the audio.
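The control flow above, in which the paired smart device forwards an interactive instruction as a control signal, can be pictured as a simple command dispatch on the hearing aid side. The command names and state fields below are hypothetical; this disclosure does not define a wire format or command set.

```python
# Hypothetical sketch of the control path: a paired smart device sends a
# command, and the hearing aid dispatches it against its settings state.
# Command names and state fields are assumptions for illustration.

state = {"volume": 5, "mode": "dominant"}

def handle_command(command, value=None):
    """Apply a received control signal and return the updated settings."""
    if command == "set_volume":
        state["volume"] = value
    elif command == "set_mode":
        state["mode"] = value
    return dict(state)

handle_command("set_volume", 7)
print(handle_command("set_mode", "background"))
```

In a real implementation the transport would be the pairing described above (for example Bluetooth low energy), with the smart device application presenting the menu and forwarding the resulting control signal via its transceiver.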
In a still further embodiment of processor-executable instructions, the processor-executable instructions cause the processor 400 to provide an interface for the user U of the hearing aid 10 to select a mode of operation. In one embodiment, as discussed, the hearing aid 10 has the dominant sound mode of operation 26, the immediate background mode of operation 28, and the background mode of operation 30. As previously discussed, in the dominant sound mode of operation 26, the hearing aid 10 identifies a loudest sound in the processed digital signal and increases a volume of the loudest sound in the signal being processed. In the immediate background mode of operation 28, the hearing aid 10 identifies
sound in an immediate surrounding to the hearing aid 10 and suppresses the sound in the signal being processed. In the background mode of operation 30, the hearing aid 10 identifies extraneous ambient sound received at the hearing aid 10 and suppresses the extraneous ambient sound in the signal being processed. In a still further embodiment of processor-executable instructions, the processor- executable instructions cause the processor 400 to create a pairing via the transceiver 406 with the hearing aid 10. Then, the processor-executable instructions may cause the processor 400 to transform through compression with distributed computing between the processor 400 and the hearing aid 10, the digital signal into a processed digital signal having the qualified sound range, which includes the preferred hearing range as well as the subjective assessment of sound quality. The left ear preferred hearing range and the right ear preferred hearing range may comprise a frequency transfer component, a sampling rate component, a cut-off harmonics component, an additional harmonics component, and/or a harmonics transfer component. Further, the processor-executable instructions may cause the processor 400 to process a frequency transfer component, a sampling rate component, a cut-off harmonics component, an additional harmonics component, and/or a harmonics transfer component. The subjective assessment according to the user may include a completed assessment of a degree of annoyance caused to the user by an impairment of wanted sound. The subjective assessment according to the user may also include a completed assessment of a degree of pleasantness caused to the patient by an enablement of wanted sound. That is, the subjective assessment according to the user may include a completed assessment to determine best sound quality to the user.
Further still, the processor-executable instructions cause the processor 400 to create the pairing via the transceiver 406 with the hearing aid 10 and cause the processor 400 to transform through compression with distributed computing between the processor 400 and the hearing aid 10, the digital signal into a processed digital signal having the qualified sound range including the preferred hearing range and subjective assessment of sound quality. The preferred hearing range may be a range or ranges of sound corresponding to highest hearing capacity of an ear of a patient modified with a subjective assessment of sound quality according to the patient. The preferred hearing range may further include harmonics, such as a cut-off harmonics component, an additional harmonics component, or a harmonics transfer component, for example. The preferred hearing range may also include a frequency transfer component, a sampling rate component, a signal amplification component. The subjective
assessment according to the user may include a completed assessment of a degree of annoyance caused to the user by an impairment of wanted sound. The subjective assessment according to the user may also include a completed assessment of a degree of pleasantness caused to the patient by an enablement of wanted sound. That is, the subjective assessment according to the user may include a completed assessment to determine best sound quality to the user.
In a still further embodiment, the processor-executable instructions cause the processor 400 to create the pairing via the transceiver 406 with the hearing aid 10 and cause the processor 400 to implement one of two solutions for addressing the tinnitus TS. The processor 400 may modify the output sound to make accommodations for the tinnitus TS by causing the output to be amplified at 0 dB at the tinnitus frequency. Alternatively, the processor 400 may apply an inverse amplitude signal at the tinnitus frequency to provide compensation, including elimination, for the tinnitus TS.
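By way of illustration and not limitation, the inverse amplitude approach may be sketched numerically. The following Python fragment, with the illustrative helper name residual_after_inversion and an assumed 6 kHz tinnitus pitch, merely checks the phase-inversion identity sin(x) + sin(x − π) = 0 that underlies the compensation; it is not the hearing aid's implementation.

```python
import math

def residual_after_inversion(freq_hz, duration_s=0.01, sample_rate_hz=48_000):
    """Sum a tone sin(x) with its inverse amplitude counterpart sin(x - pi)
    and return the largest residual amplitude over the sampled interval.
    Ideal cancellation leaves only floating-point noise."""
    n = int(duration_s * sample_rate_hz)
    peak = 0.0
    for i in range(n):
        x = 2 * math.pi * freq_hz * i / sample_rate_hz
        peak = max(peak, abs(math.sin(x) + math.sin(x - math.pi)))
    return peak

# An assumed 6 kHz tinnitus tone cancels to numerical noise.
print(residual_after_inversion(6_000))
```

In practice, the compensating signal would be generated at the measured tinnitus frequency and amplitude of the individual patient rather than at an assumed pitch.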
Referring now to figure 10, in some embodiments, a sampling rate circuit 430, which may form a portion of the hearing aid 10, may have an analog signal 432 as an input and a digital signal 434 as an output. More particularly, an analog-to-digital converter (ADC) 436 receives the analog signal 432 and a signal from a frequency spectrum analyzer 438 as inputs. The ADC 436 provides outputs including the digital signal 434 and a signal to the frequency spectrum analyzer 438. The frequency spectrum analyzer 438 forms a feedback loop with a sampling rate controller 442 and a sampling rate generator 444. As shown, the frequency spectrum analyzer 438 analyzes the range of the received analog signal 432 and, through the feedback loop using the sampling rate controller 442 and sampling rate generator 444, the sampling rate at the ADC 436 is optimized.
By way of further explanation, with respect to sampling rate (SR), total sound ST may be defined as follows:
ST = FB + H1 + H2 + ... + HN, wherein:
ST = Total Sound;
FB = Base Frequency;
H1 = 1st Harmonic; H2 = 2nd Harmonic; and
HN = Nth Harmonic, where each HN is a mathematical multiple of FB.
That is, total sound ST is the sum of cardinal sound (CS) and an N stage of Background Noise (BN), such that the following applies:
ST = CS + BNG + BNI, wherein:
BNG = general background noise;
BNI = immediate background noise; and
CS = highest amplitude sound within a defined timeframe. Within this framework, differentiation of the number of background noise (BN) stages is a matter of design decision, not a matter of structural change.
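By way of illustration and not limitation, the two decompositions of total sound (ST) above may be sketched as follows; the helper names and the convention that each harmonic is an integer multiple of the base frequency (H_n = n x FB) are assumptions for the sketch.

```python
def harmonic_components(base_hz, harmonic_indices):
    """ST = FB + H1 + H2 + ... + HN: the base frequency plus harmonics,
    each harmonic taken as an integer multiple of FB (assumed convention)."""
    return [base_hz] + [n * base_hz for n in harmonic_indices]

def total_sound(cs, bn_general, bn_immediate):
    """ST = CS + BNG + BNI: cardinal sound plus the two background-noise
    stages (amplitudes in arbitrary units)."""
    return cs + bn_general + bn_immediate

# A 170 Hz base with harmonics at 2x, 4x, and 8x the base frequency:
print(harmonic_components(170, [2, 4, 8]))  # [170, 340, 680, 1360]
```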
Therefore, with respect to sampling rate (SR), the following applies:
SR = N x highest frequency that the filter from ST = FB + H1 + H2 + ... + HN will allow. In this manner, the hearing aid sampling rate (SR) may be designed to be between 1kHz - 40kHz; however, the range may be modified based on application. The sampling rate (SR) change may be controlled by the ratio between the cardinal sound (CS) and background noise (BN) received in the analog signal 432. The sampling rate circuit 430 provides a high accuracy of optimization of the base frequency (FB) and harmonics (H1, H2, ..., HN) components of the cardinal sound (CS) as well as the base frequency (FB) and harmonics (H1, H2, ..., HN) components of the background noise (BN). In some embodiments, this ensures that the higher the background noise (BN), the higher the sampling rate (SR) in order to properly serve the two-stage background noise (BN) control.
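By way of illustration and not limitation, the sampling-rate rule SR = N x (highest frequency passed by the filter), bounded by the 1 kHz - 40 kHz design range and raised as background noise grows relative to cardinal sound, may be sketched as follows. The N = 2 factor and the specific scaling by the CS/BN ratio are assumptions for the sketch; the text specifies only the rule, the band, and the direction of the adjustment.

```python
def select_sampling_rate(highest_passed_hz, cs_to_bn_ratio, n_factor=2,
                         sr_min_hz=1_000, sr_max_hz=40_000):
    """Sketch of SR = N x highest filtered frequency, clamped to the
    1 kHz - 40 kHz design band. When background noise dominates
    (cs_to_bn_ratio < 1), the rate is raised, at most doubling it;
    this particular scaling rule is an illustrative assumption."""
    sr = n_factor * highest_passed_hz
    if cs_to_bn_ratio < 1.0:              # background noise dominates
        sr = sr * (2.0 - cs_to_bn_ratio)  # raise SR, at most doubling it
    return max(sr_min_hz, min(sr_max_hz, sr))

# Clean signal vs. noisy signal at the same filter cut-off:
print(select_sampling_rate(4_000, 2.0))  # 8000
print(select_sampling_rate(4_000, 0.5))  # 12000.0
```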
Referring now to figure 11, in one embodiment of harmonics processing 450 which may be incorporated into the hearing aid 10, the ADC 436 receives total sound (ST) as an input. The ADC 436 then performs the frequency spectrum analysis 452 which is under the control of the frequency spectrum analyzer 438, the sampling rate controller 442, and the sampling rate generator 444 presented in figure 10. The ADC 436 outputs a digital total sound (ST) signal that undergoes the frequency spectrum analysis 452 which is subject to calculation 454. In this process, the base frequency (FB) and harmonics (H1, H2, ..., HN) components are separated. Using the algorithms presented hereinabove and having a converted base frequency (CFB) set at block 456 as a target frequency range, the harmonics processing 450 calculates at block 454, a converted actual frequency (CFA) and a differential converted harmonics (DCHN) to create at block 458, a converted total sound (CST), which is the output of the harmonics processing 450.
More particularly, total sound (ST) may be defined as follows:
ST = FB + H1 + H2 + ... + HN, wherein ST = total sound;
FB = base frequency range, with
FB = range between FBL and FBH, with FBL being the lowest frequency value in the base frequency and FBH being the highest frequency value in the base frequency;
HN = harmonics of FB, with each HN being a mathematical multiple of FB; FA = an actual frequency value being examined;
HA1 = 1st harmonic of FA;
HA2 = 2nd harmonic of FA; and
HAN = Nth harmonic of FA, with HAN being a mathematical multiple of FA. In many hearing impediment cases, the total sound (ST) may be at any frequency range; furthermore, the true hearing ranges of the two ears may be entirely different. Therefore, the hearing aid 10 presented herein may transfer the base frequency range (FB) along with several of the harmonics (HN) into the actual hearing range (AHR) by converting the base frequency range (FB) and several chosen harmonics (HN) into the actual hearing range (AHR) as one coherent converted total sound (CST) by using the algorithm defined by the following equations:
Equation (1): CFA = M x FA
Equation (2): CST = CFA + CHA1 + CHA2 + ... + CHAN
Equation (3): CHAN = M x HN
wherein for Equation (1), Equation (2), and Equation (3): M = multiplier between CFA and FA;
CST = converted total sound;
CFB = converted base frequency;
CHA1 = 1st converted harmonic;
CHA2 = 2nd converted harmonic;
CHAN = Nth converted harmonic;
CFBL = lowest frequency value in CFB;
CFBH = highest frequency value in CFB; and CFA = converted actual frequency.
By way of example and not by way of limitation, an application of the algorithm utilizing Equation (1), Equation (2), and Equation (3) is presented. For this example, the following assumptions are utilized:
FBL = 170Hz FBH = 330Hz
CFBL = 600Hz CFBH = 880Hz FA = 180Hz
Therefore, for this example, the following will hold true: H1 = 360Hz
H4 = 720Hz H8 = 1,440Hz H16 = 2,880Hz H32 = 5,760Hz Using the algorithm, the following values may be calculated:
CFA = 635Hz CHA1 = 1,267Hz CHA4 = 2,534Hz CHA8 = 5,068Hz CHA16 = 10,137Hz
CHA32 = 20,275Hz
To calculate the differentials (D) between the harmonics (HN) and the converted harmonics (CHAN), the following equation is employed:
CHAN - HN = D. This will result in differential converted harmonics (DCH) as follows:
DCH1 = 907Hz DCH4 = 1,814Hz DCH8 = 3,628Hz
DCH16 = 7,257Hz
DCH32 = 14,515Hz
In some embodiments, a high-pass filter may cut all differential converted harmonics (DCH) above a predetermined frequency. The frequency of 5,000Hz may be used as a benchmark. In this case the frequencies participating in converted total sound (CST) are as follows:
CFA = 635Hz
DCH1 = 907Hz
DCH4 = 1,814Hz DCH8 = 3,628Hz
The harmonics processing 450 may provide the conversion for each participating frequency in the total sound (ST), distributing all participating converted actual frequencies (CFA) and differential converted harmonics (DCHN) in the converted total sound (CST) in the same ratio as they participated in the original total sound (ST). In some implementations, should more than seventy-five percent (75%) of all the differential converted harmonics (DCHN) be out of the high-pass filter range, the harmonics processing 450 may use an adequate multiplier (between 0.1-0.9) and add the created new differential converted harmonics (DCHN) to the converted total sound (CST).
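By way of illustration and not limitation, the differential-converted-harmonics step of the worked example may be reproduced as follows; the helper name is illustrative, and the harmonic and converted-harmonic values are taken directly from the example above, with the 5,000 Hz benchmark applied as the cut-off.

```python
def differential_converted_harmonics(harmonics_hz, converted_hz, cutoff_hz=5_000):
    """Compute DCH_n = CHA_n - H_n for each harmonic pair, then keep only
    the differentials at or below the cut-off (the text's 5,000 Hz benchmark)."""
    dch = [cha - h for h, cha in zip(harmonics_hz, converted_hz)]
    return dch, [d for d in dch if d <= cutoff_hz]

# Worked-example values from the text (Hz):
H   = [360, 720, 1_440, 2_880, 5_760]        # H1, H4, H8, H16, H32
CHA = [1_267, 2_534, 5_068, 10_137, 20_275]  # CHA1 ... CHA32
dch, kept = differential_converted_harmonics(H, CHA)
print(dch)   # [907, 1814, 3628, 7257, 14515]
print(kept)  # [907, 1814, 3628]
```

Together with CFA = 635 Hz, the kept differentials match the frequencies the text lists as participating in the converted total sound (CST).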
Referring now to figure 12, in one embodiment of signal processing 470 which may be incorporated into the hearing aid 10, an initial analog signal 472 is received. The initial analog signal 472 is converted by an ADC 474 before undergoing signal preparation by a signal preparation circuit. Such signal preparation may include the operations presented in figure 10. The processed signal may be modified based on a subjective assessment of sound quality before undergoing a frequency shift and signal amplification at circuit blocks 474, 480. Harmonics enhancement circuitry 482 processes the signal as presented in figure 11, for example, before the signal is converted from digital to analog at a DAC 484. The signal is then outputted as an analog signal 486.
Referring now to figure 13, one embodiment of an operational flow 500 for the hearing aid 10 is depicted. With respect to left sound input, left sound input is received at a preamplifier 502 for processing prior to the processed signal being driven to a digital signal processor 504, which performs an analog-to-digital conversion 530 prior to adjusting background noise according to a filter at block 532. Various filtering may occur, including general 534, immediate 536, and cardinal sound 538. The filtered signal is then driven to the
digital signal processor 520 for directional control that compares left and right signals, and time delays between left and right signals. The result is a distributed left and right signal, which is based on the established left and right hearing capacity of the patient. The signal is then driven back to the digital signal processor 504 for left ear algorithm processing, which may include transforming the digital signal into a processed digital signal having the qualified sound range having the preferred hearing range with optional harmonics enhancement and optional modification with a subjective assessment of sound quality according to the patient to provide the best signal quality possible. The left ear algorithm processing may also include processing to address tinnitus, as discussed above. A memory module 542 provides the instructions for the transformation, which may be uploaded by the algorithm upload module 522. An amplifier 506 receives the processed digital signal and delivers an amplified processed digital signal to a speaker 508 for left output sound.
Similarly, with respect to right sound input, right sound input is received at a preamplifier 512 for processing prior to the processed signal being driven to a digital signal processor 514, which performs an analog-to-digital conversion 550 prior to adjusting background noise according to a filter at block 552. Various filtering may occur, including general 554, immediate 556, and cardinal sound 558. The filtered signal is then driven to the digital signal processor 520 for directional control that compares left and right signals, and time delays between left and right signals. The result is a distributed left and right signal, which is based on the established left and right hearing capacity of the patient. The right portion of the signal is then driven back to the digital signal processor 514 for right ear algorithm processing, which may include transforming the digital signal into a processed digital signal having the qualified sound range including the preferred hearing range with optional harmonics enhancement and optional modification with a subjective assessment of sound quality according to the patient to provide the best signal quality possible. The right ear algorithm processing may also include processing to address tinnitus, as discussed hereinabove. A memory module 562 provides the instructions for the transformation, which may be uploaded by the algorithm upload module 522. An amplifier 516 receives the processed digital signal and delivers an amplified processed digital signal to a speaker 518 for right output sound.
Referring now to figure 14, as previously discussed, the hearing aid may apply an inverse amplitude signal at a tinnitus frequency to provide compensation, including elimination, of the tinnitus TS in patients. For hearing impaired patients and patients without reduced hearing, as graph 600 demonstrates, normal sound is a multitude of sinusoidal signals. While several characteristics, such as frequency, amplitude, and signal-to-noise ratio, for example, describe sound, an applied phase difference between two equal-frequency and equal-amplitude signals may eliminate tinnitus. As shown, an original signal 602 is f(x) = sin(x) with signals 604, 606 representing shifts along the x-axis, which utilize equal amplitude and frequency. In this manner, an inverse amplitude signal may include a signal shift along the x-axis according to the formula sin(x) + sin(x - π) = 0, where the tinnitus signal f(x) = sin(x) corresponds to the tinnitus frequency. Utilization of the inverse amplitude signal as discussed above may partially or fully eliminate tinnitus.
The order of execution or performance of the methods and data flows illustrated and described herein is not essential, unless otherwise specified. That is, elements of the methods and data flows may be performed in any order, unless otherwise specified, and the methods may include more or fewer elements than those disclosed herein. For example, it is contemplated that executing or performing a particular element before, contemporaneously with, or after another element are all possible sequences of execution.
While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is, therefore, intended that the appended claims encompass any such modifications or embodiments.
Claims
1. A hearing aid (10) for a patient, the hearing aid (10) comprising: a body (112) including an electronic signal processor (130), a microphone (38), and a speaker (234) housed therein, a signaling architecture (410) communicatively interconnecting the microphone (38) to the electronic signal processor (130) and the electronic signal processor (130) to the speaker (234); the electronic signal processor (130) being programmed with a qualified sound range, the qualified sound range being a range of sound corresponding to a preferred hearing range of an ear of the patient; the electronic signal processor (130) being programmed with a tinnitus frequency, the tinnitus frequency being a range of sound corresponding to a sensation of tinnitus in the ear of the patient; and the electronic signal processor (130) including memory (402) accessible to a processor (296), the memory (402) including processor-executable instructions that, when executed, cause the processor (296) to: receive an input analog signal (432) from the microphone (38), convert the input analog signal (432) to a digital signal (434), transform the digital signal (434) into a processed digital signal (434) having the qualified sound range, convert the processed digital signal (434) to an output analog signal (486), amplify the output analog signal (486) at 0 dB at the tinnitus frequency, and drive the output analog signal (486) to the speaker (234).
2. The hearing aid (10) as recited in claim 1, wherein the qualified sound range further comprises a preferred hearing range of an ear of the patient modified with a subjective assessment of sound quality according to the patient.
3. The hearing aid (10) as recited in claim 1, wherein the preferred hearing range further comprises a range of sound corresponding to the highest hearing capacity of the ear of the patient between 50Hz and 10,000Hz.
4. The hearing aid (10) as recited in claim 1, wherein the preferred hearing range further comprises a range tested at 5Hz increments.
5. The hearing aid (10) as recited in claim 1, wherein the preferred hearing range further comprises a plurality of narrow hearing ranges.
6. The hearing aid (10) as recited in claim 1, wherein the subjective assessment according to the patient further comprises a completed assessment of a degree of annoyance caused to the patient by an impairment of wanted sound.
7. The hearing aid (10) as recited in claim 1, wherein the subjective assessment according to the patient further comprises a completed assessment of a degree of pleasantness caused to the patient by an enablement of wanted sound.
8. The hearing aid (10) as recited in claim 1, wherein the subjective assessment according to the patient further comprises a completed assessment to determine best sound quality to the patient.
9. A hearing aid (10) for a patient, the hearing aid (10) comprising: a body (112) including an electronic signal processor (130), a microphone (38), and a speaker (234) housed therein, a signaling architecture (410) communicatively interconnecting the microphone (38) to the electronic signal processor (130) and the electronic signal processor (130) to the speaker (234); a transceiver (150) communicatively interconnected to the signaling architecture (410), the transceiver (150) being configured to provide a pairing with a proximate smart device (12); the electronic signal processor (130) being programmed with a qualified sound range, the qualified sound range being a range of sound corresponding to a preferred hearing range of an ear of the patient modified with a subjective assessment of sound quality according to the patient; the electronic signal processor (130) being programmed with a tinnitus frequency, the tinnitus frequency being a range of sound corresponding to a sensation of tinnitus in the ear of the patient; and the electronic signal processor (130) including memory (402) accessible to a processor (296), the memory (402) including processor-executable instructions that, when executed, cause the processor (296) to: receive an input analog signal (432) from the microphone (38), convert the input analog signal (432) to a digital signal (434), transform the digital signal (434) into a processed digital signal (434) having the qualified sound range, convert the processed digital signal (434) to an output analog signal (486), amplify the output analog signal (486) at 0 dB at the tinnitus frequency,
drive the output analog signal (486) to the speaker (234), create a pairing via the transceiver (150) with the proximate smart device (12), and receive a control signal from the proximate smart device (12).
10. A hearing aid (10) for a patient, the hearing aid (10) comprising: a body (112) including an electronic signal processor (130), a microphone (38), and a speaker (234) housed therein, a signaling architecture (410) communicatively interconnecting the microphone (38) to the electronic signal processor (130) and the electronic signal processor (130) to the speaker (234); a transceiver (150) communicatively interconnected to the signaling architecture (410), the transceiver (150) being configured to provide a pairing with a proximate smart device (12); the electronic signal processor (130) being programmed with a qualified sound range, the qualified sound range being a range of sound corresponding to a preferred hearing range of an ear of the patient modified with a subjective assessment of sound quality according to the patient; the electronic signal processor (130) being programmed with a tinnitus frequency, the tinnitus frequency being a range of sound corresponding to a sensation of tinnitus in the ear of the patient; and the electronic signal processor (130) including memory (402) accessible to a processor (296), the memory (402) including processor-executable instructions that, when executed, cause the processor (296) to: create a pairing via the transceiver (150) with the proximate smart device (12), receive an input analog signal (432) from the microphone (38), convert the input analog signal (432) to a digital signal (434), transform, via distributed processing between the hearing aid (10) and the proximate smart device (12), the digital signal (434) into a processed digital signal (434) having the qualified sound range, convert the processed digital signal (434) to an output analog signal (486), amplify the output analog signal (486) at 0 dB at the tinnitus frequency, and drive the output analog signal (486) to the speaker (234).
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163184064P | 2021-05-04 | 2021-05-04 | |
PCT/US2021/062582 WO2022235298A1 (en) | 2021-05-04 | 2021-12-09 | Hearing aid and method for use of same |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4335117A1 true EP4335117A1 (en) | 2024-03-13 |
Family
ID=83932166
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21939968.0A Pending EP4335117A1 (en) | 2021-05-04 | 2021-12-09 | Hearing aid and method for use of same |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP4335117A1 (en) |
WO (1) | WO2022235298A1 (en) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007147077A2 (en) * | 2006-06-14 | 2007-12-21 | Personics Holdings Inc. | Earguard monitoring system |
DK200970303A (en) * | 2009-12-29 | 2011-06-30 | Gn Resound As | A method for the detection of whistling in an audio system and a hearing aid executing the method |
-
2021
- 2021-12-09 WO PCT/US2021/062582 patent/WO2022235298A1/en active Application Filing
- 2021-12-09 EP EP21939968.0A patent/EP4335117A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2022235298A1 (en) | 2022-11-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3588982B1 (en) | A hearing device comprising a feedback reduction system | |
US11564043B2 (en) | Hearing device and a hearing system comprising a multitude of adaptive two channel beamformers | |
US11102589B2 (en) | Hearing aid and method for use of same | |
US11095992B2 (en) | Hearing aid and method for use of same | |
US11729557B2 (en) | Hearing device comprising a microphone adapted to be located at or in the ear canal of a user | |
EP3796677A1 (en) | A method of adaptive mixing of uncorrelated or correlated noisy signals, and a hearing device | |
US10880658B1 (en) | Hearing aid and method for use of same | |
US11128963B1 (en) | Hearing aid and method for use of same | |
US11153694B1 (en) | Hearing aid and method for use of same | |
EP4335117A1 (en) | Hearing aid and method for use of same | |
AU2020354942A1 (en) | Hearing aid and method for use of same | |
EP4297436A1 (en) | A hearing aid comprising an active occlusion cancellation system and corresponding method | |
WO2022066223A1 (en) | System and method for aiding hearing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
17P | Request for examination filed |
Effective date: 20231201 |
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) |