US20210297791A1 - System and Method for Aiding Hearing - Google Patents
System and Method for Aiding Hearing
- Publication number
- US20210297791A1 (U.S. application Ser. No. 17/343,329)
- Authority
- US
- United States
- Prior art keywords
- hearing
- harmonic
- processor
- ear
- patient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/162—Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
- H04R1/1008—Earpieces of the supra-aural or circum-aural type
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
- H04R1/1016—Earpieces of the intra-aural type
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/70—Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B1/00—Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
- H04B1/38—Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
- H04B1/3827—Portable transceivers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/021—Behind the ear [BTE] hearing aids
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/41—Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/43—Signal processing in hearing aids to enhance the speech intelligibility
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/55—Communication between hearing aids and external devices via a network for data exchange
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/55—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
- H04R25/552—Binaural
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/55—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
- H04R25/554—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired using a wireless connection, e.g. between microphone and amplifier or using Tcoils
Definitions
- This invention relates, in general, to hearing aids and, in particular, to systems and methods that aid hearing to provide signal processing and feature sets to enhance speech and sound intelligibility.
- Hearing loss can affect anyone at any age, although elderly adults more frequently experience hearing loss. Untreated hearing loss is associated with lower quality of life and can have far-reaching implications for the individual experiencing hearing loss as well as those close to the individual. As a result, there is a continuing need for improved hearing aids and methods for use of the same that enable patients to better hear conversations and the like.
- a programming interface is configured to communicate with a device.
- the system screens, via a speaker and a user interface associated with the device, a left ear and, separately, a right ear of a patient.
- the system determines a left ear hearing range and a right ear hearing range.
- the screening utilizes harmonic frequencies of a harmonic frequency series, where the harmonic frequency series includes a fundamental frequency and integer multiples of the fundamental frequency.
- the harmonic frequencies may include classical music instrument sounds.
- FIG. 1A is a front perspective schematic diagram depicting one embodiment of a hearing aid programmed with one embodiment of a system for aiding hearing, according to the teachings presented herein;
- FIG. 1B is a top plan schematic diagram depicting the hearing aid of FIG. 1A being utilized according to the teachings presented herein;
- FIG. 2 is a front perspective view of one embodiment of the hearing aid depicted in FIG. 1A ;
- FIG. 3A is a front-left perspective view of another embodiment of the hearing aid depicted in FIG. 1A ;
- FIG. 3B is a front-right perspective view of the embodiment of the hearing aid depicted in FIG. 3A ;
- FIG. 4 is a front perspective view of another embodiment of a hearing aid programmed with one embodiment of a system for aiding hearing, according to the teachings presented herein;
- FIG. 5 is a schematic diagram depicting one embodiment of the system for aiding hearing, according to the teachings presented herein;
- FIG. 6 is a flow chart depicting one embodiment of a method for calibrating and setting the hearing aid for a preferred hearing range or preferred hearing ranges, according to the teachings presented herein;
- FIG. 7 is a flow chart depicting another embodiment of a method for calibrating and setting the hearing aid for a preferred hearing range or preferred hearing ranges, according to the teachings presented herein;
- FIG. 8 is a flow chart depicting still another embodiment of a method for calibrating and setting the hearing aid for a preferred hearing range or preferred hearing ranges, according to the teachings presented herein;
- FIG. 9 is a front perspective schematic diagram depicting one embodiment of a hearing aid being programmed with one embodiment of a system for aiding hearing, according to the teachings presented herein;
- FIG. 10 is a functional block diagram depicting one embodiment of the hearing aid depicted in FIG. 9 ;
- FIG. 11 is a functional block diagram of a smart device, which forms a portion of the system for aiding hearing depicted in FIG. 9 ;
- FIG. 12 is a functional block diagram depicting one embodiment of a server, which forms a portion of the system for aiding hearing depicted in FIG. 9 ;
- FIG. 13 is a front perspective schematic diagram depicting another embodiment of a system for aiding hearing, according to the teachings presented herein;
- FIG. 14 is a functional block diagram depicting one embodiment of hearing aid test equipment depicted in FIG. 13 ;
- FIG. 15 is a conceptual module diagram depicting a software architecture of a testing equipment application of some embodiments.
- Referring to FIG. 1A and FIG. 1B , therein is depicted one embodiment of a hearing aid, which is schematically illustrated and designated 10 .
- the hearing aid 10 is programmed according to a system for aiding hearing.
- a user U, who may be considered a patient requiring a hearing aid, is wearing the hearing aid 10 and sitting at a table T at a restaurant or café, for example, and is engaged in a conversation with an individual I 1 and an individual I 2 .
- the user U is speaking sound S 1
- the individual I 1 is speaking sound S 2
- the individual I 2 is speaking sound S 3 .
- a bystander B 1 is engaged in a conversation with a bystander B 2 .
- the bystander B 1 is speaking sound S 4 and the bystander B 2 is speaking sound S 5 .
- An ambulance A is driving by the table T and emitting sound S 6 in direction L.
- the sounds S 1 , S 2 , and S 3 may be described as the immediate background sounds.
- the sounds S 4 , S 5 , and S 6 may be described as the background sounds.
- the sound S 6 may be described as the dominant sound as it is the loudest sound at table T.
- the sounds S 1 , S 2 , S 3 , S 4 , S 5 , S 6 represent life sounds, which are complex and continuously changing mixtures of base frequencies and harmonics.
- the sounds S 1 , S 2 , S 3 , S 4 , S 5 , S 6 are not discrete frequencies.
- the hearing aid 10 is programmed with a preferred hearing range for each ear in a two-ear embodiment and for one ear in a one-ear embodiment.
- the preferred hearing range may be a range of sound corresponding to the highest hearing capacity of an ear of the user U between 50 Hz and 5,000 Hz or between 50 Hz and 10,000 Hz, for example.
- the preferred hearing range for each ear may be multiple ranges of sound corresponding to the highest hearing capacity ranges of an ear of the user U between 50 Hz and 5,000 Hz or between 50 Hz and 10,000 Hz, for example.
- the various sounds S 1 , S 2 , S 3 , S 4 , S 5 , S 6 received may be transformed and divided into the multiple ranges of sound.
- a left ear hearing range and a right ear hearing range are determined by way of screening.
- the screening utilizes harmonic frequencies of a harmonic frequency series, where the harmonic frequency series includes a fundamental frequency and integer multiples of the fundamental frequency.
- the harmonic frequencies may include classical music instrument sounds.
- the testing identifies a preferred hearing range for a patient, on an ear-by-ear basis, with the use of life-sounds, rather than clinical discrete frequencies.
- the hearing aid 10 may create a pairing with a proximate smart device 12 , such as a smart phone (depicted), smart watch, or tablet computer.
- the proximate smart device 12 includes a display 14 presenting an interface 16 with controls, such as an ON/OFF switch or volume controls 18 , mode of operation controls 24 , and general controls 20 .
- the user U may send a control signal wirelessly from the proximate smart device 12 to the hearing aid 10 to control a function, like the volume controls 18 .
- the hearing aid 10 and the proximate smart device 12 may leverage the wireless communication link therebetween and use processing distributed between the hearing aid 10 and the proximate smart device 12 to process the signals and perform other analysis.
- the hearing aid 10 is programmed according to the system for aiding hearing and the hearing aid 10 includes a left body 32 and a right body 34 connected to a band member 36 that is configured to partially circumscribe the head of the user U. Each of the left body 32 and the right body 34 covers an external ear of the user U and is sized to engage therewith.
- microphones 38 , 40 , 42 which gather sound directionally and convert the gathered sound into an electrical signal, are located on the left body 32 . With respect to gathering sound, the microphone 38 may be positioned to gather forward sound, the microphone 40 may be positioned to gather lateral sound, and the microphone 42 may be positioned to gather rear sound. Microphones may be similarly positioned on the right body 34 .
- Various internal compartments 44 provide space for housing electronics, which will be discussed in further detail hereinbelow.
- Various controls 46 provide a patient interface with the hearing aid 10 .
- having each of the left body 32 and the right body 34 cover an external ear of the user U and be sized to engage therewith confers certain benefits.
- Sound waves enter through the outer ear and reach the middle ear to vibrate the eardrum.
- the eardrum then vibrates the ossicles, which are small bones in the middle ear.
- the sound vibrations travel through the ossicles to the inner ear.
- When the sound vibrations reach the cochlea, they push against specialized cells known as hair cells.
- the hair cells turn the vibrations into electrical nerve impulses.
- the auditory nerve connects the cochlea to the auditory centers of the brain. When these electrical nerve impulses reach the brain, they are experienced as sound.
- the outer ear serves a variety of functions.
- the various air-filled cavities composing the outer ear have a natural or resonant frequency to which they respond best. This is true of all air-filled cavities.
- the resonance of each of these cavities is such that each structure increases the sound pressure at its resonant frequency by approximately 10 to 12 dB.
- In particular, the outer ear serves to: (a) boost or amplify high-frequency sounds; (b) provide the primary cue for determining the elevation of a sound's source; and (c) assist in distinguishing sounds that arise from in front of the listener from those that arise from behind the listener.
- Headsets are used in hearing testing in medical and associated facilities for a reason: tests have shown that completely closing the ear canal in order to prevent any form of outside noise plays a direct role in acoustic matching.
- the more severe the hearing problem, the closer the hearing aid speaker must be to the eardrum.
- the closer the speaker is to the eardrum, the more the device plugs the canal and negatively impacts the ear's pressure system. That is, the various chambers of the ear have a defined operational pressure determined, in part, by the ear's structure. By plugging the ear canal, the pressure system in the ear is distorted and the operational pressure of the ear is negatively impacted.
- “plug size” hearing aids therefore have limitations in that they distort the defined operational pressure within the ear.
- the hearing aid 10 of FIG. 2 creates a closed chamber around the ear increasing the pressure within the chamber.
- the hearing aid 10 is programmed according to a system for aiding hearing.
- the hearing aid 10 includes a left body 52 having an ear hook 54 extending from the left body 52 to an ear mold 56 .
- the left body 52 and the ear mold 56 may each at least partially conform to the contours of the external ear and be sized to engage therewith.
- the left body 52 may be sized to engage with the contours of the ear in a behind-the-ear-fit.
- the ear mold 56 may be sized to be fitted for the physical shape of a patient's ear.
- the ear hook 54 may include a flexible tubular material that propagates sound from the left body 52 to the ear mold 56 .
- Microphones 58 which gather sound and convert the gathered sound into an electrical signal, are located on the left body 52 .
- An opening 60 within the ear mold 56 permits sound traveling through the ear hook 54 to exit into the patient's ear.
- An internal compartment 62 provides space for housing electronics, which will be discussed in further detail hereinbelow.
- Various controls 64 provide a patient interface with the hearing aid 10 on the left body 52 of the hearing aid 10 .
- the hearing aid 10 includes a right body 72 having an ear hook 74 extending from the right body 72 to an ear mold 76 .
- the right body 72 and the ear mold 76 may each at least partially conform to the contours of the external ear and be sized to engage therewith.
- the right body 72 may be sized to engage with the contours of the ear in a behind-the-ear-fit.
- the ear mold 76 may be sized to be fitted for the physical shape of a patient's ear.
- the ear hook 74 may include a flexible tubular material that propagates sound from the right body 72 to the ear mold 76 .
- Microphones 78 which gather sound and convert the gathered sound into an electrical signal, are located on the right body 72 .
- An opening 80 within the ear mold 76 permits sound traveling through the ear hook 74 to exit into the patient's ear.
- An internal compartment 82 provides space for housing electronics, which will be discussed in further detail hereinbelow.
- Various controls 84 provide a patient interface with the hearing aid 10 on the right body 72 of the hearing aid 10 . It should be appreciated that the various controls 64 , 84 and other components of the left and right bodies 52 , 72 may be at least partially integrated and consolidated. Further, it should be appreciated that the hearing aid 10 may have one or more microphones on each of the left and right bodies 52 , 72 to improve directional hearing in certain implementations and provide, in some implementations, 360-degree directional sound input.
- the left and right bodies 52 , 72 are connected at the respective ear hooks 54 , 74 by a band member 90 which is configured to partially circumscribe a head or a neck of the patient.
- An internal compartment 92 within the band member 90 may provide space for electronics and the like.
- the hearing aid 10 may include left and right earpiece covers 94 , 96 respectively positioned exteriorly to the left and right bodies 52 , 72 .
- Each of the left and right earpiece covers 94 , 96 isolate noise to block out interfering outside noises.
- the microphones 58 in the left body 52 and the microphones 78 in the right body 72 may cooperate to provide directional hearing.
- the hearing aid 10 includes a body 112 having an ear hook 114 extending from the body 112 to an ear mold 116 .
- the body 112 and the ear mold 116 may each at least partially conform to the contours of the external ear and be sized to engage therewith.
- the body 112 may be sized to engage with the contours of the ear in a behind-the-ear-fit.
- the ear mold 116 may be sized to be fitted for the physical shape of a patient's ear.
- the ear hook 114 may include a flexible tubular material that propagates sound from the body 112 to the ear mold 116 .
- a microphone 118 which gathers sound and converts the gathered sound into an electrical signal, is located on the body 112 .
- An opening 120 within the ear mold 116 permits sound traveling through the ear hook 114 to exit into the patient's ear.
- An internal compartment 122 provides space for housing electronics, which will be discussed in further detail hereinbelow.
- Various controls 124 provide a patient interface with the hearing aid 10 on the body 112 of the hearing aid 10 .
- a frequency generator 152 may be an electronic device that generates frequency signals with set properties of amplitude, frequency, and wave shape.
- the frequency generator 152 may screen an ear of a patient, i.e., the user U, with harmonic frequencies of a harmonic frequency series.
- the harmonic frequencies may be musical sounds 154 .
- the musical sounds may be classical music instrument sounds, such as sounds from an instrument belonging to keyboard instruments, string instruments, woodwind instruments, or brass instruments, for example.
- the keyboard instruments may be musical instruments played using a keyboard, a row of levers that are pressed by the fingers, and may include a piano, organ, or harpsichord, for example.
- the string instruments may be chordophones or musical instruments that produce sound from vibrating strings when a performer plays or sounds the strings in some manner.
- the string instruments may include violins, violas, cellos, and basses, for example.
- the woodwind instruments may be musical instruments that contain some type of resonator or tubular structure in which a column of air is set into vibration by the player blowing into or over a mouthpiece set at or near the end of the resonator.
- the woodwind instruments may include flutes, clarinets, oboes, bassoons, and saxophones, for example.
- the brass instruments may be musical instruments that produce sound by sympathetic vibration of air in a tubular resonator in sympathy with the vibration of the player's lips.
- the brass instruments may include horns, trumpets, trombones, euphoniums, and tubas, for example.
- the frequency generator 152 is programmed to produce sounds and, in one embodiment, live sounds, which are non-discrete, based on an organ 156 , a trumpet 158 , and a violin 160 .
- non-discrete live sounds 162 are utilized to screen the ear of the user U.
- the non-discrete live sounds 162 include a harmonic frequency series between 50 Hz and 10,000 Hz, with the harmonic frequency series being a fundamental frequency and integer multiples of the fundamental frequency.
- the non-discrete live sounds 162 include a harmonic frequency series between 50 Hz and 5,000 Hz.
- the screening may be calibrated with multiple variables. Foremost, the test range of signals may be set. The selection of sound and music may be made.
- the harmonic frequencies screened may be decreasing frequencies or increasing frequencies.
- the harmonic frequencies may be a continuous sound or noncontinuous sound.
- the harmonic frequencies utilized for screening may include a single harmonic at a time or multiple harmonics at a time, which may or may not include the fundamental frequency.
- the amplification utilized in screening with the harmonic frequencies may be a constant amplification or an increasing amplification.
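- For illustration only, the calibration variables above can be thought of as a single screening configuration. The sketch below is a hypothetical grouping in Python; the names and default values are assumptions and are not prescribed by the present teachings.

```python
from dataclasses import dataclass

@dataclass
class ScreeningConfig:
    """Hypothetical grouping of the screening calibration variables described above."""
    fundamental_hz: float = 100.0     # base/fundamental frequency F_b of the harmonic series
    max_hz: float = 1000.0            # top of the chosen test range (Z Hz)
    increasing: bool = True           # sweep frequencies up (True) or down (False)
    continuous: bool = True           # continuous sound versus noncontinuous sound
    harmonics_per_tone: int = 1       # a single harmonic or multiple harmonics at a time
    include_fundamental: bool = True  # whether the fundamental itself is presented
    start_level_db: float = 20.0      # starting amplification level
    level_step_db: float = 0.0        # 0 for constant amplification, >0 for increasing
```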
- non-discrete live sounds 162 may include harmonic frequencies as follows: S = F b + F h1 + F h2 + . . . + F hn , where:
- S is the non-discrete live sound;
- F b is a base or fundamental frequency;
- F h1 is a first integer multiple of F b ;
- F h2 is a second integer multiple of F b ;
- F hn is an nth integer multiple of F b .
- non-discrete live sounds 162 may include other harmonic frequencies as, by way of example, follows: S = F b + F h1 , where:
- S is the non-discrete live sound;
- F b is a base or fundamental frequency;
- F h1 is a first integer multiple of F b .
- non-discrete live sounds 162 may include elements of the harmonic frequency series as follows: S = F b + F h2 + F h4 + . . . + F 2hn , where:
- S is the non-discrete live sound;
- F b is a base frequency;
- F h2 is a second integer multiple of F b ;
- F h4 is a fourth integer multiple of F b ;
- F 2hn is a 2nth integer multiple of F b .
- the harmonic frequencies being utilized for testing may include any number of frequencies in the harmonic frequency series, which includes a fundamental frequency and multiple integer multiples, including consecutive and non-consecutive integer multiples, of the fundamental frequency. That is, the selection of the harmonic frequencies may vary depending on the testing circumstances.
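- As a concrete illustration of the harmonic sums above, the following Python sketch synthesizes a non-discrete test sound from a fundamental F b and a chosen set of integer multiples. The sample rate, duration, and normalization are illustrative assumptions, not values taken from the present teachings.

```python
import numpy as np

def harmonic_test_sound(f_b, multiples, duration_s=1.0, sample_rate=44100):
    """Sum a fundamental f_b (Hz) with the listed integer multiples, e.g. [2, 4]."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    signal = np.sin(2 * np.pi * f_b * t)           # fundamental F_b
    for n in multiples:
        signal += np.sin(2 * np.pi * n * f_b * t)  # harmonic F_hn = n * F_b
    return signal / (len(multiples) + 1)           # normalize to avoid clipping

# S = F_b + F_h2 + F_h4, one of the example subsets above
s = harmonic_test_sound(100.0, [2, 4])
```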
- the user U indicates when the non-discrete live sounds are heard at a decision block 164 and the response or a lack of response is recorded at a recorder 166 . Based on the data collected by the recorder 166 , an algorithm may be created for the hearing aid 10 to assist with hearing.
- the system 150 provides a non-discrete frequency test technology to establish a precise hearing frequency range or precise hearing frequency ranges in a patient's hearing by working with a base frequency F b and the harmonics (F h1 +F h2 + . . . +F hn ), or a subset thereof, of the base frequency F b .
- the system 150 is designed to test, measure, and establish the patient's true hearing range.
- the system 150 in one implementation, employs music instrument tunes specific to corresponding frequencies or frequency ranges. The system 150 , therefore, provides hearing impaired patients with a given frequency and the harmonics of the given frequency to identify the patient's hearing range.
- the testing methodology is similar to real life situations.
- the systems and methods presented herein utilize non-discrete harmonic frequencies to test a patient's hearing. Additionally, by utilizing non-discrete harmonic frequencies to test a patient's hearing to better replicate life sounds, testing time is decreased.
- the third harmonic of 500 Hz is 1,500 Hz and the third harmonic of 2,000 Hz is 6,000 Hz, which is almost at the end point of a human testing range. Further, testing of human hearing over 5,000 Hz is unnecessary in about 90% of the cases as reverse slope hearing loss is uncommon.
- the method starts at block 180 , when a patient is going to undergo testing to determine the preferred hearing range or preferred hearing ranges for use of the hearing aid 10 .
- the frequency generator 152 and the recorder 166 interact with the methodology to provide the preferred hearing range 174 or a contribution thereto.
- the frequency generator 152 and the recorder 166 may be embodied on any combination of smart devices, servers, and hearing aid test equipment.
- a left ear or a right ear of a patient is tested with continuous sound being produced using increasing or decreasing frequencies between 100 Hz and 1,000 Hz, for example, for a sufficient time, such as 30 seconds.
- the patient may push a button when the sound is first heard.
- an initial frequency of 100 Hz at 20 dB is screened.
- the patient's ability to hear the initial frequency is recorded before the process continuously advances to the next frequency of a variable increment, which is 200 Hz at 20 dB, at block 184 and the patient's ability to hear is recorded at decision block 186 .
- 100 Hz is the base frequency and 200 Hz, 300 Hz, 400 Hz, 500 Hz, 600 Hz, 700 Hz, 800 Hz, 900 Hz, and 1,000 Hz are exemplary integer multiples of the fundamental frequency with the base frequency and the integer multiples forming the harmonic frequency series.
- the process advances continuously for the next incremental frequency in the harmonic frequency series, e.g., 300 Hz at 20 dB.
- the methodology continuously advances through 400 Hz at 20 dB.
- the process may continuously advance through the harmonic frequency series to block 196 and decision block 198 for 1,000 Hz at 20 dB.
- the testing methodology continues for the frequencies under test with the results being recorded.
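- The step-through screening of FIG. 6 may be pictured as a loop over the harmonic frequency series at a constant level, with the patient's response recorded at each step. The sketch below is a simplified illustration; the heard callback stands in for the patient's button press and is a hypothetical name, not part of the present teachings.

```python
def screen_ear(heard, f_b=100.0, max_hz=1000.0, level_db=20.0):
    """Step through F_b and its integer multiples at a constant level, recording responses."""
    results = {}
    freq = f_b
    while freq <= max_hz:
        results[freq] = bool(heard(freq, level_db))  # patient indicates when the sound is heard
        freq += f_b                                  # next member of the harmonic frequency series
    return results

# Example with a simulated patient who hears frequencies at or below 600 Hz at 20 dB
responses = screen_ear(lambda f, db: f <= 600.0)
```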
- Referring to FIG. 7 , another embodiment of a method for calibrating and setting the hearing aid 10 for a preferred hearing range or preferred hearing ranges utilizing the methodology presented herein is shown.
- amplification is increased in a step-by-step manner as a patient is tested in 100 Hz increments of a harmonic frequency series.
- equations exemplify this methodology:
- F b is the fundamental frequency
- F T is the testing frequency
- F h is an integer multiple of the fundamental frequency
- ZHz is the highest frequency in the chosen range
- a is an increased amplification
- y is an increased amplification
- the method starts at block 230 , when a patient is going to undergo testing to determine the preferred hearing range or preferred hearing ranges for use of the hearing aid 10 .
- the frequency generator 152 and the recorder 166 interact with the methodology to provide the preferred hearing range 174 or a contribution thereto.
- the frequency generator 152 and the recorder 166 may be embodied on any combination of smart devices, servers, and hearing aid test equipment.
- a left ear or a right ear of a patient is tested with continuous sound being produced using increasing or decreasing frequencies between 100 Hz and 1,000 Hz, for example, for a sufficient time, such as 30 seconds.
- the patient may push a button when the sound is first heard.
- an initial frequency of 100 Hz with at least one harmonic frequency of a harmonic series at 20 dB is screened.
- the patient's ability to hear the initial frequency is recorded before the process advances to the next frequency of a variable increment, which is 200 Hz with at least one harmonic frequency of a harmonic series at 20 dB with an increased amplification applied thereto as reflected by 20 dB+a, at block 234 and the patient's ability to hear is recorded at decision block 236 .
- the process advances continuously for the next incremental frequency in the harmonic frequency series, e.g., 300 Hz with at least one harmonic frequency of a harmonic series at 20 dB with an increased amplification applied thereto as reflected by 20 dB+b.
- the methodology advances through 400 Hz with at least one harmonic frequency of a harmonic series at 20 dB with an increased amplification applied thereto as reflected by 20 dB+c.
- the process may advance through the harmonic frequency series to block 246 and decision block 248 for 1,000 Hz with at least one harmonic frequency of a harmonic series at 20 dB with an increased amplification applied thereto as reflected by 20 dB+d.
- the testing methodology continues for the frequencies under test with the results being recorded.
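- The FIG. 7 variant may be pictured as the same screening loop with the amplification raised at each step. The single level_step_db parameter below stands in for the per-step increments a, b, c, and so on, purely for illustration.

```python
def screen_ear_increasing(heard, f_b=100.0, max_hz=1000.0, start_db=20.0, level_step_db=5.0):
    """Step through the harmonic series while raising the level at each step."""
    results = {}
    freq, level = f_b, start_db
    while freq <= max_hz:
        results[freq] = (level, bool(heard(freq, level)))  # record level and response
        freq += f_b              # next harmonic of F_b
        level += level_step_db   # 20 dB + a, 20 dB + b, ... in the notation of FIG. 7
    return results
```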
- Referring to FIG. 8 , a still further embodiment of a method for calibrating and setting the hearing aid 10 for a preferred hearing range or preferred hearing ranges utilizing the methodology presented herein is shown.
- constant amplification is utilized in a step-by-step manner as a patient is tested in 100 Hz increments of a harmonic frequency series.
- equations exemplify this methodology:
- F b is the fundamental or base frequency
- F T is the testing frequency
- F h is an integer multiple of the fundamental frequency
- ZHz is the highest frequency, Z, in the chosen range.
- the method starts at block 260 , when a patient is going to undergo testing to determine the preferred hearing range or preferred hearing ranges for use of the hearing aid 10 .
- the frequency generator 152 and the recorder 166 interact with the methodology to provide the preferred hearing range 174 or a contribution thereto.
- a left ear or a right ear of a patient is tested with continuous sound being produced using increasing or decreasing frequencies between 100 Hz and 1,000 Hz, for example, for a sufficient time, such as 30 seconds.
- the patient may push a button when the sound is first heard.
- an initial frequency of 100 Hz with at least one harmonic frequency of a harmonic series at 30 dB is screened.
- the patient's ability to hear the initial frequency is recorded before the process advances to the next incremental frequency, which is 200 Hz with at least one harmonic frequency of a harmonic series at 30 dB, at block 264 and the patient's ability to hear is recorded at decision block 266 .
- the process advances to the next incremental frequency in the testing of the applicable harmonic frequency series, e.g., 300 Hz with at least one harmonic frequency of a harmonic series at 30 dB.
- the methodology advances through 400 Hz with at least one harmonic frequency of a harmonic series at 30 dB.
- the process may advance through the harmonic frequency series to block 276 and decision block 278 for 1,000 Hz with at least one harmonic frequency of a harmonic series at 30 dB.
- the testing methodology continues for the frequencies under test with the results being recorded.
- Referring to FIG. 9 , one embodiment of a system 300 for aiding hearing is shown.
- the user U, who may be considered a patient requiring a hearing aid, is wearing the hearing aid 10 and sitting at a table T .
- the hearing aid 10 has a pairing with the proximate smart device 12 such that the hearing aid 10 and the proximate smart device 12 may determine the user's preferred hearing range for each ear and subsequently program the hearing aid 10 with the preferred hearing ranges.
- the proximate smart device 12 which may be a smart phone, a smart watch, or a tablet computer, for example, is executing a hearing screening program.
- the display 14 serves as an interface for the user U.
- various indicators such as indicators 302 , 304 , 306 show that the testing of the left ear is in progress at 100 Hz at 20 dB.
- the user U is asked if the sound was heard at the indicator 306 and the user U may appropriately respond at soft button 308 or soft button 310 .
- the system 300 screens, via a speaker and the user interface 16 associated with the proximate smart device 12 , a left ear—and separately, a right ear—of the user U at multiple harmonic frequencies of a harmonic frequency series between 50 Hz and 10,000 Hz, with detected frequencies optionally being re-tested over a narrower range to better identify the frequencies and decibel levels heard.
- the system 300 determines a left ear preferred hearing range and a right ear preferred hearing range.
- the harmonic frequency series may be a fundamental frequency and multiple integer multiples of the fundamental frequency.
- the proximate smart device 12 may be in communication with a server 320 having a housing 322 .
- the smart device may utilize distributed processing between the proximate smart device 12 and the server 320 to at least one of screen the left ear, screen the right ear, determine the left ear preferred hearing range, and determine the right ear preferred hearing range.
- the processing to screen the left ear, screen the right ear, determine the left ear preferred hearing range, and determine the right ear preferred hearing range may be located on a smart device, a server, hearing testing equipment, or any combination thereof.
- Referring to FIG. 10 , an illustrative embodiment of the internal components of the hearing aid 10 is depicted.
- the hearing aid 10 depicted in the embodiment of FIG. 2 and FIGS. 3A, 3B is presented. It should be appreciated, however, that the teachings of FIG. 5 equally apply to the embodiment of FIG. 4 .
- an electronic signal processor 330 may be housed within the internal compartments 62 , 82 .
- the hearing aid 10 may include the electronic signal processor 330 for each ear, or the electronic signal processor 330 for each ear may be at least partially or fully integrated.
- in another embodiment, with respect to FIG. 4 , the electronic signal processor 330 is housed within the internal compartment 122 .
- the electronic signal processor 330 may include an analog-to-digital converter (ADC) 332 , a digital signal processor (DSP) 334 , and a digital-to-analog converter (DAC) 336 .
- the electronic signal processor 330 , including in the digital signal processor embodiment, may have memory accessible to a processor.
- a signaling architecture communicatively interconnects the microphone inputs 338 to the electronic signal processor 330 and the electronic signal processor 330 to the speaker output 340 .
- the various hearing aid controls 344 , the induction coil 346 , the battery 348 , and the transceiver 350 are also communicatively interconnected to the electronic signal processor 330 by the signaling architecture.
- the speaker output 340 sends the sound output to a speaker or speakers to project sound and in particular, acoustic signals in the audio frequency band as processed by the hearing aid 10 .
- the programming connector 342 may provide an interface to a computer or other device and, in particular, the programming connector 342 may be utilized to program and calibrate the hearing aid 10 with the system 300 , according to the teachings presented herein.
- the hearing aid controls 344 may include an ON/OFF switch as well as volume controls, for example.
- the induction coil 346 may receive magnetic field signals in the audio frequency band from a telephone receiver or a transmitting induction loop, for example, to provide a telecoil functionality.
- the induction coil 346 may also be utilized to receive remote control signals encoded on a transmitted or radiated electromagnetic carrier, with a frequency above the audio band.
- Various programming signals from a transmitter may also be received via the induction coil 346 or via the transceiver 350 , as will be discussed.
- the battery 348 provides power to the hearing aid 10 and may be rechargeable or accessed through a battery compartment door (not shown), for example.
- the transceiver 350 may be internal, external, or a combination thereof to the housing. Further, the transceiver 350 may be a transmitter/receiver, receiver, or an antenna, for example. Communication between various smart devices and the hearing aid 10 may be enabled by a variety of wireless methodologies employed by the transceiver 350 , including 802.11, 3G, 4G, Edge, WiFi, ZigBee, near field communications (NFC), Bluetooth low energy, and Bluetooth, for example.
- the various controls and inputs and outputs presented above are exemplary and it should be appreciated that other types of controls may be incorporated in the hearing aid 10 .
- the electronics and form of the hearing aid 10 may vary.
- the hearing aid 10 and associated electronics may include any type of headphone configuration, a behind-the-ear configuration, an over-the-ear configuration, or in-the-ear configuration, for example.
- electronic configurations with multiple microphones for directional hearing are within the teachings presented herein.
- the hearing aid has an over-the-ear configuration where the entire ear is covered, which not only provides the hearing aid functionality but hearing protection functionality as well.
- the electronic signal processor 330 may be programmed with a preferred hearing range which, in one embodiment, is the preferred hearing sound range corresponding to highest hearing capacity of a patient.
- the left ear preferred hearing range and the right ear preferred hearing range are each a range of sound corresponding to highest hearing capacity of an ear of a patient between 50 Hz and 10,000 Hz, as tested with the utilization of one or more harmonic frequency series. With this approach, the hearing capacity of the patient is enhanced.
- Existing audiogram hearing aid industry testing equipment measures hearing capacity at defined, discrete frequencies, such as 60 Hz; 125 Hz; 250 Hz; 500 Hz; 1,000 Hz; 2,000 Hz; 4,000 Hz; 8,000 Hz and existing hearing aids work on a ratio-based frequency scheme.
- the present teachings measure hearing capacity with harmonics to improve the speed of the testing and to provide an algorithm for hearing similar to real-life with multiple non-discrete harmonics utilized.
- the preferred hearing sound range may be shifted by use of the various controls 124 .
- Directional microphone systems at each microphone position, together with processing, may be included to provide a boost to sounds coming from the front of the patient and reduce sounds from other directions. Such a directional microphone system and processing may improve speech understanding in situations with excessive background noise. Digital noise reduction, impulse noise reduction, and wind noise reduction may also be incorporated.
- system compatibility features such as FM compatibility and Bluetooth compatibility, may be included in the hearing aid 10 .
- the ADC 332 outputs a digital total sound (S T ) signal that undergoes the frequency spectrum analysis.
- the base frequency (F B ) and harmonics (H 1 , H 2 , . . . , H N ) components are separated.
- the harmonics processing within the electronic signal processor 330 calculates a converted actual frequency (CF A ) and differential converted harmonics (DCH N ) to create a converted total sound (CS T ), which is the output of the harmonics processing by the electronic signal processor 330 .
- in this harmonics processing, S T is the total sound;
- F B is the base frequency range between F BL and F BH , with F BL being the lowest frequency value in the base frequency and F BH being the highest frequency value in the base frequency;
- H N are harmonics of F B , with H N being a mathematical multiplication of F B ;
- H A1 is the 1st harmonic of F A ;
- H AN is the Nth harmonic of F A , with H AN being the mathematical multiplication of F A .
- the total sound (S T ) may be at any frequency range; furthermore, the two ears' true hearing ranges may be entirely different. Therefore, the hearing aid 10 presented herein may transfer the base frequency range (F B ) along with several of the harmonics (H N ) into the actual hearing range (AHR) by converting the base frequency range (F B ) and several chosen harmonics (H N ) into the actual hearing range (AHR) as one coherent converted total sound (CS T ) by using the algorithm defined by the following equations:
- Equation (1), Equation (2), and Equation (3), where:
- M multiplier between CF A and F A ;
- CF BL lowest frequency value in CF B ;
- CF BH Highest frequency value in CF B ;
- DCH differential converted harmonics
- a high-pass filter may cut all differential converted harmonics (DCH) above a predetermined frequency.
- the frequency of 5,000 Hz may be used as a benchmark.
- CS T the frequencies participating in converted total sound
- the harmonics processing at the DSP 334 may provide the conversion for each participating frequency in the total sound (S T ) and distribute all participating converted actual frequencies (CF A ) and differential converted harmonics (DCH N ) in the converted total sound (CS T ) in the same ratio as they participated in the original total sound (S T ).
- the harmonics processing may use an adequate multiplier (between 0.1-0.9) and add the created new differential converted harmonics (DCH N ) to converted total sound (CS T ).
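- Because Equations (1) through (3) are not reproduced above, the following sketch is only one plausible reading of the conversion and is assumption-heavy: each participating frequency F A is mapped to CF A with a multiplier M chosen so the converted band lands in the actual hearing range, differential converted harmonics are added with a multiplier between 0.1 and 0.9, and components above the 5,000 Hz benchmark are cut. All names and the exact formulas are hypothetical.

```python
def convert_total_sound(frequencies, ahr_high, f_bh, n_harmonics=2,
                        harmonic_gain=0.5, cutoff_hz=5000.0):
    """Assumed construction of the converted total sound CS_T; not the exact Equations (1)-(3).

    frequencies -- participating frequencies F_A of the original total sound S_T (Hz)
    ahr_high    -- upper boundary of the patient's actual hearing range AHR (Hz)
    f_bh        -- highest base-frequency value F_BH (Hz)
    """
    m = ahr_high / f_bh                      # assumed multiplier M between CF_A and F_A
    cs_t = []
    for f_a in frequencies:
        cf_a = m * f_a                       # converted actual frequency CF_A
        cs_t.append((cf_a, 1.0))
        for n in range(2, n_harmonics + 2):
            dch = n * cf_a                   # assumed form of a differential converted harmonic
            if dch <= cutoff_hz:             # cut components above the 5,000 Hz benchmark
                cs_t.append((dch, harmonic_gain))  # "adequate multiplier" between 0.1 and 0.9
    return cs_t                              # (frequency, relative weight) pairs forming CS_T
```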
- the processor may process instructions for execution within the electronic signal processor 330 as a computing device, including instructions stored in the memory.
- the memory stores information within the computing device.
- the memory is a volatile memory unit or units.
- the memory is a non-volatile memory unit or units.
- the memory is accessible to the processor and includes processor-executable instructions that, when executed, cause the processor to execute a series of operations.
- the processor-executable instructions cause the processor to receive an input analog signal from the microphone inputs 338 and convert the input analog signal to a digital signal.
- the processor-executable instructions then cause the processor to transform through compression, for example, the digital signal into a processed digital signal having the preferred hearing range.
- the transformation may be a frequency transformation where the input frequency is frequency transformed into the preferred hearing range.
- the processor is then caused by the processor-executable instructions to convert the processed digital signal to an output analog signal and drive the output analog signal to the speaker output 340 .
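- The processing chain described above (analog input, transformation into the preferred hearing range, analog output) can be illustrated with a toy frequency-domain remapping. The FFT-based compression below is a simplified stand-in under stated assumptions; the present teachings describe the transformation only as compressing input frequencies into the preferred hearing range.

```python
import numpy as np

def compress_into_range(x, sample_rate, range_low, range_high, input_max_hz=10000.0):
    """Toy frequency compression: remap spectral content into [range_low, range_high]."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    scale = (range_high - range_low) / input_max_hz
    out = np.zeros_like(spectrum)
    for i, f in enumerate(freqs):
        if 0 < f <= input_max_hz:
            target = range_low + f * scale                # map input frequency into the range
            j = int(round(target * len(x) / sample_rate)) # nearest output frequency bin
            if j < len(out):
                out[j] += spectrum[i]                     # accumulate the shifted energy
    return np.fft.irfft(out, n=len(x))                    # back to the time domain
```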
- the proximate smart device 12 may be a wireless communication device of the type including various fixed, mobile, and/or portable devices. To expand rather than limit the discussion of the proximate smart device 12 , such devices may include, but are not limited to, cellular or mobile smart phones, tablet computers, smartwatches, and so forth.
- the proximate smart device 12 may include a processor 370 , memory 372 , storage 374 , a transceiver 376 , and a cellular antenna 378 interconnected by a busing architecture 380 that also supports the display 14 , I/O panel 382 , and a camera 384 . It should be appreciated that although a particular architecture is explained, other designs and layouts are within the teachings presented herein.
- the proximate smart device 12 includes the memory 372 accessible to the processor 370 and the memory 372 includes processor-executable instructions that, when executed, cause the processor 370 to screen, via the speaker and the user interface, a left ear of a patient at harmonic frequencies of a harmonic frequency series, with detected frequencies optionally being re-tested over a narrower range at a finer increment, such as a 5 Hz to 20 Hz increment.
- the harmonic frequency series may be between 50 Hz and 10,000 Hz or 50 Hz and 5,000 Hz, for example.
- the processor-executable instructions may also determine a left ear preferred hearing range, which is a range of sound corresponding to highest hearing capacity based on the utilization of harmonic frequency series of the left ear of the patient.
- the processor-executable instructions then cause the processor 370 to screen, via the speaker and the user interface, a right ear of a patient at harmonic frequencies of a harmonic frequency series, with detected frequencies optionally being re-tested over a narrower range at a finer increment, such as a 5 Hz to 20 Hz increment.
- the harmonic frequency series may be between 50 Hz and 10,000 Hz or 50 Hz and 5,000 Hz, for example.
- the processor-executable instructions may also determine a right ear preferred hearing range, which is a range of sound corresponding to highest hearing capacity based on the utilization of harmonic frequency series of the right ear of the patient.
- the processor executable instructions may cause the processor 370 to, when executed, utilize distributed processing between the proximate smart device 12 and a server to at least one of screen the left ear, screen the right ear, determine the left ear preferred hearing range, and determine the right ear preferred hearing range.
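- Where a frequency is detected during the coarse harmonic sweep, the description notes that it may optionally be re-tested over a narrower range at a 5 Hz to 20 Hz increment. A minimal sketch of that refinement step, with a hypothetical heard callback, might look like the following.

```python
def refine_detection(heard, center_hz, level_db, span_hz=100.0, step_hz=10.0):
    """Re-test a detected frequency over a narrow band at a finer (5 Hz to 20 Hz) increment."""
    fine_results = {}
    f = max(center_hz - span_hz / 2, 0.0)
    while f <= center_hz + span_hz / 2:
        fine_results[f] = bool(heard(f, level_db))  # record the patient's response at each step
        f += step_hz
    return fine_results
```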
- processor-executable instructions presented hereinabove include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
- Processor-executable instructions also include program modules that are executed by computers in stand-alone or network environments.
- program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, or the like, that perform particular tasks or implement particular abstract data types.
- Processor-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the systems and methods disclosed herein.
- the particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps and variations in the combinations of processor-executable instructions and sequencing are within the teachings presented herein.
- the teachings presented herein permit the proximate smart device 12 such as a smart phone to form a pairing with the hearing aid 10 and operate the hearing aid 10 .
- the proximate smart device 12 includes the memory 372 accessible to the processor 370 and the memory 372 includes processor-executable instructions that, when executed, cause the processor 370 to provide an interface for an operator that includes an interactive application for viewing the status of the hearing aid 10 .
- the processor 370 is caused to present a menu for controlling the hearing aid 10 .
- the processor 370 is then caused to receive an interactive instruction from the user and forward a control signal via the transceiver 376 , for example, to implement the instruction at the hearing aid 10 .
- the processor 370 may also be caused to generate various reports about the operation of the hearing aid 10 .
- the processor 370 may also be caused to translate or access a translation service for the audio.
- one embodiment of the server 320 as a computing device includes, within the housing 322 , a processor 400 , memory 402 , and storage 404 interconnected with various buses 412 in a common or distributed, for example, mounting architecture that also supports inputs 406 , outputs 408 , and network interface 410 .
- multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory.
- multiple computing devices may be provided and operations distributed therebetween.
- the processor 400 may process instructions for execution within the server 320 , including instructions stored in the memory 402 or in storage 404 .
- the memory 402 stores information within the computing device.
- the memory 402 is a volatile memory unit or units. In another implementation, the memory 402 is a non-volatile memory unit or units.
- Storage 404 includes capacity that is capable of providing mass storage for the server 320 , including database storage capacity.
- Various inputs 406 and outputs 408 provide connections to and from the server 320 , wherein the inputs 406 are the signals or data received by the server 320 , and the outputs 408 are the signals or data sent from the server 320 .
- the network interface 410 provides the necessary device controller to connect the server 320 to one or more networks.
- the memory 402 is accessible to the processor 400 and includes processor-executable instructions that, when executed, cause the processor 400 to execute a series of operations.
- the processor 400 may be caused to screen, via the speaker and the user interface, a left ear of a patient at harmonic frequencies of a harmonic frequency series, with detected frequencies optionally being re-tested over a narrower range at a finer increment, such as a 5 Hz to 20 Hz increment.
- the harmonic frequency series may be between 50 Hz and 10,000 Hz or 50 Hz and 5,000 Hz, for example.
- the processor-executable instructions may also determine a left ear preferred hearing range, which is a range of sound corresponding to the highest hearing capacity of the left ear of the patient, based on the utilization of the harmonic frequency series.
- the processor-executable instructions then cause the processor 400 to screen, via the speaker and the user interface, a right ear of the patient at harmonic frequencies of a harmonic frequency series, with detected frequencies optionally being re-range tested at a finer increment, such as a 5 Hz to 20 Hz increment.
- the harmonic frequency series may be between 50 Hz and 10,000 Hz or 50 Hz and 5,000 Hz, for example.
- the processor-executable instructions may also determine a right ear preferred hearing range, which is a range of sound corresponding to the highest hearing capacity of the right ear of the patient between 50 Hz and 10,000 Hz, based on the utilization of the harmonic frequency series. Also, the processor-executable instructions may cause the processor 400 to, when executed, utilize distributed processing between the server 320 and either the proximate smart device 12 or hearing testing equipment to at least one of screen the left ear, screen the right ear, determine the left ear preferred hearing range, and determine the right ear preferred hearing range.
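- By way of illustration only, the following Python sketch shows one way the coarse harmonic screening and the optional re-range testing at a finer increment could be organized. The function name, the heard() callback, and the 10 Hz default step are assumptions introduced for the example and are not elements of the server 320 described herein.

```python
from typing import Callable, List, Tuple

def coarse_then_fine_screen(
    fundamental_hz: float,
    num_harmonics: int,
    heard: Callable[[float], bool],   # hypothetical callback: True if the patient signals the tone
    fine_step_hz: float = 10.0,       # finer re-range increment (5 Hz to 20 Hz per the description)
) -> List[Tuple[float, float]]:
    """Screen at the harmonic frequencies of a harmonic frequency series, then
    re-range test around each detected harmonic at a finer increment."""
    detected_ranges = []
    # Harmonic frequency series: the fundamental plus integer multiples of the fundamental.
    series = [fundamental_hz * n for n in range(1, num_harmonics + 1)]
    for f in series:
        if not heard(f):
            continue
        # Re-range test: walk downward and upward from the detected harmonic
        # in fine_step_hz steps to bracket the frequencies actually heard.
        low = f
        while low - fine_step_hz >= 50.0 and heard(low - fine_step_hz):
            low -= fine_step_hz
        high = f
        while high + fine_step_hz <= 10_000.0 and heard(high + fine_step_hz):
            high += fine_step_hz
        detected_ranges.append((low, high))
    return detected_ranges

# Example with a stand-in listener who hears only 200 Hz to 400 Hz; overlapping
# brackets from neighboring harmonics are left as-is in this simple sketch.
print(coarse_then_fine_screen(100.0, 10, lambda f: 200.0 <= f <= 400.0))
```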
- processor-executable instructions presented hereinabove include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
- Processor-executable instructions also include program modules that are executed by computers in stand-alone or network environments.
- program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, or the like, that perform particular tasks or implement particular abstract data types.
- Processor-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the systems and methods disclosed herein.
- the particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps and variations in the combinations of processor-executable instructions and sequencing are within the teachings presented herein.
- referring to FIG. 13, another embodiment of a system 430 for aiding hearing is shown.
- a user V who may be considered a patient requiring a hearing aid, is utilizing a hearing testing device 434 with a testing/programming unit 432 and a headset 436 having headphones 437 with a transceiver 438 for communicating with the hearing testing device 434 .
- a push button 442 is coupled with cabling 440 to the headset 436 to provide an interface for the user V to indicate when a particular sound, i.e., frequency and decibel is heard.
- the system 430 screens, via a speaker in the headset 436 and a user interface with the push button 442, a left ear—and separately, a right ear—of the user V at selected frequencies based on the harmonic frequencies of a harmonic frequency series discussed above, within a frequency range of 50 Hz to 10,000 Hz, with detected frequencies being re-range tested to better identify the frequencies and decibel levels heard.
- the hearing testing device 434 depicted as a computing device is shown.
- a processor 450 may process instructions for execution within the computing device, including instructions stored in the memory 452 or in storage 454 .
- the memory 452 stores information within the computing device.
- the memory 452 is a volatile memory unit or units.
- the memory 452 is a non-volatile memory unit or units.
- the storage 454 provides capacity that is capable of providing mass storage for the hearing testing device 434 .
- Various inputs and outputs provide connections to and from the computing device, wherein the inputs are the signals or data received by the hearing testing device 434 , and the outputs are the signals or data sent from the hearing testing device 434 .
- various inputs and outputs may be partially or fully integrated.
- the hearing testing device 432 may include the display 456 , a user interface 460 , a test frequency output 462 , a headset output 464 , a timer output 466 , a handset input 468 , a frequency range output 470 , and a microphone input 472 .
- the display 456 is an output device for visual information, including real-time or post-test screening results.
- the user interface 460 may provide a keyboard or push button for the operator of the hearing testing device 432 to provide input, including such functions as starting the screening, stopping the screening, and repeating a previously completed step.
- the test frequency output 462 may display the range to be examined, such as a frequency between 100 Hz and 5,000 Hz.
- the headset output 464 may output the signal under test to the patient.
- the timer output 466 may include an indication of the length of time the hearing testing device 432 will stay on a given frequency. For example, the hearing testing device 432 may stay 30 seconds on a particular frequency.
- the handset input 468 may be secured to a handset that provides “pause” and “okay” functionality for the patient during the testing.
- the frequency range output 470 may indicate the test frequency range per step, such as 50 Hz or another increment, for example.
- the microphone input 472 receives audio input from the operator relative to screening instructions intended for the patient, for example.
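- By way of illustration only, the per-step settings surfaced by these inputs and outputs can be grouped as in the following Python sketch; the class name and field names are assumptions for the example, with the default values taken from the ranges and durations noted above.

```python
from dataclasses import dataclass

@dataclass
class TestStepSettings:
    """Hypothetical grouping of the front-panel items described above; the
    fields and defaults are illustrative assumptions, not the device's API."""
    test_frequency_low_hz: float = 100.0     # lower bound shown on the test frequency output
    test_frequency_high_hz: float = 5_000.0  # upper bound shown on the test frequency output
    step_increment_hz: float = 50.0          # frequency range per step (frequency range output)
    dwell_seconds: float = 30.0              # time spent on each frequency (timer output)

# Example: the operator configures a 30-second dwell at 50 Hz steps between 100 Hz and 5,000 Hz.
settings = TestStepSettings()
print(settings)
```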
- the memory 452 and the storage 454 are accessible to the processor 450 and include processor-executable instructions that, when executed, cause the processor 450 to execute a series of operations.
- the processor-executable instructions may cause the processor 450 to permit the screening conducted by the hearing testing device 432 to proceed one ear at a time.
- the processor-executable instructions may also cause the processor 450 to permit the patient to pause the process in response to a signal received at the handset input 468 .
- the processor 450 may be caused to start the hearing testing device 432 at 50 Hz by giving a 100 Hz signal with harmonics as part of a harmonic frequency series for a predetermined length of time, such as 20 seconds to 30 seconds, at a specified decibel level or decibel range.
- the processor-executable instructions may cause the processor 450 to receive a detection signal from the handset input 468 during screening. Then, the processor-executable instructions cause the hearing testing device 432 to step to the next frequency or frequencies in the applicable harmonic frequency series, such as 200 Hz, for example, and continue the screening process.
- the system determines a left ear preferred hearing range and a right ear preferred hearing range.
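- By way of illustration only, the following Python sketch outlines such a screening pass, presenting each frequency of a harmonic frequency series for a fixed dwell time and recording whether the patient signaled hearing it at the handset. The callbacks play_tone, stop_tone, and handset_pressed are assumptions standing in for the actual audio path and the handset input 468.

```python
import time
from typing import Callable, Dict, List

def run_screening(
    series_hz: List[float],                 # harmonic frequency series to present, e.g. [100, 200, ..., 1000]
    play_tone: Callable[[float], None],     # hypothetical: start presenting the tone at this frequency
    stop_tone: Callable[[], None],          # hypothetical: stop the tone
    handset_pressed: Callable[[], bool],    # hypothetical: True when the patient presses "okay"
    dwell_seconds: float = 30.0,            # time spent on each frequency
) -> Dict[float, bool]:
    """Present each frequency of the series for a fixed dwell time and record
    whether the patient signaled hearing it (a sketch of the FIG. 13/14 flow)."""
    results: Dict[float, bool] = {}
    for f in series_hz:
        play_tone(f)
        heard = False
        start = time.monotonic()
        while time.monotonic() - start < dwell_seconds:
            if handset_pressed():
                heard = True
                break
            time.sleep(0.05)                # poll the handset input
        stop_tone()
        results[f] = heard
    return results
```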
- processor-executable instructions presented hereinabove include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
- Processor-executable instructions also include program modules that are executed by computers in stand-alone or network environments.
- program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, or the like, that perform particular tasks or implement particular abstract data types.
- Processor-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the systems and methods disclosed herein.
- the particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps and variations in the combinations of processor-executable instructions and sequencing are within the teachings presented herein.
- the testing equipment application 500 is a stand-alone application or is integrated into another application, while in other embodiments the application might be implemented within an operating system 530 .
- the testing equipment application 500 is provided as part of a server-based solution or a cloud-based solution.
- the application is provided via a thin client. That is, the application runs on a server while a user interacts with the application via a separate machine remote from the server.
- the application is provided via a thick client. That is, the application is distributed from the server to the client machine and runs on the client machine.
- the testing equipment application 500 includes a user interface (UI) interaction and generation module 502 , management (user) interface tools 504 , test procedure modules 506 , frequency generator modules 508 , decibels modules 510 , notification/alert modules 512 , report modules 514 , database module 516 , an operator module 518 , and a health care professional module 520 .
- the testing equipment application 500 has access to a testing equipment database 522 , which in one embodiment, may include test procedure data 524 , patient data 526 , harmonics data 528 , and presentation instructions 529 .
- storages 524, 526, 528, 529 are all stored in one physical storage. In other embodiments, the storages 524, 526, 528, 529 are in separate physical storages, or some of the storages are in one physical storage while the others are in a different physical storage.
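- By way of illustration only, the following Python sketch shows one possible in-memory layout for the testing equipment database 522; the schema is an assumption for the example and mirrors only the storages named above.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TestingEquipmentDatabase:
    """Illustrative layout of the storages described above; the field names
    track the reference numerals, but the schema itself is an assumption."""
    test_procedure_data: Dict[str, dict] = field(default_factory=dict)       # 524
    patient_data: Dict[str, dict] = field(default_factory=dict)              # 526
    harmonics_data: Dict[float, List[float]] = field(default_factory=dict)   # 528: fundamental -> harmonics
    presentation_instructions: List[str] = field(default_factory=list)       # 529

db = TestingEquipmentDatabase()
db.harmonics_data[100.0] = [100.0 * n for n in range(1, 11)]  # a 100 Hz series up to 1,000 Hz
```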
- the system 300 identifies harmonic frequencies of a harmonic frequency series or of multiple harmonic frequency series that enables hearing.
- the system 300 is capable of combining various sounds, such as musical sounds or classical music instrument sounds, as discussed hereinabove, through a fundamental frequency and related frequencies of a harmonic frequency series, or related frequencies of multiple harmonic frequency series, to create or contribute to an algorithm that addresses or mitigates hearing loss for the patient.
- patients may be able to self-test or have minimal assistance during the testing.
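- By way of illustration only, a non-discrete test sound of the kind described above, that is, a fundamental frequency combined with selected integer multiples, could be synthesized as in the following Python sketch. Equal amplitudes across the components and the chosen sample rate are assumptions for the example.

```python
import numpy as np

def harmonic_test_sound(
    fundamental_hz: float,
    harmonic_multiples: list,      # integer multiples of the fundamental to include, e.g. [2, 4]
    duration_s: float = 1.0,
    sample_rate_hz: int = 44_100,
    amplitude: float = 0.2,
) -> np.ndarray:
    """Sum the fundamental and the selected integer multiples into one
    non-discrete test sound (equal amplitudes are an assumption)."""
    t = np.arange(int(duration_s * sample_rate_hz)) / sample_rate_hz
    signal = amplitude * np.sin(2 * np.pi * fundamental_hz * t)
    for n in harmonic_multiples:
        signal += amplitude * np.sin(2 * np.pi * (n * fundamental_hz) * t)
    return signal

# Example: a 100 Hz fundamental combined with its 200 Hz and 400 Hz components.
s = harmonic_test_sound(100.0, [2, 4])
```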
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- General Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Neurosurgery (AREA)
- Otolaryngology (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Stereophonic System (AREA)
Abstract
Description
- This application is a continuation-in-part of co-pending International Application No. PCT/US21/29414, entitled “System and Method for Aiding Hearing” and filed on Apr. 27, 2021 in the names of Laslo Olah et al.; which claims priority from U.S. patent application Ser. No. 17/029,764, entitled “System and Method for Aiding Hearing” and filed on Sep. 23, 2020, in the names of Laslo Olah et al., now U.S. Pat. No. 10,993,047, issued on Apr. 27, 2021; which is a continuation-in-part of co-pending U.S. patent application Ser. No. 17/026,955, entitled “Hearing Aid and Method for Use of Same” and filed on Sep. 21, 2020, in the names of Laslo Olah et al.; which claims the benefit of priority from (1) U.S. Provisional Patent Application No. 62/935,961, entitled “Hearing Aid and Method for Use of Same” and filed on Nov. 15, 2019 in the name of Laslo Olah; and (2) U.S. Provisional Patent Application No. 62/904,616, entitled “Hearing Aid and Method for Use of Same” and filed on Sep. 23, 2019, in the name of Laslo Olah; all of which are hereby incorporated by reference, in entirety, for all purposes. U.S. patent application Ser. No. 17/026,955, entitled “Hearing Aid and Method for Use of Same” and filed on Sep. 21, 2020, in the names of Laslo Olah et al. is also a continuation-in-part of co-pending U.S. patent application Ser. No. 16/959,972, entitled “Hearing Aid and Method for Use of Same” and filed on Jul. 2, 2020 in the name of Laslo Olah; which claims priority from International Application No. PCT/US19/12550, entitled “Hearing Aid and Method for Use of Same” and filed on Jan. 7, 2019 in the name of Laslo Olah; which claims priority from U.S. Provisional Patent Application No. 62/613,804, entitled “Hearing Aid and Method for Use of Same” and filed on Jan. 5, 2018 in the name of Laslo Olah; all of which are hereby incorporated by reference, in entirety, for all purposes.
- This invention relates, in general, to hearing aids and, in particular, to systems and methods that aid hearing to provide signal processing and feature sets to enhance speech and sound intelligibility.
- Hearing loss can affect anyone at any age, although elderly adults more frequently experience hearing loss. Untreated hearing loss is associated with lower quality of life and can have far-reaching implications for the individual experiencing hearing loss as well as those close to the individual. As a result, there is a continuing need for improved hearing aids and methods for use of the same that enable patients to better hear conversations and the like.
- It would be advantageous to achieve a hearing aid and method for use of the same that would significantly change the course of existing hearing aids by adding features to correct existing limitations in functionality. It would also be desirable to enable a mechanical and electronics-based solution that would provide enhanced performance and improved usability with an enhanced feature set. To better address one or more of these concerns, a system and method for aiding hearing are disclosed. In one embodiment of the system, a programming interface is configured to communicate with a device. The system screens, via a speaker and a user interface associated with the device, a left ear and separately, a right ear—of a patient. The system then determines a left ear hearing range and a right ear hearing range. The screening utilizes harmonic frequencies of a harmonic frequency series, where the harmonic frequency series includes a fundamental frequency and integer multiples of the fundamental frequency. In some embodiments, the harmonic frequencies may include classical music instrument sounds.
- For a more complete understanding of the features and advantages of the present invention, reference is now made to the detailed description of the invention along with the accompanying figures in which corresponding numerals in the different figures refer to corresponding parts and in which:
-
FIG. 1A is a front perspective schematic diagram depicting one embodiment of a hearing aid programmed with one embodiment of a system for aiding hearing, according to the teachings presented herein; -
FIG. 1B is a top plan schematic diagram depicting the hearing aid of FIG. 1A being utilized according to the teachings presented herein; -
FIG. 2 is a front perspective view of one embodiment of the hearing aid depicted in FIG. 1A; -
FIG. 3A is a front-left perspective view of another embodiment of the hearing aid depicted in FIG. 1A; -
FIG. 3B is a front-right perspective view of the embodiment of the hearing aid depicted in FIG. 3A; -
FIG. 4 is a front perspective view of another embodiment of a hearing aid programmed with one embodiment of a system for aiding hearing, according to the teachings presented herein; -
FIG. 5 is a schematic diagram depicting one embodiment of the system for aiding hearing, according to the teachings presented herein; -
FIG. 6 is a flow chart depicting one embodiment of a method for calibrating and setting the hearing aid for a preferred hearing range or preferred hearing ranges, according to the teachings presented herein; -
FIG. 7 is a flow chart depicting another embodiment of a method for calibrating and setting the hearing aid for a preferred hearing range or preferred hearing ranges, according to the teachings presented herein; -
FIG. 8 is a flow chart depicting still another embodiment of a method for calibrating and setting the hearing aid for a preferred hearing range or preferred hearing ranges, according to the teachings presented herein; -
FIG. 9 is a front perspective schematic diagram depicting one embodiment of a hearing aid being programmed with one embodiment of a system for aiding hearing, according to the teachings presented herein; -
FIG. 10 is a functional block diagram depicting one embodiment of the hearing aid depicted in FIG. 9; -
FIG. 11 is a functional block diagram of a smart device, which forms a portion of the system for aiding hearing depicted in FIG. 9; -
FIG. 12 is a functional block diagram depicting one embodiment of a server, which forms a portion of the system for aiding hearing depicted in FIG. 9; -
FIG. 13 is a front perspective schematic diagram depicting another embodiment of a system for aiding hearing, according to the teachings presented herein; -
FIG. 14 is a functional block diagram depicting one embodiment of hearing aid test equipment depicted in FIG. 13; and -
FIG. 15 is a conceptual module diagram depicting a software architecture of a testing equipment application of some embodiments. - While the making and using of various embodiments of the present invention are discussed in detail below, it should be appreciated that the present invention provides many applicable inventive concepts, which can be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention, and do not delimit the scope of the present invention.
- Referring initially to
FIG. 1A andFIG. 1B , therein is depicted one embodiment of a hearing aid, which is schematically illustrated and designated 10. Thehearing aid 10 is programmed according to a system for aiding hearing. As shown, a user U, who may be considered a patient requiring a hearing aid, is wearing thehearing aid 10 and sitting at a table T at a restaurant or café, for example, and engaged in a conversation with an individual I1 and an individual I2. As part of a conversation at the table T, the user U is speaking sound S1, the individual I1 is speaking sound S2, and the individual I2 is speaking sound S3. Nearby, in the background, a bystander B1 is engaged in a conversation with a bystander B2. The bystander B1 is speaking sound S4 and the bystander B2 is speaking sound S5. An ambulance A is driving by the table T and emitting sound S6 in direction L. The sounds S1, S2, and S3 may be described as the immediate background sounds. The sounds S4, S5, and S6 may be described as the background sounds. The sound S6 may be described as the dominant sound as it is the loudest sound at table T. The sounds S1, S2, S3, S4, S5, S6 represent life sounds with are complex and continuously changing mixtures of base frequencies and harmonics. The sounds S1, S2, S3, S4, S5, S6 are not discrete frequencies. - As will be described in further detail hereinbelow, the
hearing aid 10 is programmed with a preferred hearing range for each ear in a two-ear embodiment and for one ear in a one-ear embodiment. The preferred hearing range may be a range of sound corresponding to the highest hearing capacity of an ear of the user U between 50 Hz and 5,000 Hz or between 50 Hz and 10,000 Hz, for example. Further, as shown, in the two-ear embodiment, the preferred hearing range for each ear may be multiple ranges of sound corresponding to the highest hearing capacity ranges of an ear of the user U between 50 Hz and 5,000 Hz or between 50 Hz and 10,000 Hz, for example. In some embodiments of this multiple range of sound implementation, the various sounds S1, S2, S3, S4, S5, S6 received may be transformed and divided into the multiple ranges of sound. - In some embodiments, as will be discussed in further detail hereinbelow, a left ear hearing range and a right ear hearing range are determined by way of screening. The screening utilizes harmonic frequencies of a harmonic frequency series, where the harmonic frequency series includes a fundamental frequency and integer multiples of the fundamental frequency. In some embodiments, the harmonic frequencies may include classical music instrument sounds. As will be discussed in additional detail hereinbelow, by programming the
hearing aid 10 with an algorithm based on screening utilizing harmonic frequencies of a harmonic frequency series, the testing identifies a preferred hearing range for a patient, on an ear-by-ear basis, with the use of life-sounds, rather than clinical discrete frequencies. - In one embodiment, the
hearing aid 10 may create a pairing with a proximatesmart device 12, such as a smart phone (depicted), smart watch, or tablet computer. The proximatesmart device 12 includes adisplay 14 having aninterface 16 having controls, such as an ON/OFF switch or volume controls 18, mode of operation controls 24, general controls 20. The user U may send a control signal wirelessly from the proximatesmart device 12 to thehearing aid 10 to control a function, like the volume controls 18. Further, in one embodiment, as shown by a processor symbol P, after thehearing aid 10 creates the pairing with the proximatesmart device 12, thehearing aid 10 and the proximatesmart device 12 may leverage the wireless communication link therebetween and use processing distributed between thehearing aid 10 and the proximatesmart device 12 to process the signals and perform other analysis. - Referring to
FIG. 2 , as shown, in the illustrated embodiment, thehearing aid 10 is programmed according to the system for aiding hearing and thehearing aid 10 includes aleft body 32 and aright body 34 connected to aband member 36 that is configured to partially circumscribe the user U. Each of theleft body 32 and theright body 34 cover an external ear of the user U and are sized to engage therewith. In some embodiments,microphones left body 32. With respect to gathering sound, themicrophone 38 may be positioned to gather forward sound, themicrophone 40 may be positioned to gather lateral sound, and themicrophone 42 may be positioned to gather rear sound. Microphones may be similarly positioned on theright body 34. Variousinternal compartments 44 provide space for housing electronics, which will be discussed in further detail hereinbelow.Various controls 46 provide a patient interface with thehearing aid 10. - Having each of the
left body 32 and theright body 34 cover an external ear of the user U and being sized to engage therewith confers certain benefits. Sound waves enter through the outer ear and reach the middle ear to vibrate the eardrum. The eardrum then vibrates the oscilles, which are small bones in the middle ear. The sound vibrations travel through the oscilles to the inner ear. When the sound vibrations reach the cochlea, they push against specialized cells known as hair cells. The hair cells turn the vibrations into electrical nerve impulses. The auditory nerve connects the cochlea to the auditory centers of the brain. When these electrical nerve impulses reach the brain, they are experienced as sound. The outer ear serves a variety of functions. The various air-filled cavities composing the outer ear, the two most prominent being the concha and the ear canal, have a natural or resonant frequency to which they respond best. This is true of all air-filled cavities. The resonance of each of these cavities is such that each structure increases the sound pressure at its resonant frequency by approximately 10 to 12 dB. In summary, among the functions of the outer ear: (a) boost or amplify high-frequency sounds; (b) provide the primary cue for the determination of the elevation of a sound's source; (c) assist in distinguishing sounds that arise from in front of the listener from those that arise from behind the listener. Headsets are used in hearing testing in medical and associated facilities for a reason: tests have shown that completely closing the ear canal in order to prevent any form of outside noise plays direct role in acoustic matching. The more severe hearing problem, the closer the hearing aid speaker must be to the ear drum. However, the closer to the speaker is to the ear drum, the more the device plugs the canal and negatively impacts the ear's pressure system. That is, the various chambers of the ear have a defined operational pressure determined, in part, by the ear's structure. By plugging the ear canal, the pressure system in the ear is distorted and the operational pressure of the ear is negatively impacted. - As alluded, “plug size” hearing aids having limitations with respect to distorting the defined operational pressure within the ear. Considering the function of the outer ear's air filled cavities in increasing the sound pressure at resonant frequencies, the
hearing aid 10 ofFIG. 2 —and other figures—creates a closed chamber around the ear increasing the pressure within the chamber. This higher pressure plus the utilization of a more powerful speaker within the headset at qualified sound range, e.g., the frequency range the user hears best with the best quality sound, provide the ideal set of parameters for a powerful hearing aid. - Referring to
FIG. 3A andFIG. 3B , as shown, in the illustrated embodiment, thehearing aid 10 is programmed according to a system for aiding hearing. Thehearing aid 10 includes aleft body 52 having anear hook 54 extending from theleft body 52 to anear mold 56. Theleft body 52 and theear mold 56 may each at least partially conform to the contours of the external ear and sized to engage therewith. By way of example, theleft body 52 may be sized to engage with the contours of the ear in a behind-the-ear-fit. Theear mold 56 may be sized to be fitted for the physical shape of a patient's ear. Theear hook 54 may include a flexible tubular material that propagates sound from theleft body 52 to theear mold 56.Microphones 58, which gather sound and convert the gathered sound into an electrical signal, are located on theleft body 52. Anopening 60 within theear mold 56 permits sound traveling through theear hook 54 to exit into the patient's ear. Aninternal compartment 62 provides space for housing electronics, which will be discussed in further detail hereinbelow.Various controls 64 provide a patient interface with thehearing aid 10 on theleft body 52 of thehearing aid 10. - As also shown, the
hearing aid 10 includes aright body 72 having anear hook 74 extending from theright body 72 to anear mold 76. Theright body 72 and theear mold 76 may each at least partially conform to the contours of the external ear and sized to engage therewith. By way of example, theright body 72 may be sized to engage with the contours of the ear in a behind-the-ear-fit. Theear mold 76 may be sized to be fitted for the physical shape of a patient's ear. Theear hook 74 may include a flexible tubular material that propagates sound from theright body 72 to theear mold 76.Microphones 78, which gather sound and convert the gathered sound into an electrical signal, are located on theright body 72. Anopening 80 within theear mold 76 permits sound traveling through theear hook 74 to exit into the patient's ear. Aninternal compartment 82 provides space for housing electronics, which will be discussed in further detail hereinbelow.Various controls 84 provide a patient interface with thehearing aid 10 on theright body 72 of thehearing aid 10. It should be appreciated that thevarious controls right bodies hearing aid 10 may have one or more microphones on each of the left andright bodies - In one embodiment, the left and
right bodies band member 90 which is configured to partially circumscribe a head or a neck of the patient. Aninternal compartment 92 within theband member 90 may provide space for electronics and the like. Additionally, thehearing aid 10 may include left and right earpiece covers 94, 96 respectively positioned exteriorly to the left andright bodies microphones 58 in theleft body 52 and themicrophones 78 in theright body 72 may cooperate to provide directional hearing. - Referring to
FIG. 4 , therein is depicted another embodiment of thehearing aid 10 that is programmed with the system for aiding hearing. It should be appreciated by a review ofFIG. 2 throughFIG. 4 that the system for aiding hearing presented herein may program any type of hearing aid. As shown, in the illustrated embodiment inFIG. 4 , thehearing aid 10 includes abody 112 having anear hook 114 extending from thebody 112 to anear mold 116. Thebody 112 and theear mold 116 may each at least partially conform to the contours of the external ear and sized to engage therewith. By way of example, thebody 112 may be sized to engage with the contours of the ear in a behind-the-ear-fit. Theear mold 116 may be sized to be fitted for the physical shape of a patient's ear. Theear hook 114 may include a flexible tubular material that propagates sound from thebody 112 to theear mold 116. Amicrophone 118, which gathers sound and converts the gathered sound into an electrical signal, is located on thebody 112. Anopening 120 within theear mold 116 permits sound traveling through theear hook 114 to exit into the patient's ear. Aninternal compartment 122 provides space for housing electronics, which will be discussed in further detail hereinbelow.Various controls 124 provide a patient interface with thehearing aid 10 on thebody 112 of thehearing aid 10. - Referring now to
FIG. 5 , one embodiment of asystem 150 for aiding hearing is depicted that provides for calibrating and setting thehearing aid 10 for a preferred hearing range or preferred hearing ranges. Afrequency generator 152 may be an electronic device that generates frequency signals with set properties of amplitude, frequency, and wave shape. Thefrequency generator 152 may screen an ear of a patient, i.e., the user U, with harmonic frequencies of a harmonic frequency series. In one embodiment, the harmonic frequencies may bemusical sounds 154. - In a further embodiment, the musical sounds may be classical music instrument sounds, such as sounds from an instrument belonging to keyboard instruments, string instruments, woodwind instruments, or brass instruments, for example. The keyboard instruments may be a musical instrument played using a keyboard, a row of levers which are pressed by a finger or finger and may include a piano, organ, or harpsichord, for example. The string instruments may be chordophones or musical instruments that produce sound from vibrating strings when a performer plays or sounds the strings in some manner. The string instruments may include violins, violas, cellos, and basses, for example. The woodwind instruments may be a musical instrument that contains some type of resonator or tubular structure in which a column of air is set into vibration by the player blowing into or over a mouthpiece set at or near the end of the resonator. The woodwind instruments may include flutes, clarinets, oboes, bassoons, and saxophones, for example. The brass instruments may be a musical instrument that produces sound by sympathetic vibration of air in a tubular resonator in sympathy with the vibration of the player's lips. The brass instruments may include horns, trumpets, trombones, euphoniums, and tubas, for example.
- As shown, the
frequency generator 152 is programmed to produce sounds and, in one embodiment, live sounds which are non-discrete based on anorgan 156, atrumpet 158, and aviolin 160. As shown, non-discrete live sounds 162 are utilized to screen the ear of the user U. In one embodiment, the non-discrete live sounds 162 include a harmonic frequency series between 50 Hz and 10,000 Hz, with the harmonic frequency series being a fundamental frequency and integer multiples of the fundamental frequency. In another embodiment, the non-discrete live sounds 162 include a harmonic frequency series between 50 Hz and 5,000 Hz. - As will be illustrated with additional examples hereinbelow, the screening may be calibrated with multiple variables. Foremost, the test range of signals may be set. The selection of sound and music may be made. By way of further example, the harmonic frequencies screened may be decreasing frequencies or increasing frequencies. By way of further example, the harmonic frequencies may be a continuous sound or noncontinuous sound. The harmonic frequencies utilized for screening may include a single harmonic at a time or multiple harmonics at a time, which may or may not include the fundamental frequency. The amplification utilized in screening with the harmonic frequencies may be a constant amplification or an increasing amplification.
- As shown, the non-discrete live sounds 162 may include harmonic frequencies as follows:
-
S=FH=Fb+Fh1+Fh2+ . . . +Fhn; wherein
- S is the non-discrete live sounds;
- Fb is a base or fundamental frequency;
- Fh1 is a is a first integer multiple of Fb;
- Fh2 is a second integer multiple of Fb; and
- Fhn is an nth integer multiple of Fb.
- It should be appreciated, however, that the non-discrete live sounds 162 may include other harmonic frequencies as, by way of example, follows:
-
S=FH=Fb+Fh1; wherein
- S is the non-discrete live sounds;
- Fb is a base or fundamental frequency; and
- Fh1 is a is a first integer multiple of Fb.
- By way of further example, the non-discrete live sounds 162 may include elements of the harmonic frequency series as follows:
-
S=FH=Fb+Fh2+Fh4+F2hn; wherein
- S is the non-discrete live sounds;
- Fb is a base frequency;
- Fh2 is a is a second integer multiple of Fb;
- Fh4 is a is a fourth integer multiple of Fb; and
- F2hn is a is a 2nth integer multiple of Fb.
- It should be appreciated that the harmonic frequencies being utilized for testing, whether simultaneously, sequentially, or continuously, for example, may include any number of frequencies in the harmonic frequency series, which includes a fundamental frequency and multiple integer multiples, including consecutive and non-consecutive integer multiples, of the fundamental frequency. That is, the selection of the harmonic frequencies may vary depending on the testing circumstances. Upon screening, the user U indicates when the non-discrete live sounds are heard at a
decision block 164 and the response or a lack of response is recorded at arecorder 166. Based on the data collected by therecorder 166, an algorithm may be created for thehearing aid 10 to assist with hearing. - The
system 150 provides a non-discrete frequency test technology to establish a precise hearing frequency range or precise hearing frequency ranges in a patient's hearing by working with a base frequency Fb and the harmonics (Fh1+Fh2+ . . . +Fhn), or a subset thereof, of the base frequency Fb. In this manner, thesystem 150 is designed to test, measure, and establish the patient's true hearing range. Instead of working with discrete frequencies, thesystem 150, in one implementation, employs music instrument tunes specific to corresponding frequencies or frequency ranges. Thesystem 150, therefore, provides hearing impaired patients with a given frequency and the harmonics of the given frequency to identify the patient's hearing range. - By utilizing the base frequency Fb and the harmonics (Fh1+Fh2+ . . . +Fhn), or a portion of the harmonics thereof, of the base frequency Fb, the testing methodology is similar to real life situations. When sounds are encountered in real life, single discrete frequencies are not often encountered. Life-sounds are complex and, in part, continuously changing mixture of base frequencies and harmonics. Therefore, rather than test a patient's hearing with discrete frequencies, the systems and methods presented herein utilize non-discrete harmonic frequencies to test a patient's hearing. Additionally, by utilizing non-discrete harmonic frequencies to test a patient's hearing to better replicate life sounds, testing time is decreased. By way of example, the third harmonic of 500 Hz is 1,500 Hz and the third harmonic of 2,000 Hz is 6,000 Hz, which is almost at the end point of a human testing range. Further, testing of human hearing over 5,0000 Hz is unnecessary in about 90% of the cases as reverse slope hearing loss is uncommon.
- Referring now to
FIG. 6 , one embodiment of a method for calibrating and setting thehearing aid 10 for a preferred hearing range or preferred hearing ranges utilizing the methodology presented herein is shown. The method starts atblock 180, when a patient is going to undergo testing to determine the preferred hearing range or preferred hearing ranges for use of thehearing aid 10. Thefrequency generator 152 and therecorder 166 interact with the methodology to provide the preferredhearing range 174 or a contribution thereto. As will be discussed in further detail hereinbelow, thefrequency generator 152 and therecorder 166 may be embodied on any combination of smart devices, servers, and hearing aid test equipment. In the illustrated embodiment, a left ear or a right ear of a patient is tested with continuous sound being produced using increasing or decreasing frequencies between 100 Hz and 1,000 Hz, for example, for a sufficient time, such as 30 seconds. The patient may push a button when the sound is first heard. - At
block 180, an initial frequency of 100 Hz at 20 dB is screened. As shown bydecision block 182, the patient's ability to hear the initial frequency is recorded before the process continuously advances to the next frequency of a variable increment, which is 200 Hz at 20 dB, atblock 184 and the patient's ability to hear is recorded atdecision block 186. In this example, 100 Hz is the base frequency and 200 Hz, 300 Hz, 400 Hz, 500 Hz, 600 Hz, 700 Hz, 800 Hz, 900 Hz, and 1,000 Hz are exemplary integer multiples of the fundamental frequency with the base frequency and the integer multiples forming the harmonic frequency series. - At
block 188 anddecision block 190, the process advances continuously for the next incremental frequency in the harmonic frequency series, e.g., 300 Hz at 20 dB. Similarly, atblock 192 anddecision block 194, the methodology continuously advances through 400 Hz at 20 dB. The process may continuously advance through the harmonic frequency series to block 196 and decision block 198 for 1,000 Hz at 20 dB. As indicated inblock 200, the testing methodology continues for the frequencies under test with the results being recorded. - Referring now to
FIG. 7 , another embodiment of a method for calibrating and setting thehearing aid 10 for a preferred hearing range or preferred hearing ranges utilizing the methodology presented herein is shown. In this exemplary methodology, amplification is increased in a step-by-step manner as a patient is tested in 100 Hz increments of a harmonic frequency series. By way of example, the following equations exemplify this methodology: - Fb=100 Hz such that FT=Fb100+Fh1+Fh2+ . . . +Fhn at 20 db;
- Fb=200 Hz such that FT=Fb200+Fh1+Fh2+ . . . +Fhn at 20 db+a; and
- FbN=ZHz such that FT=FbZ+Fh1+Fh2+ . . . +Fhn at 20 db+y; wherein
- Fb is the fundamental frequency;
- FT is the testing frequency;
- Fb is an integer multiple of the fundamental frequency;
- ZHz is the highest frequency in the chosen range;
- a is an increased amplification; and
- y is an increased amplification.
- Continuing to refer to
FIG. 7 , the method starts atblock 230, when a patient is going to undergo testing to determine the preferred hearing range or preferred hearing ranges for use of thehearing aid 10. Thefrequency generator 152 and therecorder 166 interact with the methodology to provide the preferredhearing range 174 or a contribution thereto. As will be discussed in further detail hereinbelow, thefrequency generator 152 and therecorder 166 may be embodied on any combination of smart devices, servers, and hearing aid test equipment. In the illustrated embodiment, a left ear or a right ear of a patient is tested with continuous sound being produced using increasing or decreasing frequencies between 100 Hz and 1,000 Hz, for example, for a sufficient time, such as 30 seconds. The patient may push a button when the sound is first heard. - At
block 230, an initial frequency of 100 Hz with at least one harmonic frequency of a harmonic series at 20 dB is screened. As shown bydecision block 232, the patient's ability to hear the initial frequency is recorded before the process advances to the next frequency of a variable increment, which is 200 Hz with at least one harmonic frequency of a harmonic series at 20 dB with an increased amplification applied thereto as reflected by 20 db+a, atblock 234 and the patient's ability to hear is recorded atdecision block 236. - At
block 238 anddecision block 240, the process advances continuously for the next incremental frequency in the harmonic frequency series, e.g., 300 Hz with at least one harmonic frequency of a harmonic series at 20 dB with an increased amplification applied thereto as reflected by 20 db+b. Similarly, atblock 242 anddecision block 244, the methodology advances through 400 Hz with at least one harmonic frequency of a harmonic series at 20 dB with an increased amplification applied thereto as reflected by 20 db+c. The process may advance through the harmonic frequency series to block 246 and decision block 248 for 1,000 Hz with at least one harmonic frequency of a harmonic series at 20 dB with an increased amplification applied thereto as reflected by 20 db+d. As indicated inblock 250, the testing methodology continues for the frequencies under test with the results being recorded. - Referring now to
FIG. 8 , a still further embodiment of a method for calibrating and setting thehearing aid 10 for a preferred hearing range or preferred hearing ranges utilizing the methodology presented herein is shown. In this exemplary methodology, constant amplification is utilized in a step-by-step manner as a patient is tested in 100 Hz increments of a harmonic frequency series. By way of example, the following equations exemplify this methodology: - Fb=100 Hz such that FT=Fb100+Fh1+Fh2+ . . . +Fhn at 30 db;
- Fb=200 Hz such that FT=Fb200+Fh1+Fh2+ . . . +Fhn at 30 db; and
- FbN=ZHz such that FT=FbZ+Fh1+Fh2+ . . . +Fhn at 30 db; wherein
- Fb is the fundamental or base frequency;
- FT is the testing frequency;
- Fh is an integer multiple of the fundamental frequency; and
- ZHz is the highest frequency, Z, in the chosen range.
- Continuing to refer to
FIG. 8 , The method starts atblock 260, when a patient is going to undergo testing to determine the preferred hearing range or preferred hearing ranges for use of thehearing aid 10. As with the methodologies inFIGS. 6-7 , thefrequency generator 152 and therecorder 166 interact with the methodology to provide the preferredhearing range 174 or a contribution thereto. In the illustrated embodiment, a left ear or a right ear of a patient is tested with continuous sound being produced using increasing or decreasing frequencies between 100 Hz and 1,000 Hz, for example, for a sufficient time, such as 30 seconds. The patient may push a button when the sound is first heard. - At
block 260, an initial frequency of 100 Hz with at least one harmonic frequency of a harmonic series at 30 dB is screened. As shown bydecision block 262, the patient's ability to hear the initial frequency is recorded before the process advances to the next incremental frequency, which is 200 Hz with at least one harmonic frequency of a harmonic series at 30 dB, atblock 264 and the patient's ability to hear is recorded atdecision block 266. - At
block 268 anddecision block 270, the process advances to the next incremental frequency in the testing of the applicable harmonic frequency series, e.g., 300 Hz with at least one harmonic frequency of a harmonic series at 30 dB. Similarly, atblock 272 anddecision block 274, the methodology advances through 400 Hz with at least one harmonic frequency of a harmonic series at 30 dB. The process may advance through the harmonic frequency series to block 276 and decision block 278 for 1,000 Hz with at least one harmonic frequency of a harmonic series at 30 dB. As indicated inblock 280, the testing methodology continues for the frequencies under test with the results being recorded. - Referring now to
FIG. 9 , one embodiment of asystem 300 for aiding hearing is shown. As shown, the user U, who may be considered a patient requiring a hearing aid, is wearing thehearing aid 10 and sitting at a table T. Thehearing aid 10 has a pairing with the proximatesmart device 12 such thehearing aid 10 and the proximatesmart device 12 may determine the user's preferred hearing range for each ear and subsequently program thehearing aid 10 with the preferred hearing ranges. The proximatesmart device 12, which may be a smart phone, a smart watch, or a tablet computer, for example, is executing a hearing screening program. Thedisplay 14 serves as an interface for the user U. As shown, various indicators, such asindicators indicator 306 and the user U may appropriately respond atsoft button 308 orsoft button 310. In this way, thesystem 300 screens, via a speaker and theuser interface 16 associated with the proximatesmart device 12, a left ear—and separately, a right ear—of the user U at multiple harmonic frequencies of a harmonic frequency series between 50 Hz and 10,000 Hz, with detected frequencies, optionally, being re-ranged tested to better identify the frequencies and decibel levels heard. Following the completion of the screening, thesystem 300 then determines a left ear preferred hearing range and a right ear preferred hearing range. As previously discussed, the harmonic frequency series may be a fundamental frequency and multiple integer multiples of the fundamental frequency. - As shown the proximate
smart device 12 may be in communication with aserver 320 having ahousing 322. The smart device may utilize distributed processing between the proximatesmart device 12 and theserver 320 to at least one of screen the left ear, screen the right ear, determine the left ear preferred hearing range, and determine the right ear preferred hearing range. As previously mentioned, the processing to screen the left ear, screen the right ear, determine the left ear preferred hearing range, and determine the right ear preferred hearing range may be located on a smart device, a server, hearing testing equipment, or any combination thereof. - Referring now to
FIG. 10 , an illustrative embodiment of the internal components of thehearing aid 10 is depicted. By way of illustration and not by way of limitation, thehearing aid 10 depicted in the embodiment ofFIG. 2 andFIGS. 3A, 3B is presented. It should be appreciated, however, that the teachings ofFIG. 5 equally apply to the embodiment ofFIG. 4 . As shown, with respect toFIGS. 3A and 3B , in one embodiment, within theinternal compartments electronic signal processor 330 may be housed. Thehearing aid 10 may include theelectronic signal processor 330 for each ear or theelectronic signal processor 330 for each ear may be at least partially integrated or fully integrated. In another embodiment, with respect toFIG. 4 , within theinternal compartment 122 of thebody 112, theelectronic signal processor 330 is housed. In order to measure, filter, compress, and generate, for example, continuous real-world analog signals in form of sounds, theelectronic signal processor 330 may include an analog-to-digital converter (ADC) 332, a digital signal processor (DSP) 334, and a digital-to-analog converter (DAC) 336. Theelectronic signal processor 330, including the digital signal processor embodiment, may have memory accessible to a processor. One ormore microphone inputs 338 corresponding to one or more respective microphones, aspeaker output 340, various controls, such as aprogramming connector 342 and hearing aid controls 344, aninduction coil 346, abattery 348, and atransceiver 350 are also housed within thehearing aid 10. - As shown, a signaling architecture communicatively interconnects the
microphone inputs 338 to theelectronic signal processor 330 and theelectronic signal processor 330 to thespeaker output 340. The various hearing aid controls 344, theinduction coil 346, thebattery 348, and thetransceiver 350 are also communicatively interconnected to theelectronic signal processor 330 by the signaling architecture. Thespeaker output 340 sends the sound output to a speaker or speakers to project sound and in particular, acoustic signals in the audio frequency band as processed by thehearing aid 10. By way of example, theprogramming connector 342 may provide an interface to a computer or other device and, in particular, theprogramming connector 342 may be utilized to program and calibrate thehearing aid 10 with thesystem 300, according to the teachings presented herein. The hearing aid controls 344 may include an ON/OFF switch as well as volume controls, for example. Theinduction coil 346 may receive magnetic field signals in the audio frequency band from a telephone receiver or a transmitting induction loop, for example, to provide a telecoil functionality. Theinduction coil 346 may also be utilized to receive remote control signals encoded on a transmitted or radiated electromagnetic carrier, with a frequency above the audio band. Various programming signals from a transmitter may also be received via theinduction coil 346 or via thetransceiver 350, as will be discussed. Thebattery 348 provides power to thehearing aid 10 and may be rechargeable or accessed through a battery compartment door (not shown), for example. Thetransceiver 350 may be internal, external, or a combination thereof to the housing. Further, thetransceiver 350 may be a transmitter/receiver, receiver, or an antenna, for example. Communication between various smart devices and thehearing aid 10 may be enabled by a variety of wireless methodologies employed by thetransceiver 150, including 802.11, 3G, 4G, Edge, WiFi, ZigBee, near field communications (NFC), Bluetooth low energy, and Bluetooth, for example. - The various controls and inputs and outputs presented above are exemplary and it should be appreciated that other types of controls may be incorporated in the
hearing aid 10. Moreover, the electronics and form of thehearing aid 10 may vary. Thehearing aid 10 and associated electronics may include any type of headphone configuration, a behind-the-ear configuration, an over-the-ear configuration, or in-the-ear configuration, for example. Further, as alluded, electronic configurations with multiple microphones for directional hearing are within the teachings presented herein. In some embodiments, the hearing aid has an over-the-ear configuration where the entire ear is covered, which not only provides the hearing aid functionality but hearing protection functionality as well. - Continuing to refer to
FIG. 10 , in one embodiment, theelectronic signal processor 330 may be programmed with a preferred hearing range which, in one embodiment, is the preferred hearing sound range corresponding to highest hearing capacity of a patient. In one embodiment, the left ear preferred hearing range and the right ear preferred hearing range are each a range of sound corresponding to highest hearing capacity of an ear of a patient between 50 Hz and 10,000 Hz, as tested with the utilization of one or more harmonic frequency series. With this approach, the hearing capacity of the patient is enhanced. Existing audiogram hearing aid industry testing equipment measures hearing capacity at defined, discrete frequencies, such as 60 Hz; 125 Hz; 250 Hz; 500 Hz; 1,000 Hz; 2,000 Hz; 4,000 Hz; 8,000 Hz and existing hearing aids work on a ratio-based frequency scheme. The present teachings, however, measure hearing capacity with harmonics to improve the speed of the testing and to provide an algorithm for hearing similar to real-life with multiple non-discrete harmonics utilized. - Further, in one embodiment, the preferred hearing sound range may be shifted by use of various controls the 124. Directional microphone systems on each microphone position and processing may be included that provide a boost to sounds coming from the front of the patient and reduce sounds from other directions. Such a directional microphone system and processing may improve speech understanding in situations with excessive background noise. Digital noise reduction, impulse noise reduction, and wind noise reduction may also be incorporated. As alluded to, system compatibility features, such as FM compatibility and Bluetooth compatibility, may be included in the
hearing aid 10. - The
ADC 332 outputs a digital total sound (ST) signal that undergoes the frequency spectrum analysis. In this process, the base frequency (FB) and harmonics (H1, H2, . . . , HN) components are separated. Using the algorithms presented hereinabove and having a converted based frequency (CFB) set as a target frequency range, the harmonics processing within theelectronic signal processor 330 calculates a converted actual frequency (CFA) and a differential converted harmonics (DCHN) to create a converted total sound (CST), which is the output of the harmonics processing by theelectronic signal processor 330. - More particularly, total sound (ST) may be defined as follows:
-
S T =F B +H 1 +H 2 + . . . +H N, wherein - ST=total sound;
- FB=base frequency range, with
- FB=range between FBL and FBH with FBL being the lowest frequency value in base frequency and FBB being the highest frequency Value in Base Frequency;
- HN=harmonics of FB with HN being a mathematical multiplication of FB;
- FA=an actual frequency value being examined;
- HA1=1st harmonic of FA;
- HA2=2nd harmonic of FA; and
- HAN=Nth harmonic of FA with HAN being the mathematical multiplication of FA.
- In many hearing impediment cases, the total sound (ST) may be at any frequency range; furthermore, the two ears true hearing range may be entirely different. Therefore, the
hearing aid 10 presented herein may transfer the base frequency range (FB) along with several of the harmonics (HN) into the actual hearing range (AHR) by converting the base frequency range (FB) and several chosen harmonics (HN) into the actual hearing range (AHR) as one coherent converted total sound (CST) by using the following algorithm defined by following equations: -
- wherein for Equation (1), Equation (2), and Equation (3):
- M=multiplier between CFA and FA;
- CST=converted total sound;
- CFB=converted base frequency;
- CHA1=1st converted harmonic;
- CHA2=2nd converted harmonic;
- CHAN=Nth converted harmonic;
- CFBL=lowest frequency value in CFB;
- CFBH=Highest frequency value in CFB; and
- CFA=Converted actual frequency.
- By way of example and not by way of limitation, an application of the algorithm utilizing Equation (1), Equation (2), and Equation (3) is presented. For this example, the following assumptions are utilized:
- FBL=170 Hz
- FBH=330 Hz
- CFBL=600 Hz
- CFBH=880 Hz
- FA=180 Hz
- Therefore, for this example, the following will hold true:
- H1=360 Hz
- H4=720 Hz
- H8=1,440 Hz
- H16=2,880 Hz
- H32=5,760 Hz
- Using the algorithm, the following values may be calculated:
- CFA=635 Hz
- CHA1=1,267 Hz
- CHA4=2,534 Hz
- CHA8=5,068 Hz
- CHA16=10,137 Hz
- CHA32=20,275 Hz
- To calculate the differentials (D) between the harmonics HN and the converted harmonics (CHAN), the following equation is employed:
-
CHAN−HN=D
- DCH1=907 Hz
- DCH4=1,814 Hz
- DCH8=3,628 Hz
- DCH16=7,257 Hz
- DCH32=14,515 Hz
- In some embodiments, a high-pass filter may cut all differential converted harmonics (DCH) above a predetermined frequency. The frequency of 5,000 Hz may be used as a benchmark. In this case, the frequencies participating in the converted total sound (CST) are as follows:
- CFA=635 Hz
- DCH1=907 Hz
- DCH4=1,814 Hz
- DCH8=3,628 Hz
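- For readers wishing to trace the arithmetic, the following sketch (in Python) reproduces the worked example above. Because Equation (1), Equation (2), and Equation (3) are not reproduced in this text, the multiplier M is inferred here as CFBL/FBL; under that assumption the computed CFA, CHAN, and DCHN values agree with the figures listed above to within small rounding differences, so the sketch is illustrative rather than a statement of the disclosed equations.

```python
# Worked-example sketch. The form of the multiplier M is an inference from the example
# values (Equations (1)-(3) are not reproduced in this text); computed results agree with
# the listed figures to within small rounding differences.
FBL, FBH = 170.0, 330.0                      # base frequency range
CFBL, CFBH = 600.0, 880.0                    # converted (target) base frequency range
FA = 180.0                                   # actual frequency being examined
HN = {1: 360.0, 4: 720.0, 8: 1_440.0, 16: 2_880.0, 32: 5_760.0}   # harmonics from the example

M = CFBL / FBL                               # assumed multiplier (about 3.53)
CFA = M * FA                                 # converted actual frequency (about 635 Hz)
CHAN = {n: M * h for n, h in HN.items()}     # converted harmonics
DCHN = {n: CHAN[n] - HN[n] for n in HN}      # differential converted harmonics

CUTOFF_HZ = 5_000.0                          # benchmark frequency used in the text
participants = [CFA] + [d for d in DCHN.values() if d <= CUTOFF_HZ]

print(round(CFA))                            # 635
print({n: round(d) for n, d in DCHN.items()})
print([round(p) for p in participants])      # approximately [635, 911, 1821, 3642]
```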
- The harmonics processing at the DSP 334 may provide the conversion for each participating frequency in the total sound (ST) and distribute all participating converted actual frequencies (CFA) and differential converted harmonics (DCHN) in the converted total sound (CST) in the same ratio as they participated in the original total sound (ST). In some implementations, should more than seventy-five percent (75%) of all the differential converted harmonics (DCHN) be out of the high-pass filter range, the harmonics processing may use an adequate multiplier (between 0.1 and 0.9) and add the newly created differential converted harmonics (DCHN) to the converted total sound (CST).
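- By way of illustration, the seventy-five percent fallback may be sketched as follows (in Python); the function name and the fallback multiplier of 0.5 are assumptions made for the example, since the description only bounds the multiplier to between 0.1 and 0.9 and does not fix its value.

```python
# Sketch of the fallback described above: if more than 75% of the differential converted
# harmonics (DCHN) fall outside the filter range, create new, scaled-down harmonics with
# an assumed multiplier of 0.5 (the text permits 0.1 to 0.9) and include them in the
# converted total sound (CST).
def select_dchn(dchn_values, cutoff_hz=5_000.0, fallback_multiplier=0.5):
    in_range = [d for d in dchn_values if d <= cutoff_hz]
    out_of_range = [d for d in dchn_values if d > cutoff_hz]
    if len(out_of_range) > 0.75 * len(dchn_values):
        in_range += [d * fallback_multiplier for d in out_of_range]
    return in_range
```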
- The processor may process instructions for execution within the electronic signal processor 330 as a computing device, including instructions stored in the memory. The memory stores information within the computing device. In one implementation, the memory is a volatile memory unit or units. In another implementation, the memory is a non-volatile memory unit or units. The memory is accessible to the processor and includes processor-executable instructions that, when executed, cause the processor to execute a series of operations. The processor-executable instructions cause the processor to receive an input analog signal from the microphone inputs 338 and convert the input analog signal to a digital signal. The processor-executable instructions then cause the processor to transform, through compression, for example, the digital signal into a processed digital signal having the preferred hearing range. The transformation may be a frequency transformation in which the input frequency is transformed into the preferred hearing range. Such a transformation yields a toned-down, narrower articulation that is clearly understandable because it is customized for the user. The processor is then caused by the processor-executable instructions to convert the processed digital signal to an output analog signal and drive the output analog signal to the speaker output 340.
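- A minimal sketch of this signal path follows (in Python, assuming NumPy); the conversion step stands in for the harmonics processing described hereinabove, and the function names, block-based structure, and output limiting are assumptions made for illustration rather than the device firmware.

```python
# Skeleton of the path described above: input analog signal -> digital signal -> transform
# into the preferred hearing range -> output signal driven to the speaker output. The
# convert_to_preferred_range callback is a stand-in for the harmonics conversion.
import numpy as np

def process_block(samples, preferred_range_hz, convert_to_preferred_range):
    digital = np.asarray(samples, dtype=np.float64)                      # digitized microphone block
    processed = convert_to_preferred_range(digital, preferred_range_hz)  # frequency transformation
    return np.clip(processed, -1.0, 1.0)                                 # bounded signal for the speaker
```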
- Referring now to FIG. 11, the proximate smart device 12 may be a wireless communication device of the type including various fixed, mobile, and/or portable devices. To expand rather than limit the discussion of the proximate smart device 12, such devices may include, but are not limited to, cellular or mobile smart phones, tablet computers, smartwatches, and so forth. The proximate smart device 12 may include a processor 370, memory 372, storage 374, a transceiver 376, and a cellular antenna 378 interconnected by a busing architecture 380 that also supports the display 14, I/O panel 382, and a camera 384. It should be appreciated that although a particular architecture is explained, other designs and layouts are within the teachings presented herein. - The proximate
smart device 12 includes the memory 372 accessible to the processor 370, and the memory 372 includes processor-executable instructions that, when executed, cause the processor 370 to screen, via the speaker and the user interface, a left ear of a patient at harmonic frequencies of a harmonic frequency series, with detected frequencies optionally being re-ranged and tested at a more discrete increment, such as a 5 Hz to 20 Hz increment. The harmonic frequency series may be between 50 Hz and 10,000 Hz or 50 Hz and 5,000 Hz, for example. The processor-executable instructions may also determine a left ear preferred hearing range, which is a range of sound corresponding to the highest hearing capacity of the left ear of the patient, based on the utilization of the harmonic frequency series. - The processor-executable instructions then cause the
processor 370 to screen, via the speaker and the user interface, a right ear of a patient at harmonic frequencies of a harmonic frequency series, with detected frequencies optionally being re-ranged and tested at a more discrete increment, such as a 5 Hz to 20 Hz increment. The harmonic frequency series may be between 50 Hz and 10,000 Hz or 50 Hz and 5,000 Hz, for example. The processor-executable instructions may also determine a right ear preferred hearing range, which is a range of sound corresponding to the highest hearing capacity of the right ear of the patient, based on the utilization of the harmonic frequency series. Also, the processor-executable instructions may cause the processor 370 to, when executed, utilize distributed processing between the proximate smart device 12 and a server to at least one of screen the left ear, screen the right ear, determine the left ear preferred hearing range, and determine the right ear preferred hearing range. - The processor-executable instructions presented hereinabove include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Processor-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, or the like, that perform particular tasks or implement particular abstract data types. Processor-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the systems and methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps, and variations in the combinations of processor-executable instructions and sequencing are within the teachings presented herein.
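- The screening flow described above for a single ear may be sketched as follows (in Python). The coarse pass over the harmonic frequency series followed by a finer pass mirrors the description; the callback name, the 100 Hz fundamental, the 10 Hz fine step, and the rule for turning detections into a preferred hearing range are assumptions made for illustration and are not the disclosed procedure.

```python
# Illustrative one-ear screening sketch: a coarse pass at harmonic frequencies followed by
# a finer pass (10 Hz steps here) around each detected frequency. play_tone_and_ask is a
# hypothetical callback wrapping the speaker and the user interface; the final min/max rule
# for the preferred hearing range is an assumption.
def screen_ear(play_tone_and_ask, fundamental_hz=100.0, lo_hz=50.0, hi_hz=10_000.0, fine_step_hz=10.0):
    detected = []
    f = fundamental_hz
    while f <= hi_hz:                                   # coarse pass at harmonic frequencies
        if f >= lo_hz and play_tone_and_ask(f):
            detected.append(f)
        f += fundamental_hz
    refined = []
    for f in detected:                                  # re-ranged, finer-increment pass
        g = max(lo_hz, f - fundamental_hz / 2)
        while g <= min(hi_hz, f + fundamental_hz / 2):
            if play_tone_and_ask(g):
                refined.append(g)
            g += fine_step_hz
    return (min(refined), max(refined)) if refined else None   # assumed preferred-range rule
```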
- After the
hearing aid 10 is programmed, in operation, the teachings presented herein permit the proximate smart device 12, such as a smart phone, to form a pairing with the hearing aid 10 and operate the hearing aid 10. As shown, the proximate smart device 12 includes the memory 372 accessible to the processor 370, and the memory 372 includes processor-executable instructions that, when executed, cause the processor 370 to provide an interface for an operator that includes an interactive application for viewing the status of the hearing aid 10. The processor 370 is caused to present a menu for controlling the hearing aid 10. The processor 370 is then caused to receive an interactive instruction from the user and forward a control signal via the transceiver 376, for example, to implement the instruction at the hearing aid 10. The processor 370 may also be caused to generate various reports about the operation of the hearing aid 10. The processor 370 may also be caused to translate or access a translation service for the audio. - Referring now to
FIG. 12, one embodiment of the server 320 as a computing device includes, within the housing 322, a processor 400, memory 402, and storage 404 interconnected with various buses 412 in a common or distributed, for example, mounting architecture that also supports inputs 406, outputs 408, and a network interface 410. In other implementations, in the computing device, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Further still, in other implementations, multiple computing devices may be provided and operations distributed therebetween. The processor 400 may process instructions for execution within the server 320, including instructions stored in the memory 402 or in storage 404. The memory 402 stores information within the computing device. In one implementation, the memory 402 is a volatile memory unit or units. In another implementation, the memory 402 is a non-volatile memory unit or units. Storage 404 includes capacity that is capable of providing mass storage for the server 320, including database storage capacity. Various inputs 406 and outputs 408 provide connections to and from the server 320, wherein the inputs 406 are the signals or data received by the server 320, and the outputs 408 are the signals or data sent from the server 320. The network interface 410 provides the necessary device controller to connect the server 320 to one or more networks. - The
memory 402 is accessible to the processor 400 and includes processor-executable instructions that, when executed, cause the processor 400 to execute a series of operations. The processor 400 may be caused to screen, via the speaker and the user interface, a left ear of a patient at harmonic frequencies of a harmonic frequency series, with detected frequencies optionally being re-ranged and tested at a more discrete increment, such as a 5 Hz to 20 Hz increment. The harmonic frequency series may be between 50 Hz and 10,000 Hz or 50 Hz and 5,000 Hz, for example. The processor-executable instructions may also determine a left ear preferred hearing range, which is a range of sound corresponding to the highest hearing capacity of the left ear of the patient, based on the utilization of the harmonic frequency series. - The processor-executable instructions may also determine a right ear preferred hearing range, which is a range of sound corresponding to the highest hearing capacity of the right ear of the patient between 50 Hz and 10,000 Hz, based on the utilization of harmonic frequencies of a harmonic frequency series. The processor-executable instructions then cause the
processor 400 to screen, via the speaker and the user interface, a right ear of a patient at harmonic frequencies of a harmonic frequency series, with detected frequencies optionally being re-ranged and tested at a more discrete increment, such as a 5 Hz to 20 Hz increment. The harmonic frequency series may be between 50 Hz and 10,000 Hz or 50 Hz and 5,000 Hz, for example. The processor-executable instructions may also determine a right ear preferred hearing range, which is a range of sound corresponding to the highest hearing capacity of the right ear of the patient, based on the utilization of the harmonic frequency series. Also, the processor-executable instructions may cause the processor 400 to, when executed, utilize distributed processing between the server 320 and either the proximate smart device 12 or hearing testing equipment to at least one of screen the left ear, screen the right ear, determine the left ear preferred hearing range, and determine the right ear preferred hearing range. - The processor-executable instructions presented hereinabove include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Processor-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, or the like, that perform particular tasks or implement particular abstract data types. Processor-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the systems and methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps, and variations in the combinations of processor-executable instructions and sequencing are within the teachings presented herein.
- Referring now to
FIG. 13, another embodiment of a system 430 for aiding hearing is shown. As shown, a user V, who may be considered a patient requiring a hearing aid, is utilizing a hearing testing device 434 with a testing/programming unit 432 and a headset 436 having headphones 437 with a transceiver 438 for communicating with the hearing testing device 434. A push button 442 is coupled with cabling 440 to the headset 436 to provide an interface for the user V to indicate when a particular sound, i.e., a particular frequency at a particular decibel level, is heard. In this way, the system 430 screens, via a speaker in the headset 436 and a user interface with the push button 442, a left ear and, separately, a right ear of the user V at selected frequencies based on the harmonic frequencies of a harmonic frequency series discussed above, within a frequency range of 50 Hz to 10,000 Hz, with detected frequencies being re-ranged and tested to better identify the frequencies and decibel levels heard. - Referring now to
FIG. 14, the hearing testing device 434 depicted as a computing device is shown. Within a housing (not shown), a processor 450, memory 452, storage 454, and a display 456 are interconnected by a busing architecture 458 within a mounting architecture. The processor 450 may process instructions for execution within the computing device, including instructions stored in the memory 452 or in storage 454. The memory 452 stores information within the computing device. In one implementation, the memory 452 is a volatile memory unit or units. In another implementation, the memory 452 is a non-volatile memory unit or units. The storage 454 provides capacity that is capable of providing mass storage for the hearing testing device 434. Various inputs and outputs provide connections to and from the computing device, wherein the inputs are the signals or data received by the hearing testing device 434, and the outputs are the signals or data sent from the hearing testing device 434. In the following description, it should be appreciated that various inputs and outputs may be partially or fully integrated. - By way of example, with respect to inputs and outputs, the
hearing testing device 432 may include the display 456, a user interface 460, a test frequency output 462, a headset output 464, a timer output 466, a handset input 468, a frequency range output 470, and a microphone input 472. The display 456 is an output device for visual information, including real-time or post-test screening results. The user interface 460 may provide a keyboard or push button for the operator of the hearing testing device 432 to provide input, including such functions as starting the screening, stopping the screening, and repeating a previously completed step. The test frequency output 462 may display the range to be examined, such as a frequency between 100 Hz and 5,000 Hz. The headset output 464 may output the signal under test to the patient. The timer output 466 may include an indication of the length of time the hearing testing device 432 will stay on a given frequency. For example, the hearing testing device 432 may stay 30 seconds on a particular frequency. The handset input 468 may be secured to a handset that provides "pause" and "okay" functionality for the patient during the testing. The frequency range output 470 may indicate the test frequency range per step, such as 50 Hz or another increment, for example. The microphone input 472 receives audio input from the operator relative to screening instructions intended for the patient, for example. - The
memory 452 and the storage 454 are accessible to the processor 450 and include processor-executable instructions that, when executed, cause the processor 450 to execute a series of operations. With respect to processor-executable instructions, the processor-executable instructions may cause the processor 450 to permit the testing by the hearing testing device 432 to be conducted one ear at a time. The processor-executable instructions may also cause the processor 450 to permit the patient to pause the process in response to a signal received at the handset input 468. As part of the processor-executable instructions, the processor 450, for example, may be caused to start the hearing testing device 432 at 50 Hz by giving a 100 Hz signal with harmonics, as part of a harmonic frequency series, for a predetermined length of time, such as 20 seconds to 30 seconds, at a specified decibel or decibel range. The processor-executable instructions may cause the processor 450 to receive a detection signal from the handset input 468 during screening. Then, the processor-executable instructions cause the hearing testing device 432 to test the next frequency or frequencies in the applicable harmonic frequency series as a step, such as 200 Hz, for example, and continue the screening process. The system then determines a left ear preferred hearing range and a right ear preferred hearing range. - The processor-executable instructions presented hereinabove include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Processor-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, or the like, that perform particular tasks or implement particular abstract data types. Processor-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the systems and methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps, and variations in the combinations of processor-executable instructions and sequencing are within the teachings presented herein.
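- The per-frequency test step described above may be organized as in the following sketch (in Python), in which each frequency of the harmonic series is held for a fixed dwell time while the handset is polled for a detection or a pause. The callback names, the polling interval, and the 60 dB level are assumptions made for illustration; the 30 second dwell follows the example in the text.

```python
# Sketch of the timed test loop: hold each frequency for the dwell time, watch the handset
# for "okay" (detection) or "pause", then advance to the next frequency in the series.
# start_tone, stop_tone, and poll_handset are hypothetical hardware callbacks.
import time

def run_test(series_hz, start_tone, stop_tone, poll_handset, dwell_s=30, level_db=60):
    """Return {frequency: heard?} for one ear, stepping through the harmonic series."""
    detections = {}
    for f in series_hz:
        start_tone(f, level_db)              # hold the test tone at the specified level
        heard, elapsed = False, 0.0
        while elapsed < dwell_s:
            event = poll_handset()           # returns "okay", "pause", or None
            if event == "okay":              # patient pressed the push button
                heard = True
                break
            if event != "pause":             # the dwell timer only advances while not paused
                elapsed += 0.05
            time.sleep(0.05)
        stop_tone()
        detections[f] = heard
    return detections
```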
- Referring now to
FIG. 15, the software architecture of a testing equipment application 500 of some embodiments that may determine the preferred hearing ranges for patients is conceptually illustrated. In some embodiments, the testing equipment application 500 is a stand-alone application or is integrated into another application, while in other embodiments the application might be implemented within an operating system 530. Furthermore, in some embodiments, the testing equipment application 500 is provided as part of a server-based solution or a cloud-based solution. In some such embodiments, the application is provided via a thin client. That is, the application runs on a server while a user interacts with the application via a separate machine remote from the server. In other such embodiments, the application is provided via a thick client. That is, the application is distributed from the server to the client machine and runs on the client machine. - The
testing equipment application 500 includes a user interface (UI) interaction and generation module 502, management (user) interface tools 504, test procedure modules 506, frequency generator modules 508, decibels modules 510, notification/alert modules 512, report modules 514, a database module 516, an operator module 518, and a health care professional module 520. The testing equipment application 500 has access to a testing equipment database 522, which, in one embodiment, may include test procedure data 524, patient data 526, harmonics data 528, and presentation instructions 529. In some embodiments, these storages are all stored in one physical storage, while in other embodiments, the storages are in separate physical storages. - Continuing to refer to
FIG. 15, the system 300 identifies harmonic frequencies of a harmonic frequency series, or of multiple harmonic frequency series, that enable hearing. The system 300 is capable of combining various sounds, such as musical sounds or classical music instrument sounds, as discussed hereinabove, through a fundamental frequency and related frequencies of a harmonic frequency series, or related frequencies of multiple harmonic frequency series, to create or contribute to an algorithm that addresses or mitigates hearing loss for the patient. In fact, as presented herein, patients may be able to self-test or have minimal assistance during the testing. - The order of execution or performance of the methods and data flows illustrated and described herein is not essential, unless otherwise specified. That is, elements of the methods and data flows may be performed in any order, unless otherwise specified, and the methods may include more or fewer elements than those disclosed herein. For example, it is contemplated that executing or performing a particular element before, contemporaneously with, or after another element are all possible sequences of execution.
- While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is, therefore, intended that the appended claims encompass any such modifications or embodiments.
Claims (21)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/343,329 US11115759B1 (en) | 2018-01-05 | 2021-06-09 | System and method for aiding hearing |
CA3222516A CA3222516A1 (en) | 2018-01-05 | 2021-12-16 | System and method for aiding hearing |
PCT/US2021/063845 WO2022260707A1 (en) | 2018-01-05 | 2021-12-16 | System and method for aiding hearing |
EP21945357.8A EP4352973A4 (en) | 2018-01-05 | 2021-12-16 | System and method for aiding hearing |
Applications Claiming Priority (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862613804P | 2018-01-05 | 2018-01-05 | |
PCT/US2019/012550 WO2019136382A1 (en) | 2018-01-05 | 2019-01-07 | Hearing aid and method for use of same |
US16/959,972 US11134347B2 (en) | 2018-01-05 | 2019-01-07 | Hearing aid and method for use of same |
US201962904616P | 2019-09-23 | 2019-09-23 | |
US201962935961P | 2019-11-15 | 2019-11-15 | |
US17/026,955 US11102589B2 (en) | 2018-01-05 | 2020-09-21 | Hearing aid and method for use of same |
US17/029,764 US10993047B2 (en) | 2018-01-05 | 2020-09-23 | System and method for aiding hearing |
PCT/US2021/029414 WO2022066223A1 (en) | 2020-09-23 | 2021-04-27 | System and method for aiding hearing |
US17/343,329 US11115759B1 (en) | 2018-01-05 | 2021-06-09 | System and method for aiding hearing |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2021/029414 Continuation-In-Part WO2022066223A1 (en) | 2018-01-05 | 2021-04-27 | System and method for aiding hearing |
Publications (2)
Publication Number | Publication Date |
---|---|
US11115759B1 US11115759B1 (en) | 2021-09-07 |
US20210297791A1 true US20210297791A1 (en) | 2021-09-23 |
Family
ID=74346285
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/029,764 Active US10993047B2 (en) | 2018-01-05 | 2020-09-23 | System and method for aiding hearing |
US17/343,329 Active US11115759B1 (en) | 2018-01-05 | 2021-06-09 | System and method for aiding hearing |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/029,764 Active US10993047B2 (en) | 2018-01-05 | 2020-09-23 | System and method for aiding hearing |
Country Status (4)
Country | Link |
---|---|
US (2) | US10993047B2 (en) |
EP (1) | EP4352973A4 (en) |
CA (1) | CA3222516A1 (en) |
WO (1) | WO2022260707A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11743660B2 (en) | 2018-01-05 | 2023-08-29 | Texas Institute Of Science, Inc. | Hearing aid and method for use of same |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11128963B1 (en) | 2018-01-05 | 2021-09-21 | Texas Institute Of Science, Inc. | Hearing aid and method for use of same |
US11153694B1 (en) | 2018-01-05 | 2021-10-19 | Texas Institute Of Science, Inc. | Hearing aid and method for use of same |
US10993047B2 (en) * | 2018-01-05 | 2021-04-27 | Texas Institute Of Science, Inc. | System and method for aiding hearing |
US10893370B1 (en) | 2018-01-05 | 2021-01-12 | Texas Institute Of Science, Inc. | System and method for aiding hearing |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1155777A (en) | 1997-07-31 | 1999-02-26 | Sony Corp | Sound pickup device |
EP1284587B1 (en) | 2001-08-15 | 2011-09-28 | Sound Design Technologies Ltd. | Low-power reconfigurable hearing instrument |
EP2445229A4 (en) * | 2009-06-16 | 2014-12-31 | Panasonic Corp | Hearing aid suitability determination device, hearing aid processing regulation system and hearing aid suitability determination method |
US8761421B2 (en) * | 2011-01-14 | 2014-06-24 | Audiotoniq, Inc. | Portable electronic device and computer-readable medium for remote hearing aid profile storage |
JP5500125B2 (en) | 2010-10-26 | 2014-05-21 | パナソニック株式会社 | Hearing aid |
US20130223661A1 (en) | 2012-02-27 | 2013-08-29 | Michael Uzuanis | Customized hearing assistance device system |
US9392967B2 (en) * | 2012-11-20 | 2016-07-19 | Bitwave Pte Ltd. | User interface and method to discover hearing sensitivity of user on smart phone |
WO2014108080A1 (en) * | 2013-01-09 | 2014-07-17 | Ace Communications Limited | Method and system for self-managed sound enhancement |
US20140309549A1 (en) * | 2013-02-11 | 2014-10-16 | Symphonic Audio Technologies Corp. | Methods for testing hearing |
EP2835985B1 (en) * | 2013-08-08 | 2017-05-10 | Oticon A/s | Hearing aid device and method for feedback reduction |
US9232322B2 (en) | 2014-02-03 | 2016-01-05 | Zhimin FANG | Hearing aid devices with reduced background and feedback noises |
US10181328B2 (en) | 2014-10-21 | 2019-01-15 | Oticon A/S | Hearing system |
DK3051844T3 (en) | 2015-01-30 | 2018-01-29 | Oticon As | Binaural hearing system |
KR20170026786A (en) | 2015-08-28 | 2017-03-09 | 전자부품연구원 | Smart hearing aid system with active noise control and hearing aid control device thereof |
US9642573B2 (en) | 2015-09-16 | 2017-05-09 | Yong D Zhao | Practitioner device for facilitating testing and treatment of auditory disorders |
US10433082B2 (en) * | 2016-07-26 | 2019-10-01 | Sonova Ag | Fitting method for a binaural hearing system |
US10993047B2 (en) | 2018-01-05 | 2021-04-27 | Texas Institute Of Science, Inc. | System and method for aiding hearing |
CN112237009B (en) | 2018-01-05 | 2022-04-01 | L·奥拉 | Hearing aid and method of use |
-
2020
- 2020-09-23 US US17/029,764 patent/US10993047B2/en active Active
-
2021
- 2021-06-09 US US17/343,329 patent/US11115759B1/en active Active
- 2021-12-16 CA CA3222516A patent/CA3222516A1/en active Pending
- 2021-12-16 EP EP21945357.8A patent/EP4352973A4/en active Pending
- 2021-12-16 WO PCT/US2021/063845 patent/WO2022260707A1/en active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11743660B2 (en) | 2018-01-05 | 2023-08-29 | Texas Institute Of Science, Inc. | Hearing aid and method for use of same |
US12075214B2 (en) | 2018-01-05 | 2024-08-27 | Texas Institute Of Science, Inc. | Hearing aid and method for use of same |
Also Published As
Publication number | Publication date |
---|---|
US11115759B1 (en) | 2021-09-07 |
EP4352973A4 (en) | 2024-10-09 |
US10993047B2 (en) | 2021-04-27 |
EP4352973A1 (en) | 2024-04-17 |
US20210021941A1 (en) | 2021-01-21 |
CA3222516A1 (en) | 2022-12-15 |
WO2022260707A1 (en) | 2022-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11115759B1 (en) | System and method for aiding hearing | |
US10966034B2 (en) | Method of operating a hearing device and a hearing device providing speech enhancement based on an algorithm optimized with a speech intelligibility prediction algorithm | |
US11510019B2 (en) | Hearing aid system for estimating acoustic transfer functions | |
DK2846559T3 (en) | Method of performing a RECD measurement using a hearing aid device | |
JPS62500485A (en) | Hearing aids, signal delivery devices, devices and methods for compensating for hearing defects | |
US20170272870A1 (en) | Method for predicting the intelligibility of noisy and/or enhanced speech and a binaural hearing system | |
CN111432318B (en) | Hearing device comprising direct sound compensation | |
CN107147981A (en) | Monaural intrusion intelligibility of speech predicting unit, audiphone and binaural hearing aid system | |
KR101963871B1 (en) | Apparatus and method for enhancing perceptual ability through sound control | |
JP2022531363A (en) | Auditory device with bone conduction sensor | |
EP4132009A2 (en) | A hearing device comprising a feedback control system | |
US12089005B2 (en) | Hearing aid comprising an open loop gain estimator | |
US20240205615A1 (en) | Hearing device comprising a speech intelligibility estimator | |
EP4300992A1 (en) | A hearing aid comprising a combined feedback and active noise cancellation system | |
CN112911477A (en) | Hearing system comprising a personalized beamformer | |
US20110142271A1 (en) | Method for frequency transposition in a hearing aid and hearing aid | |
US10893370B1 (en) | System and method for aiding hearing | |
WO2022066200A1 (en) | System and method for aiding hearing | |
EP4351171A1 (en) | A hearing aid comprising a speaker unit | |
EP4054210A1 (en) | A hearing device comprising a delayless adaptive filter | |
WO2022066223A1 (en) | System and method for aiding hearing | |
CN117440281A (en) | Zxfoom zxfoom , zxfoom zxfoom , compensation method thereof computer program product | |
CN114630223A (en) | Method for optimizing function of hearing and wearing type equipment and hearing and wearing type equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TEXAS INSTITUTE OF SCIENCE, INC., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OLAH, LASLO;REEL/FRAME:056489/0719 Effective date: 20210603 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 4 |