CN115004718A - Hearing aid and method of use - Google Patents

Hearing aid and method of use

Info

Publication number
CN115004718A
Authority
CN
China
Prior art keywords
sound
processor
hearing aid
signal
ear
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080079244.7A
Other languages
Chinese (zh)
Inventor
L·奥拉
G·索科洛夫斯基
S·洛谢夫
E·索科洛夫斯卡娅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Texas Science Institute Ltd
Original Assignee
Texas Science Institute Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Texas Science Institute Ltd
Publication of CN115004718A
Legal status: Pending

Classifications

    • H04R25/407: Circuits for combining signals of a plurality of transducers (arrangements for obtaining a desired directivity characteristic in deaf-aid sets)
    • H04R1/1041: Mechanical or electronic switches, or control elements (earpieces)
    • H04R1/105: Earpiece supports, e.g. ear hooks
    • H04R1/1083: Reduction of ambient noise (earpieces)
    • H04R25/505: Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H04R25/552: Binaural (hearing aids using an external connection, either wireless or wired)
    • H04R2225/021: Behind the ear [BTE] hearing aids
    • H04R2225/43: Signal processing in hearing aids to enhance the speech intelligibility
    • H04R2410/01: Noise reduction using microphones having different directional characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Headphones And Earphones (AREA)

Abstract

A hearing aid (10) and method of use thereof are disclosed. In one embodiment, the hearing aid includes a body (32, 34) that at least partially conforms to the contour of the outer ear and is sized to engage therewith. The body (32, 34) contains various electronic components, including an electronic signal processor (130) programmed with respective left and right ear qualified sound ranges. Each of the left ear qualified sound range and the right ear qualified sound range may be a sound range corresponding to a preferred hearing range of the patient's ear, the preferred hearing range being modified according to the patient's subjective assessment of sound quality. Sound received at the hearing aid (10) is converted into an acceptable sound range before being output.

Description

Hearing aid and method of use
Technical Field
The present invention relates generally to hearing aids and, more particularly, to hearing aids and methods of use thereof that provide signal processing and feature sets to enhance speech and sound intelligibility.
Background
Hearing loss can affect anyone of any age, but older people experience hearing loss more frequently. Untreated hearing loss is associated with lower quality of life and may have profound effects both on the individual experiencing the hearing loss and on those close to that individual. There is therefore a continuing need for improved hearing aids and methods of use thereof that enable a patient to better hear conversations and the like.
Disclosure of Invention
It would be advantageous to implement a hearing aid and method of use thereof that corrects existing functional limitations by adding features that significantly advance existing hearing aid designs. It is also desirable to implement a mechanical and electronic solution that provides enhanced performance and improved usability with an enhanced feature set. To better address one or more of these concerns, a hearing aid and method of use thereof are disclosed. In one embodiment, the hearing aid comprises a left body and a right body connected by a band-like member, each of which at least partially conforms to the contour of the outer ear and is sized to engage therewith. The bodies contain various electronic components, including an electronic signal processor programmed with corresponding left ear and right ear qualified sound ranges. Each of the left ear qualified sound range and the right ear qualified sound range may be a sound range corresponding to a preferred hearing range of the patient's ear, the preferred hearing range being modified according to the patient's subjective assessment of sound quality. The sound received at the hearing aid is converted into the qualified sound range before being output. In another embodiment, the hearing aid may establish a pairing with a proximate smart device (e.g., a smartphone, a smartwatch, or a tablet computer) via a transceiver. The hearing aid may perform various processes using distributed computing between the hearing aid and the proximate smart device. Also, the user may send control signals from the proximate smart device to effect control.
In another embodiment, the hearing aid has a dominant sound mode of operation, a direct background mode of operation, and a background mode of operation, which work together while being selectively and independently adjustable by the patient. In the dominant sound mode of operation, the hearing aid is able to identify the loudest sound in the processed signal and increase the volume of the loudest sound in the signal being processed. In the direct background mode of operation, the hearing aid is able to identify the sound in the immediate surroundings of the hearing aid and suppress this sound in the signal being processed. In the background mode of operation, the hearing aid is able to identify the extraneous ambient sound received at the hearing aid and suppress the extraneous ambient sound in the signal being processed. In another embodiment, the hearing aid may establish a pairing with a proximate smart device (e.g., a smartphone, a smartwatch, or a tablet computer) via the transceiver. The hearing aid may perform various processes using distributed computing between the hearing aid and the proximate smart device. Also, the user may send a control signal from the proximate smart device to activate one of the dominant sound mode of operation, the direct background mode of operation, and the background mode of operation. These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
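By way of illustration only, the following Python sketch shows one possible way to represent the three modes as independently adjustable settings; it is not taken from the disclosure, and the names Mode, ModeSettings and apply_modes, as well as the use of per-component gains in decibels, are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    DOMINANT_SOUND = "dominant_sound"        # boost the loudest sound
    DIRECT_BACKGROUND = "direct_background"  # suppress nearby conversation
    BACKGROUND = "background"                # suppress extraneous ambient sound


@dataclass
class ModeSettings:
    enabled: bool = False
    gain_db: float = 0.0  # magnitude of the boost or suppression, in dB


def apply_modes(components, settings):
    """Apply per-mode gains to sound components classified elsewhere.

    `components` maps "dominant", "direct" and "ambient" to levels in dB;
    the classification step itself is outside this sketch.
    """
    adjusted = dict(components)
    if settings[Mode.DOMINANT_SOUND].enabled:
        adjusted["dominant"] += settings[Mode.DOMINANT_SOUND].gain_db
    if settings[Mode.DIRECT_BACKGROUND].enabled:
        adjusted["direct"] -= abs(settings[Mode.DIRECT_BACKGROUND].gain_db)
    if settings[Mode.BACKGROUND].enabled:
        adjusted["ambient"] -= abs(settings[Mode.BACKGROUND].gain_db)
    return adjusted
```

In such a sketch each mode could also be toggled per ear, matching the per-ear adjustability described elsewhere in this disclosure.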
Drawings
For a more complete understanding of the features and advantages of the present invention, reference is now made to the detailed description taken in conjunction with the accompanying drawings, in which corresponding reference characters refer to corresponding parts throughout the different views, and in which:
fig. 1A is a schematic diagram depicting a front perspective view of one embodiment of a hearing aid utilized in accordance with the teachings presented herein;
fig. 1B is a top view depicting the hearing aid of fig. 1A utilized in accordance with the teachings presented herein;
fig. 2 is a front perspective view of an embodiment of the hearing aid depicted in fig. 1.
Fig. 3A is a front left perspective view of another embodiment of the hearing aid depicted in fig. 1;
fig. 3B is a front right perspective view of the embodiment of the hearing aid depicted in fig. 3A;
fig. 4 is a front perspective view of another embodiment of a hearing aid according to the teachings presented herein;
fig. 5 is a functional block diagram depicting one embodiment of a hearing aid as shown herein;
fig. 6 is a functional block diagram depicting another embodiment of a hearing aid as presented herein;
fig. 7 is a functional block diagram depicting a further embodiment of the hearing aid shown herein;
fig. 8 is a functional block diagram of a further embodiment of a hearing aid as presented herein;
fig. 9 is a functional block diagram depicting one embodiment of a smart device shown in fig. 1 that may be paired with a hearing aid;
fig. 10 is a functional block diagram depicting one embodiment of sample rate processing in accordance with the teachings presented herein;
fig. 11 is a functional block diagram depicting one embodiment of harmonic processing in accordance with the teachings presented herein;
fig. 12 is a functional block diagram depicting one embodiment of frequency shifting, signal amplification, and harmonic enhancement in accordance with the teachings presented herein; and
fig. 13 is a functional block diagram depicting one embodiment of a headset operating process flow in accordance with the teachings presented herein.
Detailed Description
While the making and using of various embodiments of the present invention are discussed in detail below, it should be appreciated that the present invention provides many applicable inventive concepts which can be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention, and do not delimit the scope of the present invention.
Referring initially to fig. 1A and 1B, one embodiment of a hearing aid is depicted, schematically shown and designated 10. As shown, by way of example, a user U (who may be considered a patient requiring a hearing aid) is wearing the hearing aid 10 while sitting at a table T of a restaurant or cafe and conducting a conversation with person I1 and person I2. As part of the conversation at the table T, the user U is making a sound S1, person I1 is making a sound S2, and person I2 is making a sound S3. Nearby, in the background, bystander B1 is conducting a conversation with bystander B2. Bystander B1 is making a sound S4 and bystander B2 is making a sound S5. An ambulance A is travelling past the table T and making a sound S6. Sounds S1, S2 and S3 may be described as direct background sounds. Sounds S4, S5 and S6 may be described as background sounds. Sound S6 may also be described as the dominant sound because it is the loudest sound at the table T.
As will be described in further detail below, the hearing aid 10 is programmed with a qualified sound range for each ear in a binaural embodiment, or for one ear in a monaural embodiment. As shown, in a binaural embodiment, the qualified sound range may be a sound range corresponding to a preferred hearing range for each ear of the user, the preferred hearing range being modified according to the user's subjective assessment of sound quality. The preferred hearing range may be a sound range corresponding to the highest hearing ability of the ear of the user U within a range that, by way of example, may be between 50Hz and 10,000Hz. Further, as shown, in a binaural embodiment, the preferred hearing range for each ear may be a plurality of sound ranges corresponding to the ranges of highest hearing ability of the user's U ear between 50Hz and 10,000Hz. In some embodiments of such a multi-range implementation, the various received sounds S1 to S6 may be transformed and divided into a plurality of sound ranges. In particular, the preferred hearing range for each ear may be a sound range of about 300Hz to about 500Hz corresponding to the patient's highest hearing ability.
The subjective assessment by the user may include an assessment of the level of annoyance caused to the user by impairment of the desired sound. The subjective assessment by the user may also include an assessment of the level of pleasure experienced by the patient when the desired sound is realized. That is, the subjective assessment by the user may include a complete assessment to determine the best sound quality for the user. The sound received at the hearing aid 10 is converted into the qualified sound range to be heard by the user U before being output.
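By way of illustration only, the following Python sketch suggests how a preferred hearing range might be selected from per-frequency hearing-ability scores and how an input frequency might be mapped into the example 300Hz to 500Hz range; the linear compression used here and the helper names preferred_ranges and map_into_range are assumptions, not part of the disclosure.

```python
import numpy as np


def preferred_ranges(freqs_hz, ability, band_width_hz=200.0, n_bands=1):
    """Return the band(s), as (low, high) tuples, around the best-heard frequencies.

    freqs_hz : test frequencies for one ear (e.g. 50 Hz .. 10,000 Hz)
    ability  : matching hearing-ability scores for that ear
    """
    freqs_hz = np.asarray(freqs_hz, dtype=float)
    ability = np.asarray(ability, dtype=float)
    best_first = np.argsort(ability)[::-1]
    return [(freqs_hz[i] - band_width_hz / 2, freqs_hz[i] + band_width_hz / 2)
            for i in best_first[:n_bands]]


def map_into_range(f_in, src=(50.0, 10_000.0), dst=(300.0, 500.0)):
    """Linearly compress an input frequency into a preferred hearing range."""
    lo_s, hi_s = src
    lo_d, hi_d = dst
    f_in = min(max(f_in, lo_s), hi_s)
    return lo_d + (f_in - lo_s) * (hi_d - lo_d) / (hi_s - lo_s)
```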
In one embodiment, the hearing aid 10 has a dominant sound mode of operation 26, a direct background mode of operation 28, and a background mode of operation 30 under selective adjustment by the user U. In the dominant sound operating mode 26, the hearing aid 10 identifies the loudest sound in the processed signal (e.g., sound S6) and increases the volume of that sound in the signal being processed. In the direct background mode of operation, the hearing aid 10 recognizes the sounds in the direct surroundings of the hearing aid 10 (e.g., sounds S1, S2 and S3 at the table T) and suppresses these sounds in the signal being processed. In the background mode of operation, the hearing aid 10 identifies the extraneous ambient sounds received at the hearing aid 10 (e.g., sounds S4, S5 and S6) and suppresses these extraneous ambient sounds in the signal being processed. Additionally, in the various modes of operation, the hearing aid 10 may identify the direction of origin of a particular sound and express that direction with an appropriate sound distribution in a binaural embodiment. By way of example, since the ambulance A and sound S6 originate from the left side of the user U, the sound is distributed appropriately at the hearing aid 10 to reflect this, as indicated by arrow L.
In one embodiment, the hearing aid 10 may establish a pairing with a proximate smart device 12, such as a smartphone (depicted), a smart watch, or a tablet computer. The proximate smart device 12 includes a display 14 having an interface 16, the interface 16 having controls such as an on/off switch or volume control 18 and an operational mode control 20. The user may wirelessly send a control signal from the proximate smart device 12 to the hearing aid 10 to control a function, such as the volume control 18, or activate the mode on 22 or the mode off 24 with respect to one of the dominant sound mode of operation 26, the direct background mode of operation 28 or the background mode of operation 30. It should be appreciated that the user U may wirelessly activate other controls from the proximate smart device 12. By way of example and not limitation, other controls may include microphone input sensitivity adjusted by ear, speaker volume input adjusted by ear, the aforementioned background suppression for both ears, dominant sound amplification for each ear, and on/off. Further, in one embodiment, after the hearing aid 10 has established a pairing with the proximate smart device 12, the hearing aid 10 and the proximate smart device 12 may utilize the wireless communication link between them and use the processing distributed between the hearing aid 10 and the proximate smart device 12 to process the signals and perform other analysis, as indicated by processor symbol P.
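By way of illustration only, a control signal sent from the proximate smart device might resemble the following Python sketch; the JSON field names and value conventions are hypothetical and are not defined by the disclosure.

```python
import json


def make_control_message(volume=None, mode=None, mode_on=None, ear="both"):
    """Build a control message for the hearing aid (hypothetical format)."""
    msg = {"ear": ear}                           # "left", "right" or "both"
    if volume is not None:
        msg["volume"] = int(volume)              # e.g. 0-100
    if mode is not None:
        msg["mode"] = mode                       # "dominant" | "direct" | "background"
        msg["mode_on"] = bool(mode_on)
    return json.dumps(msg).encode("utf-8")       # bytes handed to the transceiver


payload = make_control_message(volume=60, mode="background", mode_on=True)
```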
Referring to fig. 2, as shown, in the illustrated embodiment, the hearing aid 10 includes a left body 32 and a right body 34 connected to a strap member 36, the strap member 36 being configured to partially encircle the user U. Each of the left and right side bodies 32, 34 covers the outer ear of the user U and is sized to engage therewith. In some embodiments, a microphone 38, 40, 42 that directionally collects sound and converts the collected sound into electrical signals is located on the left body 32. With respect to collecting sound, the microphone 38 may be positioned to collect forward sound, the microphone 40 may be positioned to collect lateral sound, and the microphone 42 may be positioned to collect rearward sound. A microphone may similarly be positioned on the right body 34. Various interior compartments 44 provide space for housing electronics, as will be discussed in further detail below. Various controls 46 provide a patient interface with the hearing aid 10.
Having each of the left body 32 and the right body 34 cover and be sized to engage the outer ear of the user U provides certain benefits. Sound waves enter through the outer ear and reach the middle ear, where they vibrate the tympanic membrane. The tympanic membrane in turn vibrates the ossicles, the small bones of the middle ear. The sound vibrations travel through the ossicles to the inner ear. When the sound vibrations reach the cochlea, they move specialized cells called hair cells, which convert the vibrations into electrical nerve impulses. The auditory nerve connects the cochlea to the auditory center of the brain, and when these electrical nerve impulses reach the brain, they are experienced as sound. The outer ear serves various functions. The air-filled cavities that make up the outer ear (the two most prominent being the concha and the ear canal) have natural or resonant frequencies to which they respond best, as is true of all air-filled cavities. The resonance of each of these cavities is such that each structure increases the sound pressure by about 10dB to 12dB at its resonant frequency. In summary, the functions of the outer ear include: a) enhancing or amplifying high frequency sound; b) providing primary cues for determining the elevation of a sound source; and c) helping to distinguish between sounds generated in front of the listener and sounds generated behind the listener. Headphones are used in hearing tests in medical and related facilities because testing has shown that completely closing the ear canal, so that no form of external noise intrudes, plays a direct role in acoustic matching. The more severe the hearing problem, the closer the hearing aid speaker must be to the tympanic membrane. However, the closer the speaker is to the tympanic membrane, the more the device occludes the ear canal and negatively impacts the pressure system of the ear. That is, the various chambers of the ear have defined operating pressures determined in part by the ear's structure. By blocking the ear canal, the pressure system of the ear is distorted and the operating pressure of the ear is negatively affected.
As mentioned, hearing aids of "plug size" have the limitation of distorting the defined operating pressure within the ear. Considering the role of the air-filled cavities of the outer ear in increasing the sound pressure at their resonant frequencies, the hearing aids of fig. 2 and other figures form a closed chamber around the ear, thereby increasing the pressure in the chamber. This higher pressure, in combination with the use of a more powerful speaker in the earpiece operating within the qualified sound range (e.g., the frequency range in which the user hears with the best sound quality), provides a desirable set of parameters for a powerful hearing aid.
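As an illustrative calculation only (not taken from the disclosure), the ear canal behaves roughly like a tube closed at one end, so its first resonance falls near c/(4L); the canal length used below is a typical assumed value.

```python
# Quarter-wave estimate of the first ear-canal resonance (illustrative only).
SPEED_OF_SOUND = 343.0   # m/s, room temperature
CANAL_LENGTH = 0.025     # m, a typical adult ear-canal length (assumed)

resonance_hz = SPEED_OF_SOUND / (4 * CANAL_LENGTH)
print(f"approximate ear-canal resonance: {resonance_hz:.0f} Hz")   # ~3,430 Hz

# The roughly 10 dB to 12 dB boost noted above corresponds to a sound-pressure
# increase of about 3x to 4x (10 ** (10 / 20) to 10 ** (12 / 20)).
```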
Referring to fig. 3A and 3B, as shown, in the illustrated embodiment, the hearing aid 10 includes a left side body 52 having an ear hook 54 extending from the left side body 52 to an ear mold 56. The left body 52 and the ear mold 56 can each at least partially conform to the contour of the outer ear and be sized to engage therewith. By way of example, the left body 52 may be sized to engage the contour of the ear in a behind-the-ear fit. The ear mold 56 may be sized to fit the physical shape of the patient's ear. The ear hook 54 may comprise a flexible tubular material that transmits sound from the left body 52 to the ear mold 56. A microphone 58 that collects sound and converts the collected sound into an electrical signal is located on the left body 52. An opening 60 in the ear mold 56 allows sound traveling through the ear hook 54 to exit into the ear of the patient. The interior compartment 62 provides space for housing electronics, which will be discussed in more detail below. Various controls 64 provide a patient interface with the hearing aid 10 on the left body 52 of the hearing aid 10.
As also shown, the hearing aid 10 includes a right side body 72, the right side body 72 having an ear hook 74 extending from the right side body 72 to an ear mold 76. The right body 72 and the ear mold 76 can each at least partially conform to the contour of the outer ear and be sized to engage therewith. By way of example, the right side body 72 may be sized to engage the contour of the ear in a behind-the-ear fit. The ear mold 76 may be sized to fit the physical shape of the patient's ear. The ear hook 74 can comprise a flexible tubular material that transmits sound from the right body 72 to the ear mold 76. A microphone 78 that collects sound and converts the collected sound into an electric signal is located on the right-side body 72. An opening 80 in the ear mold 76 allows sound to travel through the ear hook 74 to exit into the ear of the patient. The interior compartment 82 provides space for housing electronics, which will be discussed in more detail below. Various controls 84 provide a patient interface with the hearing aid 10 on the right body 72 of the hearing aid 10. It should be appreciated that the various controls 64, 84 and other components of the left and right bodies 52, 72 may be at least partially integrated and consolidated. Further, it should be appreciated that the hearing aid 10 may have one or more microphones on each of the left and right bodies 52, 72 to improve directional hearing in some implementations, and to provide 360 degree directional sound input in some implementations.
In one embodiment, the left side body 52 and the right side body 72 are connected at the respective ear hooks 54, 74 by a strap member 90, the strap member 90 being configured to partially encircle the head or neck of the patient. Compartments 92 within the ribbon member 90 may provide space for electronics and the like. Additionally, the hearing aid 10 may include left and right earpiece hoods 94, 96 positioned outside the left and right bodies 52, 72, respectively. Each of the left and right earphone covers 94 and 96 isolates noise to block interfering external noise. To further increase the benefit, in one embodiment, the microphone 58 in the left body 52 and the microphone 78 in the right body 72 may cooperate to provide directional hearing.
Referring to fig. 4, another embodiment of the hearing aid 10 is depicted. As shown, in the illustrated embodiment, the hearing aid 10 includes a body 112 having an ear hook 114 extending from the body 112 to an ear mold 116. The body 112 and the ear mold 116 can each at least partially conform to the contour of the outer ear and be sized to engage therewith. By way of example, the body 112 may be sized to engage the contour of the ear in a behind-the-ear fit. The ear mold 116 may be sized to fit the physical shape of the patient's ear. The ear hook 114 can comprise a flexible tubular material that transmits sound from the body 112 to the ear mold 116. A microphone 118 that collects sound and converts the collected sound into an electrical signal is located on the body 112. An opening 120 in the ear mold 116 allows sound traveling through the ear hook 114 to exit into the ear of the patient. The interior compartment 122 provides space for housing electronics, which will be discussed in more detail below. Various controls 124 provide a patient interface with the hearing aid 10 on the body 112 of the hearing aid 10.
Referring now to fig. 5, an illustrative embodiment of the internal components of the hearing aid 10 is depicted. By way of illustration and not limitation, the hearing aid 10 depicted in the embodiments of fig. 2 and 3A, 3B is presented. However, it should be appreciated that the teachings of FIG. 5 are equally applicable to the embodiment of FIG. 4. As shown with respect to fig. 3A and 3B, in one embodiment, within the interior compartments 62, 82, an electronic signal processor 130 may be housed. The hearing aid 10 may comprise an electronic signal processor 130 for each ear, or the electronic signal processor 130 for each ear may be at least partially or fully integrated. In another embodiment with respect to fig. 4, within the interior compartment 122 of the main body 112, an electronic signal processor 130 is housed. For measuring, filtering, compressing and generating continuous real-world analog signals, e.g., in the form of sound, the electronic signal processor 130 may include an analog-to-digital converter (ADC)132, a Digital Signal Processor (DSP)134 and a digital-to-analog converter (DAC) 136. The electronic signal processor 130, including the digital signal processor embodiment, may have a memory accessible by the processor. Also housed within the hearing aid 10 are one or more microphone inputs 138 corresponding to one or more respective microphones, a speaker output 140, various controls such as a programming connector 142 and hearing aid controls 144, an inductive coil 146, a battery 148, and a transceiver 150.
As shown, the signaling architecture communicatively interconnects the microphone input 138 to the electronic signal processor 130 and communicatively interconnects the electronic signal processor 130 to the speaker output 140. Various hearing aid controls 144, an induction coil 146, a battery 148 and a transceiver 150 are also communicatively interconnected to the electronic signal processor 130 through the signaling architecture. The speaker output 140 sends sound output to one or more speakers to project sound, particularly acoustic signals in the audio frequency band processed by the hearing aid 10. By way of example, programming connector 142 may provide an interface to a computer or other device. For example, the hearing aid controls 144 may include an on/off switch and a volume control. The induction coil 146 may receive magnetic field signals in the audio frequency band from a telephone receiver or transmit induction loop to, for example, provide a telecoil (telecoil) function. The induction coil 146 may also be used to receive remote control signals encoded on transmitted or radiated electromagnetic carriers at frequencies above the audio frequency band. Various programming signals from the transmitter may also be received via the inductive coil 146 or via the transceiver 150, as will be discussed. For example, the battery 148 provides power to the hearing aid 10 and may be rechargeable or accessed through a battery compartment door (not shown). The transceiver 150 may be internal to the housing, external, or a combination thereof. Further, the transceiver 150 may be a transmitter/receiver, a receiver, or an antenna, for example. Communication between the various smart devices and the hearing aid 10 may be achieved by various wireless methods employed by the transceiver 150, including, for example, 802.11, 3G, 4G, Edge, WiFi, ZigBee, Near Field Communication (NFC), bluetooth low energy, and bluetooth.
The various controls and inputs and outputs presented above are exemplary, and it should be appreciated that other types of controls may be incorporated into the hearing aid 10. Furthermore, the electronics and form of the hearing aid 10 may vary. For example, the hearing aid 10 and associated electronics may include any type of earpiece configuration, behind-the-ear configuration, or in-the-ear configuration. Further, as mentioned, an electronic configuration having multiple microphones for directional hearing is within the teachings presented herein. In some embodiments, the hearing aid has an over-the-ear configuration that covers the entire ear, which provides not only hearing aid functionality but also hearing protection functionality.
With continued reference to fig. 5, in one embodiment, the electronic signal processor 130 may be programmed with a preferred hearing range, which in one embodiment is a preferred hearing sound range corresponding to the highest hearing ability of the patient. In one embodiment, the left ear preferred hearing range and the right ear preferred hearing range are each sound ranges corresponding to the highest hearing ability of the patient's ear within a variable range, such as, by way of example, between 50Hz and 10,000Hz. The preferred hearing range for each of the left and right ears may be a sound range of about 300Hz to about 500Hz.
In this way, the hearing ability of the patient is enhanced. Existing audiogram hearing aid industry test equipment measures hearing ability at defined frequencies (e.g., 60Hz; 125Hz; 250Hz; 500Hz; 1,000Hz; 2,000Hz; 4,000Hz; 8,000Hz), and existing hearing aids operate on a rate-based frequency scheme. However, the present teachings measure hearing ability in small steps such as 5Hz, 10Hz, or 20Hz. Thereafter, one or several (e.g., three) frequency ranges are defined to serve as one or more preferred hearing ranges. As discussed herein, in some embodiments of the present method, a two-step process is utilized. First, hearing is tested in the ear in variable increments (e.g., 50Hz increments or other increments) in a range between, for example, 50Hz and 5,000Hz, and in variable increments (e.g., 200Hz increments or other increments) in a range between 5,000Hz and 10,000Hz, to identify a potential hearing range. Then, in a second step, the test may switch to 5Hz, 10Hz, or 20Hz increments to accurately identify the preferred hearing range.
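By way of illustration only, the two-step test described above might be sketched in Python as follows; the callback measure, the grid parameters and the 200Hz-wide result are assumptions standing in for the audiometric equipment and clinical procedure.

```python
def coarse_sweep(measure, lo=50, hi=10_000, step_low=50, step_high=200, split=5_000):
    """First pass: measure hearing ability on a coarse grid.

    `measure(f)` is a hypothetical callback returning the ability score
    reported by the audiometric equipment at frequency f (Hz).
    """
    freqs = list(range(lo, split, step_low)) + list(range(split, hi + 1, step_high))
    scores = {f: measure(f) for f in freqs}
    return max(scores, key=scores.get)               # best coarse frequency


def fine_sweep(measure, centre, half_width=200, step=10):
    """Second pass: refine around the coarse winner in small increments."""
    start = max(50, centre - half_width)
    scores = {f: measure(f) for f in range(start, centre + half_width + 1, step)}
    best = max(scores, key=scores.get)
    return best - 100, best + 100                    # e.g. a ~200 Hz preferred range
```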
Further, in one embodiment with respect to fig. 4, the various controls 124 may include adjustments that broaden the approximately 200Hz-wide frequency range to a wider range of, for example, 100Hz to 700Hz, or even wider. Further, the preferred hearing sound range may be shifted by using the various controls 124. A directional microphone system and processing may be included at each microphone location to enhance sound from in front of the patient and reduce sound from other directions. Such directional microphone systems and processing may improve speech understanding in situations with excessive background noise. Digital noise reduction, impulse noise reduction, and wind noise reduction may also be incorporated. As mentioned, system compatibility features such as FM compatibility and Bluetooth compatibility may be included in the hearing aid 10.
The processor may process instructions for execution within the electronic signal processor 130 as a computing device, including instructions stored in memory. The memory stores information within the computing device. In one implementation, the memory is a volatile memory unit or units. In another implementation, the memory is one or more non-volatile memory units. The memory is accessible by the processor and includes processor-executable instructions that, when executed, cause the processor to perform a series of operations. The processor-executable instructions cause the processor to receive an input analog signal from the microphone input 138 and convert the input analog signal to a digital signal. In one implementation, as part of the conversion from the input analog signal to the digital signal, the input analog signal is modified at the converter 131 according to the patient's subjective assessment of sound quality. The processor-executable instructions then cause the processor to transform the digital signal, for example by compression, into a processed digital signal that reflects the patient's subjective assessment of sound quality. It should be appreciated that in this step, the digital signal may, in one embodiment, be modified according to the patient's subjective assessment of sound quality if such modification has not already occurred. The processed digital signal is then transformed into the preferred hearing range. The transformation may be a frequency transformation, wherein the input frequencies are frequency-transformed into the preferred hearing range. Because this transformation is a gentle shift onto a narrower range customized for the user, the result is clearly understood. The processor-executable instructions then cause the processor to convert the processed digital signal to an output analog signal, which may be amplified as desired, and drive the output analog signal to the speaker output 140. Essentially, in one embodiment, a single algorithm is utilized to convert analog sound in a manner that is based on the user's subjective assessment of sound quality. The signal is then transformed into the preferred hearing range before digital-to-analog conversion and amplification.
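By way of illustration only, the following Python sketch traces the described chain of operations (conversion, subjective-quality adjustment, compression, frequency transformation into the preferred range, and amplification); the specific compressor and the crude spectral remapping used here are assumptions, not the disclosed algorithm.

```python
import numpy as np


def compress(x, threshold=0.5, ratio=4.0):
    """Static compressor: attenuate the portion of |x| above the threshold."""
    y = x.copy()
    over = np.abs(y) > threshold
    y[over] = np.sign(y[over]) * (threshold + (np.abs(y[over]) - threshold) / ratio)
    return y


def shift_into_range(x, fs, dst=(300.0, 500.0)):
    """Crude frequency transformation: remap the spectrum into the preferred range."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    lo, hi = dst
    target = lo + (freqs / freqs[-1]) * (hi - lo)    # linear squeeze of the axis
    idx = np.clip(np.searchsorted(freqs, target), 0, len(spectrum) - 1)
    out = np.zeros_like(spectrum)
    np.add.at(out, idx, spectrum)                    # accumulate energy per target bin
    return np.fft.irfft(out, n=len(x))


def process_block(block, fs, user_gain_db=0.0, amp=2.0):
    x = np.array(block, dtype=float)                 # samples delivered by the ADC
    x *= 10 ** (user_gain_db / 20.0)                 # subjective-quality adjustment
    x = compress(x)                                  # dynamic-range compression
    x = shift_into_range(x, fs)                      # move into the preferred range
    return np.clip(amp * x, -1.0, 1.0)               # amplify before the DAC
```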
The processor-accessible memory may include additional processor-executable instructions that, when executed, cause the processor to perform a series of operations. The processor-executable instructions may cause the processor to receive a control signal to control volume or other functions. The processor-executable instructions may also receive a control signal and cause activation of one of the dominant sound mode of operation 26, the direct background mode of operation 28, and the background mode of operation 30. Various modes of operation, including the dominant sound mode of operation 26, the direct background mode of operation 28, and the background mode of operation 30, may be implemented on a per-ear or two-ear basis.
These processor-executable instructions may also cause the processor to establish a pairing with the proximate smart device 12 via the transceiver 150. The processor-executable instructions may then cause the processor to receive a control signal from a proximate smart device to control volume or other functions. The processor-executable instructions may then receive the control signal and cause activation of one of the dominant sound mode of operation 26, the direct background mode of operation 28, and the background mode of operation 30.
In another implementation, the processor-executable instructions may cause the processor to receive an input analog signal from the microphone input 138 and convert the input analog signal to a digital signal that is modified according to a user's subjective assessment of sound quality. The processor then transforms the digital signal by compression into a processed digital signal having a preferred hearing range. In the dominant sound operating mode 26, the processor is caused to identify the loudest sound in the processed signal and to increase the volume of the loudest sound in the processed digital signal. In the direct background mode of operation 28, the processor is then caused to recognize the sound in the direct surroundings of the hearing aid 10 and suppress the sound in the processed digital signal. In the background mode of operation 30, the processor is caused to identify the external ambient sound received at the hearing aid 10 and suppress the external ambient sound in the processed digital signal. Further, the processor may be caused to convert the processed digital signal into an output analog signal and drive the output analog signal to the speaker.
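By way of illustration only, the dominant sound step might be sketched as follows in Python: locate the frequency band carrying the most energy in a block and raise its level relative to the rest. The band width and boost values are hypothetical.

```python
import numpy as np


def boost_dominant(block, fs, band_hz=200.0, boost_db=6.0):
    """Raise the level of the most energetic frequency band in a block."""
    x = np.asarray(block, dtype=float)
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(spectrum) ** 2

    # group bins into bands of roughly band_hz and find the loudest band
    bins_per_band = max(1, int(band_hz / (freqs[1] - freqs[0])))
    bins_per_band = min(bins_per_band, len(power))
    n_bands = max(1, len(power) // bins_per_band)
    usable = n_bands * bins_per_band
    band_energy = power[:usable].reshape(n_bands, bins_per_band).sum(axis=1)
    loudest = int(np.argmax(band_energy))

    # boost only that band, leaving the rest untouched
    gain = np.ones_like(power)
    gain[loudest * bins_per_band:(loudest + 1) * bins_per_band] = 10 ** (boost_db / 20.0)
    return np.fft.irfft(spectrum * gain, n=len(x))
```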
In other implementations, the processor-executable instructions may cause the processor to create a pairing with the proximate smart device 12 via the transceiver 150. The processor-executable instructions may then cause the processor to receive an input analog signal from the microphone and convert the input analog signal to a digital signal. The processor may then be caused to transform the digital signal, by compression, using distributed computing between the processor and the proximate smart device 12, into a processed digital signal having a preferred hearing range modified by subjective assessment of sound quality by the user to provide an acceptable sound range. At a processor within the hearing aid, the processor-executable instructions cause the processor to convert the processed digital signal to an output analog signal and drive the output analog signal to a speaker. The left ear preferred hearing range and the right ear preferred hearing range may comprise a frequency transfer component, a sample rate component, a cut-off harmonic component, an additional harmonic component and/or a harmonic transfer component. Further, the processor-executable instructions may cause the processor to process the frequency transfer component, the sample rate component, the cut-off harmonic component, the additional harmonic component, and/or the harmonic transfer component.
In another implementation, the processor-executable instructions may cause the processor to receive an input analog signal from a microphone input and convert the input analog signal to a digital signal that is modified based on a user's subjective assessment of sound quality. The processor then converts the digital signal into a processed digital signal having a preferred hearing range. The preferred hearing range may be one or more sound ranges corresponding to the highest hearing capacity of the patient's ear. As mentioned above, in order to provide a qualified sound range, the preferred hearing range may be modified by subjective assessment of sound quality from the patient. The subjective assessment of sound quality from the patient may be a complete assessment of: the level of annoyance to the patient caused by the impairment of the desired sound. The preferred hearing range may be modified by enhanced harmonics including, for example, cut-off harmonic components, additional harmonic components or harmonic transfer components. The processor-executable instructions may also cause the processor to convert the processed digital signal to an output analog signal and drive the output analog signal to a speaker. It should be appreciated that the processor-executable instructions may cause the processor to utilize a transceiver to utilize distributed processing between the hearing aid and the proximity smart device to transform the digital signal by compression into a processed digital signal including a preferred hearing range with harmonic enhancement.
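By way of illustration only, the three harmonic operations named above (cut-off, additional and transfer) might be expressed as in the following Python sketch; the interpretation of each component and the chosen amplitude for an added harmonic are assumptions.

```python
def enhance_harmonics(fundamental_hz, amplitudes, cutoff_hz=None,
                      add_order=None, transfer_ratio=1.0):
    """Return (frequency, amplitude) pairs after three optional operations.

    cutoff_hz      : drop harmonics above this frequency ("cut-off harmonic")
    add_order      : synthesize an extra harmonic at this order ("additional harmonic")
    transfer_ratio : scale every harmonic frequency ("harmonic transfer")
    """
    harmonics = [((n + 1) * fundamental_hz, a) for n, a in enumerate(amplitudes)]
    if cutoff_hz is not None:
        harmonics = [(f, a) for f, a in harmonics if f <= cutoff_hz]
    if add_order is not None:
        harmonics.append((add_order * fundamental_hz, max(amplitudes) * 0.5))
    return [(f * transfer_ratio, a) for f, a in harmonics]
```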
Referring now to fig. 6, in one embodiment, the electronic signal processor 130 receives signals from one or more microphone inputs 138 and outputs signals to a speaker output 140. The electronic signal processor 130 includes a gain stage 160 that receives an electronic signal from the microphone input 138 and amplifies the signal. Gain stage 160 forwards the signal to an analog-to-digital converter (ADC) 162, and the ADC 162 converts the amplified analog electronic signal to a digital electronic signal. In one embodiment, the gain stage 160 is a point in the audio signal flow at which the signal may be adjusted prior to conversion by the analog-to-digital converter (ADC) 162. The gain stage may include modifying the signal to accommodate the subjective assessment of sound quality by the user or patient. A Digital Signal Processor (DSP) 164 receives the digital electronic signal from the ADC 162 and is configured to process the digital electronic signal with the desired compensation based on a qualified sound range that includes a preferred hearing range stored therein and, possibly, a subjective assessment of sound quality by the user.
The DSP 164 may support the desired dominant sound mode of operation 26, direct background mode of operation 28, or background mode of operation 30 by utilizing algorithms to eliminate or reduce (or enhance or increase) ambient noise. Such an algorithm may examine the modulation characteristics of the speech envelope, such as harmonic structure, modulation depth, and modulation count. Based on these characteristics, various triggers can be defined to characterize the wanted and unwanted background noise as well as the direct noise. The sound can then be altered digitally. It should be appreciated that other digital noise reduction and gain techniques may be utilized, including algorithms that combine adaptive beamforming and adaptive optimal filtering processing.
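By way of illustration only, one of the named characteristics, the modulation depth of the speech envelope, might be estimated as in the following Python sketch; the smoothing bandwidth and the thresholding suggestion are assumptions.

```python
import numpy as np


def modulation_depth(block, fs, smooth_hz=20.0):
    """Estimate how deeply the amplitude envelope of a block is modulated."""
    x = np.asarray(block, dtype=float)
    envelope = np.abs(x)                                   # crude amplitude envelope
    win = max(1, int(fs / smooth_hz))                      # ~smooth_hz moving average
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")
    peak, trough = envelope.max(), envelope.min()
    return (peak - trough) / (peak + trough + 1e-12)       # 0 = steady, 1 = deep

# Speech typically shows deep, slow (roughly 2-8 Hz) envelope modulations while
# steady noise does not, so a threshold on this depth can serve as one trigger.
```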
The processed digital electronic signal is then driven to a digital-to-analog converter (DAC)166, the DAC 166 converts the processed digital electronic signal to a processed analog electronic signal, which is then driven onto a multiplexer 168 and a low output impedance output driver 170 and then output at the speaker output 140. Gain stage 172 receives an electronic signal from microphone input 138 and amplifies the analog electronic signal before driving the signal to an Active Noise Modulation (ANM) unit 174, which ANM unit 174 is configured to perform active noise suppression or active noise enhancement through various amplifiers and filters. The other signal path includes the DSP 164 providing the processed digital electronic signal to the DAC 176 and the filter 178. The ANM driven signal and the filter driven signal are combined at a combiner unit 180 and then provided to a Pulse Width Modulator (PWM)182, which then drives the signals to a multiplexer 168. In this manner, the ANM-driven signal may cancel or reduce (or enhance or increase) ambient noise to provide the desired dominant sound operating mode 26, direct background operating mode 28, or background operating mode 30, while the DSP-driven signal corrects the input signal according to the qualifying sound range to compensate for the hearing loss.
Referring now to fig. 7, in one embodiment of the hearing aid 10, a signal controller 200 is centrally located in communication with a signal analyzer and controller 202 serving the left side of the hearing aid 10 and a signal analyzer and controller 204 serving the right side of the hearing aid 10. A Bluetooth interface unit 206 also communicates with the signal analyzer and controller 202 and the signal analyzer and controller 204. The Bluetooth interface unit 206 is configured to communicate with a smart device application 208, which may be installed on a smart device (e.g., a smartphone or a smartwatch). A battery pack and charger 210 provides power to the hearing aid 10.
With respect to the left microphone, the front microphone 212, the side-facing microphone 214 and the rear microphone 216 are connected in series to bypass filters 218, 220, 222, respectively, which bypass filters 218, 220, 222 are in turn connected in series to preamplifiers 224, 226, 228, respectively, which preamplifiers 224, 226, 228 are connected to the signal analyzer and controller 202. Similarly, with respect to the right microphone, the front microphone 242, the side-facing microphone 244, and the rear microphone 246 are connected in series to bypass filters 248, 250, 252, respectively, the bypass filters 248, 250, 252 are in turn connected in series to preamplifiers 254, 256, 258, respectively, the preamplifiers 254, 256, 258 are connected to the signal analyzer and controller 204.
The signal analyzer and controller 202 is connected in parallel to a noise filter 230 and an amplifier 232, the amplifier 232 also receiving the signal from the noise filter 230. The amplifier 232 drives a signal to a left speaker 234. Similarly, the signal analyzer and controller 204 is connected in parallel to a noise filter 260 and an amplifier 262, the amplifier 262 also receiving the signal from the noise filter 260. The amplifier 262 drives the signal to a right speaker 264. As previously described, each of the signal analyzers and controllers 202, 204 converts the live sound frequencies into a qualified sound range that, in some embodiments, includes one or more frequency ranges heard by the person using the hearing aid 10, through a combination of frequency transfer, sampling rate, cut-off harmonics, additional harmonics, and harmonic transfer. The qualified sound range also includes modification of the sound based on the subjective assessment of sound quality. Also, each of the signal analyzers and controllers 202, 204 may determine the direction of the sound source.
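By way of illustration only, the direction of a sound source might be estimated from the time difference between the left and right microphone signals, as in the following Python sketch; the cross-correlation approach, the assumed microphone spacing and the sign convention are not taken from the disclosure.

```python
import numpy as np


def direction_of_arrival(left, right, fs, mic_distance_m=0.18, c=343.0):
    """Estimate the bearing of a source, in degrees, from the inter-ear delay."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)        # delay in samples
    tdoa = lag / fs                                 # delay in seconds
    sin_theta = np.clip(tdoa * c / mic_distance_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))  # 0 = straight ahead
```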
Referring now to fig. 8, in one embodiment of the hearing aid 10, a smart device input 280, an adjustable background noise filter 282, a voice direction analysis module 284, and a control unit 286 are interconnected. The front microphone 288, side microphone 290 and rear microphone 292 are connected to a microphone input sensitivity module 294. A processor 296, an amplifier 298, a volume control 300, and a speaker 302 are also provided. On the other side, a front microphone 308, a side microphone 310, and a rear microphone 312 are connected to a microphone input sensitivity module 314. A processor 316, an amplifier 318, a volume control 320, and a speaker 322 are also provided.
Regarding signaling, on a first side of the hearing aid 10, the front microphone 288, the side microphone 290 and the rear microphone 292 provide the direct signal 330 to the microphone input sensitivity module 294, which microphone input sensitivity module 294 provides the feedback signal 332. The direct signal 330 and the feedback signal 332 provide adjustments to the input volume at the front microphone 288, the side microphone 290 and the rear microphone 292. The microphone input sensitivity module 294 in turn provides the direct signal 334 to the adjustable background noise filter 282. The direct signal 336 is provided to a voice direction analysis module 284.
On a second side of the hearing aid 10, the front microphone 308, the side microphone 310 and the rear microphone 312 provide a direct signal 340 to the microphone input sensitivity module 314, which microphone input sensitivity module 314 provides a feedback signal 342. Direct signal 340 and feedback signal 342 provide adjustments to the input volume at front microphone 308, side microphone 310, and rear microphone 312. The microphone input sensitivity module 314 in turn provides a direct signal 344 to an adjustable background noise filter 282.
Voice direction analysis 284 determines the direction of origin of the sound received by front microphone 288, side microphone 290, rear microphone 292, front microphone 308, side microphone 310, and rear microphone 312 and provides direct signal 346 to processor 296 and direct signal 348 to processor 316. The processor 296 is associated with the speaker 302 and provides a direct signal 350 to the amplifier 298, which amplifier 298 provides a direct signal 352 to the volume control 300. The direct signal 354 is then provided to the speaker 302. The speaker 302 is physically located on the same ear as the front microphone 288, the side microphone 290 and the rear microphone 292.
In another aspect, the processor 316 is associated with a speaker 322 and provides a direct signal 360 to an amplifier 318, the amplifier 318 providing a direct signal 362 to a volume control 320. The direct signal 364 is then provided to the speaker 322. The speaker 322 is physically located on the same ear as the front microphone 308, the side microphone 310, and the rear microphone 312.
In applications where smart device input 280 is used, smart device input 280 provides a direct signal 370 to each of the processors 296, 316. The direct signal 372 is also provided by the smart device input 280 to the smart device via connection 374, which connection 374 is under direct control of the control unit 286 via direct control signal 376. Continuing with the discussion of the control unit 286, the bi-directional interface 378 operates between the control unit 286 and the microphone input sensitivity module 294. Similarly, a bi-directional interface 380 operates between control unit 286 and adjustable background noise filter 282. The bi-directional interface 382 operates between the control unit 286 and the microphone input sensitivity module 314 serving the front microphone 308, the side microphone 310, and the rear microphone 312.
The control unit 286 and the processor 296 share a bi-directional interface 384, and the control unit 286 and the processor 316 share a bi-directional interface 386. The control unit 286 provides direct control of the volume control 300 associated with the speaker 302 and the volume control 320 associated with the speaker 322 via respective direct control signals 388, 390.
Referring now to fig. 9, the proximate smart device 12 may be any of various types of wireless communication devices, including fixed, mobile and/or portable devices. By way of example and not limitation, such devices may include cellular or mobile smartphones, tablet computers, smart watches, and the like. The proximate smart device 12 may include a processor 400, memory 402, storage 404, a transceiver 406, and a cellular antenna 408 interconnected by a bus architecture 410 that also supports the display 14, an I/O panel 414, and a camera 416. It should be understood that although a specific architecture is described, other designs and layouts are within the teachings presented herein.
In operation, the teachings presented herein allow a proximate smart device 12, such as a smartphone, to form a pairing with the hearing aid 10 and operate the hearing aid 10. As shown, the proximate smart device 12 includes a memory 402 accessible to the processor 400, and the memory 402 includes processor-executable instructions that, when executed, cause the processor 400 to provide an operator interface including an interactive application for viewing the status of the hearing aid 10. The processor 400 is caused to present a menu for controlling the hearing aid 10. The processor 400 is then caused to receive interaction instructions from the user and to forward control signals via the transceiver 406 to implement the instructions, for example, at the hearing aid 10. The processor 400 may also be caused to generate various reports on the operation of the hearing aid 10. The processor 400 may also be caused to convert audio or access a conversion service for audio.
In a further embodiment, the processor-executable instructions cause the processor 400 to provide an interface for the user U of the hearing aid 10 to select the mode of operation. In one embodiment, the hearing aid 10 has a dominant sound mode of operation 26, a direct background mode of operation 28, and a background mode of operation 30, as discussed. As previously discussed, in the dominant sound operating mode 26, the hearing aid 10 identifies the loudest sound in the processed signal and increases the volume of the loudest sound in the signal being processed. In the direct background operating mode 28, the hearing aid 10 recognizes the sound in the direct surroundings of the hearing aid 10 and suppresses this sound in the signal being processed. In the background operating mode 30, the hearing aid 10 identifies the extraneous ambient sound received at the hearing aid 10 and suppresses the extraneous ambient sound in the signal being processed.
In a further embodiment of the processor-executable instructions, the processor-executable instructions cause the processor 400 to create a pairing with the hearing aid 10 via the transceiver 406. The processor-executable instructions may then cause the processor 400 to transform the digital signal, by compression, using distributed computing between the processor 400 and the hearing aid 10, into a digital signal having a qualified sound range that includes the preferred hearing range and the subjective assessment of sound quality. The left ear preferred hearing range and the right ear preferred hearing range may comprise a frequency transfer component, a sampling rate component, a cut-off harmonic component, an additional harmonic component and/or a harmonic transfer component. Further, the processor-executable instructions may cause the processor 400 to process the frequency transfer component, the sample rate component, the cut-off harmonic component, the additional harmonic component, and/or the harmonic transfer component. The subjective assessment by the user may include an assessment of the level of annoyance caused to the user by impairment of the desired sound. The subjective assessment by the user may also include an assessment of the level of pleasure experienced by the patient when the desired sound is realized. That is, the subjective assessment by the user may include a complete assessment to determine the best sound quality for the user.
Further, the processor-executable instructions cause the processor 400 to create a pairing with the hearing aid 10 via the transceiver 406 and cause the processor 400 to transform the digital signal, by compression, utilizing distributed computing between the processor 400 and the hearing aid 10, into a processed digital signal having a qualified sound range that includes the preferred hearing range and the subjective assessment of sound quality. The preferred hearing range may be one or more sound ranges corresponding to the highest hearing ability of the patient's ear, modified according to the patient's subjective assessment of sound quality. The preferred hearing range may also include harmonics, for example, a cut-off harmonic component, an additional harmonic component, or a harmonic transfer component. The preferred hearing range may also include a frequency transfer component, a sample rate component, and a signal amplification component. The subjective assessment by the user may include an assessment of the level of annoyance caused to the user by impairment of the desired sound. The subjective assessment by the user may also include an assessment of the level of pleasure experienced by the patient when the desired sound is realized. That is, the subjective assessment by the user may include a complete assessment to determine the best sound quality for the user.
Referring now to fig. 10, in some embodiments a sample rate circuit 430, which may form part of the hearing aid 10, takes an analog signal 432 as input and produces a digital signal 434 as output. More specifically, an analog-to-digital converter (ADC) 436 receives as inputs the analog signal 432 and a signal from a spectrum analyzer 438. The ADC 436 provides an output comprising the digital signal 434 and a signal to the spectrum analyzer 438. The spectrum analyzer 438 forms a feedback loop with a sample rate controller 442 and a sample rate generator 444. As shown, the spectrum analyzer 438 analyzes the range of the received analog signal 432 and optimizes the sampling range at the ADC 436 through the feedback loop using the sample rate controller 442 and the sample rate generator 444.
By way of further explanation, with respect to the Sampling Rate (SR), the total sound S_T can be defined as follows:

S_T = F_B + H_1 + H_2 + … + H_N, wherein:

S_T = total sound;
F_B = fundamental frequency;
H_1 = first harmonic;
H_2 = second harmonic; and
H_N = N-th harmonic, where each H is a mathematical multiple of F_B.

That is, the total sound S_T is the sum of the pitch (CS) and N levels of Background Noise (BN), so the following applies:

S_T = CS + BN_G + BN_I, wherein:

BN_G = common background noise;
BN_I = direct background noise; and
CS = the highest-amplitude sound within a defined time frame.
Within this framework, differentiating the levels of Background Noise (BN) is a decision problem rather than a structural change problem.
Thus, with respect to the Sampling Rate (SR), the following applies:

SR = N × the highest frequency that will be allowed from S_T = F_B + H_1 + H_2 + … + H_N.
In this way, the hearing aid Sampling Rate (SR) can be designed to lie between 1 kHz and 40 kHz; however, the range may be modified based on the application. The change in Sampling Rate (SR) may be controlled by the ratio between the received pitch (CS) and Background Noise (BN) in the analog signal 432. The sample rate circuit 430 provides high-accuracy optimization of the fundamental frequency (F_B) and harmonic (H_1, H_2, …, H_N) components of the pitch (CS) and of the fundamental frequency (F_B) and harmonic (H_1, H_2, …, H_N) components of the Background Noise (BN). In some embodiments, this ensures that the higher the Background Noise (BN), the higher the Sampling Rate (SR), in order to properly serve two-level Background Noise (BN) control.
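A minimal sketch of the sample-rate selection implied above, assuming the sample rate tracks the highest frequency present in S_T (multiple N) and is raised as the background-to-pitch ratio grows. The multiple of 2.5, the scaling rule, and the clamp to 1 kHz - 40 kHz are illustrative assumptions, not values from the patent.

def select_sample_rate(highest_freq_hz, pitch_level, background_level,
                       n_multiple=2.5, sr_min=1_000.0, sr_max=40_000.0):
    """Return a sample rate (Hz) for the ADC feedback loop of fig. 10."""
    base_sr = n_multiple * highest_freq_hz          # SR = N x highest allowed frequency in S_T
    bn_ratio = background_level / max(pitch_level, 1e-9)
    # The higher the background noise relative to the pitch, the higher the
    # sample rate, so both levels of background noise can be resolved accurately.
    adjusted = base_sr * (1.0 + min(bn_ratio, 1.0))
    return min(max(adjusted, sr_min), sr_max)

print(select_sample_rate(5_000.0, pitch_level=0.8, background_level=0.4))  # 18750.0 Hz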
Referring now to fig. 11, in one embodiment of harmonic processing 450 that may be incorporated into the hearing aid 10, the ADC 436 receives the total sound (S_T) as an input. The ADC 436 then performs a spectral analysis 452 under the control of the spectrum analyzer 438, sample rate controller 442, and sample rate generator 444 presented in fig. 10. The digital total sound (S_T) signal output by the ADC 436 is subjected to the spectral analysis 452, which feeds a calculation 454. In this process, the fundamental frequency (F_B) and harmonic (H_1, H_2, …, H_N) components are separated. Using the algorithm presented herein and the conversion base frequency (CF_B) set at block 456 as the target frequency range, the harmonic processing 450 calculates the converted actual frequency (CF_A) and the Differential Conversion Harmonics (DCH_N) at block 454 to create a converted total sound (CS_T) at block 458, which is the output of the harmonic processing 450.
More specifically, the total sound (S_T) can be defined as follows:

S_T = F_B + H_1 + H_2 + … + H_N, wherein:

S_T = total sound;
F_B = fundamental frequency range, lying between F_BL and F_BH, where F_BL is the lowest frequency value in the fundamental frequency and F_BH is the highest frequency value in the fundamental frequency;
H_N = N-th harmonic of F_B, where H_N is a mathematical multiple of F_B;
F_A = the actual frequency value being examined;
H_A1 = the first harmonic of F_A;
H_A2 = the second harmonic of F_A; and
H_AN = the N-th harmonic of F_A, where H_AN is a mathematical multiple of F_A.
In many cases of hearing impairment, the total sound (S_T) may lie in any frequency range; furthermore, the true hearing ranges of the two ears may be quite different. Thus, the hearing aid 10 presented herein may operate by converting the fundamental frequency range (F_B) and a number of selected harmonics (H_N) into the user's Actual Hearing Range (AHR), producing the converted total sound (CS_T) as a coherent conversion, using an algorithm defined by the following equations:
Equation (1): [formula presented as an image in the original publication]

Equation (2): [formula presented as an image in the original publication]

Equation (3): CH_AN = M × H_N
wherein, for equations (1), (2), and (3):

M = the multiplier between CF_A and F_A;
CS_T = the total sound after conversion;
CF_B = the converted base frequency;
CH_A1 = the first conversion harmonic;
CH_A2 = the second conversion harmonic;
CH_AN = the N-th order conversion harmonic;
CF_BL = the lowest frequency value of CF_B;
CF_BH = the highest frequency value of CF_B; and
CF_A = the converted actual frequency.
By way of example, and not limitation, an application of the algorithm utilizing equations (1), (2), and (3) is presented. For this example, the following assumptions are used:
F_BL = 170 Hz
F_BH = 330 Hz
CF_BL = 600 Hz
CF_BH = 880 Hz
F_A = 180 Hz

Thus, for this example, the following will hold:

H_1 = 360 Hz
H_4 = 720 Hz
H_8 = 1,440 Hz
H_16 = 2,880 Hz
H_32 = 5,760 Hz
Using this algorithm, the following values can be calculated:

CF_A = 635 Hz
CH_A1 = 1,267 Hz
CH_A4 = 2,534 Hz
CH_A8 = 5,068 Hz
CH_A16 = 10,137 Hz
CH_A32 = 20,275 Hz
To calculate the differential (D) between a harmonic H_N and the corresponding conversion harmonic (CH_AN), the following equation is used:

D = CH_AN − H_N
This results in the following Differential Conversion Harmonics (DCH):

DCH_1 = 907 Hz
DCH_4 = 1,814 Hz
DCH_8 = 3,628 Hz
DCH_16 = 7,257 Hz
DCH_32 = 14,515 Hz
In some embodiments, a filter may cut off all Differential Conversion Harmonics (DCH) above a predetermined frequency; a frequency of 5,000 Hz may be used as a reference. In this case, the frequencies participating in the converted total sound (CS_T) are as follows:

CF_A = 635 Hz
DCH_1 = 907 Hz
DCH_4 = 1,814 Hz
DCH_8 = 3,628 Hz
the harmonic processing 450 may be total sound (S) T ) Provides conversion at each participating frequency and participates in the original total sound (S) T ) The same ratio of (CS) to (CS) of the total sound of the conversion T ) Of all participating Conversion (CF) A ) And Differential Conversion Harmonic (DCH) N ). In some implementations, if all Differential Conversion Harmonics (DCHs) N ) More than seventy-five percent (75%) out of the high pass filter range, the harmonic processing 450 can use the appropriate multipliers (between 0.1-0.9) and the new Differential Conversion Harmonic (DCH) to be created N ) Added to the converted total sound (CS) T )。
Referring now to fig. 12, in one embodiment of signal processing 470 that may be incorporated into the hearing aid 10, an initial analog signal 472 is received. The initial analog signal 472 is converted by the ADC 474 before undergoing signal preparation in the signal preparation circuit 476; such signal preparation may include the operations presented in fig. 10. The processed signal may be modified based on a subjective assessment of sound quality before undergoing frequency shifting and signal amplification at circuit blocks 478 and 480. The harmonic enhancement circuit 482 then processes the signal as presented in fig. 11, and the signal is converted from digital to analog at the DAC 484 and output as analog signal 486.
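A minimal, runnable sketch of the fig. 12 chain under simplifying assumptions: the ADC and DAC stages are modeled as float/FFT conversions, signal preparation as DC removal, frequency shifting as a crude spectral-bin relocation, and harmonic enhancement as a pass-through (see the fig. 11 sketch above). The multiplier and gain values are illustrative, not parameters from the patent.

import numpy as np

def process_block(analog_block, shift_multiplier=3.5, gain=2.0):
    digital = np.asarray(analog_block, dtype=np.float64)    # ADC 474
    prepared = digital - np.mean(digital)                    # signal preparation 476 (DC removal stand-in)
    spectrum = np.fft.rfft(prepared)
    shifted = np.zeros_like(spectrum)
    for k in range(len(spectrum)):                           # frequency shift 478: move each bin up
        target = int(k * shift_multiplier)                   # e.g. 170-330 Hz -> roughly 600-880 Hz
        if target < len(shifted):
            shifted[target] += spectrum[k]
    amplified = shifted * gain                               # signal amplification 480
    enhanced = amplified                                     # harmonic enhancement 482 (pass-through here)
    return np.fft.irfft(enhanced, n=len(prepared))           # DAC 484 stand-in

tone = np.sin(2 * np.pi * 180 * np.arange(0, 0.05, 1 / 16_000))
out = process_block(tone)
print(out.shape)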
Referring now to fig. 13, one embodiment of an operational flow 500 of the hearing aid 10 is depicted. The left audio input is received at the preamplifier 502 for processing before the processed signal is driven to the digital signal processor 504, which performs an analog-to-digital conversion 530 before adjusting the background noise according to the filters at block 532. Various filtering may be performed, including general filtering 534, direct filtering 536, and pitch filtering 538. The filtered signal is then driven to the digital signal processor 520 for directional control, which compares the left and right signals and the time delay between them. The result is left and right signals distributed according to the patient's established left and right hearing abilities. The signal is then driven back to the digital signal processor 504 for left-ear algorithmic processing, which may include converting the digital signal into a processed digital signal having a qualified sound range that includes the preferred hearing range, with optional harmonic enhancement and optional modification by the patient's subjective assessment of sound quality, to provide the best signal quality possible. The memory module 542 provides instructions for the transformation, which may be uploaded by the algorithm upload module 522. The amplifier 506 receives the processed digital signal and delivers the amplified signal to the speaker 508 as the left-side output sound.
Similarly, the right audio input is received at the preamplifier 512 for processing before the processed signal is driven to the digital signal processor 514, which performs an analog-to-digital conversion 550 before adjusting the background noise according to the filters at block 552. Various filtering may be performed, including general filtering 554, direct filtering 556, and pitch filtering 558. The filtered signal is then driven to the digital signal processor 520 for directional control, which compares the left and right signals and the time delay between them. The result is left and right signals distributed according to the patient's established left and right hearing abilities. The right portion of the signal is then driven back to the digital signal processor 514 for right-ear algorithmic processing, which may include converting the digital signal into a processed digital signal having a qualified sound range that includes the preferred hearing range, with optional harmonic enhancement and optional modification by the patient's subjective assessment of sound quality, to provide the best signal quality possible. The memory module 562 provides instructions for the transformation, which may be uploaded by the algorithm upload module 522. The amplifier 516 receives the processed digital signal and delivers the amplified signal to the speaker 518 as the right-side output sound.
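A minimal sketch of the directional-control step attributed to digital signal processor 520, assuming a simple cross-correlation estimate of the left/right time delay and per-ear weights standing in for the patient's established left and right hearing abilities; the weights and test signal are illustrative assumptions, not values from the patent.

import numpy as np

def directional_control(left, right, fs, left_weight=1.0, right_weight=0.8):
    """Compare the left and right signals, estimate the time delay between them,
    and distribute the signals according to the per-ear weights."""
    left = np.asarray(left, dtype=np.float64)
    right = np.asarray(right, dtype=np.float64)
    corr = np.correlate(left, right, mode="full")
    delay_samples = int(np.argmax(corr)) - (len(right) - 1)
    delay_seconds = delay_samples / fs
    return left * left_weight, right * right_weight, delay_seconds

fs = 16_000
rng = np.random.default_rng(0)
left_in = rng.standard_normal(800)
right_in = np.roll(left_in, 8)             # right ear hears the source 8 samples later
_, _, delay = directional_control(left_in, right_in, fs)
print(f"estimated inter-ear delay: {delay * 1e3:.2f} ms")  # about -0.50 ms; the sign shows the right input lags the left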
The order of execution or performance of the methods and data flows illustrated and described herein is not essential unless otherwise specified. That is, elements of the methods and data flows may be performed in any order unless otherwise specified, and the methods may include more or fewer elements than those disclosed herein. For example, executing or performing a particular element before, contemporaneously with, or after another element are all contemplated sequences of execution.
While the present invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims cover any such modifications or embodiments.

Claims (10)

1. A hearing aid (10) for a patient, the hearing aid (10) comprising:
a left body (32) and a right body (34) connected by a strap member (36), the strap member (36) configured to partially encircle a patient;
each of the left side body (32) and the right side body (34) at least partially conforms to a contour of an outer ear of the patient and is sized to engage with the contour;
each of the left body (32) and the right body (34) including an electronic signal processor (130), a microphone (138), and a speaker (140) housed therein, a signaling architecture communicatively interconnecting the microphone (138) to the electronic signal processor (130) and communicatively interconnecting the electronic signal processor (130) to the speaker (140);
each of the electronic signal processors (130) is programmed with a respective left ear qualified sound range and right ear qualified sound range, each of the left ear qualified sound range and the right ear qualified sound range being a sound range corresponding to a preferred hearing range of the patient's ear, the preferred hearing range being modified according to the patient's subjective assessment of sound quality; and
each of the electronic signal processors (130) comprises a memory (402) accessible by the processor (400), the memory (402) comprising processor-executable instructions that, when executed, cause the processor (400) to:
receiving an input analog signal from the microphone (138),
converting the input analog signal to a digital signal,
converting the digital signal into a processed digital signal having a qualified sound range,
converting said processed digital signal into an output analog signal, an
Driving the output analog signal to the speaker (140).
2. The hearing aid (10) according to claim 1, wherein the left ear preferred hearing range and the right ear preferred hearing range are mutually exclusive.
3. The hearing aid (10) according to claim 1, wherein the subjective assessment of sound quality for the patient's left ear further comprises a complete assessment of the level of annoyance caused to the patient by impairment of the desired sound.
4. The hearing aid (10) according to claim 1, wherein the subjective assessment of sound quality for the patient's left ear further comprises a complete assessment of the level of pleasure afforded to the patient by realization of the desired sound.
5. A hearing aid (10) for a patient, the hearing aid (10) comprising:
a left side body (32) and a right side body (34) connected by a band-shaped member (36);
each of the left side body (32) and the right side body (34) at least partially conforms to a contour of an outer ear and is sized to engage with the contour;
each of the left body (32) and the right body (34) including an electronic signal processor (130), a microphone (138), and a speaker (140) housed therein, a signaling architecture communicatively interconnecting the microphone (138) to the electronic signal processor (130) and communicatively interconnecting the electronic signal processor (130) to the speaker (140);
each of the electronic signal processors (130) is programmed with a respective left ear qualified sound range and right ear qualified sound range, each of the left ear qualified sound range and the right ear qualified sound range being a sound range corresponding to a preferred hearing range of the patient's ear, the preferred hearing range being modified according to the patient's subjective assessment of sound quality; and
each of the electronic signal processors (130) comprises a memory (402) accessible to the processor (400), the memory (402) comprising processor-executable instructions that, when executed, cause the processor (400) to:
receiving an input analog signal from the microphone (138),
converting the input analog signal to a digital signal,
converting the digital signal into a processed digital signal having a qualified sound range,
in a dominant sound mode of operation (26), identifying a loudest sound in the processed digital signal and increasing a volume of the loudest sound in the processed digital signal;
converting said processed digital signal into an output analog signal, an
Driving the output analog signal to the speaker.
6. The hearing aid (10) of claim 5, wherein the memory (402) further comprises processor-executable instructions that, when executed, cause the processor (400) to: in a direct background mode of operation (28), sound in the direct surroundings of the hearing aid (10) is identified and suppressed in the processed digital signal.
7. The hearing aid (10) of claim 5, wherein the memory (402) further comprises processor-executable instructions that, when executed, cause the processor (400) to: in a background mode of operation (30), an external ambient sound received at the hearing aid (10) is identified and suppressed in the processed digital signal.
8. A hearing aid (10) for a patient, the hearing aid (10) comprising:
a body (32, 34) including an electronic signal processor (130), a microphone (138), and a speaker (140) housed therein, a signaling architecture communicatively interconnecting the microphone (138) to the electronic signal processor (130) and communicatively interconnecting the electronic signal processor (130) to the speaker (140);
the electronic signal processor (130) is programmed with a qualified sound range, the qualified sound range being a sound range corresponding to a preferred hearing range of the patient's ear, the preferred hearing range being modified by the patient's subjective assessment of sound quality; and
the electronic signal processor (130) comprises a memory (402) accessible to the processor (400), the memory (402) comprising processor-executable instructions that, when executed, cause the processor (400) to:
receiving an input analog signal from the microphone (138),
converting the input analog signal to a digital signal,
transforming the digital signal into a processed digital signal having the qualified sound range,
converting said processed digital signal into an output analog signal, an
Driving the output analog signal to the speaker (140).
9. The hearing aid (10) of claim 8, further comprising a headphone housing (94, 96) located outside the body (32, 34), respectively, the headphone housing (94, 96) providing noise isolation to block interfering external noise.
10. The hearing aid (10) according to claim 8, wherein the preferred hearing range comprises a frequency transfer component, a sampling rate component, a signal amplification component, a cut-off harmonic component, an additional harmonic component and a harmonic transfer component.
CN202080079244.7A 2019-09-23 2020-09-22 Hearing aid and method of use Pending CN115004718A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201962904616P 2019-09-23 2019-09-23
US62/904,616 2019-09-23
PCT/US2020/051978 WO2021061632A1 (en) 2019-09-23 2020-09-22 Hearing aid and method for use of same

Publications (1)

Publication Number Publication Date
CN115004718A true CN115004718A (en) 2022-09-02

Family

ID=75166808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080079244.7A Pending CN115004718A (en) 2019-09-23 2020-09-22 Hearing aid and method of use

Country Status (5)

Country Link
EP (1) EP4035423A4 (en)
KR (1) KR20220104679A (en)
CN (1) CN115004718A (en)
AU (1) AU2020354942A1 (en)
WO (1) WO2021061632A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9439008B2 (en) * 2013-07-16 2016-09-06 iHear Medical, Inc. Online hearing aid fitting system and methods for non-expert user
HK1207526A2 (en) * 2015-05-27 2016-01-29 力滔有限公司 A hearing device and a method for operating thereof
US10507137B2 (en) * 2017-01-17 2019-12-17 Karl Allen Dierenbach Tactile interface system
EP3456259A1 (en) * 2017-09-15 2019-03-20 Oticon A/s Method, apparatus, and computer program for adjusting a hearing aid device
CN112237009B (en) * 2018-01-05 2022-04-01 L·奥拉 Hearing aid and method of use

Also Published As

Publication number Publication date
KR20220104679A (en) 2022-07-26
EP4035423A1 (en) 2022-08-03
AU2020354942A1 (en) 2022-04-14
WO2021061632A1 (en) 2021-04-01
EP4035423A4 (en) 2024-02-14

Similar Documents

Publication Publication Date Title
US11564043B2 (en) Hearing device and a hearing system comprising a multitude of adaptive two channel beamformers
US9712928B2 (en) Binaural hearing system
US11095992B2 (en) Hearing aid and method for use of same
EP3525488B1 (en) A hearing device comprising a beamformer filtering unit for reducing feedback
EP3506658B1 (en) A hearing device comprising a microphone adapted to be located at or in the ear canal of a user
EP3057337A1 (en) A hearing system comprising a separate microphone unit for picking up a users own voice
US11102589B2 (en) Hearing aid and method for use of same
US11330375B2 (en) Method of adaptive mixing of uncorrelated or correlated noisy signals, and a hearing device
US10993047B2 (en) System and method for aiding hearing
EP4047955A1 (en) A hearing aid comprising a feedback control system
US10880658B1 (en) Hearing aid and method for use of same
CN112087699B (en) Binaural hearing system comprising frequency transfer
US11128963B1 (en) Hearing aid and method for use of same
US11153694B1 (en) Hearing aid and method for use of same
CN115004718A (en) Hearing aid and method of use
EP4218261A1 (en) System and method for aiding hearing
WO2022235298A1 (en) Hearing aid and method for use of same
EP4297436A1 (en) A hearing aid comprising an active occlusion cancellation system and corresponding method
EP4218262A1 (en) System and method for aiding hearing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination