EP4085654A1 - Methods and systems for assessing the insertion position of an in-ear assembly of a hearing instrument - Google Patents

Methods and systems for assessing the insertion position of an in-ear assembly of a hearing instrument

Info

Publication number
EP4085654A1
Authority
EP
European Patent Office
Prior art keywords
user
depth
hearing instrument
ear
hearing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20841809.5A
Other languages
English (en)
French (fr)
Inventor
Karrie Recker
Jumana Harianawala
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Starkey Laboratories Inc
Original Assignee
Starkey Laboratories Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Starkey Laboratories Inc filed Critical Starkey Laboratories Inc
Priority claimed from PCT/US2020/065122 external-priority patent/WO2021138049A1/en
Publication of EP4085654A1 publication Critical patent/EP4085654A1/de
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1016 Earpieces of the intra-aural type
    • H04R1/1058 Manufacture or assembly
    • H04R1/1075 Mountings of transducers in earphones or headphones
    • H04R1/1091 Details not provided for in groups H04R1/1008 - H04R1/1083
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/005 Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R25/405 Arrangements for obtaining a desired directivity characteristic by combining a plurality of transducers
    • H04R25/407 Circuits for combining signals of a plurality of transducers
    • H04R25/50 Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505 Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H04R25/55 Deaf-aid sets using an external connection, either wireless or wired
    • H04R25/554 Deaf-aid sets using a wireless connection, e.g. between microphone and amplifier or using Tcoils
    • H04R25/558 Remote control, e.g. of amplification, frequency
    • H04R25/60 Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles
    • H04R25/604 Mounting or interconnection of acoustic or vibrational transducers
    • H04R25/65 Housing parts, e.g. shells, tips or moulds, or their manufacture
    • H04R25/652 Ear tips; Ear moulds
    • H04R25/70 Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/021 Behind the ear [BTE] hearing aids
    • H04R2225/0216 BTE hearing aids having a receiver in the ear mould
    • H04R2225/023 Completely in the canal [CIC] hearing aids
    • H04R2225/025 In the ear [ITE] hearing aids
    • H04R2225/39 Aspects relating to automatic logging of sound environment parameters and the performance of the hearing aid during use, e.g. histogram logging, or of user selected programs or settings in the hearing aid, e.g. usage logging
    • H04R2225/55 Communication between hearing aids and external devices via a network for data exchange
    • H04R2225/83 Aspects of electrical fitting of hearing aids related to problems arising from growth of the hearing aid user, e.g. children
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/03 Synergistic effects of band splitting and sub-band processing
    • H04R2460/00 Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/01 Hearing devices using active noise cancellation
    • H04R2460/15 Determination of the acoustic seal of ear moulds or ear tips of hearing devices

Definitions

  • This disclosure relates to hearing instruments.
  • Hearing instruments are devices designed to be worn on, in, or near one or more of a user’s ears.
  • Common types of hearing instruments include hearing assistance devices (e.g., “hearing aids”), earphones, headphones, hearables, and so on.
  • Some hearing instruments include features in addition to or in the alternative to environmental sound amplification.
  • Some modern hearing instruments include advanced audio processing for improved device functionality, controlling and programming the devices, and beamforming, and some can communicate wirelessly with external devices, including other hearing instruments (e.g., for streaming media).
  • This disclosure describes techniques for verifying correct insertion of in-ear assemblies of hearing instruments into ear canals of users.
  • A speaker of a hearing instrument may generate a sound directed into an ear canal of a user of the hearing instrument. The sound includes a range of frequencies.
  • A microphone of the hearing instrument measures an acoustic response to the sound.
  • A processing system classifies, based on the acoustic response to the sound, a depth of insertion of an in-ear assembly of the hearing instrument into the ear canal of the user. Additionally, the processing system may generate an indication based on the depth of insertion of the in-ear assembly of the hearing instrument into the ear canal of the user.
  • This disclosure describes a method for fitting a hearing instrument, the method comprising: generating, by a speaker of the hearing instrument, a sound that includes a range of frequencies; measuring, by a microphone of the hearing instrument, an acoustic response to the sound; classifying, by a processing system, based on the acoustic response to the sound, a depth of insertion of an in-ear assembly of the hearing instrument into an ear canal of a user; and generating an indication based on the depth of insertion of the in-ear assembly of the hearing instrument into the ear canal of the user.
  • This disclosure describes a system comprising: a speaker of a hearing instrument, the speaker configured to generate a sound that includes a range of frequencies; a microphone of the hearing instrument, wherein the microphone is configured to measure an acoustic response to the sound; and one or more processors implemented in circuitry, the one or more processors configured to: classify, based on the acoustic response to the sound, a depth of insertion of an in-ear assembly of the hearing instrument into an ear canal of a user; and generate an indication based on the depth of insertion of the in-ear assembly of the hearing instrument into the ear canal of the user.
  • This disclosure describes a method for fitting a hearing instrument, the method comprising: classifying, by a processing system, based on an acoustic response measured by a microphone of the hearing instrument to a sound generated by a speaker of the hearing instrument, a depth of insertion of an in-ear assembly of the hearing instrument into an ear canal of a user, wherein the sound includes a range of frequencies; and generating an indication based on the depth of insertion of the in-ear assembly of the hearing instrument into the ear canal of the user.
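As a rough illustration of the classification step in the method summarized above, the sketch below compares measured per-band response levels against stored reference spectra for each insertion-depth class and picks the nearest match. This is one plausible realization, not the patent's specific algorithm; the function names, band levels, and reference spectra are all illustrative assumptions.

```python
import math

# Hypothetical sketch: classify insertion depth by comparing the measured
# per-band response levels (dB) to stored reference spectra for each depth
# class. All reference values below are illustrative placeholders.
REFERENCE_SPECTRA = {
    "under-inserted":    [60.0, 58.0, 50.0, 42.0],
    "properly inserted": [66.0, 65.0, 60.0, 55.0],
    "over-inserted":     [70.0, 69.0, 66.0, 62.0],
}

def classify_insertion_depth(band_levels_db):
    """Return the depth class whose reference spectrum is nearest
    (Euclidean distance) to the measured band levels."""
    def distance(ref):
        return math.sqrt(sum((m - r) ** 2 for m, r in zip(band_levels_db, ref)))
    return min(REFERENCE_SPECTRA, key=lambda cls: distance(REFERENCE_SPECTRA[cls]))

def insertion_indication(band_levels_db):
    """Classify the depth of insertion and build a user-facing indication."""
    depth = classify_insertion_depth(band_levels_db)
    return depth, f"In-ear assembly appears {depth}."
```

In practice the reference spectra would be derived from measurements rather than hard-coded, but the nearest-match structure stays the same.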
  • FIG. 1 is a conceptual diagram illustrating an example system that includes one or more hearing instruments, in accordance with one or more aspects of this disclosure.
  • FIG. 2 is a block diagram illustrating example components of a hearing instrument, in accordance with one or more aspects of this disclosure.
  • FIG. 3 is a block diagram illustrating example components of a computing device, in accordance with one or more aspects of this disclosure.
  • FIG. 4 is a flowchart illustrating an example fitting operation in accordance with one or more aspects of this disclosure.
  • FIG. 5A, FIG. 5B, FIG. 5C, and FIG. 5D are conceptual diagrams illustrating example in-ear assemblies inserted into ear canals of users, in accordance with one or more aspects of this disclosure.
  • FIG. 6 is a conceptual diagram illustrating example cutoffs for classifying levels of insertion of an in-ear assembly of a hearing instrument into an ear canal of a user, in accordance with one or more aspects of this disclosure.
  • The most common problem with placing in-ear assemblies of hearing instruments in users’ ear canals is that the users do not insert the in-ear assemblies of the hearing instruments far enough into their ear canals.
  • A user’s experience can be negatively impacted by not inserting an in-ear assembly of a hearing instrument far enough into the user’s ear canal.
  • Under-insertion may make the hearing instrument look bad cosmetically, may make the hearing instrument less comfortable physically, and may cause retention issues (e.g., the in-ear assembly of the hearing instrument may fall out and be lost).
  • The hearing instrument may amplify sounds from the user’s environment more if the in-ear assembly of the hearing instrument was under-inserted during estimation of the user’s hearing thresholds. Providing higher gain may increase the likelihood of the user perceiving audible feedback. Additionally, providing higher gain may increase power consumption and reduce battery life of the hearing instrument.
  • the hearing instrument may not provide enough gain.
  • The user’s hearing threshold may be properly estimated, and the hearing instrument may be programmed with the proper hearing thresholds; but the resulting gain provided by the hearing instrument may not be enough for the user if the in-ear assembly of the hearing instrument is not placed far enough into the user’s ear canal.
  • The user may not be satisfied with the level of gain provided by the hearing instrument.
  • a hearing instrument includes a speaker and a microphone.
  • the speaker and/or the microphone may be included in an in-ear assembly of the hearing instrument.
  • the in-ear assembly of the hearing instrument is designed for complete or partial insertion into an ear canal of the user of the hearing instrument.
  • the speaker is configured to generate a sound directed into an ear canal of the user.
  • the sound includes a range of frequencies.
  • The microphone is configured to detect sounds from the ear canal of the user. Thus, both the speaker and the microphone may face into the user’s ear canal.
  • the microphone is configured to measure an acoustic response to the sound.
  • A processing system may classify, based on the acoustic response to the sound, a depth of insertion of the in-ear assembly of the hearing instrument into the ear canal of the user. Additionally, the processing system may generate an indication based on the depth of insertion of the in-ear assembly of the hearing instrument into the ear canal of the user. Thus, in some examples, the user may receive an indication of whether the in-ear assembly of the hearing instrument is inserted sufficiently far into the user’s ear canal.
  • FIG. 1 is a conceptual diagram illustrating an example system 100 that includes hearing instruments 102A, 102B, in accordance with one or more aspects of this disclosure.
  • This disclosure may refer to hearing instruments 102A and 102B collectively as “hearing instruments 102.”
  • A user 104 may wear hearing instruments 102. In some instances, such as when user 104 has unilateral hearing loss, user 104 may wear a single hearing instrument. In other instances, such as when user 104 has bilateral hearing loss, the user may wear two hearing instruments, with one hearing instrument for each ear of user 104.
  • Hearing instruments 102 may comprise one or more of various types of devices that are configured to provide auditory stimuli to user 104 and that are designed for wear and/or implantation at, on, or near an ear of user 104.
  • Hearing instruments 102 may be worn, at least partially, in the ear canal or concha.
  • Each of hearing instruments 102 may comprise a hearing assistance device.
  • Hearing assistance devices include devices that help a user hear sounds in the user’s environment.
  • Example types of hearing assistance devices may include hearing aid devices, Personal Sound Amplification Products (PSAPs), and so on.
  • hearing instruments 102 are over-the-counter, direct-to-consumer, or prescription devices.
  • hearing instruments 102 include devices that provide auditory stimuli to user 104 that correspond to artificial sounds or sounds that are not naturally in the user’s environment, such as recorded music, computer-generated sounds, sounds from a microphone remote from the user, or other types of sounds.
  • hearing instruments 102 may include so-called “hearables,” earbuds, earphones, or other types of devices.
  • Some types of hearing instruments provide auditory stimuli to user 104 corresponding to sounds from the user’s environment and also artificial sounds.
  • one or more of hearing instruments 102 includes a housing or shell that is designed to be worn in the ear for both aesthetic and functional reasons and encloses the electronic components of the hearing instrument.
  • Such hearing instruments may be referred to as in-the-ear (ITE), in-the-canal (ITC), completely-in-the-canal (CIC), or invisible-in-the-canal (IIC) devices.
  • one or more of hearing instruments 102 may be behind-the-ear (BTE) devices, which include a housing worn behind the ear that contains electronic components of the hearing instrument, including the receiver (e.g., a speaker). The receiver conducts sound to an earbud inside the ear via an audio tube.
  • one or more of hearing instruments 102 may be receiver-in-canal (RIC) hearing-assistance devices, which include a housing worn behind the ear that contains electronic components and a housing worn in the ear canal that contains the receiver.
  • Hearing instruments 102 may implement a variety of features that help user 104 hear better. For example, hearing instruments 102 may amplify the intensity of incoming sound, amplify the intensity of certain frequencies of the incoming sound, translate or compress frequencies of the incoming sound, and/or perform other functions to improve the hearing of user 104.
  • hearing instruments 102 may implement a directional processing mode in which hearing instruments 102 selectively amplify sound originating from a particular direction (e.g., to the front of user 104) while potentially fully or partially canceling sound originating from other directions.
  • a directional processing mode may selectively attenuate off-axis unwanted sounds.
  • the directional processing mode may help users understand conversations occurring in crowds or other noisy environments. In some examples, hearing instruments 102 may use beamforming or directional processing cues to implement or augment directional processing modes.
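The microphone combination behind a directional processing mode can be sketched with a classic delay-and-sum beamformer. This is a generic textbook technique, not the patent's implementation; the two-microphone geometry and integer-sample inter-microphone delay are simplifying assumptions.

```python
def delay_and_sum(front, rear, delay_samples):
    """Combine two microphone signals so that sound from the look
    direction adds coherently. A front-arriving wavefront reaches the
    rear microphone `delay_samples` later, so the rear signal is
    advanced by that amount before averaging; off-axis sound does not
    share this delay, adds incoherently, and is attenuated."""
    n = min(len(front), len(rear) - delay_samples)
    return [(front[i] + rear[i + delay_samples]) / 2.0 for i in range(n)]
```

For a source exactly on-axis, the two aligned copies reinforce each other and the output reproduces the front-microphone signal; sounds from other directions are partially cancelled by the averaging.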
  • hearing instruments 102 may reduce noise by canceling out or attenuating certain frequencies. Furthermore, in some examples, hearing instruments 102 may help user 104 enjoy audio media, such as music or sound components of visual media, by outputting sound based on audio data wirelessly transmitted to hearing instruments 102.
  • Hearing instruments 102 may be configured to communicate with each other.
  • hearing instruments 102 may communicate with each other using one or more wireless communication technologies.
  • Example types of wireless communication technology include Near-Field Magnetic Induction (NFMI) technology, a 900 MHz technology, a BLUETOOTH™ technology, a WI-FI™ technology, audible sound signals, ultrasonic communication technology, infrared communication technology, an inductive communication technology, or another type of communication that does not rely on wires to transmit signals between devices.
  • hearing instruments 102 use a 2.4 GHz frequency band for wireless communication.
  • hearing instruments 102 may communicate with each other via non-wireless communication links, such as via one or more cables, direct electrical contacts, and so on.
  • system 100 may also include a computing system 106.
  • system 100 does not include computing system 106.
  • Computing system 106 comprises one or more computing devices, each of which may include one or more processors.
  • computing system 106 may comprise one or more mobile devices, server devices, personal computer devices, handheld devices, wireless access points, smart speaker devices, smart televisions, medical alarm devices, smart key fobs, smartwatches, smartphones, motion or presence sensor devices, smart displays, screen-enhanced smart speakers, wireless routers, wireless communication hubs, prosthetic devices, mobility devices, special-purpose devices, accessory devices, and/or other types of devices.
  • Accessory devices may include devices that are configured specifically for use with hearing instruments 102.
  • Example types of accessory devices may include charging cases for hearing instruments 102, storage cases for hearing instruments 102, media streamer devices, phone streamer devices, external microphone devices, remote controls for hearing instruments 102, and other types of devices specifically designed for use with hearing instruments 102.
  • Actions described in this disclosure as being performed by computing system 106 may be performed by one or more of the computing devices of computing system 106.
  • One or more of hearing instruments 102 may communicate with computing system 106 using wireless or non-wireless communication links. For instance, hearing instruments 102 may communicate with computing system 106 using any of the example types of communication technologies described elsewhere in this disclosure.
  • hearing instrument 102A includes a speaker 108A, a microphone 110A, and a set of one or more processors 112A.
  • Hearing instrument 102B includes a speaker 108B, a microphone 110B, and a set of one or more processors 112B.
  • This disclosure may refer to speaker 108A and speaker 108B collectively as “speakers 108.”
  • This disclosure may refer to microphone 110A and microphone 110B collectively as “microphones 110.”
  • Computing system 106 includes a set of one or more processors 112C. Processors 112C may be distributed among one or more devices of computing system 106.
  • This disclosure may refer to processors 112A, 112B, and 112C collectively as “processors 112.”
  • Processors 112 may be implemented in circuitry and may comprise microprocessors, application-specific integrated circuits, digital signal processors, or other types of circuits.
  • hearing instruments 102A, 102B, and computing system 106 may be configured to communicate with one another. Accordingly, processors 112 may be configured to operate together as a processing system 114. Thus, actions that this disclosure describes as performed by processing system 114 may be performed by one or more processors in one or more of hearing instrument 102A, hearing instrument 102B, or computing system 106, either separately or in coordination.
  • hearing instruments 102 and computing system 106 may include components in addition to those shown in the example of FIG. 1, e.g., as shown in the examples of FIG. 2 and FIG. 3.
  • each of hearing instruments 102 may include one or more additional microphones configured to detect sound in an environment of user 104.
  • the additional microphones may include omnidirectional microphones, directional microphones, or other types of microphones.
  • Speakers 108 may be located on hearing instruments 102 so that sound generated by speakers 108 is directed medially through respective ear canals of user 104. For instance, speakers 108 may be located at medial tips of hearing instruments 102. The medial tips of hearing instruments 102 are designed to be the most medial parts of hearing instruments 102. Microphones 110 may be located on hearing instruments 102 so that microphones 110 may detect sound within the ear canals of user 104.
  • an in-ear assembly 116A of hearing instrument 102A contains speaker 108A and microphone 110A.
  • an in-ear assembly 116B of hearing instrument 102B contains speaker 108B and microphone 110B.
  • This disclosure may refer to in-ear assembly 116A and in-ear assembly 116B collectively as “in-ear assemblies 116.” The following discussion focuses on in-ear assembly 116A but may be equally applicable to in-ear assembly 116B.
  • in-ear assembly 116A also includes one or more, or all of, processors 112A of hearing instrument 102A.
  • an in-ear assembly of hearing instrument 102B may include one or more, or all of, processors 112B of hearing instrument 102B.
  • in-ear assembly 116A includes all components of hearing instrument 102A.
  • in-ear assembly 116B includes all components of hearing instrument 102B.
  • components of hearing instrument 102A may be distributed between in-ear assembly 116A and another assembly of hearing instrument 102A.
  • in-ear assembly 116A may include speaker 108A and microphone 110A, and in-ear assembly 116A may be connected to a behind-the-ear assembly of hearing instrument 102A via a cable.
  • components of hearing instrument 102B may be distributed between in-ear assembly 116B and another assembly of hearing instrument 102B.
  • in-ear assembly 116A may include all primary components of hearing instrument 102A.
  • in-ear assembly 116B may include all primary components of hearing instrument 102B.
  • in-ear assembly 116A may be a temporary-use structure designed to familiarize user 104 with how to insert a sound tube into an ear canal of user 104. In other words, in-ear assembly 116A may help user 104 get a feel for how far to insert a tip of the sound tube of the BTE device into the ear canal of user 104.
  • in-ear assembly 116B may be a temporary-use structure designed to familiarize user 104 with how to insert a sound tube into an ear canal of user 104.
  • speaker 108A (or speaker 108B) is not located in in-ear assembly 116A (or in-ear assembly 116B). Rather, microphone 110A (or microphone 110B) may be in a removable structure that has a shape, size, and feel similar to the tip of a sound tube of a BTE device.
  • speaker 108A may generate sound that includes different ranges of frequencies. For instance, in some examples, the range of frequencies is 2,000 to 20,000 Hz. In some examples, the range of frequencies is 2,000 to 16,000 Hz. In other examples, the range of frequencies has different low and high boundaries.
  • Microphone 110A measures an acoustic response to the sound generated by speaker 108A.
  • the acoustic response to the sound includes portions of the sound reflected by the user’s tympanic membrane.
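The generation-and-measurement steps above can be sketched as follows: drive the speaker with a sweep covering one of the frequency ranges mentioned in this disclosure (2,000 to 16,000 Hz here), then reduce the in-canal microphone recording to per-band levels. The sample rate, sweep duration, band edges, and the choice of a linear chirp are all assumptions for illustration, not values from the patent.

```python
import numpy as np

FS = 48_000               # sample rate in Hz (assumed)
F_LO, F_HI = 2_000, 16_000  # sweep range from the text
DURATION_S = 0.5          # sweep duration (assumed)

def make_sweep():
    """Linear sine sweep whose instantaneous frequency rises from
    F_LO to F_HI over DURATION_S seconds."""
    t = np.arange(int(FS * DURATION_S)) / FS
    phase = 2 * np.pi * (F_LO * t + (F_HI - F_LO) * t**2 / (2 * DURATION_S))
    return np.sin(phase)

def band_levels_db(recording, edges=(2_000, 4_000, 8_000, 16_000)):
    """Summarize the recording as a level (dB) per frequency band,
    computed from FFT magnitudes between consecutive band edges."""
    spec = np.fft.rfft(recording)
    freqs = np.fft.rfftfreq(len(recording), 1 / FS)
    levels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.abs(spec[(freqs >= lo) & (freqs < hi)])
        rms = np.sqrt(np.mean(band**2)) if band.size else 0.0
        levels.append(20 * np.log10(rms + 1e-12))
    return levels
```

A real device would play `make_sweep()` through the receiver and pass the microphone recording (not the sweep itself) to `band_levels_db`; comparing those band levels against expectations is the raw material for the depth classification described next.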
  • processing system 114 may classify, based on the acoustic response to the sound, a depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104.
  • processing system 114 may classify the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104 as being under-inserted, properly inserted, or over-inserted into the ear canal of user 104.
  • in-ear assembly 116A of hearing instrument 102A may be properly inserted when in-ear assembly 116A is entirely inside an ear canal of user 104 (or, minimally, a lateral end of in-ear assembly 116A is flush with an entrance to the ear canal of user 104).
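In terms of cutoffs like those illustrated in FIG. 6, the three-way classification can be sketched as comparing a scalar insertion-depth feature, derived from the acoustic response, against two cutoff values. The feature definition and the cutoff numbers below are illustrative assumptions, not values from the patent.

```python
# Illustrative cutoffs on a normalized insertion-depth feature
# (0 = not inserted, 1 = fully inserted); both values are assumptions.
UNDER_CUTOFF = 0.35   # below this, classify as under-inserted
OVER_CUTOFF = 0.80    # above this, classify as over-inserted

def depth_class(insertion_feature):
    """Classify an insertion-depth feature into one of three classes
    by comparing it against the two cutoffs."""
    if insertion_feature < UNDER_CUTOFF:
        return "under-inserted"
    if insertion_feature > OVER_CUTOFF:
        return "over-inserted"
    return "properly inserted"
```

The point of the two-cutoff structure is that "properly inserted" is a band rather than a single target value, which tolerates normal variation across ear canals.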
  • Processing system 114 may generate an indication based on the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104. For example, processing system 114 may cause speaker 108A to generate an audible indication indicating whether in-ear assembly 116A of hearing instrument 102A is under-inserted, properly inserted, or over-inserted into the ear canal of user 104. In another example, processing system 114 may cause a notification (e.g., on a smartphone, email message, etc.) to appear indicating the depth of insertion of in-ear assembly 116A of hearing instrument 102A.
  • FIG. 2 is a block diagram illustrating example components of hearing instrument 102A, in accordance with one or more aspects of this disclosure.
  • Hearing instrument 102B may include the same or similar components of hearing instrument 102A shown in the example of FIG. 2.
  • hearing instrument 102A comprises one or more storage devices 202, one or more communication units 204, a receiver 206, one or more processors 208, one or more microphones 210, a set of sensors 212, a power source 214, and one or more communication channels 216.
  • Communication channels 216 provide communication between storage devices 202, communication unit(s) 204, receiver 206, processor(s) 208, microphone(s) 210, and sensors 212.
  • Components 202, 204, 206, 208, 210, and 212 may draw electrical power from power source 214.
  • each of components 202, 204, 206, 208, 210, 212, 214, and 216 are contained within a single housing 218.
  • each of components 202, 204, 206, 208, 210, 212, 214, and 216 may be within in-ear assembly 116A of hearing instrument 102A.
  • components 202, 204, 206, 208, 210, 212, 214, and 216 may be distributed among two or more housings.
  • receiver 206 may be included in an in-ear housing separate from a behind-the-ear housing that contains the remaining components of hearing instrument 102A.
  • a RIC cable may connect the two housings.
  • sensors 212 include an inertial measurement unit (IMU) 226 that is configured to generate data regarding the motion of hearing instrument 102A.
  • IMU 226 may include a set of sensors.
  • IMU 226 includes one or more accelerometers 228, a gyroscope 230, a magnetometer 232, combinations thereof, and/or other sensors for determining the motion of hearing instrument 102A.
  • hearing instrument 102A may include one or more additional sensors 236.
  • Additional sensors 236 may include a photoplethysmography (PPG) sensor, blood oximetry sensors, blood pressure sensors, electrocardiograph (EKG) sensors, body temperature sensors, electroencephalography (EEG) sensors, environmental temperature sensors, environmental pressure sensors, environmental humidity sensors, skin galvanic response sensors, and/or other types of sensors.
  • hearing instrument 102A and sensors 212 may include more, fewer, or different components.
  • Storage device(s) 202 may store data.
  • Storage device(s) 202 may comprise volatile memory and may therefore not retain stored contents if powered off.
  • volatile memories may include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art.
  • Storage device(s) 202 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memory configurations may include flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
  • Communication unit(s) 204 may enable hearing instrument 102A to send data to and receive data from one or more other devices, such as a device of computing system 106 (FIG. 1), another hearing instrument (e.g., hearing instrument 102B), an accessory device, a mobile device, or another type of device.
  • Communication unit(s) 204 may enable hearing instrument 102A to use wireless or non-wireless communication technologies.
  • communication unit(s) 204 enable hearing instrument 102A to communicate using one or more of various types of wireless technology, such as BLUETOOTH™ technology, 3G, 4G, 4G LTE, 5G, ZigBee, Wi-Fi™, Near-Field Magnetic Induction (NFMI), ultrasonic communication, infrared (IR) communication, or another wireless communication technology.
  • communication unit(s) 204 may enable hearing instrument 102A to communicate using a cable-based technology, such as a Universal Serial Bus (USB) technology.
  • Receiver 206 comprises one or more speakers for generating audible sound.
  • Microphone(s) 210 detect incoming sound and generate one or more electrical signals (e.g., an analog or digital electrical signal) representing the incoming sound.
  • Processor(s) 208 may be processing circuits configured to perform various activities. For example, processor(s) 208 may process signals generated by microphone(s) 210 to enhance, amplify, or cancel out particular channels within the incoming sound. Processor(s) 208 may then cause receiver 206 to generate sound based on the processed signals.
  • processor(s) 208 include one or more digital signal processors (DSPs).
  • processor(s) 208 may cause communication unit(s) 204 to transmit one or more of various types of data. For example, processor(s) 208 may cause communication unit(s) 204 to transmit data to computing system 106. Furthermore, communication unit(s) 204 may receive audio data from computing system 106 and processor(s) 208 may cause receiver 206 to output sound based on the audio data.
  • receiver 206 includes speaker 108A.
  • Speaker 108A may generate a sound that includes a range of frequencies.
  • Speaker 108A may be a single speaker or one of a plurality of speakers in receiver 206.
  • receiver 206 may also include “woofers” or “tweeters” that provide additional frequency range.
  • speaker 108A may be implemented as a plurality of speakers.
  • microphones 210 include a microphone 110A.
  • Microphone 110A may measure an acoustic response to the sound generated by speaker 108A.
  • microphones 210 include multiple microphones.
  • microphone 110A may be a first microphone and microphones 210 may also include a second, third, etc. microphone.
  • microphones 210 include microphones configured to measure sound in an auditory environment of user 104.
  • one or more of microphones 210 in addition to microphone 110A may measure the acoustic response to the sound generated by speaker 108A.
  • processing system 114 may subtract the acoustic response generated by the first microphone from the acoustic response generated by the second microphone in order to help identify a notch frequency.
  • the notch frequency is a frequency in the range of frequencies having a level that is attenuated in the acoustic response relative to levels in the acoustic response of frequencies surrounding the frequency. Use of the notch frequency in classifying the depth of insertion of an in-ear assembly of a hearing instrument into an ear canal of user 104 is described in greater detail elsewhere in this disclosure.
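A minimal sketch of how the notch frequency described above might be located in a measured response, assuming the response is available as per-frequency levels in dB. The simple minimum search and the function name are assumptions; the patent does not prescribe a specific algorithm.

```python
import numpy as np

def find_notch_frequency(freqs, levels_db, f_min=2000.0, f_max=16000.0):
    """Return the frequency whose level is most attenuated within the
    expected search range.

    freqs: array of frequency bins (Hz); levels_db: measured response (dB).
    The search bounds default to one of the probe ranges quoted in the
    text; the minimum-search approach is an illustrative assumption.
    """
    mask = (freqs >= f_min) & (freqs <= f_max)
    band_freqs = freqs[mask]
    band_levels = levels_db[mask]
    # The notch is the bin with the lowest level relative to its neighbors;
    # for a single pronounced notch, a plain argmin suffices.
    return band_freqs[np.argmin(band_levels)]
```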
  • housing 218 may define two ports for microphone 110A.
  • the two ports may be spaced at least 4 millimeters apart.
  • Measuring sounds arriving through the two separate ports may improve the ability of processing system 114 to determine the notch frequency. Measurements of the acoustic response that are made through different ports at different positions within the ear canal will have different notch frequencies. Therefore, when processing system 114 subtracts one measurement of the acoustic response from the other measurement of the acoustic response, there may be large differences in the levels at these notch frequencies, making the notch frequencies easy to identify. If two measurements are made very close to each other in the ear canal, there will be overlap in their notch locations (frequencies), and when subtracting one measurement from the other, the level differences will be less, and therefore it will be less obvious where the notch is occurring.
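The subtraction described in the bullet above can be sketched as follows. Because the two ports sit at different positions in the canal, their notches fall at different frequencies, so the difference spectrum shows pronounced extremes near each notch. The function name and the choice of picking the largest absolute level difference are illustrative assumptions.

```python
import numpy as np

def notch_from_two_ports(freqs, port1_levels_db, port2_levels_db,
                         f_min=2000.0, f_max=16000.0):
    """Subtract the response measured at one port from the response
    measured at the other and locate the frequency of the largest level
    difference, which falls near one of the (non-overlapping) notches.
    """
    diff = port1_levels_db - port2_levels_db
    mask = (freqs >= f_min) & (freqs <= f_max)
    # Large |difference| marks a frequency where one port has a deep
    # notch and the other does not.
    return freqs[mask][np.argmax(np.abs(diff[mask]))]
```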
  • a shell of in-ear assembly 116A may define a first port and a second port.
  • Processing system 114 may obtain the acoustic response to the sound as measured by a microphone through the first port and obtain the acoustic response to the sound as measured by the microphone through the second port. In this example, the processing system 114 may determine the notch frequency based on the acoustic response as measured by the microphone through the first port or the acoustic response as measured by the microphone through the second port or the difference between the two acoustic responses.
  • microphone 110A is detachable from hearing instrument 102A.
  • microphone 110A may be detached from hearing instrument 102A. Removing microphone 110A may decrease the size of in-ear assembly 116A of hearing instrument 102A and may increase the comfort of user 104.
  • an earbud is positioned over the tips of speaker 108A and microphone 110A.
  • an earbud is a flexible, rigid, or semi-rigid component that is configured to fit within an ear canal of a user.
  • the earbud may protect speaker 108A and microphone 110A from earwax. Additionally, the earbud may help to hold in-ear assembly 116A in place.
  • the earbud may comprise a biocompatible, flexible material, such as a silicone material, that fits snugly into the ear canal of user 104.
  • hearing instrument 102A may include a set of one or more sensors 212.
  • the fitting operation of this disclosure may help with the placement of sensors 212 (e.g., a heartrate sensor and/or a temperature sensor). That is, if processing system 114 is able to determine, based on the acoustic response to the sound generated by speaker 108A, a depth of insertion of an in-ear assembly of hearing instrument 102A, processing system 114 may, in doing so, determine locations of sensors 212. In this case, processing system 114 may be preconfigured with data regarding positional relationships (e.g., the distances) between the additional sensors and in-ear assembly 116A.
  • processing system 114 may classify the depth of insertion of the sensors of the hearing instrument into the ear canal based on whether the depth of insertion of the in-ear assembly of the hearing instrument into the ear canal is appropriate for one or more sensors included in the in-ear assembly of the hearing instrument.
  • processing system 114 may be configured with data indicating that a temperature sensor is “x” mm from an end of in-ear assembly 116A of hearing instrument 102A. If the components (e.g., sensors) are custom, distances between components may be measured (e.g., by the shell modelers who design the placement of the hearing aid components in the earmold) and programmed into hearing instrument 102A. In some examples, the components themselves, once assembled into an earmold, communicate with each other to determine their relative positions; this may be done using hard-wired or wireless signals.
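The preconfigured positional relationships described above might be applied as in this sketch, where each sensor's depth in the canal is derived from the estimated insertion depth of the assembly tip minus a stored offset. The function name, the sensor names, and the numeric offsets are all hypothetical.

```python
def sensor_depths_mm(insertion_depth_mm, sensor_offsets_mm):
    """Estimate each sensor's depth in the ear canal from the insertion
    depth of the in-ear assembly tip and preconfigured offsets.

    sensor_offsets_mm maps a sensor name to its distance (mm) lateral
    (outward) of the assembly tip, the kind of preprogrammed positional
    data described in the text; values here are hypothetical.
    """
    return {name: insertion_depth_mm - offset
            for name, offset in sensor_offsets_mm.items()}

# Hypothetical example: tip inserted 14 mm, temperature sensor 3 mm
# behind the tip, heart-rate sensor 5.5 mm behind the tip.
positions = sensor_depths_mm(14.0, {"temperature": 3.0, "heart_rate": 5.5})
```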
  • FIG. 3 is a block diagram illustrating example components of computing device 300, in accordance with one or more aspects of this disclosure.
  • FIG. 3 illustrates only one particular example of computing device 300, and many other example configurations of computing device 300 exist.
  • Computing device 300 may be a computing device in computing system 106 (FIG. 1).
  • computing device 300 includes one or more processors 302, one or more communication units 304, one or more input devices 308, one or more output device(s) 310, a display screen 312, a power source 314, one or more storage device(s) 316, and one or more communication channels 318.
  • Computing device 300 may include other components.
  • computing device 300 may include physical buttons, microphones, speakers, communication ports, and so on.
  • Communication channel(s) 318 may interconnect each of components 302, 304, 308, 310, 312, and 316 for inter-component communications (physically, communicatively, and/or operatively). In some examples, communication channel(s) 318 may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data.
  • Power source 314 may provide electrical energy to components 302, 304, 308, 310, 312 and 316.
  • Storage device(s) 316 may store information required for use during operation of computing device 300.
  • storage device(s) 316 have the primary purpose of being a short-term and not a long-term computer-readable storage medium.
  • Storage device(s) 316 may be volatile memory and may therefore not retain stored contents if powered off.
  • Storage device(s) 316 may be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles.
  • processor(s) 302 on computing device 300 read and may execute instructions stored by storage device(s) 316.
  • Computing device 300 may include one or more input devices 308 that computing device 300 uses to receive user input. Examples of user input include tactile, audio, and video user input. Input device(s) 308 may include presence-sensitive screens, touch-sensitive screens, mice, keyboards, voice responsive systems, microphones, or other types of devices for detecting input from a human or machine.
  • Communication unit(s) 304 may enable computing device 300 to send data to and receive data from one or more other computing devices (e.g., via a communications network, such as a local area network or the Internet).
  • communication unit(s) 304 may be configured to receive data sent by hearing instrument(s) 102, receive data generated by user 104 of hearing instrument(s) 102, receive and send request data, receive and send messages, and so on.
  • communication unit(s) 304 may include wireless transmitters and receivers that enable computing device 300 to communicate wirelessly with the other computing devices.
  • communication unit(s) 304 include a radio 306 that enables computing device 300 to communicate wirelessly with other computing devices, such as hearing instruments 102 (FIG. 1).
  • Examples of communication unit(s) 304 may include network interface cards, Ethernet cards, optical transceivers, radio frequency transceivers, or other types of devices that are able to send and receive information.
  • Computing device 300 may use communication unit(s) 304 to communicate with one or more hearing instruments (e.g., hearing instrument 102 (FIG. 1, FIG. 2)). Additionally, computing device 300 may use communication unit(s) 304 to communicate with one or more other remote devices.
  • Output device(s) 310 may generate output. Examples of output include tactile, audio, and video output. Output device(s) 310 may include presence-sensitive screens, sound cards, video graphics adapter cards, speakers, liquid crystal displays (LCD), or other types of devices for generating output. Output device(s) 310 may include display screen 312.
  • Processor(s) 302 may read instructions from storage device(s) 316 and may execute instructions stored by storage device(s) 316. Execution of the instructions by processor(s) 302 may configure or cause computing device 300 to provide at least some of the functionality ascribed in this disclosure to computing device 300.
  • storage device(s) 316 include computer-readable instructions associated with operating system 320, application modules 322A-322N (collectively, “application modules 322”), and a companion application 324.
  • Execution of instructions associated with operating system 320 may cause computing device 300 to perform various functions to manage hardware resources of computing device 300 and to provide various common services for other computer programs.
  • Execution of instructions associated with application modules 322 may cause computing device 300 to provide one or more of various applications.
  • Application modules 322 may provide applications, such as text messaging (e.g., SMS) applications, instant messaging applications, email applications, social media applications, text composition applications, and so on.
  • Execution of instructions associated with companion application 324 by processor(s) 302 may cause computing device 300 to perform one or more of various functions.
  • execution of instructions associated with companion application 324 may cause computing device 300 to configure communication unit(s) 304 to receive data from hearing instruments 102 and use the received data to present data to a user, such as user 104 or a third-party user.
  • companion application 324 is an instance of a web application or server application. In some examples, such as examples where computing device 300 is a mobile device or other type of computing device, companion application 324 may be a native application.
  • companion application 324 may classify a depth of insertion of the in-ear assembly of a hearing instrument based on the acoustic response to the sound generated by a speaker of the hearing instrument. Furthermore, in some examples, companion application 324 may generate an indication based on the depth of insertion of the in-ear assembly of the hearing instrument into the ear canal of user 104. For example, companion application 324 may output, for display on display screen 312, a message that includes the indication. In some examples, companion application 324 may send data to a hearing instrument (e.g., one of hearing instruments 102) that causes the hearing instrument to output an audible and/or tactile indication of the depth of insertion of the in-ear assembly of the hearing instrument into the ear canal of the user.
  • companion application 324 may send a notification (e.g., a text message, email message, push notification message, etc.) to a device (e.g., a mobile phone, smart watch, remote control, tablet computer, personal computer, etc.) associated with user 104 to notify user 104 of the insertion level of the in-ear assembly of the hearing instrument.
  • FIG. 4 is a flowchart illustrating an example fitting operation 400, in accordance with one or more aspects of this disclosure.
  • Other examples of this disclosure may include more, fewer, or different actions.
  • this disclosure describes FIG. 4 with reference to hearing instrument 102A.
  • operation 400 may be performed in the same way with respect to hearing instrument 102B, or another hearing instrument.
  • The fitting operation 400 of FIG. 4 may begin in response to one or more different types of events.
  • user 104 may initiate fitting operation 400.
  • processing system 114 may initiate fitting operation 400 in response to input from user 104.
  • user 104 may initiate fitting operation 400 using a voice command or by providing appropriate input to a device (e.g., a smartphone, accessory device, or other type of device).
  • processing system 114 automatically initiates fitting operation 400.
  • processing system 114 may automatically initiate fitting operation 400 on a periodic basis.
  • processing system 114 may use a determination of a depth of insertion of in-ear assembly 116A of hearing instrument 102A for a fixed or variable amount of time before automatically initiating fitting operation 400 again.
  • fitting operation 400 may be performed a specific number of times before processing system 114 determines that results of fitting operation 400 are acceptable. For instance, after fitting operation 400 has been performed a specific number of times with user 104 achieving a proper depth of insertion of in-ear assembly 116A of hearing instrument 102A, processing system 114 may stop automatically initiating fitting operation 400. In other words, after several correct placements of hearing instrument 102A, processing system 114 may stop automatically initiating fitting operation 400 or may phase out initiating fitting operation 400 over time.
  • processing system 114 may determine, based on a history of attempts by user 104 to insert in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104, whether to initiate a fitting process that comprises generating the sound, measuring the acoustic response, and classifying the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104.
  • processing system 114 may automatically initiate fitting operation 400 in response to detecting that one or more of hearing instruments 102 have been removed from a charger, such as a charging case. In some examples, processing system 114 may detect that one or more of hearing instruments 102 have been removed from the charger by detecting an interruption of an electrical current between the charger and one or more of hearing instruments 102. Furthermore, in some examples, processing system 114 may automatically initiate fitting operation 400 in response to determining that one or more of hearing instruments 102 are in contact with the ears of user 104.
  • processing system 114 may determine that one or more of hearing instruments 102 are in contact with the ears of user 104 based on signals from one or more capacitive switches or other sensors of hearing instruments 102. Thus, in this way, processing system 114 may determine whether an initiation event has occurred.
  • Example types of initiation events may include one or more of removal of one or more of hearing instruments 102 from a charger, contact of the in-ear assembly of a hearing instrument with skin, detecting that the hearing instrument is on an ear of a user (e.g., using positional sensors, using wireless communications, etc.), input from user 104.
  • Processing system 114 may initiate a fitting process in response to the initiation event, wherein the fitting process includes generating the sound, measuring the acoustic response, and classifying the depth of insertion of the in-ear assembly of the hearing instrument into the ear canal of the user.
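The fitting process named in the bullet above (generate the sound, measure the response, classify the depth, indicate the result) could be orchestrated as in this sketch. The four callables stand in for hardware- and DSP-specific steps and are purely illustrative, not the patent's implementation.

```python
def run_fitting_process(generate_sound, measure_response, classify_depth, notify):
    """Orchestrate one pass of the fitting process in response to an
    initiation event. Each argument is a placeholder callable for a
    device-specific step; names are assumptions for illustration.
    """
    sound = generate_sound()                # step (402): generate the sound
    response = measure_response(sound)      # step (404): measure the response
    depth_class = classify_depth(response)  # step (406): classify the depth
    notify(depth_class)                     # surface an indication to the user
    return depth_class
```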
  • processing system 114 may automatically initiate fitting operation 400 in response to determining that one or more of hearing instruments 102 are generally positioned in the ears of user 104.
  • processing system 114 may automatically initiate fitting operation 400 in response to determining, based on signals from IMUs (e.g., IMU 226) of hearing instruments 102, that hearing instruments 102 are likely positioned on the head of user 104. For instance, in this example, if the IMU signals indicate synchronized motion in one or more patterns consistent with movements of a human head (e.g., nodding, rotating, tilting, head movements associated with walking, etc.), processing system 114 may determine that hearing instruments 102 are likely positioned on the head of user 104.
  • processing system 114 may automatically initiate fitting operation 400 in response to determining, based on wireless communication signals exchanged between hearing instruments 102, that hearing instruments 102 are likely positioned on the head of user 104. For instance, in this example, processing system 114 may determine that hearing instruments 102 are likely positioned on the head of user 104 when hearing instruments 102 are able to wirelessly communicate with each other (and, in some examples, an amount of signal attenuation is consistent with communication between hearing instruments positioned on opposite ears of a human head).
  • processing system 114 may determine that hearing instruments 102 are generally positioned on the head of user 104 based on a combination of factors, such as IMU signals indicating synchronized motion in one or more patterns consistent with movements of the human head and hearing instruments 102 being able to wirelessly communicate with each other. In some examples, processing system 114 may determine that hearing instruments 102 are generally positioned on the head of user 104 based on a specific time delay for wireless communication between hearing instruments 102.
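The combined IMU-plus-wireless-link check described above might be sketched as follows, assuming synchronized motion is detected by correlating motion traces from the two instruments. The function name and the correlation threshold are illustrative assumptions.

```python
import numpy as np

def likely_on_head(imu_left, imu_right, link_ok, corr_threshold=0.8):
    """Heuristic: the two hearing instruments are likely on the same head
    if their IMU motion traces are strongly correlated (synchronized head
    movement) AND a wireless link between them is up.

    imu_left/imu_right: 1-D motion traces (e.g., one accelerometer axis);
    link_ok: whether the ear-to-ear wireless link is established.
    The 0.8 threshold is an illustrative assumption.
    """
    corr = np.corrcoef(imu_left, imu_right)[0, 1]
    return bool(link_ok and corr >= corr_threshold)
```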
  • speaker 108A generates a sound (402).
  • the sound includes a range of frequencies.
  • user 104 may be able to hear the sound. However, this typically is not a concern for user 104 because the sound is generated as part of the fitting operation and not during typical use of hearing instrument 102A.
  • Microphone 110A measures an acoustic response to the sound (404). That is, microphone 110A may generate an electrical signal representing soundwaves that reflect back to in-ear assembly 116A of hearing instrument 102A when speaker 108A generates the sound. In some examples, microphone 110A, or another component, converts this electrical signal from an analog form to a digital form.
  • processing system 114 may classify, based on the acoustic response to the sound, a depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104 (406).
  • one or more processors 112A classify the depth of insertion of in-ear assembly 116A of hearing instrument 102A.
  • one or more processors 112C classify the depth of insertion of in-ear assembly 116A of hearing instrument 102 A.
  • one or more processors of another hearing instrument (e.g., one or more processors 112B of hearing instrument 102B) classify the depth of insertion of in-ear assembly 116A of hearing instrument 102A.
  • a combination of two or more of processors 112A, 112B, and 112C classify the depth of insertion of in-ear assembly 116A of hearing instrument 102A.
  • Processing system 114 may classify the depth of insertion in various ways. For example, processing system 114 may determine a notch frequency based on the acoustic response.
  • the notch frequency is a frequency in the range of frequencies that has a level that is attenuated in the acoustic response relative to levels in the acoustic response of the frequencies surrounding the frequency.
  • the notch frequency occurs because the component of the sound at the notch frequency is at least partially canceled by sound reflecting from the tympanic membrane of user 104.
  • processing system 114 may estimate, based on the notch frequency, a distance metric associated with a distance from in-ear assembly 116A to the tympanic membrane of user 104 of hearing instrument 102A.
  • the distance metric is the distance from in-ear assembly 116A to the tympanic membrane of user 104.
  • the distance metric is a value having a mathematic relationship to the distance from in-ear assembly 116A to the tympanic membrane of user 104.
  • processing system 114 may determine a distance metric associated with one-quarter wavelength (i.e., λ/4, where λ is the wavelength) of the notch frequency.
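The quarter-wavelength relationship above gives a direct distance estimate: the notch occurs where the residual canal length equals λ/4, so d = c / (4 · f_notch). A sketch, where the function name and the nominal speed of sound are assumptions:

```python
def distance_from_notch_mm(notch_hz, speed_of_sound_m_s=343.0):
    """Estimate the distance from the in-ear assembly to the tympanic
    membrane using the quarter-wavelength relation d = lambda/4 = c / (4f).

    343 m/s is the nominal speed of sound in air at about 20 degrees C.
    """
    wavelength_m = speed_of_sound_m_s / notch_hz
    return (wavelength_m / 4.0) * 1000.0  # convert meters to millimeters

# With these assumptions, a notch near 8,575 Hz corresponds to a
# residual distance of about 10 mm.
```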
  • hearing instrument 102A may, in some examples, include two or more microphones.
  • microphone 110A may be a first microphone 110A and hearing instrument 102B may include at least a second, additional microphone.
  • Processing system 114 may determine the notch frequency based on the acoustic response to the sound as measured by the two or more microphones (e.g., the first and second microphones). For example, processing system 114 may determine the notch frequency based on the acoustic response as measured by the first microphone minus the acoustic response as measured by the second microphone.
  • in-ear assemblies 116 of hearing instruments 102 each include one microphone (e.g., microphone 110A, 110B) facing into the ear canal.
  • the measured response would be analyzed to determine a frequency at which the notch is occurring (e.g., by determining where the output is the lowest within some (expected) range of frequencies).
  • each of microphones 110 has one port (i.e., an entrance for sound).
  • each of microphones 110 has two ports (entrances for sound) that are located at least a specific distance (e.g., > 4 mm) apart.
  • processing system 114 may differentiate between the sounds detected from the different ports of the same microphone based on an amount of delay in the acoustic response reaching the different ports. In such examples, sound arriving at the microphone through one port is effectively subtracted (e.g., due to opposing pressure on opposite sides of a diaphragm of the microphone) from the sound arriving at the microphone through the other port. Processing system 114 may then use the resulting signal to determine the notch frequency.
  • the in-ear assemblies 116 of hearing instruments 102 may each have two separate microphones facing into the ear canal that are at least a specific distance (e.g., > 4 mm) apart.
  • Having two ports may have the advantages previously listed (e.g., that subtracting these two measurements from each other makes it easier to identify the notch frequency and therefore estimate the distance to the eardrum). Both implementations (one microphone with two ports, or two separate microphones) are commonly used with directional microphones.
  • processing system 114 may classify the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104 based on the distance metric. For instance, processing system 114 may classify, based on the distance metric and a range of ear canal lengths for the user, the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104. For example, processing system 114 may classify the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104 as being under-inserted, properly inserted, or over-inserted into the ear canal of user 104.
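A hypothetical sketch of classifying insertion depth from the distance metric and a normative canal-length range. The text does not specify class boundaries, so the thresholds and class fractions here are invented purely for illustration.

```python
def classify_insertion(distance_to_eardrum_mm, canal_length_range_mm):
    """Classify insertion depth from the estimated tip-to-eardrum distance
    and a normative range of ear-canal lengths (min_mm, max_mm).

    The 0.75/0.25 boundary fractions are hypothetical; the patent only
    says classification uses the distance metric and a canal-length range.
    """
    min_len, max_len = canal_length_range_mm
    # A residual distance close to the full canal length means the tip
    # has barely entered the canal.
    if distance_to_eardrum_mm > max_len * 0.75:
        return "under-inserted"
    # A very small residual distance means the tip sits unusually deep.
    if distance_to_eardrum_mm < min_len * 0.25:
        return "over-inserted"
    return "properly inserted"

classify_insertion(22.0, (22.0, 28.0))  # classified as under-inserted here
```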
  • processing system 114 may use different normative data for different types of people (e.g., children vs. adults, or those with conductive hearing loss vs. those without conductive hearing loss). Accordingly, processing system 114 may estimate the range of ear canal lengths for user 104 based on demographic or personal data regarding user 104. For example, processing system 114 may estimate the range of ear canal lengths for user 104 based on information such as the sex, race, age, height, and/or other demographic or personal information about user 104. In some examples, processing system 114 may receive the demographic and/or personal information via a user interface, such as a graphical user interface or a voice interface. Processing system 114 may use the received demographic and/or personal information to look up estimated ranges of ear canal lengths from a local or remote database.
  • processing system 114 may determine some or all of the demographic and/or personal data based on a sound of a voice of user 104. For example, processing system 114 may obtain an audio signal of the voice of user 104. In some examples, processing system 114 obtains the audio signal from one or more of microphones 110. Processing system 114 may then use the audio signal to determine the demographic and/or personal data about user 104. For example, processing system 114 may determine a gender of user 104, an age group of user 104, and/or other data about user 104 based on the audio signal. For instance, processing system 114 may determine the gender of user 104 and/or age group of user 104 based on a fundamental frequency of the voice of user 104. That is, the voices of men typically have lower fundamental frequencies than the voices of women. Similarly, the voices of adults typically have lower fundamental frequencies than the voices of children.
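A sketch of the fundamental-frequency idea above: estimate f0 by autocorrelation, then map it to a coarse demographic group. The f0 ranges used for grouping are typical population values, not from the patent, and both function names are illustrative.

```python
import numpy as np

def estimate_f0(signal, fs, f_min=60.0, f_max=400.0):
    """Estimate the fundamental frequency of a voiced segment by
    autocorrelation, searching lags corresponding to f_min..f_max Hz."""
    signal = signal - np.mean(signal)
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(fs / f_max)
    lag_max = int(fs / f_min)
    # The strongest autocorrelation peak in the search range sits at the
    # pitch period.
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return fs / lag

def age_sex_hint(f0_hz):
    """Very rough grouping by typical fundamental-frequency ranges
    (adult male roughly 85-155 Hz, adult female roughly 165-255 Hz,
    child higher still); these thresholds are illustrative assumptions."""
    if f0_hz < 160.0:
        return "adult male"
    if f0_hz < 250.0:
        return "adult female"
    return "child"
```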
  • processing system 114 may classify the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104.
  • processing system 114 may determine whether the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104 is in a first class or a second class. In such examples, the first class may correspond to under-insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of the user and the second class may correspond to adequate insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104.
  • processing system 114 may determine whether the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104 is in a first class, a second class, or a third class. In such examples, the first class may correspond to under-insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104, the second class may correspond to adequate insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104, and the third class may correspond to an ambiguous level of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104.
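The two- and three-class schemes above reduce to comparing the acoustically estimated assembly-to-eardrum distance against one or two cutoffs. A minimal sketch of the three-class variant, with the cutoffs supplied by the caller (e.g., the 11 mm proper-insertion and 12.3 mm under-insertion values discussed for adult females with reference to FIGS. 5A-5D):

```python
def classify_insertion_depth(distance_mm, proper_max_mm, under_min_mm):
    """Classify insertion depth from the estimated distance (mm) between
    the in-ear assembly and the tympanic membrane. Distances between the
    two cutoffs are ambiguous: such a distance could reflect proper
    insertion in a long ear canal or under-insertion in a short one."""
    if distance_mm <= proper_max_mm:
        return "adequate"
    if distance_mm >= under_min_mm:
        return "under-inserted"
    return "ambiguous"
```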
  • There may be an ambiguous level of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104 when in-ear assembly 116A may be inserted properly for someone with a larger ear canal but not for someone with a smaller ear canal.
  • FIG. 5 A, FIG. 5B, FIG. 5C, and FIG. 5D are conceptual diagrams illustrating example in-ear assemblies inserted into ear canals of users, in accordance with one or more aspects of this disclosure.
  • processing system 114 may determine that the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal is the first class or the second class depending on whether the distance metric is associated with a distance within a specified range.
  • the specified range may be defined by (1) an upper end of the range of ear canal lengths for the user minus a length of all or part of in-ear assembly 116A of hearing instrument 102A and (2) a lower end of the range of ear canal lengths of the user minus the length of all or part of in-ear assembly 116A of hearing instrument 102A.
  • the specified range may take into account the size of in-ear assembly 116A, which may contain speaker 108A, microphone 110A, and earbud 500.
  • the length of all or part of in-ear assembly 116A may be limited to earbud 500; a portion of in-ear assembly 116A that contains speaker 108A, microphone 110A, and earbud 500; or all of in-ear assembly 116A.
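The specified range can be computed directly from the two endpoint definitions above. In the sketch below, the ear canal range is the adult-female range from FIGS. 5A-5B; the 6.9 mm assembly length is a hypothetical value for illustration only.

```python
def specified_distance_range(canal_min_mm, canal_max_mm, assembly_length_mm):
    """Range of residual assembly-to-tympanic-membrane distances consistent
    with proper insertion: each end of the user's ear-canal-length range
    minus the length of all or part of the in-ear assembly."""
    return (canal_min_mm - assembly_length_mm,
            canal_max_mm - assembly_length_mm)

# With the adult-female canal range (17.9-27.1 mm) and a hypothetical
# 6.9 mm assembly length, the residual range is roughly 11.0-20.2 mm.
lo, hi = specified_distance_range(17.9, 27.1, 6.9)
```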
  • in-ear assembly 116A includes speaker 108A, microphone 110A, and an earbud 500.
  • FIGS. 5A-5D correspond to the user's ear canal.
  • FIGS. 5A-5D also show a tympanic membrane 502 of user 104.
  • FIG. 5A shows proper insertion when the total length of the user’s ear canal is at the short end of the range of typical ear canal lengths for females (i.e., 17.9 mm).
  • FIG. 5B shows proper insertion when the total length of the user’s ear canal is at the long end of the range of typical ear canal lengths for females (i.e., 27.1 mm).
  • FIGS. 5A-5D show tympanic membrane 502 as an arc-shaped structure.
  • tympanic membrane 502 may be angled relative to the ear canal and may span a length of approximately 6 mm from the superior end of tympanic membrane 502 to a vertex of tympanic membrane 502, which is more medial than the superior end of tympanic membrane 502.
  • the acoustically estimated distance metric from in-ear assembly 116A to tympanic membrane 502 is typically considered to be (or otherwise associated with) a distance from in-ear assembly 116A to a location between a superior end of tympanic membrane 502 and the umbo of tympanic membrane 502, which is located in the center part of tympanic membrane 502.
  • the quarter wavelength of the notch frequency implies that the distance from in-ear assembly 116A to tympanic membrane 502 is less than 11 mm.
  • processing system 114 may determine that in-ear assembly 116A is likely inserted properly (e.g., as shown in FIG. 5A and FIG. 5B). However, if the quarter wavelength of the notch frequency implies that the distance from in-ear assembly 116A to tympanic membrane 502 is greater than 12.3 mm (e.g., as shown in FIG. 5D), processing system 114 may determine that in-ear assembly 116A is likely not inserted properly.
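The quarter-wavelength relationship used above converts a measured notch frequency to a distance via d = c / (4·f). A sketch, assuming a speed of sound of roughly 343 m/s (the value in a warm, humid ear canal is slightly higher, so this constant is an approximation):

```python
SPEED_OF_SOUND_MM_S = 343_000.0  # approx. speed of sound in air, in mm/s;
                                 # slightly higher in a warm ear canal

def distance_from_notch_mm(notch_hz):
    """Distance (mm) from the in-ear assembly to the tympanic membrane
    implied by a quarter-wavelength notch at the given frequency."""
    return SPEED_OF_SOUND_MM_S / (4.0 * notch_hz)

# A notch near 7.8 kHz implies about 11.0 mm, within the adult-female
# proper-insertion cutoff; a notch near 6.5 kHz implies about 13.2 mm,
# beyond the 12.3 mm under-insertion cutoff.
```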
  • processing system 114 may output an indication instructing user 104 to try inserting in-ear assembly 116A more deeply into the ear canal of user 104 and/or to try a differently sized earbud (e.g., because earbud 500 may be too big and may be preventing user 104 from inserting in-ear assembly 116A deeply enough into the ear canal of user 104).
  • processing system 114 may output an indication instructing user 104 to perform fitting operation 400 again. If the distance from in-ear assembly 116A to tympanic membrane 502 is now within the acceptable range, it is likely that in-ear assembly 116A was not inserted deeply enough. However, if the estimated distance from in-ear assembly 116A to tympanic membrane 502 does not change, this may suggest that user 104 simply has longer ear canals than average. The measurement of the distance from in-ear assembly 116A to tympanic membrane 502 may be made multiple times over days, weeks, months, years, etc., and the results monitored over time to determine a range of normal placement for user 104.
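Repeated measurements like those just described can be reduced to a user-specific range of normal placement. A sketch using mean ± 2 population standard deviations; the choice of aggregation is an assumption, not something specified in this disclosure.

```python
import statistics

def normal_placement_range_mm(distances_mm, k=2.0):
    """Derive a user-specific range of normal placement from repeated
    assembly-to-tympanic-membrane distance measurements collected over
    days, weeks, or months: mean +/- k population standard deviations."""
    mean = statistics.fmean(distances_mm)
    spread = k * statistics.pstdev(distances_mm)
    return (mean - spread, mean + spread)
```

A later measurement falling outside this personalized range could then trigger a re-fit prompt instead of comparing against population norms alone.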
  • FIG. 6 is a conceptual diagram illustrating example cutoffs for classifying levels of insertion of an in-ear assembly of hearing instrument 102A into an ear canal of user 104, in accordance with one or more aspects of this disclosure.
  • FIG. 6 is described with reference to hearing instrument 102A but may be equally applicable to hearing instrument 102B.
  • the vertical axis corresponds to a distance from in-ear assembly 116A to the tympanic membrane (e.g., tympanic membrane 502 of FIGS. 5A-5D).
  • cutoffs that represent proper, ambiguous, or under-insertion of in-ear assembly 116A are indicated for adult females.
  • The white diamonds represent endpoints of ranges of proper insertion and under-insertion given in the examples of FIGS. 5A-5D, with textured (e.g., single or double diagonal cross-hatching) regions representing cutoffs below and above which a depth of insertion of in-ear assembly 116A is considered to be properly inserted or under-inserted.
  • vertical bar 600 indicates a range of distances that may be associated with proper insertion of in-ear assembly 116A into the ear canal of user 104.
  • a vertical bar 602 indicates a range of distances that may be associated with under-insertion of in-ear assembly 116A into the ear canal of user 104.
  • processing system 114 may generate an indication based on the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104 (408). Processing system 114 may generate the indication in one or more ways. For instance, in some examples, processing system 114 may cause speaker 108A of hearing instrument 102A to generate an audible and/or tactile indication to direct the user to insert in-ear assembly 116A of hearing instrument 102A further into the ear canal of user 104. In some examples, processing system 114 may cause a mobile device to display an indication of whether or not to insert in-ear assembly 116A of hearing instrument 102A further into the ear canal of user 104.
  • microphone 110A may be detached from in-ear assembly 116A. This may reduce the size and weight of in-ear assembly 116A, which may increase the comfort of the fit of in-ear assembly 116A and reduce any occlusion that may be caused by having additional components in the ear canal of user 104. In some examples, microphone 110A may subsequently be reattached to in-ear assembly 116A for future fitting operations. In other examples, microphone 110A may remain within or attached to in-ear assembly 116A during normal use of hearing instrument 102A.
  • the techniques of this disclosure may be used to monitor positions of in-ear assemblies 116 of hearing instruments 102 over time, e.g., during daily wear or over the course of days, weeks, months, years, etc. That is, rather than only performing fitting operation 400 when user 104 is first using hearing instruments 102, fitting operation 400 may be performed for ongoing monitoring of the levels of insertion of hearing instruments 102 during wear (e.g., after user 104 has inserted in-ear assemblies 116 of hearing instruments 102 to a proper depth of insertion). Continued monitoring of the insertion levels of in-ear assemblies 116 of hearing instruments 102 may be useful for users for whom in-ear assemblies 116 of hearing instruments 102 tend to work their way out.
  • processing system 114 may automatically initiate fitting operation 400 and, if an in-ear assembly of a hearing instrument is not at a proper depth of insertion, processing system 114 may generate an indication (e.g., an audible, tactile, or visual indication) instructing user 104 to push the in-ear assembly further into the user's ear canal. In some examples, processing system 114 may be configured such that, as part of generating the indication based on the depth of insertion, the one or more processors cause a notification to appear (e.g., on a display screen of a device) indicating the depth of insertion.
  • processing system 114 may track the number of times and/or frequency with which an in-ear assembly of a hearing instrument goes from a proper depth of insertion to an improper depth of insertion during use. If this occurs a sufficient number of times and/or at a specific rate, processing system 114 may perform various actions. For example, processing system 114 may generate an indication recommending that user 104 perform an action, such as changing a size of an earbud of the in-ear assembly, or consulting a hearing specialist or audiologist to determine whether an alternative (e.g., custom, semi-custom, etc.) earmold may provide greater benefit to user 104.
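The tracking logic above can be sketched as a small state machine that counts proper-to-improper transitions. The slip threshold and the recommendation wording below are illustrative assumptions, not values from this disclosure.

```python
class InsertionMonitor:
    """Count how often an in-ear assembly slips from a proper depth of
    insertion to an improper one during use, and suggest an action once
    a threshold is reached (threshold and wording are assumptions)."""

    def __init__(self, slip_threshold=3):
        self.slip_threshold = slip_threshold
        self.slips = 0
        self._was_proper = False

    def record(self, is_proper):
        """Record the latest depth classification for this wear session."""
        if self._was_proper and not is_proper:
            self.slips += 1
        self._was_proper = is_proper

    def recommendation(self):
        """Return a suggested action once slips occur often enough."""
        if self.slips >= self.slip_threshold:
            return ("Consider changing the earbud size, or consult a "
                    "hearing specialist about a custom or semi-custom earmold.")
        return None
```

A rate-based trigger (slips per day of wear) could replace the simple count without changing the interface.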
  • processing system 114 may generate, based at least in part on the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104, an indication that user 104 should change a size of an earbud of in-ear assembly 116A of hearing instrument 102A. Furthermore, in some examples, if processing system 114 receives an indication that user 104 indicated (to hearing instruments 102, via an application, or via another device) that user 104 is interested in pursuing this option, processing system 114 may connect to the Internet/location services to find an appropriate healthcare provider in an area of user 104.
  • In some examples where fitting operation 400 is performed periodically, user 104 may simply need to be reminded of proper insertion. However, changes to the determined levels of insertion of in-ear assemblies 116 of hearing instruments 102 may signify that a change has occurred with the hearing status of user 104. Certain conditions, especially those causing conductive hearing losses, can affect the impedance of the user's ears and therefore may change the measured response to the sound generated by speakers 108.
  • processing system 114 may output, for presentation to user 104, an indication regarding a potential change to the hearing status of user 104. For instance, processing system 114 may output, for presentation to user 104, one or more follow-up questions (e.g., “Do you currently have a cold or an ear infection?” “Have you recently had any ear surgeries?” etc.).
  • processing system 114 may generate, based at least in part on the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104, an indication of a potential change to a hearing status of user 104.
  • processing system 114 may generate output recommending that user 104 consult a healthcare provider, such as a medical doctor. Furthermore, in this example, if processing system 114 receives indications of user input to questions indicating that changes to the hearing status of user 104 have not occurred (e.g., if user 104 answers “no” to the example questions mentioned above), processing system 114 may generate output recommending cleaning of hearing instruments 102 and repeating fitting operation 400, or refer user 104 to a hearing instrument specialist/audiologist to determine whether there is something else wrong with one or more of hearing instruments 102.
  • changes in conductive hearing pathways may refer to any physical changes in the external or middle ear that could signify a change in the individual’s hearing and/or the need for follow-up with a medical professional. Monitoring for such changes may be especially helpful for purchasers of over-the-counter hearing instruments because this population is unlikely to have seen a doctor before purchasing their hearing instruments.
  • the indication may advise user 104 to consult a hearing professional.
  • processing system 114 may generate, based at least in part on the depth of insertion of in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104, an indication that user 104 should consult a hearing professional.
  • processing system 114 may make a determination to generate the indication that user 104 should consult a hearing professional in response to determining that user 104 should change a size or style of an earbud of in-ear assembly 116A of hearing instrument 102A.
  • Processing system 114 may determine that user 104 should change a size or style of the earbud if user 104 is consistently unable to insert in-ear assembly 116A past a particular depth (e.g., because the earbud is too large) or user 104 consistently over-inserts in-ear assembly 116A (e.g., because the earbud is too small) or a depth of in-ear assembly 116A changes during use (e.g., because the earbud is too small to hold in-ear assembly 116A in place during use), or in response to other conditions.
  • processing system 114 may make a determination to generate the indication that user 104 should consult a hearing professional in response to determining that there is a potential change to a hearing status of user 104. In some examples, processing system 114 may make a determination to generate the indication that user 104 should consult a hearing professional when user 104 has failed to insert in-ear assembly 116A of hearing instrument 102A into the ear canal of user 104 a sufficient number of times.
  • processing system 114 may access one or more online services via a communication system (e.g., the Internet) to identify an appropriate hearing professional for user 104.
  • processing system 114 may automatically interact with an online search engine to identify an appropriate hearing professional for user 104.
  • processing system 114 may interact with an online registry of qualified hearing professionals to identify the appropriate hearing professional.
  • the indication generated by processing system 114 may include information indicating the identified hearing professional.
  • processing system 114 may initiate a voice communication session between a computing system associated with a hearing professional and a computing system (e.g., hearing instruments 102, computing system 106, etc.) associated with user 104.
  • processing system 114 may provide, to a computing system associated with a hearing professional, information related to the suspected insertion problems being experienced by user 104.
  • processing system 114 may send an email, insert a note in an electronic medical record system, or otherwise provide the information to the healthcare professional.
  • the information provided to the healthcare professional may include data regarding the depths of insertion achieved by user 104, numbers of attempts to insert in-ear assembly 116A, average depth of insertion, detected movement of in-ear assembly 116A within the ear canal during use, a summary of suspected changes to the conductive auditory pathways, and/or other types of information.
  • ordinal terms such as “first,” “second,” “third,” and so on, are not necessarily indicators of positions within an order, but rather may be used to distinguish different instances of the same thing. Examples provided in this disclosure may be used together, separately, or in various combinations. Furthermore, with respect to examples that involve personal data regarding a user, it may be required that such personal data only be used with the permission of the user. Furthermore, it is to be understood that discussion in this disclosure of hearing instrument 102A (including components thereof, such as in-ear assembly 116A, speaker 108A, microphone 110A, processors 112A, etc.) may apply with respect to hearing instrument 102B.
  • Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol.
  • computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processing circuits to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure.
  • a computer program product may include a computer-readable medium.
  • such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, cache memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
  • coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media.
  • Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • processing circuitry may include one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
  • processors may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein.
  • the functionality described herein may be provided within dedicated hardware and/or software modules.
  • Processing circuits may be coupled to other components in various ways.
  • a processing circuit may be coupled to other components via an internal device interconnect, a wired or wireless network connection, or another communication medium.
  • the techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including an integrated circuit (IC) or a set of ICs (e.g., a chip set).

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Headphones And Earphones (AREA)
EP20841809.5A 2019-12-31 2020-12-15 Verfahren und systeme zur bewertung der einsetzposition einer ohrinternen anordnung eines hörgeräts Pending EP4085654A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962955798P 2019-12-31 2019-12-31
PCT/US2020/065122 WO2021138049A1 (en) 2019-12-31 2020-12-15 Methods and systems for assessing insertion position of an in-ear assembly of a hearing instrument

Publications (1)

Publication Number Publication Date
EP4085654A1 true EP4085654A1 (de) 2022-11-09

Family

ID=76546817

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20841809.5A Pending EP4085654A1 (de) 2019-12-31 2020-12-15 Verfahren und systeme zur bewertung der einsetzposition einer ohrinternen anordnung eines hörgeräts

Country Status (2)

Country Link
US (1) US11523231B2 (de)
EP (1) EP4085654A1 (de)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10097913B2 (en) * 2015-09-30 2018-10-09 Apple Inc. Earbud case with charging system
US20220386048A1 (en) * 2021-05-28 2022-12-01 Starkey Laboratories, Inc. Methods and systems for assessing insertion position of hearing instrument
US11849286B1 (en) * 2021-10-25 2023-12-19 Chromatic Inc. Ear-worn device configured for over-the-counter and prescription use
EP4333463A1 (de) * 2022-08-31 2024-03-06 Sonova AG Verfahren zum überwachen einer in-ohr-platzierung eines hörgeräts

Family Cites Families (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4809708A (en) 1987-08-12 1989-03-07 Nicolet Instrument Corporation Method and apparatus for real bar measurements
US5825894A (en) 1994-08-17 1998-10-20 Decibel Instruments, Inc. Spatialization for hearing evaluation
KR20000029582A (ko) 1999-01-26 2000-05-25 애드난 쉰니브;리차드 와일드 청력평가를위한이도내부인공보철물
DK1703770T3 (en) 2005-03-14 2017-06-12 Gn Resound As Hearing aid fitting system with a camera
US7558622B2 (en) 2006-05-24 2009-07-07 Bao Tran Mesh network stroke monitoring appliance
DK2098097T3 (da) 2006-12-21 2019-08-26 Gn Hearing As Høreinstrument med brugergrænseflade
CN101411613A (zh) 2007-10-18 2009-04-22 周常安 具有延伸装置的可携式居家生理检测系统
JP5099436B2 (ja) 2008-03-25 2012-12-19 カシオ計算機株式会社 補聴器および当該補聴器の処理プログラム
EP2374287B1 (de) * 2008-12-04 2018-02-14 Insound Medical, Inc Einfügungseinrichtung für tief im kanal sitzende hörgeräte
US9539147B2 (en) 2009-02-13 2017-01-10 Personics Holdings, Llc Method and device for acoustic sealing and occlusion effect mitigation
US8184842B2 (en) * 2009-03-20 2012-05-22 Insound Medical, Inc. Tool for insertion and removal of in-canal hearing devices
US8372351B2 (en) 2009-05-29 2013-02-12 Abbott Diabetes Care Inc. Glucose monitoring system with wireless communications
EP2320682B1 (de) 2009-10-16 2014-08-06 Starkey Laboratories, Inc. Verfahren und Vorrichtung für In-dem-Ohr Hörhilfe mit kapazitivem Sensor
EP2348758B1 (de) 2009-10-17 2019-08-14 Starkey Laboratories, Inc. Verfahren und Vorrichtung für Hinterohr-Hörhilfe mit kapazitivem Sensor
WO2010049543A2 (en) 2010-02-19 2010-05-06 Phonak Ag Method for monitoring a fit of a hearing device as well as a hearing device
DE102010012622B4 (de) 2010-03-24 2015-04-30 Siemens Medical Instruments Pte. Ltd. Binaurales Verfahren und binaurale Anordnung zur Sprachsteuerung von Hörgeräten
US8462969B2 (en) 2010-04-22 2013-06-11 Siemens Audiologische Technik Gmbh Systems and methods for own voice recognition with adaptations for noise robustness
WO2012044278A1 (en) 2010-09-28 2012-04-05 Siemens Hearing Instruments, Inc. A hearing instrument
WO2012149955A1 (en) 2011-05-03 2012-11-08 Widex A/S Hearing aid with acoustic guiding means
JP5883160B2 (ja) 2012-02-07 2016-03-09 ヴェーデクス・アクティーセルスカプ 補聴器フィッティング・システムおよび補聴器システムをフィッティングする方法
US9900712B2 (en) 2012-06-14 2018-02-20 Starkey Laboratories, Inc. User adjustments to a tinnitus therapy generator within a hearing assistance device
US9445768B2 (en) 2012-11-29 2016-09-20 Neurosky, Inc. Personal biosensor accessory attachment
EP3917167A3 (de) 2013-06-14 2022-03-09 Oticon A/s Hörhilfevorrichtung mit Gehirn-Computer-Schnittstelle
EP2887692B1 (de) 2013-12-20 2019-07-10 Valencell, Inc. Montagesystem für einen Kopfhörer mit physiologischem Sensor
US10283162B2 (en) 2014-02-05 2019-05-07 Avatar Merger Sub II, LLC Method for triggering events in a video
EP2908549A1 (de) 2014-02-13 2015-08-19 Oticon A/s Hörgerätevorrichtung mit Sensorelement
DK2991380T3 (da) 2014-08-25 2020-01-20 Oticon As Hørehjælpeanordning der omfatter en placeringsidentifikationsenhed
GB2532745B (en) 2014-11-25 2017-11-22 Inova Design Solution Ltd Portable physiology monitor
US10413240B2 (en) * 2014-12-10 2019-09-17 Staton Techiya, Llc Membrane and balloon systems and designs for conduits
WO2016102971A2 (en) 2014-12-23 2016-06-30 Fertility Focus Limited Processing a Physical Signal
US20180014784A1 (en) 2015-01-30 2018-01-18 New York University System and method for electrophysiological monitoring
EP3086574A3 (de) 2015-04-20 2017-03-15 Oticon A/s Hörgerätevorrichtung und hörgerätevorrichtungssystem
DK3113519T3 (en) 2015-07-02 2019-01-21 Oticon As PROCEDURES AND DEVICES FOR CORRECT AND SECURE PLACEMENT OF AN I-EAR COMMUNICATION DEVICE IN A USER'S EARNINGS
US9992585B1 (en) 2017-05-24 2018-06-05 Starkey Laboratories, Inc. Hearing assistance system incorporating directional microphone customization
DK3448064T3 (da) 2017-08-25 2021-12-20 Oticon As Høreapparatanordning, der indbefatter en selvkontrollerende enhed til at bestemme status for en eller flere funktioner i høreapparatanordningen, som er baseret på feedback-respons
EP3457714A1 (de) 2017-09-13 2019-03-20 GN Hearing A/S Verfahren zur schätzung der ohrgeometrie und entsprechende hörgeräte
US20190132683A1 (en) 2017-10-31 2019-05-02 Starkey Laboratories, Inc. Hearing device including a sensor and a method of forming same
EP4000278A1 (de) 2019-07-17 2022-05-25 Starkey Laboratories, Inc. Im ohr getragene elektronische vorrichtung mit gestensteuerungssystem unter verwendung von frequenzsprung-spreizspektrumübertragung
WO2022066307A2 (en) 2020-09-28 2022-03-31 Starkey Laboratories, Inc. Temperature sensor based ear-worn electronic device fit assessment

Also Published As

Publication number Publication date
US11523231B2 (en) 2022-12-06
US20210204074A1 (en) 2021-07-01

Similar Documents

Publication Publication Date Title
US11523231B2 (en) Methods and systems for assessing insertion position of hearing instrument
Stenfelt et al. A model of the occlusion effect with bone-conducted stimulation
US11223915B2 (en) Detecting user's eye movement using sensors in hearing instruments
US7817803B2 (en) Methods and devices for hearing damage notification and intervention
Hosoi et al. Cartilage conduction as the third pathway for sound transmission
Fay et al. Preliminary evaluation of a light-based contact hearing device for the hearing impaired
Flynn et al. Hearing performance benefits of a programmable power baha® sound processor with a directional microphone for patients with a mixed hearing loss
US20220201404A1 (en) Self-fit hearing instruments with self-reported measures of hearing loss and listening
Munro et al. Deriving the real-ear SPL of audiometric data using the “coupler to dial difference” and the “real ear to coupler difference”
O'Brien et al. Validity and reliability of in-situ air conduction thresholds measured through hearing aids coupled to closed and open instant-fit tips
Schuster-Bruce et al. Conventional hearing aid indications and selection
US20220386048A1 (en) Methods and systems for assessing insertion position of hearing instrument
Bagatto et al. Clinical consensus document for fitting non-surgical transcutaneous bone conduction hearing devices to children
US20230000395A1 (en) Posture detection using hearing instruments
Hosoi Cartilage conduction hearing aids: the third pathway for sound transmission and its application
WO2021138049A1 (en) Methods and systems for assessing insertion position of an in-ear assembly of a hearing instrument
US20220192541A1 (en) Hearing assessment using a hearing instrument
Wolfe et al. Evaluation of extended-wear hearing technology for children with hearing loss
US11528566B2 (en) Battery life estimation for hearing instruments
McPherson Self‐Reported Benefit and Satisfaction with a Beamforming Body‐Worn Hearing Aid for Elderly Adults
Gosnell et al. Hearing Aid Fitting for Children
Palmer Ring, ring! Is anybody there? Telephone solutions for hearing aid users
Valente et al. Transcranial contralateral routing of the signal as a fitting option for patients with single-sided deafness
Zitelli et al. Setting the Hearing Aid Response and Verifying Signal Processing and Features with Real-Ear Probe Microphone Measures
WO2023283569A1 (en) Context-based user availability for notifications

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220712

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20240502