WO2021026126A1 - User interface for dynamic adjustment of hearing instrument settings - Google Patents



Publication number
WO2021026126A1
Authority
WO
WIPO (PCT)
Prior art keywords: hearing, marker, user, examples, processor
Application number
PCT/US2020/044847
Other languages
English (en)
Inventor
Karrie Recker
Original Assignee
Starkey Laboratories, Inc.
Application filed by Starkey Laboratories, Inc. filed Critical Starkey Laboratories, Inc.
Priority to EP20764490.7A (published as EP4011098A1)
Publication of WO2021026126A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R 25/70 Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting

Definitions

  • This disclosure relates to hearing instruments.
  • Hearing instruments are devices designed to be worn on, in, or near one or more of a user’s ears.
  • Common types of hearing instruments include hearing assistance devices (e.g., “hearing aids”), earbuds, headphones, hearables, cochlear implants, and so on.
  • a hearing instrument may be implanted or osseointegrated into a user.
  • Some hearing instruments include additional features beyond just environmental sound-amplification.
  • some modern hearing instruments include advanced audio processing for improved device functionality, controlling and programming the devices, and beamforming, and some can even communicate wirelessly with external devices including other hearing instruments (e.g., for streaming media).
  • techniques that allow for on-the-fly configuration of hearing instruments using information received from a user are introduced.
  • the disclosed techniques allow hearing instruments, as well as other companion devices, to identify hearing thresholds of the user at various frequencies.
  • the hearing instruments and other devices can determine fitting information for the hearing instruments without having to display to the user information regarding each parameter that may be adjusted to achieve a hearing instrument setting capable of accommodating the hearing thresholds of the user.
  • This disclosure describes techniques for providing a user interface (UI) that allows a user to dynamically adjust settings of a hearing instrument.
  • UI user interface
  • settings for hearing instruments may be adjusted to correspond to hearing thresholds for a user at various frequency bands.
  • a hearing threshold of a user corresponds to the minimum setting of a hearing instrument at which a user can perceive sound with respect to a frequency band, region, or discrete frequencies.
  • a hearing threshold of a user corresponds to the minimum setting of a hearing instrument at which a user can perceive sound with respect to a frequency band and/or an adjacent region thereto.
  • a hearing threshold of a user may correspond to the minimum setting at which a user can perceive sound at least with respect to a particular frequency band (e.g., with latitude on either side of the band to encompass outer frequency regions that encompass or surround the band).
  • a hearing threshold may be expressed in terms of decibels (dB).
  • dB decibels
  • a user may use the UI to identify hearing thresholds at one or more frequency bands, where a frequency band includes a region of frequencies, typically defined by a lower boundary (e.g., 1000 Hertz (Hz)) and an upper boundary (e.g., 2000 Hz).
  • a lower boundary e.g., 1000 Hertz (Hz)
  • an upper boundary e.g., 2000 Hz
  • a hearing instrument may be set to one or more profile settings for different frequencies in order for the user to perceive sound at each frequency. For example, a user having different hearing thresholds at different frequencies may use the UI to determine a setting of the hearing instrument tailored to correct for the hearing thresholds of the user.
  • a profile setting for a hearing instrument may include a combination of adjustable audio settings.
  • a hearing instrument may have one or more profile settings, in accordance with techniques described herein.
  • a hearing instrument may have a first profile setting directed to a first set of frequencies (e.g., frequencies within and adjacent to a first frequency band) and/or a second profile setting directed to a second set of frequencies (e.g., frequencies within and adjacent to a second frequency band).
  • a single profile setting may correspond to a particular environment.
  • a profile setting may be determined for particular environments, discrete frequencies, frequency regions (e.g., frequencies within and adjacent to a frequency band), different hearing instruments (e.g., left hearing instrument and/or right hearing instrument), or any combination thereof.
  • the one or more profile settings may include a combination of adjustments to gain, compression, and frequency response for sounds across an entire frequency spectrum, where the entire frequency spectrum may be separated into identifiable frequency bands and where each profile setting may target individual frequency bands.
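The per-band combination of adjustments described above can be pictured as a small data structure. This is an illustrative sketch only; the type and field names below are assumptions, not terminology from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class BandSetting:
    """Adjustments targeting one frequency band (names are illustrative)."""
    low_hz: int               # lower band boundary, e.g. 1000 Hz
    high_hz: int              # upper band boundary, e.g. 2000 Hz
    gain_db: float            # gain applied within the band
    compression_ratio: float  # e.g. 2.0 for 2:1 compression

@dataclass
class ProfileSetting:
    """A combination of adjustable audio settings across the spectrum."""
    bands: list                          # BandSetting entries per band
    frequency_translation: bool = False  # optional audio features
    noise_reduction: bool = False
    directional_mic: bool = False

# A profile targeting two identifiable frequency bands:
profile = ProfileSetting(bands=[
    BandSetting(250, 1000, gain_db=10.0, compression_ratio=1.5),
    BandSetting(1000, 2000, gain_db=20.0, compression_ratio=2.0),
])
```

Each `BandSetting` targets an individual frequency band, while the feature flags capture activations that apply profile-wide.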
  • a profile setting may include the activation of certain audio features, such as frequency translation, noise reduction, and/or directional microphones.
  • Gain generally refers to the volume of sound within a frequency band.
  • gain may refer to the difference between the input dB Sound Pressure Level (SPL) and the output dB SPL of a signal.
  • SPL Sound Pressure Level
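Under that definition, gain is the output level minus the input level in dB SPL. A minimal sketch (the function name is an illustrative assumption):

```python
def gain_db(input_db_spl: float, output_db_spl: float) -> float:
    """Gain for a signal: the difference between the output dB SPL
    and the input dB SPL."""
    return output_db_spl - input_db_spl

# A 60 dB SPL input amplified to 85 dB SPL implies 25 dB of gain.
print(gain_db(60.0, 85.0))  # 25.0
```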
  • Compression generally refers to how a hearing instrument redistributes loudness within a frequency band. For instance, a user may not be able to perceive sounds in a particular frequency that are quieter than a particular threshold. Thus, a hearing instrument can increase the volume of those quieter sounds to approximate that threshold. At the same time, however, the hearing instrument cannot simply allow sounds that are naturally at the particular threshold to remain at the particular threshold because that would eliminate the distinction in volume between sounds naturally at the threshold and sounds that are naturally below the threshold. To keep the distinction, the hearing instrument may increase the volume of sounds that naturally occur at the threshold to something louder.
  • the hearing instrument may not increase all sounds at a frequency by the same amount because that would result in uncomfortably loud sounds in that frequency (e.g., sounds that are already loud would become too loud). So, the hearing instrument can compress the volume range for the frequency into a range of volumes that the user can perceive comfortably.
  • compression refers to the variation in effective gain applied to a signal as a function of the magnitude of the signal.
  • the effective gain may be greater for small, rather than for large signals.
  • a compression ratio may be adjusted as part of the profile settings, compression ratio generally referring to the ratio of (1) the magnitude of the gain (or amplification) at a reference signal level to (2) the magnitude of the signal at a higher stated signal level. Fitting formulas may use identified hearing thresholds to make adjustments to compression and/or a compression ratio as part of a profile setting.
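The compression behavior described above (greater effective gain for quieter signals, with the volume range squeezed above a threshold) can be sketched as a static input/output curve. The threshold, ratio, and gain values here are illustrative placeholders, not values from this disclosure.

```python
def compressed_output_db(input_db: float, threshold_db: float,
                         ratio: float, gain_db: float) -> float:
    """Static compression curve: full gain below the threshold; above
    it, each extra dB of input yields only 1/ratio dB of extra output,
    so effective gain shrinks as the signal grows louder."""
    if input_db <= threshold_db:
        return input_db + gain_db
    return threshold_db + gain_db + (input_db - threshold_db) / ratio

# Effective gain is greater for the quiet signal than for the loud one:
quiet = compressed_output_db(40.0, threshold_db=50.0, ratio=2.0, gain_db=20.0)
loud = compressed_output_db(80.0, threshold_db=50.0, ratio=2.0, gain_db=20.0)
print(quiet, loud)  # 60.0 85.0  (20 dB vs. 5 dB of effective gain)
```

This keeps quiet sounds audible while preventing already-loud sounds from becoming uncomfortably loud, preserving a volume distinction between them.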
  • Frequency response generally refers to the relationship between amplitude or gain of a signal and frequency.
  • the frequency response may refer to a frequency-response curve.
  • a frequency-response curve may plot frequency against amplitude or gain and control the frequency response of a signal. Compression parameters or settings for a frequency band may affect the frequency response for the frequency band.
  • frequency response refers to the output level of X for a frequency when given an input level of Y for the frequency.
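That input/output view of frequency response can be sketched as a per-band lookup along a frequency-response curve. The band gains below are hypothetical placeholders:

```python
# Points on a frequency-response curve: gain (dB) applied per band
# center frequency (values are illustrative only).
response_gain_db = {250: 5.0, 1000: 15.0, 2000: 20.0, 4000: 25.0}

def output_level(freq_hz: int, input_db: float) -> float:
    """Output level X for a frequency, given an input level Y, per the
    band's point on the frequency-response curve."""
    return input_db + response_gain_db[freq_hz]

print(output_level(2000, 50.0))  # 70.0
```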
  • the disclosed techniques may allow a hearing instrument to update multiple settings for one or more frequencies based on a single input from a user. This may include adjustments to the gain, compression, frequency response and other hearing instrument parameters and features (e.g., frequency translation/compression, noise reduction, directional microphones, etc.).
  • the gain, compression, frequency response and other hearing instrument parameters and features e.g., frequency translation/compression, noise reduction, directional microphones, etc.
  • the UI may only present limited frequency band information as a visual aid for the user, where those frequency bands do not necessarily correspond to the entire frequency spectrum for which parameters and features would be adjusted.
  • the adjusted parameters may be captured as one profile setting or multiple profile settings.
  • a processor of a hearing instrument may apply the profile settings directly to audio signals as the signals enter the hearing instrument.
  • a secondary device (e.g., a smartphone, smart television, radio, mobile device, etc.) may condition the audio signal for the hearing instruments.
  • the secondary device may then transmit the conditioned audio signal to the hearing instruments.
  • the user may make multiple adjustments to determine a profile setting, at which point, any parameters used to determine the profile setting could be transmitted to and programmed into the hearing instruments.
  • processor(s) may apply transfer functions to the profile settings to account for differences between live signals and streamed signals. The hearing instrument may use less power and memory resources than the hearing instrument would otherwise use while, for example, continuously modifying the hearing instrument parameters as audio signals enter the hearing instruments.
  • the disclosed techniques provide a mapping of individual hearing thresholds that correspond to control indicator marker values presented via a UI that a user may adjust.
  • the control indicators may be each allocated to various frequency bands such that the mapping is specific to those frequency bands.
  • by adjusting a marker of a control indicator on a UI, a user may simultaneously affect multiple sound parameters of incoming sound media, such as gain, compression, and frequency response, without having to manually adjust each of those parameters individually. In this way, the user may be able to self-program the hearing instruments to compensate for his/her hearing loss to his/her satisfaction.
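The mapping idea above, where a single marker adjustment drives several sound parameters at once, can be sketched as follows. The marker-to-threshold table and the fitting rules are hypothetical placeholders, not formulas from this disclosure.

```python
# Hypothetical mapping from UI marker values to hearing thresholds (dB).
MARKER_TO_THRESHOLD = {0: 10.0, 1: 25.0, 2: 40.0, 3: 55.0, 4: 70.0}

def parameters_for_marker(marker_value: int) -> dict:
    """One marker adjustment yields gain and compression settings
    together, so the user never touches each parameter individually.
    The scaling rules here are placeholder fitting formulas."""
    threshold = MARKER_TO_THRESHOLD[marker_value]
    return {
        "hearing_threshold_db": threshold,
        "gain_db": threshold * 0.5,                 # placeholder rule
        "compression_ratio": 1.0 + threshold / 100.0,  # placeholder rule
    }

params = parameters_for_marker(2)
print(params["gain_db"])  # 20.0
```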
  • a method including providing a user interface by a device configured to interface with a hearing instrument, the user interface including a plurality of control indicators that each correspond to a frequency band, the control indicators each including markers that are individually positioned along the control indicators to indicate marker values, determining an initial marker value for a first control indicator based at least in part on an initial position of a first marker, determining that a change in state has occurred with respect to the initial marker value, determining a first adjusted marker value for the first control indicator based at least in part on an adjusted position of the first marker, accessing a mapping that identifies one or more relationships between marker values and hearing thresholds, identifying, from the mapping, a hearing threshold that corresponds to the first adjusted marker value, determining one or more settings to configure the hearing instrument based at least in part on the hearing threshold, and storing the one or more settings for the hearing instrument to a memory device.
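The sequence of steps in the method above can be sketched as a single handler: detect a change in marker state, map the adjusted marker value to a hearing threshold, derive a setting, and store it. All names and the tabular mapping are illustrative assumptions.

```python
def on_marker_moved(ui_state: dict, mapping: dict, memory: list):
    """Sketch of the claimed flow for one control indicator."""
    initial = ui_state["initial_marker_value"]
    adjusted = ui_state["adjusted_marker_value"]
    if adjusted == initial:
        return None                   # no change in state has occurred
    threshold = mapping[adjusted]     # hearing threshold from mapping
    setting = {"band": ui_state["band"], "threshold_db": threshold}
    memory.append(setting)            # store the setting for the instrument
    return setting

memory = []
setting = on_marker_moved(
    {"initial_marker_value": 1, "adjusted_marker_value": 3, "band": "1-2 kHz"},
    {0: 10.0, 1: 25.0, 2: 40.0, 3: 55.0},
    memory,
)
print(setting)  # {'band': '1-2 kHz', 'threshold_db': 55.0}
```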
  • a device configured to determine hearing instrument settings.
  • the device includes a memory configured to store a mapping that identifies one or more relationships between marker values and hearing thresholds, and one or more processors coupled to the memory, and configured to provide a user interface including a plurality of control indicators that each correspond to a frequency band, the control indicators each including markers that are individually positioned along the control indicators to indicate marker values, determine an initial marker value for a first control indicator based at least in part on an initial position of a first marker, determine that a change in state has occurred with respect to the initial marker value, determine a first adjusted marker value for the first control indicator based at least in part on an adjusted position of the first marker, access the mapping that identifies the one or more relationships between marker values and hearing thresholds, identify, from the mapping, a hearing threshold that corresponds to the first adjusted marker value, and determine one or more settings to configure the hearing instrument based at least in part on the hearing threshold.
  • a method including providing a user interface by a device configured to interface with a hearing instrument, the user interface including a control indicator that corresponds to a frequency band, the control indicator including a marker positioned along the control indicator to indicate a marker value, determining an initial marker value for the control indicator based at least in part on an initial position of the marker, determining an adjusted marker value for the control indicator based at least in part on an adjusted position of the marker, accessing a mapping that identifies one or more relationships between marker values and hearing thresholds, identifying, from the mapping, a hearing threshold that corresponds to the adjusted marker value, and determining one or more settings to configure the hearing instrument based at least in part on the hearing threshold.
  • a device configured to determine hearing instrument settings, the device including a memory configured to store a mapping that identifies one or more relationships between marker values and hearing thresholds, and one or more processors coupled to the memory, and configured to provide a user interface including a control indicator that corresponds to a frequency band, the control indicator including a marker that is positioned along the control indicator to indicate a marker value, determine an initial marker value for the control indicator based at least in part on an initial position of the marker, determine an adjusted marker value for the control indicator based at least in part on an adjusted position of the marker, access the mapping that identifies the one or more relationships between marker values and hearing thresholds, identify, from the mapping, a hearing threshold that corresponds to the adjusted marker value, and determine one or more settings to configure the hearing instrument based at least in part on the hearing threshold.
  • FIG. 1 is a conceptual diagram illustrating an example system that includes one or more hearing instrument(s), in accordance with one or more techniques of this disclosure.
  • FIG. 2 is a block diagram illustrating example components of a hearing instrument, in accordance with one or more aspects of this disclosure.
  • FIG. 3 is a block diagram illustrating example components of a computing device, in accordance with one or more aspects of this disclosure.
  • FIG. 4A is a sample user interface (UI) illustrating example control indicators, in accordance with one or more aspects of this disclosure.
  • FIG. 4B is an illustrative visual depiction of a hearing threshold map, in accordance with one or more aspects of this disclosure.
  • FIG. 4C is an example of a predicted real-ear response, in accordance with one or more aspects of this disclosure.
  • FIG. 5A is a sample UI illustrating example control indicators, in accordance with one or more aspects of this disclosure.
  • FIG. 5B is an illustrative visual depiction of a hearing threshold map, in accordance with one or more aspects of this disclosure.
  • FIG. 5C is an example of a predicted real-ear response, in accordance with one or more aspects of this disclosure.
  • FIG. 5D is an illustrative visual depiction of a hearing threshold map, in accordance with one or more aspects of this disclosure.
  • FIG. 5E is an example of a predicted real-ear response, in accordance with one or more aspects of this disclosure.
  • FIG. 6A is a sample UI illustrating example control indicators, in accordance with one or more aspects of this disclosure.
  • FIG. 6B is an illustrative visual depiction of a hearing threshold map, in accordance with one or more aspects of this disclosure.
  • FIG. 6C is an example of a predicted real-ear response, in accordance with one or more aspects of this disclosure.
  • FIG. 7A is a sample UI illustrating example control indicators, in accordance with one or more aspects of this disclosure.
  • FIG. 7B is an illustrative visual depiction of a hearing threshold map, in accordance with one or more aspects of this disclosure.
  • FIG. 7C is an example of a predicted real-ear response, in accordance with one or more aspects of this disclosure.
  • FIG. 8A is a sample UI illustrating example control indicators, in accordance with one or more aspects of this disclosure.
  • FIG. 8B is an illustrative visual depiction of a hearing threshold map, in accordance with one or more aspects of this disclosure.
  • FIG. 8C is an example of a predicted real-ear response, in accordance with one or more aspects of this disclosure.
  • FIG. 9 is a sample UI, in accordance with one or more aspects of this disclosure.
  • FIG. 10 is a flowchart illustrating an example operation in accordance with one or more example techniques described in this disclosure.
  • FIG. 11 is a flowchart illustrating an example operation in accordance with one or more example techniques described in this disclosure.
  • FIG. 1 is a conceptual diagram illustrating an example system 100 that includes hearing instruments 102A, 102B, in accordance with one or more techniques of this disclosure.
  • This disclosure may refer to hearing instruments 102A and 102B collectively as “hearing instruments 102.”
  • a user 104 may wear hearing instruments 102.
  • user 104 may wear a single hearing instrument.
  • the user may wear two hearing instruments, with one hearing instrument for each ear of the user.
  • Hearing instruments 102 may include one or more of various types of devices that are configured to provide auditory stimuli to user 104 and that are designed for wear and/or implantation at, on, or near an ear of the user. Hearing instruments 102 may be worn, at least partially, in the ear canal or concha. One or more of hearing instruments 102 may include behind-the-ear (BTE) components that are worn behind the ears of user 104. In some examples, hearing instruments 102 include devices that are at least partially implanted into or osseointegrated with the skull of the user. In some examples, one or more of hearing instruments 102 is able to provide auditory stimuli to user 104 via a bone conduction pathway.
  • BTE behind-the-ear
  • each of hearing instruments 102 may include a hearing assistance device.
  • Hearing assistance devices include devices that help user 104 perceive sounds in the environment of user 104.
  • Example types of hearing assistance devices may include hearing aid devices, Personal Sound Amplification Products (PSAPs), hearables, healthables, cochlear implant systems (which may include cochlear implant magnets, cochlear implant transducers, and cochlear implant processors), and so on.
  • PSAPs Personal Sound Amplification Products
  • hearing instruments 102 are over-the-counter, direct-to-consumer, or prescription devices.
  • hearing instruments 102 include devices that provide auditory stimuli to the user that correspond to artificial sounds or sounds that are not naturally in the user’s environment, such as recorded music, computer-generated sounds, or other types of sounds.
  • hearing instruments 102 may include so-called “hearables,” earbuds, earphones, or other types of devices.
  • Some types of hearing instruments provide auditory stimuli to the user corresponding to sounds from the user’s environment and also artificial sounds.
  • one or more of hearing instruments 102 includes a housing or shell that is designed to be worn in the ear for both aesthetic and functional reasons and encloses the electronic components of the hearing instrument.
  • Such hearing instruments may be referred to as in-the-ear (ITE), in-the-canal (ITC), completely-in-the-canal (CIC), or invisible-in-the-canal (IIC) devices.
  • ITE in-the-ear
  • ITC in-the-canal
  • CIC completely-in-the-canal
  • IIC invisible-in-the-canal
  • one or more of hearing instruments 102 may be BTE devices, which include a housing worn behind the ear that contains all of the electronic components of the hearing instrument, including the receiver (e.g., a speaker). The receiver conducts sound to an earbud inside the ear via an audio tube.
  • hearing instruments 102 may be receiver-in-canal (RIC) hearing-assistance devices, which include a housing worn behind the ear that contains electronic components and a housing worn in the ear canal that contains the receiver.
  • Hearing instruments 102 may implement a variety of features that help user 104 hear better. For example, hearing instruments 102 may amplify the intensity of incoming sound, amplify the intensity of certain frequencies of the incoming sound, or translate or compress frequencies of the incoming sound.
  • hearing instruments 102 may implement a directional processing mode in which hearing instruments 102 selectively amplify sound originating from a particular direction (e.g., to the front of the user) while potentially fully or partially canceling sound originating from other directions.
  • a directional processing mode may selectively attenuate off-axis unwanted sounds.
  • the directional processing mode may help users understand conversations occurring in crowds or other noisy environments.
  • hearing instruments 102 may use beamforming or directional processing cues to implement or augment directional processing modes.
  • hearing instruments 102 may reduce noise by canceling out or attenuating certain frequencies. Furthermore, in some examples, hearing instruments 102 may help user 104 enjoy audio media, such as music or sound components of visual media, by outputting sound based on audio data wirelessly transmitted to hearing instruments 102.
  • Hearing instruments 102 may be configured to communicate with each other.
  • hearing instruments 102 may communicate with each other using one or more wireless communication technologies.
  • Example types of wireless communication technology include Near-Field Magnetic Induction (NFMI) technology, a 900 megahertz (MHz) technology, a BLUETOOTH™ technology, a Wi-Fi™ technology, radio frequency (RF) technology, audible sound signals, ultrasonic communication technology, infrared communication technology, an inductive communication technology, or another type of communication that does not rely on wires to transmit signals between devices.
  • hearing instruments 102 use a 2.4 GHz frequency band for wireless communication.
  • hearing instruments 102 may communicate with each other via non-wireless communication links, such as via one or more cables, direct electrical contacts, and so on.
  • system 100 may also include a computing system 108.
  • Computing system 108 includes one or more computing devices, each of which may include one or more processors.
  • computing system 108 includes devices 106A through 106N (collectively, “devices 106”).
  • Devices 106 may include various types of devices, such as one or more mobile devices, server devices (e.g., wired or wireless remote servers), tablets, personal computer devices, handheld devices, virtual reality (VR) headsets, wireless access points, smart speaker devices, smart televisions, radio devices, medical alarm devices, smart key fobs, smartwatches and other wearable devices, smartphones, internet-of-things (IoT) devices, such as voice-activated network devices, motion or presence sensor devices, smart displays, screen-enhanced smart speakers, wireless routers, wireless communication hubs, prosthetic devices, mobility devices, special-purpose devices, mesh networks, cloud computers, and/or other types of devices.
  • server devices e.g., wired or wireless remote servers
  • VR virtual reality
  • devices 106 may include personal computing devices of user 104, such as mobile phones or smartwatches of user 104.
  • devices 106 may include computing devices running on-board a vehicle (e.g., car, plane, boat, etc.).
  • hearing instruments 102 may be paired to an audio streaming device that may transmit audio from a vehicle to hearing instruments 102.
  • multiple devices 106 may be used in conjunction with one another.
  • hearing instrument 102A may be paired to a smart television and a personal mobile phone, while hearing instrument 102B may be paired to a smartwatch.
  • each of hearing instruments 102 may communicate data between one another.
  • hearing instrument 102A may relay data received from the personal mobile phone to hearing instrument 102B and hearing instrument 102B may relay data received from the smartwatch to hearing instrument 102A.
  • devices 106 may include accessory devices.
  • Accessory devices may include devices that are configured specifically for use with hearing instruments 102.
  • Example types of accessory devices may include charging cases for hearing instruments 102, storage cases for hearing instruments 102, media streamer devices, phone streamer devices, external microphone devices, remote controls for hearing instruments 102, and other types of devices specifically designed for use with hearing instruments 102.
  • Actions described in this disclosure as being performed by computing system 108 may be performed by one or more of the computing devices of computing system 108.
  • One or more of hearing instruments 102 may communicate with computing system 108 using wireless or non-wireless communication links. For instance, hearing instruments 102 may communicate with computing system 108 using any of the example types of communication technologies described elsewhere in this disclosure.
  • User 104 may have one or more hearing instruments 102.
  • user 104 may have hearing instruments 102A and 102B to be worn on the right and left ear of user 104.
  • user 104 may have multiple hearing instruments 102 for the same ear.
  • hearing instrument 102A may be configured for everyday wear on the right ear of user 104.
  • hearing instrument 102B may be configured for use in the same ear but specialized for a particular activity (e.g., swimming, listening to music, etc.).
  • user 104 may configure hearing instruments 102 so that hearing instruments 102 may meet the specific hearing needs of user 104.
  • computing system 108 provides user 104 with the ability to customize and configure hearing instruments 102.
  • device 106A may be a mobile phone of user 104.
  • Device 106A may present a UI that user 104 may use during a configuration process for one, both, or several of hearing instruments 102.
  • the configuration process described herein may be used to determine a first profile setting for hearing instrument 102A and a second profile setting for hearing instrument 102B.
  • the configuration process for each hearing instrument may be done in parallel or separately.
  • user 104 may configure a right hearing instrument in a first instance of a configuration process and a left hearing instrument in a second instance.
  • Device 106A may transmit a profile setting to hearing instruments 102 once the profile setting is determined through a configuration process as described herein.
  • a first hearing instrument may include a left hearing instrument and a second hearing instrument may include a right hearing instrument.
  • devices 106 or hearing instruments 102 may determine a first of one or more settings for the left hearing instrument and determine a second of one or more settings for the right hearing instrument (e.g., right hearing instrument settings).
  • FIG. 2 is a block diagram illustrating example components of hearing instrument 200, in accordance with one or more aspects of this disclosure.
  • Hearing instrument 200 may be either one of hearing instruments 102.
  • hearing instrument 200 includes one or more storage devices 202, one or more communication unit(s) 204, a receiver 206, one or more processor(s) 208, one or more microphone(s) 210, a set of sensors 212, a power source 214, and one or more communication channels 216.
  • Communication channels 216 provide communication between storage devices 202, communication unit(s) 204, receiver 206, processor(s) 208, microphone(s) 210, and sensors 212.
  • Components 202, 204, 206, 208, 210, and 212 may draw electrical power from power source 214.
  • each of components 202, 204, 206, 208, 210, 212, 214, and 216 are contained within a single housing 218.
  • components 202, 204, 206, 208, 210, 212, 214, and 216 may be distributed among two or more housings.
  • receiver 206 and one or more of sensors 212 may be included in an in-ear housing separate from a BTE housing that contains the remaining components of hearing instrument 200.
  • a RIC cable may connect the two housings.
  • sensors 212 include an inertial measurement unit (IMU) 226 that is configured to generate data regarding the motion of hearing instrument 200.
  • IMU 226 may include a set of sensors.
  • IMU 226 includes one or more of accelerometers 228, a gyroscope 230, a magnetometer 232, combinations thereof, and/or other sensors for determining the motion of hearing instrument 200.
  • hearing instrument 200 may include one or more additional sensors 236.
  • Additional sensors 236 may include a photoplethysmography (PPG) sensor, blood oximetry sensors, blood pressure sensors, electrocardiograph (EKG) sensors, body temperature sensors, electroencephalography (EEG) sensors, environmental temperature sensors, environmental pressure sensors, environmental humidity sensors, skin galvanic response sensors, and/or other types of sensors.
  • PPG photoplethysmography
  • EKG electrocardiograph
  • EEG electroencephalography
  • hearing instrument 200 and sensors 212 may include more, fewer, or different components.
  • Storage devices 202 may store data.
  • storage devices 202 may store a hearing threshold mapping 220 that identifies one or more relationships between marker values and hearing thresholds.
  • a marker value of 1 may relate to a specific hearing threshold value, whereas a marker value of 2 may relate to a different hearing threshold value, with the mapping delineating those corresponding relationships in a single data structure or as multiple data structures.
  • the one or more mapping relationships may, additionally or alternatively, define equations or mathematical links between marker values and hearing thresholds or profile settings themselves, in accordance with one or more of the techniques disclosed herein.
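The mapping relationships described above can be illustrated with a short sketch. All marker values, threshold values, and the interpolation rule below are invented for illustration and are not specified by this disclosure:

```python
# Hypothetical sketch of a hearing threshold mapping such as mapping 220:
# each marker value relates to a hearing threshold (dB HL). All numbers
# here are illustrative placeholders, not values from the disclosure.

HEARING_THRESHOLD_MAPPING = {
    # marker value: hearing threshold in dB HL
    1: 10,
    2: 25,
    3: 40,
    4: 55,
    5: 70,
}

def threshold_for_marker(marker_value: float) -> float:
    """Return a hearing threshold for a marker value, linearly
    interpolating between mapped points (the disclosure also allows
    equations or mathematical links rather than discrete pairs)."""
    keys = sorted(HEARING_THRESHOLD_MAPPING)
    if marker_value <= keys[0]:
        return HEARING_THRESHOLD_MAPPING[keys[0]]
    if marker_value >= keys[-1]:
        return HEARING_THRESHOLD_MAPPING[keys[-1]]
    for lo, hi in zip(keys, keys[1:]):
        if lo <= marker_value <= hi:
            # Interpolate between the two surrounding mapped thresholds.
            t = (marker_value - lo) / (hi - lo)
            y0 = HEARING_THRESHOLD_MAPPING[lo]
            y1 = HEARING_THRESHOLD_MAPPING[hi]
            return y0 + t * (y1 - y0)
```

A discrete table and an interpolating function are both consistent with the "single or multiple data structures" language above; an equation-based mapping would simply replace the table lookup.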
  • Storage devices 202 may include volatile memory and may therefore not retain stored contents when powered off. Examples of volatile memories may include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. Storage devices 202 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memory configurations may include magnetic hard discs, optical discs, floppy discs, flash memories, read-only memory (ROM), or forms of electrically programmable memories (EPROM) or electrically erasable and programmable memories (EEPROM).
  • Communication unit(s) 204 may enable hearing instrument 200 to send data to and receive data from one or more other devices, such as another hearing instrument, an accessory device, a mobile device, or another type of device.
  • Communication unit(s) 204 may enable hearing instrument 200 to communicate using wireless or non-wireless communication technologies.
  • communication unit(s) 204 enable hearing instrument 200 to communicate using one or more of various types of wireless technology, such as a BLUETOOTHTM technology, third generation (3G) communications, fourth generation (4G) communications, 4G Long Term Evolution (LTE), fifth generation (5G) communications, ZigBee, Wi-FiTM, NFMI, ultrasonic communication, infrared (IR) communication, or another wireless communication technology.
  • communication unit(s) 204 may enable hearing instrument 200 to communicate using a cable-based technology, such as a Universal Serial Bus (USB) technology.
  • Receiver 206 includes one or more speakers for generating audible sound.
  • Microphone(s) 210 detects incoming sound and generates one or more electrical signals (e.g., an analog or digital electrical signal) representing the incoming sound.
  • Processor(s) 208 may be processing circuits configured to perform various activities.
  • processor(s) 208 may process the signal generated by microphone(s) 210 to enhance, amplify, or cancel out particular channels within the incoming sound.
  • Processor(s) 208 may then cause receiver 206 to generate sound based on the processed signal.
  • processor(s) 208 include one or more digital signal processors (DSPs).
  • processor(s) 208 may cause communication unit(s) 204 to transmit one or more of various types of data.
  • processor(s) 208 may cause communication unit(s) 204 to transmit data to computing system 108.
  • communication unit(s) 204 may receive audio data from computing system 108 and processor(s) 208 may cause receiver 206 to output sound based on the audio data.
  • computing system 108 may be used to fit, configure and/or customize hearing instrument 200.
  • receiver 206 may generate audible sound at different frequencies for user 104 to listen for during the process.
  • user 104 may use a UI on one of devices 106 to fit, configure, and/or customize hearing instrument 200 in response to the generated audible sound.
  • device 106A may receive input from user 104 via the UI.
  • Device 106A may identify hearing thresholds for user 104 with respect to frequency bands using a mapping that links or draws a connection between UI input values (e.g., marker values, adjustments to marker values, etc.) and hearing threshold values.
  • device 106A or hearing instrument 200 may generate a configuration file (e.g., a profile setting) based on input received from user 104.
  • Processor(s) 208 of hearing instrument 200 may receive the configuration file from device 106A (e.g., via communication unit(s) 204).
  • the configuration file specifies a profile setting specific to user 104 that corresponds to the hearing thresholds of user 104 with respect to predefined frequency bands.
  • Processor(s) 208 may use the profile setting to determine how an audio signal received through microphone(s) 210 should be conditioned based on the hearing thresholds of user 104.
  • processor(s) 208 may fit, configure, and/or customize hearing instrument 200 for user 104.
  • devices 106 may use the profile settings to condition outgoing audio signals (e.g., a streaming audio signal) that may be transmitted from one of devices 106 to hearing instrument 200.
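As a concrete, purely hypothetical sketch of the flow above — per-band UI marker values mapped to hearing thresholds and then turned into a profile setting — the half-gain rule and all names and values here are placeholders, not the disclosed fitting method:

```python
# Illustrative sketch (names and gain rule are assumptions): turning
# per-band marker values from the UI into a profile setting that a
# hearing instrument could use to condition audio.

from dataclasses import dataclass

@dataclass
class ProfileSetting:
    # per-band gain in dB, keyed by frequency band name
    band_gains: dict

def build_profile_setting(marker_values: dict, mapping: dict) -> ProfileSetting:
    """Map each band's marker value to a hearing threshold, then pick a
    gain with a simple half-gain heuristic (used here only as a
    placeholder for whatever fitting rule the device applies)."""
    gains = {}
    for band, marker in marker_values.items():
        threshold_db = mapping[marker]      # marker value -> hearing threshold
        gains[band] = threshold_db / 2.0    # half-gain placeholder rule
    return ProfileSetting(band_gains=gains)

setting = build_profile_setting(
    {"low": 2, "mid": 3, "high": 4},
    {1: 10, 2: 25, 3: 40, 4: 55, 5: 70},
)
print(setting.band_gains)  # {'low': 12.5, 'mid': 20.0, 'high': 27.5}
```

The resulting `ProfileSetting` stands in for the configuration file transmitted to the hearing instrument via communication unit(s) 204.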
  • FIG. 3 is a block diagram illustrating example components of computing device 300, in accordance with one or more aspects of this disclosure.
  • FIG. 3 illustrates only one particular example of computing device 300, and many other example configurations of computing device 300 exist.
  • Computing device 300 may be one of computing devices 106A-106N in computing system 108 (FIG. 1).
  • computing device 300 includes one or more processor(s) 302, one or more communication unit(s) 304, one or more input device(s) 308, one or more output device(s) 310, a display screen 312, a power source 314, one or more storage device(s) 316, and one or more communication channels 318.
  • Computing device 300 may include other components.
  • computing device 300 may include physical buttons, microphones, speakers, communication ports, and so on.
  • Communication channel(s) 318 may interconnect each of components 302, 304, 308, 310, 312, and 316 for inter-component communications (physically, communicatively, and/or operatively).
  • communication channel(s) 318 may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data.
  • Power source 314 may provide electrical energy to components 302, 304, 308, 310, 312 and 316.
  • Storage device(s) 316 may store information required for use during operation of computing device 300.
  • storage device(s) 316 have the primary purpose of being a short-term and not a long-term computer-readable storage medium.
  • Storage device(s) 316 may be volatile memory and may therefore not retain stored contents when powered off.
  • Storage device(s) 316 may further be configured for long term storage of information as non-volatile memory space and retain information after power on/off cycles.
  • storage device(s) 316 may store information pertaining to user 104, such as profile settings, environment indicators, history logs, etc. and may also store other information, such as Wi-FiTM passwords, remembered BLUETOOTHTM devices for pairing purposes, and so forth.
  • computing device 300 may be a remote cloud server or a mobile device that stores all or a portion of the information.
  • storage devices 202 of hearing instrument 200 may store the same type of information or may store duplicates of information stored by storage device(s) 316.
  • storage device(s) 316 may also store a hearing threshold mapping 326 that identifies one or more relationships between marker values and hearing thresholds.
  • hearing threshold mapping 326 and hearing threshold mapping 220 may or may not be identical, such as where the mappings may need to be synchronized following a routine software update.
  • hearing instrument 200 may transmit information to computing device 300 and vice versa so that hearing instrument 200 and computing device 300 may share information pertaining to the use of hearing instrument 200 in any setting.
  • hearing instrument 200 may transmit mapping data, marker data (e.g., adjusted marker values), settings data, environment data, etc. to computing device 300.
  • processor(s) 302 may transmit a mapping, such as a hearing threshold mapping, to another device for utilization and/or further processing.
  • computing devices 300 may also transmit information between each other in performing the disclosed techniques.
  • one computing device 300 may be a remote server having the hearing threshold mapping 326 stored thereon, and may transmit information, such as hearing threshold mapping 326 to another computing device 300 or to hearing instrument 200 directly.
  • computing device 300 may include a cloud server in which the hearing threshold mapping 326 may be stored on the cloud server network on one or more storage device(s) 316.
  • processor(s) 302 on computing device 300 may read and execute instructions stored by storage device(s) 316 or storage devices 202.
  • processor(s) 208 of hearing instrument 200 may read and execute instructions stored by storage device(s) 316 or storage devices 202.
  • Processor(s) 302 may receive and register input via the UI, render updates to the UI (e.g., changes in setting values), and process input data to determine profile settings.
  • processor(s) 302 may generate or otherwise retrieve UI data in the form of computer-readable instructions.
  • the UI data may include instructions that cause processor(s) 302 to render a UI, such as one of the UIs described herein, on a display screen 312 or to present the UI via output devices 310.
  • processor(s) 208 may render a UI on a display device of hearing instrument 200 (not shown).
  • Processor(s) 302 may also coordinate data transmission and timing between devices 106 and hearing instruments 102 described herein.
  • Computing device 300 may include one or more input device(s) 308 that computing device 300 uses to receive user input. Examples of user input include tactile, audio, and video user input. Input device(s) 308 may include presence-sensitive screens, touch-sensitive screens, mice, keyboards, voice responsive systems, microphones or other types of devices for detecting input from a human or machine.
  • computing device 300 may include a VR headset that receives input from user 104 through gaze detection and tracking. In such examples, computing device 300 may use eye movements, in conjunction with other bodily movements (e.g., hand movements), to receive and process input from user 104 via a VR UI.
  • computing device 300 may include an augmented reality (AR) headset, a mixed reality headset, or other wearable device, such as a watch, ring, and so forth.
  • computing device 300 may be coupled to a separate audio headset configured to assist user 104 in self-fitting a hearing instrument.
  • user 104 may use computing device 300 coupled to one or more dummy earpieces (e.g., hearing instrument 200), such as an over-the-ear audio headset used primarily for fitting purposes (e.g., in-store, at a nursing home, at a personal residence, etc.), to allow user 104 to perform a self-fitting process before taking physical possession of, or otherwise acquiring, a personal hearing instrument 200 that user 104 would use on a more permanent basis.
  • user 104 may use a UI on computing device 300 to arrive at a setting with respect to the dummy earpiece that may then be stored to computing device 300 or the dummy earpiece itself.
  • processor(s) 302 may provide a UI on display screen 312. Processor(s) 302 may receive input from user 104 via the UI. The received input may cause processor(s) 302 to determine a profile setting for hearing instrument 200. In some examples, processor(s) 302 may aggregate details based on the user input and/or derive patterns of the user input (e.g., through one or more machine learning algorithms or artificial intelligence techniques), before determining a profile setting.
  • computing device 300 or the dummy earpiece may aid user 104 in determining the profile setting for a personal hearing instrument of user 104
  • Computing device 300 or the dummy earpiece may determine a profile setting for hearing instrument 200 in accordance with techniques disclosed herein.
  • computing device 300 or the dummy earpiece may load the profile setting to the personal hearing instrument of user 104.
  • user 104 may have acquired multiple sets of hearing instruments 102 that need to be configured (e.g., spare sets for the car, for outdoors, etc.). Using the UIs disclosed herein, or similar UIs to those disclosed herein, user 104 may be able to configure each set simultaneously without having to put on any one of the hearing instruments themselves.
  • user 104 may configure each hearing instrument 200 separately depending on the environment in which hearing instrument 200 is to be worn.
  • Processor(s) 302 may store multiple profile settings to storage device(s) 316 for the various environments.
  • processor(s) 302 may store profile settings irrespective of the environment in which the profile setting was determined.
  • hearing instrument 200 may implement multiple setting profiles simultaneously.
  • hearing instrument 200 may be paired with one of devices 106 (e.g., a smart television, radio, vehicle, mobile device), where device 106 is streaming audio to hearing instrument 200 (e.g., via a BLUETOOTHTM connection).
  • hearing instrument 200 may be receiving other audio (e.g., a nearby conversation) that hearing instrument 200 may condition for user 104 in a different way relative to, for example, audio being streamed from device 106.
  • hearing instrument 200 may implement a first profile setting that conditions the streaming audio as it is received from device 106 and a second profile setting that conditions audio received from other sources.
  • hearing instrument 200 may receive instructions from user 104 via computing device 300 to implement a music setting for audio received from a radio device and another setting for conditioning nearby human speech.
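A minimal sketch of the dual-profile idea above, with invented setting names, sources, and gain values — streamed audio and microphone audio are conditioned by different profile settings at the same time:

```python
# Sketch (all names and numbers hypothetical) of applying different
# profile settings to different audio sources simultaneously: streamed
# audio gets a "music" setting while microphone audio gets a "speech"
# setting.

PROFILE_SETTINGS = {
    "music":  {"low": 6.0, "mid": 3.0, "high": 1.0},    # per-band gain, dB
    "speech": {"low": 2.0, "mid": 8.0, "high": 10.0},
}

SOURCE_TO_SETTING = {
    "stream": "music",        # audio streamed from a paired device 106
    "microphone": "speech",   # nearby conversation picked up by the mic
}

def condition(source: str, band_levels: dict) -> dict:
    """Apply the per-band gains of the profile setting bound to this
    audio source, returning conditioned per-band levels in dB."""
    gains = PROFILE_SETTINGS[SOURCE_TO_SETTING[source]]
    return {band: level + gains[band] for band, level in band_levels.items()}
```

Keeping a source-to-setting table like `SOURCE_TO_SETTING` is one simple way a device could implement "a first profile setting" for streamed audio and "a second profile setting" for ambient audio.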
  • computing device 300 or hearing instrument 200 may detect or receive an indication of a change in the environment of user 104.
  • An environment may be location- or time-based (e.g., an evening environment, weekend environment, etc.).
  • Computing device 300 or hearing instrument 200 may automatically access a corresponding setting profile to configure hearing instrument 200.
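The automatic, environment-driven profile selection described above might look like the following sketch; the environment labels, feature thresholds, and profile names are all assumptions standing in for the frequency- or location-based detection the disclosure describes:

```python
# Hedged sketch of automatic, environment-based profile selection: a
# toy classifier detects an environment, then the corresponding setting
# profile is looked up. All labels and thresholds are invented.

ENVIRONMENT_PROFILES = {
    "restaurant": "noise_reduction",
    "outdoor": "wind_block",
    "evening": "quiet",
}

def classify_environment(ambient_db: float, hour: int) -> str:
    """Toy stand-in for frequency-, GPS-, or RF-based detection."""
    if ambient_db > 70:
        return "restaurant"
    if ambient_db > 55:
        return "outdoor"
    return "evening" if hour >= 18 else "outdoor"

def profile_for(ambient_db: float, hour: int) -> str:
    """Return the name of the setting profile to load automatically."""
    return ENVIRONMENT_PROFILES[classify_environment(ambient_db, hour)]
```

In a real device the classifier would be driven by sensor and location data rather than two scalar inputs, but the lookup step — detected environment to stored setting profile — is the behavior described here.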
  • Processor(s) 208 may configure hearing instrument 200 based on the setting profile for use in the environment.
  • hearing instrument 200 may receive an affirmative request from user 104 to load a particular profile setting.
  • hearing instrument 200 may receive the request from computing device 300.
  • computing device 300 may first receive the request from user 104 and relay the request to hearing instrument 200.
  • computing device 300 may retrieve the profile setting from storage device(s) 316 or from another of storage device(s) 316 of another computing device 300 upon receiving the request, and transmit the profile setting to hearing instrument 200.
  • processor(s) 208 may configure hearing instrument 200 so as to implement the retrieved profile setting.
  • processor(s) 302 or processor(s) 208 may receive indications from user 104 that computing device 300 and/or hearing instrument 200 have permission to automatically detect an environment of user 104 and automatically load a profile setting to hearing instrument 200 based on the detected environment.
  • hearing instrument 200 may detect certain frequencies that indicate user 104 has entered a particular type of environment (e.g., an outdoor area with a freeway or noisy street nearby, a sports stadium, a restaurant, etc.).
  • processor(s) 302 may use location information, such as global satellite navigation system information (e.g., Global Positioning System (GPS) information) or RF signal detection, to automatically detect when user 104 is in a particular area (e.g., near a freeway, nearing a sports stadium, etc.).
  • hearing instrument 200 may suggest to user 104 that a new profile setting may be desirable based on the newly detected environment.
  • hearing instrument 200 may detect an environment in which no profile setting corresponds to the type of sound detected in the environment (e.g., an outdoor concert, a restaurant with live music, etc.).
  • Hearing instrument 200 may transmit a message that is to appear on computing device 300.
  • the message may suggest to user 104 that user 104 should complete a new fitting or configuration process based on the newly detected environment.
  • computing device 300 may detect the new environment and display the message on display screen 312, rather than receiving the message from hearing instrument 200.
  • the computing device 300 and the hearing instrument 200 may both detect the environment.
  • computing device 300 or hearing instrument 200 may identify the setting as corresponding to the detected environment. For example, computing device 300 or hearing instrument 200 may store the setting as corresponding to a music environment, an indoor environment, a restaurant environment, etc.
  • Communication unit(s) 304 may enable computing device 300 to send data to and receive data from one or more other computing devices (e.g., via a communications network, such as a local area network or the Internet).
  • communication unit(s) 304 may be configured to receive source data exported by hearing instrument(s) 102, receive comment data generated by user 104 of hearing instrument(s) 102, receive and send request data, receive and send messages, and so on.
  • communication unit(s) 304 may include wireless transmitters and receivers that enable computing device 300 to communicate wirelessly with the other computing devices. For instance, in the example of FIG.
  • communication unit(s) 304 include a radio 506 that enables computing device 300 to communicate wirelessly with other computing devices, such as hearing instruments 102 (FIG. 1).
  • Examples of communication unit(s) 304 may include network interface cards, Ethernet cards, optical transceivers, radio frequency transceivers, or other types of devices that are able to send and receive information.
  • Other examples of such communication units may include BLUETOOTHTM, 3G, 4G, 5G, and Wi-FiTM radios, USB interfaces, etc.
  • Computing device 300 may use communication unit(s) 304 to communicate with one or more hearing instruments (e.g., hearing instrument 102A (FIG. 1, FIG. 2)).
  • computing device 300 may use communication unit(s) 304 to communicate with one or more other remote devices.
  • communication unit(s) 304 may transmit profile settings from computing device 300 to another computing device 300 (e.g., a remote server) for subsequent access. In another example, communication unit(s) 304 may transmit profile settings from computing device 300 to one or more of hearing instruments 102. In some examples, communication unit(s) 304 may transmit pre-processed data values to other devices 106 or to one or more of hearing instruments 102 (e.g., control indicator values), such that devices 106 or hearing instruments 102 are able to determine the proper profile setting for hearing instruments 102 based at least in part on the pre-processed data values. In some examples, communication unit(s) 304 may transmit pre-processed data to another device as user 104 updates data values on computing device 300. In other examples, communication unit(s) 304 may only transmit the data once user 104 has indicated that user 104 is no longer adjusting hearing instruments 102, for example, by manually indicating as such on a UI.
  • Output device(s) 310 may generate output. Examples of output include tactile, audio, and video output. Output device(s) 310 may include presence-sensitive screens, sound cards, video graphics adapter cards, speakers, liquid crystal displays (LCD), or other types of devices for generating output. In some examples, output device(s) 310 may include hologram devices that may project light onto a surface or a medium (e.g., air) to form a holographic UI. For example, output device(s) 310 may project a UI onto a table that user 104 may use similar to how user 104 would use a UI displayed on display screen 312 but with differences in how input is registered, such as by using image capturing methods known in the art.
  • Processor(s) 302 may read instructions from storage device(s) 316 and may execute instructions stored by storage device(s) 316. Execution of the instructions by processor(s) 302 may configure or cause computing device 300 to provide at least some of the functionality ascribed in this disclosure to computing device 300.
  • storage device(s) 316 include computer-readable instructions associated with operating system 320, application modules 322A-322N (collectively, “application modules 322”), and a companion application 324.
  • storage device(s) 316 may store existing profile settings for hearing instrument 200 or newly determined profile settings for hearing instrument 200.
  • Execution of instructions associated with operating system 320 may cause computing device 300 to perform various functions to manage hardware resources of computing device 300 and to provide various common services for other computer programs.
  • Execution of instructions associated with application modules 322 may cause computing device 300 to provide one or more of various applications (e.g., “apps,” operating system applications, etc.).
  • Application modules 322A-322N may cause computing device 300 to provide one or more configuration applications meant to fit, configure, or otherwise customize one or more of hearing instruments 102. Example implementations of such configuration applications are described with respect to FIGS. 4A-11.
  • Application modules 322 may provide other applications, such as text messaging (e.g., SMS) applications, instant messaging applications, email applications, social media applications, text composition applications, and so on. Application modules 322 may also store profile settings that may be used to enhance the experience of user 104 with respect to hearing instruments 102.
  • Execution of instructions associated with companion application 324 by processor(s) 302 may cause computing device 300 to perform one or more of various functions. For example, execution of instructions associated with companion application 324 may cause computing device 300 to configure communication unit(s) 304 to receive data from hearing instruments 102 and use the received data to present data to a user, such as user 104 or a third-party user.
  • companion application 324 is an instance of a web application or server application. In some examples, such as examples where computing device 300 is a mobile device or other type of computing device, companion application 324 may be a native application.
  • user 104 may launch the configuration application.
  • processor(s) 302 may receive input from user 104 requesting that the configuration application be launched.
  • the configuration application may initiate a configuration process for one of hearing instruments 102 (e.g., a fitting process, customization process, etc.).
  • Instantiation of the configuration process includes implementation of a UI meant to elicit input from user 104 throughout the process.
  • Processor(s) 302 may launch the configuration process by presenting the UI to user 104.
  • processor(s) 302 may present the UI to user 104 on a viewing window of computing device 300.
  • processor(s) 302 may leverage multiple UIs throughout the course of a single configuration process.
  • processor(s) 302 may render the UI or features thereof on computing device 300.
  • processor(s) 302 are described as performing UI-related rendering tasks. It is to be understood, however, that the actual rendering of any particular UI feature, such as an interactive graphical unit, or the UI itself may be based on UI data only generated by processor(s) 302, whereas display screen 312 may actually use the UI data to perform the rendering. In other instances, UI data or portions thereof may be generated by another processing system, such as processor(s) 208.
  • FIG. 4A is an example UI 400 that may be presented to user 104 in connection with one or more hearing instrument configuration processes (e.g., fitting process, fine-tuning process, hearing test, customization process, etc.).
  • the example UI 400 may assist user 104 in determining an acceptable hearing instrument setting for hearing instrument 200.
  • UI 400 may be a graphical user interface (GUI), an interactive user interface, a command line interface, etc.
  • user 104 may need to configure a newly purchased hearing instrument 200 that is to be worn on a right or left ear of user 104.
  • hearing instrument 200 may need to first be powered on and paired to computing device 300 using any suitable pairing mechanism known in the art.
  • User 104 may then provide input via UI 400 as part of the configuration process.
  • User 104 may provide additional or continuous input throughout the configuration process until a satisfactory profile setting has been determined.
  • Processor(s) 302 may use the user input to identify hearing thresholds of user 104.
  • Processor(s) 302 may use the hearing thresholds to determine a profile setting for hearing instrument 200.
  • processor(s) 302 may use the user input, including user manipulations, to determine a profile setting directly.
  • UI 400 includes control indicators 402A-402N (collectively, “control indicators 402”) and markers 406A-406N (collectively, “markers 406”).
  • Control indicators 402 may include interactive graphical units that may be presented to user 104 via UI 400.
  • processor(s) 302 may render control indicators 402 on computing device 300.
  • Each of control indicators 402 may have one or more markers 406A-406N that can be manipulated with respect to control indicators 402.
  • Markers 406 may include interactive graphical units that may be presented to user 104 via UI 400.
  • processor(s) 302 may render markers 406 on computing device 300.
  • Processor(s) 302 may render markers 406 as any shape, size, color, etc.
  • processor(s) 302 may render one or more of markers 406 as a dash mark, a square mark, a circular mark, a number mark, a letter mark, a string of letters, a dynamic mark (e.g., an emoji that changes with position, etc.), or any other type of mark.
  • processor(s) 302 may render a first mark in conjunction with another mark as one of markers 406.
  • processor(s) 302 may render a circular mark with a number mark inside, as in FIG. 4A.
  • processor(s) 302 may render only one mark, such as a number mark, on UI 400.
  • Processor(s) 302 may be configured to provide a second UI that identifies a second one of control indicators 402 and a second one of markers 406, in which adjustment to the second one of markers 406 causes processor(s) 302 to determine a new profile setting or otherwise cause one or more profile settings to change. For example, processor(s) 302 may first provide the second UI identifying at least one additional control indicator 402N and at least one additional marker 406N. The control indicator 402N and marker 406N would be in addition to those control indicators 402 and markers 406 that were presented on the first UI.
  • Processor(s) 302 may then detect an adjustment to the at least one additional marker 406N, in much the same way as processor(s) 302 would detect adjustment of the markers with respect to the first UI. Processor(s) 302 may then update one or more settings of the hearing instrument in response to detecting that adjustment to additional marker 406N.
  • processor(s) 302 may receive input from user 104 that causes processor(s) 302 to manipulate the position of markers 406 on UI 400.
  • markers 406 may be configured to be slid or dragged along control indicators 402.
  • processor(s) 302 may receive an indication of user input to drag or slide markers 406 along the lengths of control indicators 402.
  • user 104 may use a scroll wheel or a lever that can be actuated in a given direction to change the position of markers 406.
  • processor(s) 302 may implement a gaze tracker to determine a position at which user 104 would like to move one of markers 406.
  • processor(s) 302 may be programmed to receive input from user 104 regarding the adjustment of markers 406.
  • Each of markers 406 may have values that correspond to the particular position of markers 406 and that may change fluidly based on user input.
  • processor(s) 302 portray the value of markers 406 as number values that change as markers 406 move along control indicators 402.
  • processor(s) 302 may portray the value of markers 406 as changing in an incremental fashion.
  • processor(s) 302 may portray the value of markers 406 as a color changing as processor(s) 302 detect a changing position of markers 406.
  • processor(s) 302 may display marker values on another interface separate from UI 400 on which processor(s) 302 are rendering markers 406. In other examples, processor(s) 302 may not display marker values at all.
  • user 104 may be able to slide the marker using a wearable device, such as a watch, in which case the screen of the device may not be large enough to show incrementing marker values.
  • computing device 300 may audibly state the current value through a speaker, may provide tactile feedback, or may simply show the position of markers 406 without numbers.
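One way the position-to-value behavior of markers 406 could be realized is sketched below; the pixel coordinates and value range are assumptions, not values from the disclosure:

```python
# Minimal sketch (ranges are assumptions) of mapping a marker's position
# along a control indicator to the incremental value displayed beside it.

def marker_value(position_px: float, track_length_px: float,
                 min_value: int = 0, max_value: int = 10) -> int:
    """Convert a marker's pixel position on its control indicator into
    an integer marker value, clamped to the track's endpoints so that
    dragging past either end holds the value at its limit."""
    frac = max(0.0, min(1.0, position_px / track_length_px))
    return round(min_value + frac * (max_value - min_value))
```

Because the returned value changes in unit steps as the marker slides, this matches the incremental display described above; a device that speaks the value or gives tactile feedback would call the same conversion and route the result to a different output.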
  • Processor(s) 302 may receive input from user 104 that causes processor(s) 302 to modify control indicators 402 or markers 406.
  • processor(s) 302 may change the shape, size, and/or scale of control indicators 402.
  • Control indicators 402 may be of various shapes.
  • control indicators 402 may be circular, semi-circular, triangular, rectangular, or any other shape that a user, such as user 104 or a third-party user, would be comfortable with during the configuration process. In addition, control indicators 402 may be of different lengths. In the example of FIG. 4A, control indicators 402 are shown as elongated bars of equal length.
  • processor(s) 302 may cause tick marks to appear on control indicators 402 that are meant to assist user 104 in perceiving the scale or size of control indicators 402.
  • processor(s) 302 may receive input from user 104 that causes processor(s) 302 to portray markers 406 as changing position. For example, processor(s) 302 may portray markers 406 as moving in a single direction (e.g., up, down, left, right, etc.). In some examples, processor(s) 302 may increment the marker values displayed in connection with a position of markers 406 as the position of markers 406 changes. In some examples, processor(s) 302 may receive input from user 104 specifying a value for markers 406. For example, user 104 may input a number value manually in a text box. In such instances, user 104 may then adjust the number using arrow keys. For example, processor(s) 302 may receive user input indicating that user 104 desires marker 406A to move upward. Processor(s) 302 may cause marker 406A to move to an updated position based on the user input.
  • processor(s) 302 may receive input from user 104 specifying that marker 406N should be placed at a number value of 4 for the mid-range frequency band (e.g., as shown in FIG. 5 A). Next, processor(s) 302 may receive input from user 104 indicating that marker 406N should be adjusted from a value of 4 to a value of 5 (e.g., with an up-arrow key). Processor(s) 302 may cause the position of marker 406N to change based on the user input.
  • markers 406 may have a displayed value and a non-disp!ayed value, such as a metadata value.
  • the displayed value may not correspond directly to the non-displayed value, such as when an adjusted scale is used as described herein.
  • Processors 302 may cause either value to change based on the user input.
  • processor(s) 302 may overlay control indicators 402 on top of one another, rather than being horizontally staggered as shown in FIG. 4A.
• processor(s) 302 may present control indicators 402 in a three-dimensional space where user 104 may navigate through the space to access control indicators 402.
• processor(s) 302 may present the control indicators 402 to user 104 through different pages on a UI.
• processor(s) 302 may render a first UI that presents control indicator 402A without presenting control indicators 402B or 402C (e.g., a first page).
• Processor(s) 302 may receive input from user 104 that causes processor(s) 302 to advance from the first UI to a second UI (e.g., a second page).
• the second UI may present control indicator 402B without presenting control indicators 402A or 402C. In this way, user 104 may perceive this type of navigation as advancing through pages of a book.
  • processor(s) 302 may automatically advance from one UI to another UI (e.g., another UI page).
• processor(s) 302 may receive input from user 104 that indicates a desire to advance to a new page where processor(s) 302 may present one of control indicators 402 in isolation on each page or with less than the full number of control indicators 402 shown on the page. In such examples, processor(s) 302 may help user 104 understand which control indicators 402 still need to be manipulated and which have already been set.
• help indicators or clues may be presented to user 104 to guide user 104 through the configuration process.
  • processor(s) 302 may highlight or cause one of control indicators 402 to blink a particular color.
  • processor(s) 302 may cause aspects of control indicators 402, such as markers 406, to indicate to user 104 that a particular action needs to be taken.
• a pointer, such as an arrow, may appear that tells user 104 whether a marker should be moved upward or downward.
• processor(s) 302 may receive information from user 104 indicating that user 104 is unable to perceive a particular incoming sound. Accordingly, a portion of control indicators 402 (e.g., an estimated range) may appear highlighted, or processor(s) 302 may present an arrow pointing in a direction suggesting to user 104 that particular adjustments may be necessary.
  • UI 400 shows three control indicators 402.
• Control indicators 402 correspond to frequency bands. In this way, the output of hearing instrument 200 may be varied across the frequency bands, including frequency regions within and adjacent to each frequency band.
  • control indicator 402A corresponds to a low frequency band 404A
  • control indicator 402B corresponds to a middle frequency band 404B
  • control indicator 402N corresponds to a high frequency band 404N.
• the low frequency band may be 250-750 Hertz (Hz)
  • the middle frequency band may be 1000-2000 Hz
• the high frequency band may be 3000-6000 Hz.
• processor(s) 302 may be configured to select frequency bands to encompass various frequencies.
  • the frequency bands may be selected to encompass any combination of frequency ranges.
• processor(s) 302 may display the frequency band on UI 400 as a band label.
  • the frequency bands may be selected so as to encompass the full range of frequency values without gaps.
  • the number of control indicators 402 may be more or less than the three shown.
  • the full frequency range may be 250-6000 Hz, where the low frequency band may be 250-750 Hz, a first middle frequency band may be 750-1000 Hz, a second middle frequency band may be 1000-2000 Hz, and the high frequency band may be 2000-6000 Hz.
• Processor(s) 302 may display the ranges for each band as labels on UI 400. However, even in instances where a gap separates the ranges for two bands, adjustments to markers 406 (that correspond to the bands) map to hearing thresholds at discrete audiometric frequencies, which are typically measured at 250 Hz, 500 Hz, 750 Hz, 1000 Hz, 1500 Hz, 2000 Hz, 3000 Hz, 4000 Hz, 6000 Hz, and 8000 Hz. It should be understood, however, that higher frequencies (up to 20,000 Hz), lower frequencies (down to 20 Hz), or intermediate frequencies (e.g., 5000 Hz) could additionally or alternatively be represented.
• processor(s) 302 may use the mapping to identify hearing thresholds at discrete audiometric frequencies that are within and adjacent to the range of values that correspond to a frequency band label.
• processor(s) 302 may display frequency band labels as a guide for user 104.
  • the identified hearing thresholds serve as input to a fitting formula, which prescribes sound parameters (e.g., gain, compression, etc.) across the entire frequency region (without gaps).
  • hearing thresholds for this region may be estimated using interpolation, and higher or lower thresholds could be estimated using extrapolation.
• Any number of control indicators 402 may be used for any number of frequency bands. In some instances, the number of control indicators 402 may be limited to the number of channels available in hearing instrument 200.
  • the frequency regions or bands may be delimited as frequencies of sounds that a user is likely to encounter in a particular environment. For example, the various speech sounds for conversation tend to fall within a frequency range of approximately 250 Hz to approximately 7500 Hz-8000 Hz.
• processor(s) 302 may adjust (e.g., increase or decrease) the number of control indicators 402 based on input received from user 104. For example, user 104 may indicate that the sound within a particular frequency band is not satisfactory no matter what adjustments user 104 makes in that frequency band. Processor(s) 302 may automatically divide the frequency band into two or more separate bands to provide more tailored adjustments through use of additional control indicators 402.
  • processor(s) 302 may automatically divide one or more frequency bands while user 104 is conducting the configuration process (e.g., in real-time for user 104).
• processor(s) 302 may receive input from user 104 at an initial information-gathering screen (e.g., an initial home screen).
• Processor(s) 302 may receive input such as information related to a particular frequency or frequency band that user 104 believes presents more problems for user 104 than others.
  • processor(s) 302 may automatically divide one or more frequency bands or add individual frequencies prior to or at the commencement of the configuration process.
  • Processor(s) 302 may provide user 104 with as few as one or two control indicators 402 to as many as 20 or more.
• processor(s) 302 may link the frequency bands.
• control indicator 402A may correspond to a frequency band of 250-8000 Hz
  • control indicator 402B may correspond to a frequency band of 250-1750 Hz
  • control indicator 402N may correspond to a frequency band of 1750-8000 Hz.
  • processor(s) 302 may register user input to adjust marker 406A to a value of 2.
• processor(s) 302 may identify hearing thresholds of 40-dB HL for the entire frequency spectrum. Processor(s) 302 may then register user input to adjust marker 406B to a value of 1, in which case, the hearing thresholds for 250-1500 Hz would drop to 30-dB HL. Further, when processor(s) 302 register user input to adjust marker 406N to a value of 3, the hearing thresholds for 2000-8000 Hz would increase to 50-dB HL. A benefit of linking the frequency bands in this way would be to increase the speed of the configuration process. This would be especially advantageous where user 104 had a fair amount of hearing loss at all frequencies, where the step-size of the changes were small, or where processor(s) 302 rendered a high number of control indicators 402.
  • control indicator 402A may correspond to a frequency band of 250-8000 Hz
  • control indicator 402B may correspond to a frequency band of 250-1000 Hz
  • control indicator 402N may correspond to a frequency band of 3000-8000 Hz.
• Processor(s) 302 may register user input to adjust marker 406A to a value of 2.
  • a marker value of zero corresponds to a hearing threshold of 20-dB HL
  • the step-size is 10-dB
• processor(s) 302 may identify hearing thresholds of 40-dB HL for the entire frequency spectrum.
• Processor(s) 302 may then register user input to adjust marker 406B to a value of 1, in which case, the hearing thresholds for 250-1000 Hz would drop to 30-dB HL. Further, where processor(s) 302 then register user input to adjust marker 406N to a value of 3, the hearing thresholds for 3000-8000 Hz would increase to 50-dB HL. However, control indicator 402A may identify hearing thresholds within the gap between frequency bands that correspond to control indicators 402B and 402N. That is, control indicator 402A would identify hearing thresholds for frequencies of 1500 Hz and 2000 Hz, and thus, the hearing thresholds in the gapped region would remain at 40-dB HL because marker 406A was adjusted to have a value of 2.
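The linked-band behavior in the example above can be sketched as follows. This is a minimal illustration under the stated assumptions (a 20-dB HL baseline at marker value zero, a 10-dB step-size, and the band edges from the example); the function and constant names are hypothetical and not part of this disclosure.

```python
# Sketch of linked (overlapping) control indicators: a broadband control plus
# low- and high-frequency controls. Frequencies in the gap between the low and
# high bands (1500 and 2000 Hz) fall back to the broadband marker value.
AUDIOMETRIC_FREQS = [250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000, 8000]

BASELINE_DB_HL = 20  # hearing threshold at marker value 0 (assumed baseline)
STEP_DB = 10         # threshold change per marker step (example step-size)

def thresholds(broadband, low, high):
    """Map marker values to hearing thresholds (dB HL) per audiometric frequency.

    broadband covers 250-8000 Hz, low covers 250-1000 Hz, high covers 3000-8000 Hz.
    """
    out = {}
    for f in AUDIOMETRIC_FREQS:
        if f <= 1000 and low is not None:
            marker = low
        elif f >= 3000 and high is not None:
            marker = high
        else:
            marker = broadband  # gapped region uses the broadband marker
        out[f] = BASELINE_DB_HL + STEP_DB * marker
    return out

# Reproducing the worked example: broadband marker = 2, low = 1, high = 3.
t = thresholds(2, 1, 3)
# 250-1000 Hz drop to 30-dB HL, 3000-8000 Hz rise to 50-dB HL,
# and the gapped 1500/2000 Hz region stays at 40-dB HL.
```

A usage note: dropping the low or high marker (passing `None`) would leave the broadband value in effect for that region, which mirrors how a single broadband controller could drive all bands before the sub-band markers are touched.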
• processor(s) 302 may implement one of control indicators 402 as a broadband controller and implement another one of control indicators 402 as a high- or a low-frequency controller and achieve the same functionality that processor(s) 302 could by implementing three mutually exclusive control indicators 402.
• processor(s) 302 may determine a configuration for control indicators 402 based on preferences received from user 104.
• Control indicators 402 may correspond to key audiometric frequency bands for a fitting of hearing instruments 102, with some tolerance around each frequency (e.g., 480-520, 990-1010, 1990-2010, and 3990-4010 Hz).
• control indicators 402 may correspond to frequencies that are typically tested throughout the duration of a hearing test.
  • control indicators may be associated with octave frequencies 250-8000 Hz, interoctave frequencies 750-6000 Hz, or extended high frequencies 8000-20,000 Hz.
  • control indicators 402 may correspond to very low frequencies (e.g., 20-250 Hz). It should be noted that control indicators 402 may correspond to frequency bands that have a different number of individual frequencies that make up the band compared to other frequency bands. In some instances, it may be desirable to provide a greater number of control indicators 402 corresponding to the high frequency ranges where hearing loss is more prevalent. Accordingly, the number of control indicators 402 may be greater for higher frequency bands as compared to middle or lower frequency bands.
  • individual control indicators 402 may correspond to indi vidual frequencies rather than groups of frequencies.
  • control indicator 402A may correspond to 500 Hz.
  • individual frequencies and frequency bands may include approximations or error margins for corresponding values.
  • control indicator 402A may correspond to 500 Hz with an error margin of 1 Hz, 5 Hz, 10 Hz, 15 Hz, etc.
  • the marker values are shown as being set to all zeros.
  • the position at all zeros may correspond to normal hearing for user 104.
• Normal hearing generally refers to hearing thresholds of 0-dB HL to 20-dB HL at a given frequency; in practical terms, 20-dB HL is often treated as the upper bound of the normal range.
• a hearing threshold generally refers to the minimum decibel level at which user 104 can perceive the sound. Hearing thresholds may vary across the frequency spectrum. For example, a hearing threshold of user 104 may be higher for sounds above a certain frequency, whereas the hearing threshold of user 104 may be in the normal range for sounds below a certain frequency.
• user 104 may adjust the position of markers 406A-N during a configuration process for one or more of hearing instruments 102.
  • user 104 may listen for a sound through hearing instrument 200 that is being configured.
  • Processor(s) 302 may prompt user 104 to adjust one of markers 406 for a particular frequency depending on the ability of user 104 to perceive the sound.
  • FIG. 4B is an illustrative visual depiction of a hearing threshold mapping 420.
• hearing threshold mapping 420 illustrates the mapping of marker values to hearing thresholds with respect to the position of markers 406 in FIG. 4A (e.g., preset to a default of all zeros).
• the y-axis in FIG. 4B refers to "level (dB HL)", which corresponds to hearing threshold level as expressed in terms of dB HL
  • dB HL maps to dB SPL.
  • hearing threshold mapping 420 is only a visual representation of a hearing threshold mapping.
  • Hearing threshold mapping 420 may manifest as a background software application performing a hearing threshold mapping algorithm that converts marker values to hearing thresholds such that a graph as shown in FIG. 4B could be generated based on the output from the algorithm.
  • FIG. 4C is an example of a predicted real-ear response 430 that corresponds to the hearing thresholds identified for the marker positions in FIG. 4A.
  • FIG. 4C shows the predicted real ear response for hearing instruments 102 that have been best-fit to fitting targets based on the hearing thresholds of FIG. 4B using a standardized or proprietary fitting formula.
  • FIG. 4C provides an example where changes to the UI result in bilateral adjustments.
• the changes/fitting of hearing instruments 102 may be done separately for the left and right ears (e.g., left and right hearing instruments 102).
  • FIG. 4C depicts the predicted real ear responses with respect to soft, moderate, and loud sounds (e.g., 50, 65, and 80 dB SPL input levels, although one or more processors may use other levels, as well).
• FIG. 5A depicts the UI of FIG. 4A, where adjustments have been made to the positions of markers 406A-406N.
  • marker 406B has been adjusted to a level 3
  • marker 406N has been adjusted to a level 4.
  • the positions, or levels at which markers 406 are placed, are used to determine profile settings for hearing instrument 200.
  • the placement of markers 406 in FIG. 5A corresponds to one or more profile settings for hearing instrument 200.
• marker 406N is positioned at level 5, rather than level 4. Each position corresponds to a different setting for hearing instrument 200.
• storage device(s) 316 may store a mapping that links the values of markers 406 to hearing thresholds.
  • the mapping may include an algorithm that calculates a conversion between values of markers 406 to hearing thresholds with respect to frequencies or frequency bands.
• Processor(s) 302 may use the hearing thresholds to determine a hearing instrument setting (e.g., a profile setting or set of one or more profile settings). Processor(s) 302 may then transmit the one or more settings to hearing instrument 200 to fit hearing instrument 200.
  • processor(s) 208 may use the hearing thresholds to fit hearing instrument 200 using a standardized or proprietary fitting formula.
• the fitting formula may be a National Acoustic Laboratories (NAL) fitting formula (e.g., NAL-NL2, NAL-NL1), a Desired Sensation Level (DSL) method (e.g., DSL i/o v5), an e-STAT fitting formula, or any other formula known in the art.
  • the hearing thresholds are determined by referring to the mapping stored in memory. In some examples, the mapping allows the marker values to be translated directly into a configuration setting based on the hearing thresholds referenced in the mapping file.
• processor(s) 302 store the mapping as a reference file on storage device(s) 316 of computing device 300. In other examples, the mapping may be stored on storage device 202 of hearing instrument 200. As such, either processor(s) 208 or processor(s) 302, or both, may store the mapping that identifies one or more relationships between marker values and hearing thresholds.
  • the mapping may provide a conversion of input values to hearing thresholds.
  • user 104 may increase markers 406.
• the hearing thresholds in the frequency band would increase as processor(s) 302 detect user input intended to cause an increase in marker value.
• the hearing threshold in a particular frequency band may increase by 1-dB, 2-dB, 3-dB, 5-dB, or 10-dB.
  • the mapping provides a conversion of marker values directly to hearing instrument 200 settings.
  • the mapping may take the form of a look-up table, a graph, or another transformation system.
  • marker values may be adjusted as shown in FIG. 5 A, with those adjusted values being provided as input to a mapping algorithm.
  • the mapping algorithm may then determine what setting corresponds to the adjusted marker values.
• a single setting may be determined based on the adjusted marker values for each control indicator. For example, a first setting may be determined for the first frequency band that corresponds to a marker level of 0, a second setting may be determined for the second frequency band that corresponds to a marker level of 3, and a third setting may be determined for the third frequency band that corresponds to a marker level of 4.
  • the positions of markers 406 may begin at a default of 0 as in FIG. 4A.
• Each change in position of markers 406 (sliding upward along control indicators 402) may correspond to a specific threshold increase (e.g., 1-dB, 2-dB, 3-dB, 5-dB, or 10-dB).
  • an initial marker value may be zero, where an adjusted marker value of one corresponds to a X-dB change in hearing threshold, where “X” corresponds to a predetermined decibel value (e.g., 1, 2, 3, 5, 10, or any other decibel level).
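The step mapping described above reduces to simple arithmetic. The sketch below assumes the 20-dB HL baseline discussed elsewhere in this disclosure and defaults X to 10 dB; the function name is illustrative only.

```python
# Minimal sketch of the marker-value-to-threshold mapping: an initial marker
# value of zero maps to a baseline threshold, and each step changes the
# threshold by X dB (X being a predetermined decibel value, e.g., 1, 2, 3,
# 5, or 10). Names and defaults here are illustrative assumptions.
def marker_to_threshold(marker_value, baseline_db_hl=20, step_db=10):
    """Convert a UI marker value to a hearing threshold in dB HL."""
    return baseline_db_hl + step_db * marker_value

# marker 0 -> 20-dB HL (normal), marker 3 -> 50-dB HL, marker 4 -> 60-dB HL
```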
  • FIG. 5B provides an illustrative visual depiction of a hearing threshold mapping 520.
• in hearing threshold mapping 520, each incremental change (i.e., "X") corresponds to a 10-dB step, and a normal hearing threshold corresponds to 20-dB HL.
• processor(s) 302 may be configured to provide the ability to adjust left and right ear profile settings independently of one another.
• a change in marker position, and thus in hearing threshold, may cause a sound parameter (e.g., gain of hearing instrument 200) to increase or decrease in a linear or non-linear fashion (e.g., logarithmic or an inconsistent interval).
• the gain of the hearing instrument may be increased by "x" dB in that frequency range for all input levels, or by "x" dB for soft input levels and some number less than "x" for moderate and/or loud input levels.
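The input-level-dependent gain change described above can be sketched as a small weighting function. The 1.0/0.5/0.25 weights below are purely illustrative assumptions (the disclosure only requires that moderate/loud inputs receive "some number less than x"), and the function name is hypothetical.

```python
# Sketch of applying a threshold change of "x" dB as gain changes that may
# differ by input level: the full x dB for soft inputs, and smaller amounts
# for moderate and loud inputs. Weights are illustrative assumptions.
def gain_adjustments(x_db, soft_weight=1.0, moderate_weight=0.5, loud_weight=0.25):
    return {
        "soft": x_db * soft_weight,          # e.g., 50 dB SPL inputs
        "moderate": x_db * moderate_weight,  # e.g., 65 dB SPL inputs
        "loud": x_db * loud_weight,          # e.g., 80 dB SPL inputs
    }

# A 10-dB threshold worsening might add 10 dB of gain for soft inputs but
# only 5 dB for moderate and 2.5 dB for loud inputs under these weights.
g = gain_adjustments(10)
```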
  • Non-linearity may be based on the way in which hearing thresholds vary across frequency bands.
  • FIG. 5C is an example of a predicted real-ear response 530 that corresponds to the hearing thresholds identified for the marker positions in FIG. 5A.
  • FIG. 5C shows the predicted real ear response for hearing instruments 102 that have been best-fit to fitting targets based on the hearing thresholds of FIG. 5B using a standardized or proprietary fitting formula.
  • FIG. 5C provides an example where changes to the UI result in bilateral adjustments.
• a person of skill in the art will understand that the changes/fitting of hearing instruments 102 may be done separately for the left and right ears (e.g., left and right hearing instruments 102).
  • an alternative number of control indicators 402 as well as different frequency band ranges, hearing threshold step-sizes, and/or fitting formulas, may be used.
  • Each frequency band may have a center value.
• the middle frequency band described above may have a center value of 1,500 Hz (halfway between 1,000 Hz and 2,000 Hz). Therefore, the 50-dB HL value may only correspond to the center value of 1,500 Hz.
• the mapping for higher and lower frequencies may be an interpolation or extrapolation of hearing thresholds for surrounding frequency data points.
  • the threshold for 2,000 Hz may be a few dB higher than 50-dB, but less than 60-dB (the hearing threshold at the high frequency band in FIG. 5 A).
  • the threshold for 1,000 Hz would be less than 50-dB but greater than 20-dB (the hearing level at the low frequency band in FIG. 5 A).
• processor(s) 302 may estimate or identify the same hearing thresholds for both ears based on adjustments to markers 406. However, adjustments could be made separately for the left and right ears, which would result in ear-specific hearing instrument programming. In such examples, processor(s) 302 may implement a first profile setting for the left ear of user 104 and a second profile setting for the right ear of user 104 that is different than the first profile setting.
  • FIG. 5D provides an illustrative visual depiction of a hearing threshold mapping 540.
  • Hearing threshold mapping 540 illustrates how processor(s) 302 may interpolate and extrapolate hearing thresholds for data points.
• the interpolation and extrapolation may be done with respect to other frequencies that are not the center of a frequency band.
  • the mapping algorithm may select a key audiometric frequency that is not the band center frequency to use as a reference frequency.
  • processor(s) 302 may estimate, from the mapping, a hearing threshold data point from one or more other hearing threshold data points to identify the hearing threshold. Any number of data points may be interpolated or extrapolated with respect to the reference frequency so as to identify an accurate set of hearing thresholds.
• the set of hearing thresholds may then serve as input to a standardized or proprietary fitting formula in order to determine a set of one or more profile settings.
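One way to sketch the interpolation/extrapolation of thresholds described above is linear interpolation on a log-frequency axis between reference data points, with flat extrapolation beyond them. The choice of log-frequency linear interpolation and flat extrapolation is an assumption for illustration, not a method prescribed by this disclosure, and the names are hypothetical.

```python
import math

# Sketch: estimate hearing thresholds at discrete audiometric frequencies
# from a few band reference points (e.g., band center frequencies).
def interp_thresholds(reference_points, freqs):
    """reference_points: sorted list of (frequency_hz, threshold_db_hl)."""
    xs = [math.log2(f) for f, _ in reference_points]
    ys = [t for _, t in reference_points]
    out = {}
    for f in freqs:
        x = math.log2(f)
        if x <= xs[0]:        # flat extrapolation below the lowest point
            out[f] = ys[0]
        elif x >= xs[-1]:     # flat extrapolation above the highest point
            out[f] = ys[-1]
        else:                 # linear interpolation between neighbors
            for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
                if x0 <= x <= x1:
                    out[f] = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
                    break
    return out

# e.g., band centers at 500, 1500, and 4500 Hz with thresholds 20, 50, 60 dB HL;
# 1000 Hz lands between 20 and 50 dB HL, 2000 Hz a few dB above 50 dB HL,
# consistent with the interpolation behavior described in the text.
t = interp_thresholds([(500, 20), (1500, 50), (4500, 60)],
                      [250, 500, 1000, 1500, 2000, 4000, 6000])
```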
• FIG. 5E is an example of a predicted real-ear response 550 that corresponds to the hearing thresholds identified for the marker positions in FIG. 4A, as well as interpolated and extrapolated hearing thresholds as shown in FIG. 5D.
• FIG. 5E shows the predicted real ear response for hearing instruments 102 that have been best-fit to fitting targets based on the hearing thresholds of FIG. 5D using a standardized or proprietary fitting formula.
  • FIG. 5E provides an example where changes to the UI result in bilateral adjustments.
  • the changes/fitting of hearing instruments 102 may be done separately for the left and right ears (e.g., left and right hearing instruments 102).
  • an alternative number of control indicators 402, as well as different frequency band ranges, hearing threshold step-sizes, and/or fitting formulas may be used.
• processor(s) 302 may detect that user 104 has moved one of markers 406 to the outer limits of one of control indicators 402 (e.g., the top of control indicators 402). In such examples, processor(s) 302 may trigger a message to be displayed on display screen 312. The message may include a notification that user 104 has reached the limits of the hearing instrument configuration process or of hearing instruments 102 and that user 104 should consider visiting an audiologist/hearing healthcare specialist to acquire a more powerful hearing instrument. Processor(s) 302 may provide user 104 with the option of finding a nearby audiologist or hearing instrument specialist. When processor(s) 302 detect an affirmative reply from user 104 to explore options, processor(s) 302 may then access a computer network and use location services to identify contact information for local providers.
• processor(s) 302 may set the position of markers 406 to a default starting value, such as all zeros, when initially rendering UI 400.
• processor(s) 302 may determine the starting positions based on preset profile settings. For example, processor(s) 302 may provide a UI with a selection of default profile settings that user 104 may browse through and test out by listening to sound conditioned by each default profile setting. User 104 may select one or more default profile settings and may listen to audio conditioned by the default profile settings prior to making such selections. Processor(s) 302 may detect selection of one or more default profile settings (e.g., 3 or 4 default profile settings) from user 104.
• processor(s) 302 may render UI 400 with markers 406 preset at default starting values that correspond to the default profile setting(s) selected. For example, processor(s) 302 may reference the hearing threshold mapping to determine the starting values of markers 406 based on the default profile setting(s), similar to how processor(s) 302 would reference the mapping in the other direction to convert adjusted marker values to one or more profile settings. Further, processor(s) 302 may determine different starting positions for the left and right ear and/or for different detected environments.
  • FIG. 6A illustrates UI 400 having a default starting point that corresponds to a default profile setting.
  • Processor(s) 302 may receive the default setting from user 104 or from the manufacturer of hearing instruments 102.
  • processor(s) 208 may receive the default setting from the manufacturer and store the default setting to storage device(s) 202.
• processor(s) 302 may recommend the default configuration based on input from user 104 (e.g., based on responses to a questionnaire from user 104).
• in examples involving a default setting, UI 400 may display markers 406 having values of all zero. However, values of all zeros may not correspond to the same hearing thresholds as discussed in FIG. 4A, even though FIG. 4A also showed all zeros. All zeros in this example would correspond to processor(s) 302 not detecting a change to the default setting.
  • the default setting may correspond to any default setting or any hearing threshold mapping.
• FIG. 6B shows an example visual depiction of a hearing threshold mapping 620 for a default setting, where markers 406 having values of all zeros correspond to the hearing thresholds as shown in FIG. 6B. It should be noted that the hearing thresholds are markedly different than is shown in FIG. 4B, even though the marker positions are set to zero in both examples.
  • FIG. 6C is an example of a predicted real-ear response 630 that corresponds to the hearing thresholds identified for the marker positions in FIG. 6A.
• FIG. 6C shows the predicted real ear response for hearing instruments 102 that have been best-fit to fitting targets based on the hearing thresholds of FIG. 6B using a standardized or proprietary fitting formula.
• FIG. 6C provides an example where changes to the UI result in bilateral adjustments. However, a person of skill in the art will understand that the changes/fitting of hearing instruments 102 may be done separately for the left and right ears (e.g., left and right hearing instruments 102). In addition, a person of skill in the art will understand that an alternative number of control indicators 402, as well as different frequency band ranges, hearing threshold step-sizes, and/or fitting formulas, may be used.
  • FIG. 7A provides an example of marker adjustments with respect to a default setting. Adjustments to markers 406 represent offsets to a default setting. In this example, the default setting is the same as discussed in connection to FIGS. 6A-6C.
  • the hearing thresholds of the center frequencies of the bands are changed by 10-dB with each stepped change in marker value.
  • an increase of one step corresponds to a 10-dB increase (worsening) in hearing threshold (from 15 to 25-dB HL).
  • a decrease of one step corresponds to a 10-dB decrease (improvement) in hearing threshold (from 45 to 35-dB HL).
  • thresholds at non-center frequencies are either interpolated or extrapolated; however, in some examples, processor(s) 302 may adjust all hearing thresholds within a band by the same amount.
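The offset-from-default behavior in FIGS. 7A-7B reduces to adding a signed number of 10-dB steps to each band's default threshold. The sketch below reproduces the worked numbers above; names and the per-band defaults are illustrative assumptions.

```python
# Sketch of marker adjustments as offsets to a default profile setting:
# each marker step shifts a band's default center-frequency threshold by
# 10 dB (positive steps worsen the threshold, negative steps improve it).
STEP_DB = 10  # example step-size from the text

def offset_threshold(default_db_hl, marker_offset):
    """Apply a signed marker offset to a band's default threshold."""
    return default_db_hl + STEP_DB * marker_offset

# From the worked example: one step up worsens 15 -> 25-dB HL,
# one step down improves 45 -> 35-dB HL.
```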
  • a visual depiction of the hearing threshold mapping for FIG. 7A is shown as mapping 720 in FIG. 7B.
  • FIG. 7C is an example of a predicted real-ear response 730 that corresponds to the hearing thresholds identified for the marker positions in FIG. 7A.
  • FIG. 7C shows the predicted real ear response for hearing instruments 102 that have been best-fit to fitting targets based on the hearing thresholds of FIG. 7B using a standardized or proprietary fitting formula.
  • FIG. 7C provides an example where changes to the UI result in bilateral adjustments.
• a person of skill in the art will understand that the changes/fitting of hearing instruments 102 may be done separately for the left and right ears (e.g., left and right hearing instruments 102).
  • an alternative number of control indicators 402 as well as different frequency band ranges, hearing threshold step-sizes, and/or fitting formulas, may be used.
• FIG. 8A provides another illustrative example of UI 400.
  • the example UI 400 illustrates that any number of control indicators 402A-402N may be used, including two control indicators 402, as shown in FIG. 8A.
  • processor(s) 302 may display only one control indicator that corresponds to a frequency band of width encompassing low, mid, and/or high frequency bands.
  • one control indicator may have a bandwidth that encompasses both mid and high frequency bands, low and mid frequency bands, low and high frequency bands, or low, mid and high frequency bands.
• control indicators 402 have markers 406A-N and correspond to a low frequency band 404A and a high frequency band 404N, respectively.
  • FIG. 8B provides an illustrative visual depiction of a hearing threshold mapping 820.
• in hearing threshold mapping 820, each incremental change (i.e., "X") corresponds to a 10-dB step.
  • the marker positions of FIG. 8A would map to hearing thresholds of 40-dB HL at the first frequency band 404A and a 60-dB HL for the second frequency band 404N.
  • processor(s) 302 have interpolated and extrapolated a set of hearing thresholds on either side of the 40-dB HL identified for 750 Hz and the 60-dB HL identified for 4000 Hz.
  • FIG. 8C is an example of a predicted real-ear response 830 that corresponds to the hearing thresholds identified for the marker positions in FIG. 8A.
  • FIG. 8C shows the predicted real ear response for hearing instruments 102 that have been best-fit to fitting targets based on the hearing thresholds of FIG. 8B using a standardized or proprietary fitting formula.
  • FIG. 8C provides an example where changes to the UI result in bilateral adjustments.
• a person of skill in the art will understand that the changes/fitting of hearing instruments 102 may be done separately for the left and right ears (e.g., left and right hearing instruments 102).
  • an alternative number of control indicators 402 as well as different frequency band ranges, hearing threshold step-sizes, and/or fitting formulas, may be used.
• FIG. 9 provides another illustrative example of an example UI for facilitating a hearing instrument configuration process.
  • the example UI 910 of FIG. 9 may take the form of an equalizer line 912 that can have markers 914A-914N slid or dragged along the length of line 912.
  • FIG. 9 illustrates a y-axis that corresponds to a decibel change to the hearing thresholds and an x-axis that corresponds to frequency.
  • User 104 may slide markers 914 along the length of line 912 to identify a hearing threshold for a particular frequency band. Hearing thresholds for certain frequencies (e.g., frequencies that are needed to implement a fitting formula) would be extracted from the position of markers 914.
  • the y-axis may correspond to an offset to a hearing threshold, which would be measured in units of dB.
  • units of dB may be used where processor(s) 302 first determine a preset setting, in which case UI 910 provides the ability to determine an offset to a hearing threshold as defined by the preset.
  • the y-axis could correspond directly to hearing threshold, in which case the mapping would be in units of dB HL.
  • processor(s) 302 may cause markers 914A-914N to appear on or near equalizer line 912.
  • processor(s) 302 may cause markers 914A-914N to appear in predetermined positions along equalizer line 912 to correspond to predetermined frequency bands.
  • marker 914C may correspond to a frequency band of 250-750 Hz, similar to the example in which control indicator 402A would correspond to a low frequency band in the illustrative example of FIG. 4A.
  • UI 910 may provide more or fewer markers 914 than are shown in FIG. 9.
  • UI 910 may provide markers 914 at any position along equalizer line 912.
  • Processor(s) 302 may determine the optimal number of markers 914 and optimal positions for markers 914 along equalizer line 912, for example, under various configuration circumstances as described herein (e.g., left ear, right ear, for detected environments, etc.).
  • processor(s) 302 may only generate markers 914 in response to detecting user input. For example, UI 910 may present equalizer line 912 without any markers 914. Processor(s) 302 may detect that user 104 has touched UI 910 at a position along equalizer line 912. In response to detecting the touch input, processor(s) 302 may cause one of markers 914 to appear at the touched position. In other examples, processor(s) 302 may not cause a marker to appear in any instance but may instead allow user 104 to manipulate equalizer line 912 without any markers (markers 914 or otherwise). The same may be said for UI 400 or UI 800 with respect to markers 406 and 806.
  • Processor(s) 302 may detect user 104 sliding markers 914 horizontally along equalizer line 912 (e.g., to a new position along the x-axis). Processor(s) 302 may detect user 104 moving one of markers 914 upward or downward in the vertical direction. In addition, processor(s) 302 may detect more than one touch input via UI 400, UI 800, or UI 910 (e.g., more than one finger). For example, processor(s) 302 may track adjustments made simultaneously to multiple markers 406 (or markers 914) and, in turn, simultaneously identify hearing thresholds and determine profile settings in response to the tracked adjustments. In another example, processor(s) 302 may utilize multiple touch inputs to change the size of the frequency band.
  • processor(s) 302 may detect more than one touch input via UI 400 or UI 910 and detect the touch inputs moving inward toward one another or outward away from each other, such as in a pinching gesture. In such examples, processor(s) 302 may adjust the width of the frequency band to encompass a broader or narrower frequency range.
  • processor(s) 302 may detect user 104 moving marker 914A upward from 0-dB to 20-dB at a frequency band that corresponds to marker 914A.
  • the frequency band that corresponds to marker 914A may be established based on the position of marker 914A. In some cases, the frequency band may be established relative to the position of other markers 914 along equalizer line 912.
  • processor(s) 302 may cause marker 914A to move in the same direction as the touch input (e.g., left, upward, right, downward, etc.).
  • UI 910 may also cause other parts of equalizer line 912 to move proportional to marker 914A and/or the position of other markers.
  • processor(s) 302 may cause areas of equalizer line 912 that are to the left and right of marker 914A to move upward as shown.
  • in the example of FIG. 9, processor(s) 302 may cause marker 914D to move upward along with equalizer line 912, where the frequency band associated with marker 914A is of wide enough breadth such that a part of equalizer line 912 that corresponds to marker 914D moves upward as a result of moving marker 914A upward.
  • User 104 may then desire to adjust marker 914D, in which case processor(s) 302 may cause marker 914D and equalizer line 912 to move in the direction user 104 requests. For example, where user 104 desires to then move marker 914D downward, processor(s) 302 may cause marker 914D to move downward while maintaining marker 914A at 20-dB.
  • Processor(s) 302 may dynamically adjust equalizer line 912 to maintain fluidity between markers 914 as the position of markers 914 changes.
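One way the fluid movement of equalizer line 912 around a dragged marker could work is a distance-weighted update, where points on the line near the marker move nearly the full drag amount and points farther away (in octaves) move proportionally less. The cosine falloff and the bandwidth parameter below are assumptions for illustration; the disclosure does not fix a particular falloff shape.

```python
import math

def drag_marker(line_points, marker_freq, delta_db, bandwidth_octaves):
    """Update equalizer line points when the user drags one marker
    vertically by delta_db. line_points maps frequency (Hz) to the
    line's current dB offset. Points within bandwidth_octaves of the
    dragged marker move by a cosine-weighted fraction of delta_db."""
    updated = {}
    for f, db in line_points.items():
        dist = abs(math.log2(f / marker_freq))  # distance in octaves
        if dist >= bandwidth_octaves:
            weight = 0.0  # outside the marker's band: unchanged
        else:
            weight = 0.5 * (1 + math.cos(math.pi * dist / bandwidth_octaves))
        updated[f] = db + delta_db * weight
    return updated

# User drags the 1000 Hz marker up by 20 dB on an initially flat line.
line = {f: 0.0 for f in (250, 500, 1000, 2000, 4000)}
line = drag_marker(line, marker_freq=1000, delta_db=20.0, bandwidth_octaves=2.0)
```

With a two-octave bandwidth, the dragged point moves the full 20 dB, neighboring points (500 and 2000 Hz) move partway, and points two octaves away stay put, keeping the line continuous between markers.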
  • the frequency mapping may be at linear, logarithmic, or irregular intervals.
  • UI 910 is displaying the x-axis on a logarithmic scale.
  • Processor(s) 302 may do the same for the control indicators of FIG. 4A.
  • control indicator 402A may correspond to a frequency band that includes frequency regions expressed on a logarithmic scale. This may be true even when UI 400 does not visually depict the frequency band ranges, as they are visually depicted in UI 910 with a continuous equalizer line 912 that extends horizontally across the entire graph.
  • UI 400 may depict control indicators 402 and markers 406 without visually depicting the spectrum of individual frequencies within a frequency band, where processor(s) 302 may take into account the frequencies incrementing on a logarithmic, linear, or other scale when identifying hearing thresholds with respect to the frequency band or regions within and adjacent to the frequency band.
  • Processor(s) 302 may use a fitting formula to determine one or more settings for hearing instruments 102 based on hearing thresholds determined for user 104 with respect to the frequency bands.
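As a concrete stand-in for the standardized or proprietary fitting formula referenced here, the classic half-gain rule of thumb (prescribed gain equals roughly half the hearing threshold in each band) can illustrate the threshold-to-setting step. Real products would use a formula such as the NAL or DSL families, which are considerably more involved; this sketch is an assumption for illustration only.

```python
def fitting_gain(thresholds_db_hl):
    """Sketch of a fitting formula: the half-gain rule prescribes
    per-band insertion gain equal to half the hearing threshold
    (in dB HL). An illustrative simplification, not the disclosure's
    actual formula."""
    return {band: hl / 2.0 for band, hl in thresholds_db_hl.items()}

# Thresholds from the FIG. 8A/8B example: 40-dB HL at 750 Hz, 60-dB HL at 4000 Hz.
gains = fitting_gain({750: 40, 4000: 60})
```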
  • Processor(s) 302 may individually store the settings in a storage device 316, such as one of application modules 322 or companion application 324.
  • Processor(s) 302 may combine the settings into a single combined setting at a later time, such as when hearing instruments 102 are ready to receive the configuration file (e.g., the setting information).
  • the mapping provides a single setting based on a combination of values for markers 406.
  • the mapping may have a single setting that corresponds to the marker levels for each of control indicators 402 provided with UI 400.
  • hearing instrument 200 will modify the way that sound is processed within each frequency band based on which of markers 406 are being manipulated.
  • processor(s) 302 may detect movement of marker 406A and transmit an instruction to hearing instrument 200 to increase or decrease the low-frequency gain that corresponds to the adjusted position of marker 406A.
  • processor(s) 302 may continuously change the way in which the low-frequency sound is processed based on the detected movements of marker 406A.
  • processor(s) 302 may alter the gain, compression, and frequency response of hearing instrument 200 for one or more sounds that resonate within a frequency band in response to the detected movements of marker 406A.
  • processor(s) 302 may automatically cause markers 406 to move as displayed on the UI.
  • processor(s) 302 may receive voice input from user 104 that causes the markers to move.
  • Processor(s) 302 may receive input from user 104, such as “better” or “worse”, that may cause markers 406 to move in a direction that corresponds to the voice input of user 104.
  • user 104 may audibly state that they are unable to hear sounds in the low frequency range.
  • Processor(s) 302 upon receiving this voice input, may automatically adjust one of markers 406 to a new position to account for user 104 being unable to hear sounds in the low frequency range.
  • processor(s) 302 or another device 106 paired to hearing instrument 200 can cause an audio signal to change based on the corresponding setting or hearing threshold that maps to the adjusted marker value as markers 406 automatically move to each new position.
  • Processor(s) 302 may then produce an audio signal for the user 104 asking user 104 whether the sound is now better in the low frequency range.
  • user 104 may state yes or no, thereby causing processor(s) 302 to make further adjustments until user 104 is satisfied with the final profile setting or set of profile settings.
  • processor(s) 302 may prompt user 104 to adjust one of markers 406.
  • processor(s) 302 may receive an indication from user 104 that user 104 is unable to hear sounds in the low frequency range (e.g., using voice input from user 104 or a questionnaire, etc.).
  • processor(s) 302 may prompt user 104 to adjust one of markers 406 that corresponds to the low frequency range.
  • processor(s) 302 may cause markers 406 to blink or pulsate so as to indicate to user 104 which markers 406 should be adjusted to handle the problem user 104 is having with the low frequency range.
  • devices 106 may include devices configured to transmit media data to one or more hearing instruments 102.
  • hearing instruments 102 may receive the media data transmitted from devices 106.
  • Media data may include acoustic sound waves, waveforms, binary audio data, audio signals, radio waves, acoustic energy waveforms, etc.
  • devices 106 may be configured to condition the media data based on the environment and/or profile settings of user 104 prior to or while transmitting the media data to hearing instruments 102.
  • a transmitter device such as a dongle, may be plugged into one of devices 106, such as a television or smart television.
  • the transmitter device may be configured to stream media data (e.g., audio signals) from one or more of devices 106 to hearing instruments 102.
  • Processor(s) 302 may communicate the environment and/or profile settings of user 104 to the transmitter device or devices 106 directly.
  • processor(s) 302 may communicate marker values for frequency bands with devices 106 or the transmitter device, such that devices 106 or the transmitter device may identify hearing thresholds and/or determine profile settings using the mapping.
  • a file including the mapping may be transmitted from one of devices 106 (e.g., a mobile phone) or from hearing instruments 102 to the transmitter device (e.g., the dongle) or to another one of devices 106 that is paired to the transmitter device (e.g., a smart television).
  • the transmitter device or one of devices 106 may read the file in order to access the mapping.
  • the transmitter device or devices 106 may determine the environment of hearing instruments 102.
  • the transmitter device or devices 106 may pre-condition content of the media data based on the environment and/or profile settings of user 104. In this way, hearing instruments 102 may not need to perform additional processing or conditioning of the media data upon receiving it from the transmitter device. Allowing other devices to pre-condition media data prior to or while transmitting it to hearing instruments 102, in turn, allows hearing instruments 102 to conserve power resources for other tasks.
  • a time delay may be present from when processor(s) 302 detect a change to the position of markers 406 to when processor(s) 302 transmit a resultant profile setting to hearing instruments 102 and implement the profile setting on hearing instruments 102.
  • processor(s) 302 may selectively choose the source of sound for conducting an instance of a configuration process in such a way that avoids time delays as much as possible and maximizes efficiency with respect to hearing instruments 102 and processor(s) 302.
  • processor(s) 302 may select device 106A (e.g., a smartphone providing the UI) as the source of sound for conducting the configuration process. As such, processor(s) 302 may facilitate transmission of media data streamed from device 106A to hearing instruments 102.
  • processor(s) 302 may pre-condition the media data used during the configuration process before or while transmitting the media data to hearing instruments 102.
  • Processor(s) 302 may continue to determine preliminary profile settings while processor(s) 302 detect adjustments to marker values during the configuration process. For example, processor(s) 302 may determine new preliminary profile settings based on each individual adjustment to markers 406, but each new profile setting may or may not be satisfactory to user 104.
  • Processor(s) 302 may pre-condition the media data regardless, based on each new preliminary profile setting, and transmit the pre-conditioned media data to hearing instruments 102. Hearing instruments 102 may not conduct any further conditioning of the received media data because processor(s) 302 have already pre-conditioned the media data.
  • Processor(s) 302 may detect an action from user 104 requesting conclusion of the configuration process or indicating that the currently modified condition of sound coming from hearing instruments 102 is satisfactory to user 104. For example, processor(s) 302 may receive, via UI 400, an indication that the hearing instrument setting or settings are ready to be finalized or are to be finalized. Device 106A may transmit a final profile setting to hearing instruments 102 (e.g., as a configuration file).
  • processor(s) 302 may generate an instruction to implement the setting and transmit the instruction to hearing instruments 102. Hearing instruments 102 may then implement the final profile setting, for example, by storing the final profile setting to storage device(s) 202. In such examples, user 104 need not wait for hearing instruments 102 to implement each of the potentially numerous preliminary profile settings and instead may only need to wait for hearing instruments 102 to implement the final profile setting. In addition, allowing other devices to pre-condition media data prior to or while transmitting media data to hearing instruments 102, in turn, allows hearing instruments 102 to conserve power resources for other tasks. In addition, the configuration process may take less time due to the avoidance of time delays and, as a result, processor(s) 302 may operate more efficiently to arrive at final profile settings.
  • processor(s) 302 may apply a transfer function to account for differences in types of acoustic sources (e.g., live audio sources vs. streaming audio sources).
  • Live audio sources may include live conversation or other live sounds (e.g., music, wind noise, machine noise, or some combination thereof), whereas streaming audio sources may include sources that receive audio data that has been transmitted from a transmitter device (e.g., one of devices 106).
  • processor(s) 302 may determine a profile setting based on streamed audio used during the configuration process, but user 104 may desire to apply the same profile setting to live audio as well. In this way, processor(s) 302 may perform one configuration process to determine one profile setting or set of profile settings that may be applied differently based on the audio source and the transfer function.
  • a transfer function may include a signal processing algorithm configured to condition audio signals based on source type in order to maintain uniformity of audio signals provided to the user regardless of source type used to perform the configuration process.
  • the transfer function may include predefined conversion parameters and constants that may be applied to a profile setting.
  • although processor(s) 302 may only store one set of profile settings, processor(s) 302 may, in some examples, effectively apply two sets of profile settings based on output from the transfer function.
  • processor(s) 302 may apply a first set of profile settings to live audio signals and a second set of profile settings to streamed audio signals, where processor(s) 302 derived the second set of profile settings from the first set of profile settings through application of the transfer function, or where processor(s) 302 derived the first set of profile settings from the second set of profile settings through application of the transfer function.
  • the directionality of the derivation may depend on what source processor(s) 302 used to perform the configuration process in the first place.
  • dien processor ⁇ 302 may derive, through application of the transfer function, the first set of profi le settings for live audio signals from the second set of profile settings for streamed audio signals.
  • processor(s) 302 may select a transfer function from a set of transfer functions based on the type of audio signals detected during the configuration process (e.g., speech, music, wind noise, machine noise, or some combination thereof) or based on other characteristics of the incoming audio used to determine the first set of one or more profile settings. For example, processor(s) 302 may detect live speech and/or wind noise when determining a first profile setting from live sources. As such, processor(s) 302 may determine a set of one or more characteristics of the incoming audio (e.g., that the audio includes speech and/or wind noise).
  • Processor(s) 302 may determine a particular transfer function and/or tailor a set of parameters and/or constants for a particular transfer function based on the particular sounds or combination of sounds detected (e.g., the sound characteristics). As such, the transfer function may be based at least in part on the one or more characteristics.
  • processor(s) 302 may determine a transfer function and/or apply the transfer function to the first profile setting in order to determine a second set of one or more profile settings, the transfer function being based at least in part on the set of one or more characteristics.
  • processor(s) 302 may apply the transfer function to one or more settings to determine a second set of one or more settings, transformed based on parameters, coefficients, and/or constants specified by the transfer function being deployed.
  • the transfer function may be used to alter the initial mapping in a way that fits the transfer function.
  • a marker adjustment from 0 to 1 may, based at least in part on the selected or identified transfer function, map to a dB value, which may or may not be distinct from the 30-dB HL of the example with a 10-dB step-size and a 20-dB normal hearing threshold at a marker value of 0, the distinction depending on the transfer function that processor(s) 302 have identified or selected for the particular application.
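A minimal sketch of applying such a transfer function to a set of per-band profile settings might look as follows. The additive per-band corrections, scale factor, and band names are hypothetical placeholders, since the disclosure leaves the parameters, coefficients, and constants to the particular implementation.

```python
def apply_transfer_function(settings_db, correction_db, scale=1.0):
    """Derive a second set of per-band settings from a first set by
    applying a transfer function, modeled here as an optional scale
    factor plus a per-band additive correction (both illustrative
    assumptions, not values from the disclosure)."""
    return {band: scale * gain + correction_db.get(band, 0.0)
            for band, gain in settings_db.items()}

# Settings fit with streamed audio, converted for live audio using a
# hypothetical live-path correction.
live = apply_transfer_function(
    {"low": 12.0, "mid": 18.0, "high": 24.0},
    {"low": -2.0, "high": 3.0})
```

Running the derivation in the other direction (live to streamed) would simply use the inverse corrections, consistent with the bidirectional derivation described above.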
  • processor(s) 302 may receive a request from user 104 to further modify the final profile setting as needed via a new instance of the configuration process. For example, user 104 may determine at a later point in time that the profile setting requires fine-tune adjustments or micro-adjustments, which allow processor(s) 302 to rely on previously determined profile settings to further enhance the profile setting without having to determine an enhanced profile setting from a clean slate.
  • processor(s) 302 may provide the ability for user 104 to make micro-adjustments using control indicators 402.
  • processor(s) 302 may first allow adjustments on a macro scale, where each individual increment (e.g., from 0 to 1) corresponds to a certain degree of change in hearing thresholds.
  • processor(s) 302 may register a gesture (e.g., pinching fingers inward or outward) that indicates to processor(s) 302 that the scale of control indicators 402 is to be adjusted.
  • Processor(s) 302 can then adjust the view of control indicators 402 to a smaller, adjusted scale that allows for smaller scaled changes.
  • processor(s) 302 may be able to register fractional changes in marker values (e.g., from 1 to 1.2), where ordinarily, processor(s) 302 may only register integer changes.
  • processor(s) 302 may register incremental changes on a scale that provides a granular degree of control over changes to values of markers 406. In some examples, processor(s) 302 may interpolate or extrapolate hearing threshold values where the exact value (e.g., a fractional value) is not explicitly provided for in the mapping.
  • processor(s) 302 may determine a hearing threshold step-size that corresponds to a particular marker value adjustment (e.g., a standard marker value adjustment). For example, processor(s) 302 may determine that the hearing threshold step-size is X-decibels (e.g., 10-dB, 20-dB, etc.) per standard marker value adjustment, where the standard marker value adjustment is a default scale of integer value increases. For example, processor(s) 302 may determine that the standard marker value adjustment is an integer value adjustment, such as 0 to 1 to 2 to 3, etc. In such examples, processor(s) 302 may be able to also provide an option for and detect micro-adjustments to marker values and identify hearing thresholds accordingly.
  • processor(s) 302 may detect additional adjustments to one or more of markers 406 that correspond to changes in hearing thresholds that are less than, equal to, or greater than the hearing threshold step-size of X-decibels.
  • processor(s) 302 may detect fractional adjustments to markers 406, such as non-integer value adjustments, or larger adjustments, where some or all adjustment values may be less than, equal to, or greater than the standard marker value adjustment (e.g., 0 to 0.5 to 1 to 1.5 to 2 to 3 to 4 to 6, etc.).
  • processor(s) 302 may detect adjustments that correspond to something different than the standard step-size.
  • a change from 0 to 0.5 may correspond to a change in hearing threshold from 20-dB to 25-dB, where the standard step-size is 10-dB for every standard integer change in marker value (e.g., from 0 to 1 to 2).
  • a change from 4 to 6 may correspond to an increase of 20-dB HL, skipping the value 5 in between.
  • processor(s) 302 may skip values (e.g., the value 5 in the previous example) or apply fractional adjustments as processor(s) 302 detect movement of markers 406 in one direction (e.g., upward along control indicators 402), but may revert to a different adjustment scale (e.g., fractional, integer, etc.) when detecting movement of markers 406 in the other direction (e.g., downward along control indicators 402). In this way, user 104 may adjust markers 406 at a desirable rate in order to allow processor(s) 302 to efficiently identify hearing thresholds.
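The step-size mapping described above, including fractional micro-adjustments, reduces to a simple linear map. A sketch, assuming the 20-dB normal-hearing baseline and 10-dB step-size used in the examples (both example values from the text, not fixed by the disclosure):

```python
def marker_to_threshold(marker_value, base_db_hl=20.0, step_db=10.0):
    """Map a marker value to a hearing threshold in dB HL.

    base_db_hl is the threshold at marker value 0 and step_db is the
    step-size per integer marker step. Fractional marker values
    (micro-adjustments) map linearly between the integer steps."""
    return base_db_hl + step_db * marker_value

# A micro-adjustment of half a standard step.
threshold = marker_to_threshold(0.5)
```

This reproduces the worked values in the text: marker 0 maps to 20-dB HL, a full step to 30-dB HL, and a half-step micro-adjustment to 25-dB HL.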
  • Processor(s) 302 may invoke fractional or micro-adjustments in response to receiving input from user 104, such as a user request to make fine-tune adjustments, or may do so automatically, for example, in response to identifying a need to provide micro-adjustments with respect to one or more control indicators 402.
  • processor(s) 302 may determine the need for micro-adjustments based on a rate at which user 104 causes markers 406 to move along control indicators 402. For example, where processor(s) 302 detect slow, incremental adjustments to markers 406 upward along control indicators 402, processor(s) 302 may determine that micro-adjustments may be beneficial and invoke micro-adjustments as such. On the other hand, processor(s) 302 may detect fast adjustments to markers 406, indicating that micro-adjustments may be unnecessary.
  • User 104 may perform micro-adjustments to a preliminary profile setting or a final profile setting after having settled on the preliminary profile setting or final profile setting from UI 400.
  • processor(s) 302 may have saved a profile setting to storage device(s) 316.
  • Processor(s) 302 may receive an indication that user 104 would like to perform micro-adjustments to the setting of hearing instrument 200.
  • User 104 may provide such an indication at any time, for example, hours, days, or weeks after settling on a hearing instrument setting.
  • user 104 may only want to perform micro-adjustments for a particular environment or for a particular hearing instrument 200 (e.g., left or right).
  • processor(s) 302 may present the previous or current profile setting to user 104 via UI 400.
  • processor(s) 302 may provide UI 400 to user 104 via display screen 312, where markers 406 are preset along control indicators 402 based on the previously determined profile setting.
  • Processor(s) 302 may receive input from user 104 via input device(s) 308.
  • Processor(s) 302 may determine that the input corresponds to a fractional or micro-adjustment to one of markers 406. As such, processor(s) 302 may be configured to detect micro-adjustments to a first marker 406A.
  • processor(s) 302 may identify various step-sizes for markers 406.
  • a step-size may determine the difference in hearing threshold from one marker value to the next marker value.
  • the step-size may depend on the corresponding frequency bands.
  • marker 406A may provide a different step-size than marker 406B.
  • processor(s) 302 may register a 10-dB change in hearing threshold upon detecting an adjustment of marker 406A from 0 to 1, but may register a 20-dB change in hearing threshold upon detecting an adjustment of marker 406B from 0 to 1.
  • processor(s) 302 may base the step-size on just-noticeable difference (JND) values that may be known for user 104.
  • JND values specify the degree to which sound parameters must change in order for user 104 to perceive the change.
  • processor(s) 302 may learn JND values for user 104 (e.g., through a machine learning or artificial intelligence algorithm) and store JND values in storage device(s) 202 or storage device(s) 316.
  • where processor(s) 302 are configured to adjust the hearing level for an entire audio signal, user 104 may perceive the change sooner relative to an adjustment to the profile setting for only a small segment of the frequency response.
  • processor(s) 302 may implement a larger step-size.
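One hypothetical way to base the step-size on a learned JND, reflecting the idea that narrow-band changes are harder to perceive than broadband ones. The 2x multiplier and the scope labels are assumptions for illustration, not values from the disclosure.

```python
def choose_step_size(jnd_db, adjustment_scope):
    """Choose a hearing-threshold step-size from a learned JND value.

    Broadband adjustments (the entire audio signal) are perceived
    sooner, so the JND itself can serve as the step; narrow-band
    adjustments use a larger multiple so each step stays noticeable.
    The 2x multiplier is an illustrative assumption."""
    return jnd_db * (1.0 if adjustment_scope == "broadband" else 2.0)

# A user with a 3-dB JND adjusting only a small frequency segment.
step = choose_step_size(3.0, "narrowband")
```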
  • Processor(s) 302 may update the profile setting based on the micro-adjustment, using the newly adjusted marker values and corresponding hearing thresholds.
  • Processor(s) 302 may update settings for any number of environments or for particular frequency bands within an environment. For example, processor(s) 302 may only present the option to update a profile setting with respect to the low frequency band.
  • processor(s) 302 may aid or guide user 104 in determining how a setting may need to be updated.
  • processor(s) 302 may implement a process-of-elimination type strategy that allows user 104 to hone in on where a problem might exist for a profile setting.
  • processor(s) 302 may present an option on UI 400 that allows user 104 to toggle between adjustment scales (e.g., macro-, micro-, or normal-scaled adjustments).
  • processor(s) 302 may register selection of an option to adjust marker values on a micro-scale.
  • processor(s) 302 may detect selection of one of markers 406.
  • processor(s) 302 may automatically provide a zoomed-in view of control indicators 402.
  • processor(s) 302 may update the marker values on UI 400.
  • processor(s) 302 may continue to outwardly present the changes as integer values (e.g., 0 to 1, 1 to 2), but in reality, may be registering changes on a smaller scale (e.g., 0 to 0.1, 0.1 to 0.2).
  • processor(s) 302 may perform an automatic conversion of changes in marker values based on the scale processor(s) 302 are using to elicit input from user 104.
  • processor(s) 302 may present the changes in marker values based on the actual change.
  • micro-adjustments to marker 406B may read as actual changes from 3 to 3.05 to 3.10, etc., rather than as changes from 3 to 4 to 5.
  • the mapping algorithm may accommodate either scenario using correction factors or other techniques that take these changes into account. These micro-adjustments may result in new marker values.
  • Processor(s) 302 may reference the new marker value in the mapping as described herein.
  • the hearing thresholds may need to be interpolated or extrapolated from the data, where the values are not explicitly provided for in the mapping algorithm.
  • the mapping may have a first setting that corresponds to a value of 4 and another that corresponds to a value of 5.
  • a micro-adjustment may result in a value of 4.3, in which case a new setting may be interpolated between the two known data points.
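The interpolation between known mapping entries can be sketched as a simple linear lookup, assuming the mapped values are numeric settings (e.g., a gain in dB), which the disclosure leaves open:

```python
def interpolate_setting(mapping, marker_value):
    """Look up a profile setting for a (possibly fractional) marker
    value. When the exact value is absent from the mapping, linearly
    interpolate between the nearest known entries, as in the
    value-4.3 example."""
    if marker_value in mapping:
        return mapping[marker_value]
    lo = max(v for v in mapping if v <= marker_value)
    hi = min(v for v in mapping if v >= marker_value)
    frac = (marker_value - lo) / (hi - lo)
    return mapping[lo] + frac * (mapping[hi] - mapping[lo])

# Known settings at marker values 4 and 5; a micro-adjustment lands at 4.3.
setting = interpolate_setting({4: 40.0, 5: 50.0}, 4.3)
```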
  • the position of markers 406 may be relative to a setting that was previously set, in which case, user 104 can fine-tune the previous setting.
  • FIG. 10 is a flowchart illustrating an example operation 1000 of this disclosure.
  • computing system 108 may determine a setting for a hearing instrument, such as one of hearing instruments 102, using input received from user 104 via a UI.
  • computing system 108 may present a UI to user 104 (1000).
  • The UI may be similar to that of FIGS. 4A, 8A, 9, or any other UI that is able to solicit or elicit input from user 104 regarding adjustable marker values.
  • processor(s) 302 may provide UI 400 by one of devices 106 configured to interface with hearing instruments 102.
  • processors 208 may provide UI 400, for example, on one of devices 106, such as by generating user interface data for one of devices 106.
  • processors 208 may provide the user interface (e.g., UI 400).
  • UI 400 may be presented to user 104 on one of devices 106 paired to one or more hearing instruments 102.
  • the user interface may include one or more control indicators 402 that each correspond to a frequency band.
  • the user interface may include one or more control indicators 402 that each correspond to a frequency region.
  • a frequency region may include one or more discrete frequencies within or adjacent to a frequency band, such as within 50,
  • the frequency region adjacent to a frequency band may be determined based on an adjacent frequency band or frequency region, such that the settings may account for as much of the entire frequency spectrum of normal human hearing as is possible.
  • Control indicators 402 may each include markers 406 that are individually positioned along control indicators 402 to indicate marker values.
  • control indicators 402 and markers 406 may be integrated as a single interactive graphical unit. In other examples, control indicators 402 and markers 406 may all be separate interactive graphical units.
  • Processor(s) 302 may overlay the interactive graphical units as layered constructs of UI 400.
  • Processor(s) 302 may determine initial marker values for markers 406 along control indicators 402 of UI 400 (1002). As discussed above, processor(s) 302 may set initial values for markers 406 to 0 by default. In some examples, processor(s) 302 may preset one or more of markers 406 at certain non-zero positions. For example, user 104 may have already saved a setting for markers 406 but would like to make further adjustments or micro-adjustments. Processor(s) 302 may preset one or more of markers 406 to positions that correspond to the previously saved setting.
  • Processor(s) 302 may have stored the marker values or may perform a back calculation from the setting to derive the marker values. In some examples, processor(s) 302 may calculate assumed hearing thresholds based on information regarding user 104 (e.g., age, gender, history, etc.) and/or on a detected environment. Processor(s) 302 may set markers 406 at initial starting points above 0 based on the assumed hearing thresholds. For example, where user 104 used hearing instruments 102 in the past, processor(s) 302 may attempt to approximate the setting of the past hearing instruments 102 of user 104 and provide an initial position for markers 406. As such, processor(s) 302 may determine an initial marker value for one of control indicators 402 based at least in part on the initial position of one of markers 406. For example, the initial position may be 0 by default.
  • processor(s) 302 may determine that a change in state has occurred with respect to the initial marker value. For example, processor(s) 302 may detect that user 104 has manipulated a marker in some way. In a non-limiting example, processor(s) 302 may register a change in state as soon as user 104 provides input, such as a touch input, that selects one of markers 406 for manipulation.
  • Processor(s) 302 may determine adjusted values of markers 406 (1004). For example, processor(s) 302 may detect that user 104 has manipulated marker 406B for control indicator 402B to a new position along control indicator 402B via UI 400. Processor(s) 304 may determine an adjusted marker value for control indicator 402B based at least in part on the adjusted position of marker 406B. For example, processor(s) 302 may register a change in value of marker 406B as marker 406B moves along control indicator 402B from 0 to 1, 1 to 2, and 2 to 3.
  • processor(s) 302 may register marker values by determining when to increment a marker value and may commit the value to a specified register location in storage device(s) 316. For example, processor(s) 302 may determine that the value of marker 406B corresponds to a value of 3 based on the adjusted position of marker 406B.
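The quantize-and-commit behavior described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the function names, the 10-step track resolution, and the `storage` dictionary standing in for a register location in storage device(s) 316 are all assumptions.

```python
storage = {}  # stands in for a register location in storage device(s) 316


def register_marker_value(position, track_length, steps=10):
    """Quantize a marker's position along a control indicator
    (0..track_length) into an integer marker value (0..steps)."""
    if track_length <= 0:
        raise ValueError("track_length must be positive")
    clamped = min(max(position, 0.0), track_length)
    return round(clamped / track_length * steps)


def commit_marker_value(band, position, track_length):
    """Register the marker value and commit it to its storage slot."""
    value = register_marker_value(position, track_length)
    storage[band] = value
    return value
```

With these assumptions, a marker dragged 30% of the way along a 100-unit control indicator would register as the value 3.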
  • processor(s) 302 may receive a manual input of a marker value. For example, processor(s) 302 may provide a fillable field 408A-408N via UI 400 that corresponds to one or more frequency bands. Processor(s) 302 may register a value inputted via fillable field 408A-408N and use the input as adjusted values for subsequent use in determining a hearing threshold or profile setting. Processor(s) 302 may transmit the adjusted values to hearing instrument 200 or another computing device 300, where those values may be processed or stored upon receipt.
  • fillable fields 408A-408N are optional and other UI modes may be provided in order to elicit input from user 104.
  • processor(s) 302 may receive a command signaling that user 104 is done making adjustments, either temporarily or permanently, before processor(s) 302 register the adjusted values.
  • UI buttons may be provided on UI 400 that user 104 may activate to communicate at what stage user 104 is in the configuration process.
  • processor(s) 304 may register that marker 406B has been idle for a predetermined amount of time (e.g., 0.5 seconds,
  • processor(s) 302 register the adjusted values.
  • processor(s) 302 may automatically control adjustment of the marker positions based on feedback or input from user 104.
  • processor(s) 302 may cause markers 406 to move or stop moving in response to input received from user 104.
  • input device 308 may detect input from user 104 (e.g., hand gesture, eye movement, head movement, etc.).
  • Input device 308 may relay the detected input to processor(s) 302.
  • input device 308 may relay coordinates of a touch input, a length of time a touch input occurred, directionality of the touch input, an amount of pressure applied, and so forth.
  • Processor(s) 302 may use the input information to identify a corresponding action that processor(s) 302 are to take.
  • processor(s) 302 may reference a file stored in application module 322A that maps the input information to actions. In some examples, such a file includes logical arguments, decision trees, and so forth.
  • input device 308 may detect a first gesture (e.g., a hand gesture) that causes processor(s) 302 to portray marker 406A as moving in a first direction (e.g., upward along control indicator 402A).
  • a second gesture that causes processor(s) 302 to cease movement of marker 406A.
  • Input device 308 may detect a third gesture that causes processor(s) 302 to portray marker 406A as moving in a second direction (e.g., downward along control indicator 402A).
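The three-gesture scheme above can be sketched as a small dispatch table, loosely analogous to the file in application module 322A that maps input information to actions. The gesture labels, step size, and state representation are hypothetical.

```python
MARKER_STEP = 1  # marker-value units per recognized gesture (assumed)


def apply_gesture(gesture, marker_value, moving):
    """Return (new_marker_value, moving) after a detected gesture:
    'first' moves the marker up, 'second' stops it, 'third' moves it down."""
    actions = {
        "first": (MARKER_STEP, True),   # portray marker moving upward
        "second": (0, False),           # cease movement
        "third": (-MARKER_STEP, True),  # portray marker moving downward
    }
    if gesture not in actions:
        return marker_value, moving     # unrecognized input: no state change
    delta, now_moving = actions[gesture]
    return marker_value + delta, now_moving
```

A table like this keeps the input-to-action mapping declarative, so additional gestures (or a logical decision tree, as the disclosure also contemplates) can be added without changing the dispatch logic.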
  • a single gesture may cause processor(s) 302 to portray multiple markers 406 as moving.
  • Processor(s) 302 may adjust sound input based on the movement of markers 406. For example, processor(s) 302 may pre-condition sound that is to be transmitted to hearing instruments 102.
  • User 104 may select from UI 400 that user 104 is complete with the adjustment process, for example, by clicking a ‘DONE’ key (not shown) on UI 400.
  • Processor(s) 302 may receive the selection from user 104 via UI 400.
  • Processor(s) 302 may store the setting as a profile, memory, profile setting, or memory setting.
  • processor(s) 302 may update one or more profiles depending on the circumstances based on the setting.
  • Processor(s) 302 may toggle between profiles depending on the circumstances or environment of user 104.
  • selecting the ‘DONE’ key may signal to computing system 108 to perform accessing the mapping, identifying the hearing threshold, and/or determining the setting.
  • computing system 108 may not identify the hearing threshold, determine the setting and/or access the mapping until an affirmative action is taken by user 104 and in some cases, one or more of: identifying the hearing threshold, determining the setting and/or accessing the mapping may be optional.
  • the setting may be transmitted to the hearing instrument for final installation following identifying the hearing threshold and/or determining the profile setting.
  • Processor(s) 302 may then access a mapping of hearing thresholds that map to marker values (1006).
  • the mapping may be stored in memory of one or more of computing devices 106 or hearing instrument 200.
  • processor(s) 302 may access a mapping that identifies one or more relationships between marker values and hearing thresholds. For example, the mapping may link marker values and adjusted marker values to hearing thresholds. In some examples, a single mapping may link marker values to hearing thresholds with respect to a particular frequency band.
  • a first mapping may correspond to a first frequency band, whereas a second mapping may correspond to a second frequency band.
  • a single mapping may link marker values to hearing thresholds across a range of frequencies.
  • a single mapping may link marker values to hearing thresholds with respect to a plurality of frequency bands.
  • separate mappings may link marker values to hearing thresholds with respect to each of a plurality of frequency bands.
  • processor(s) 302 may estimate hearing threshold data points based on the mapping where certain data points are not provided for explicitly in the mapping but may be determined based on other data points of the mapping.
  • processor(s) 302 may use interpolation or extrapolation techniques to estimate a hearing threshold data point missing from the one or more mappings. For example, processor(s) 302 may estimate, from the one or more mappings, a hearing threshold data point from one or more other hearing threshold data points. This estimation may be done in order to identify the hearing threshold that corresponds to an adjusted marker value.
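A minimal sketch of such interpolation/extrapolation, assuming the mapping is a simple table from marker values to hearing thresholds in dB HL (the example table values are illustrative, not taken from the disclosure):

```python
def estimate_threshold(mapping, marker_value):
    """Estimate the hearing threshold (dB HL) for a marker value that may be
    missing from the mapping, via linear interpolation/extrapolation."""
    known = sorted(mapping)
    if marker_value in mapping:
        return float(mapping[marker_value])
    if marker_value <= known[0]:
        lo, hi = known[0], known[1]    # extrapolate below the table
    elif marker_value >= known[-1]:
        lo, hi = known[-2], known[-1]  # extrapolate above the table
    else:
        lo = max(k for k in known if k < marker_value)
        hi = min(k for k in known if k > marker_value)
    slope = (mapping[hi] - mapping[lo]) / (hi - lo)
    return mapping[lo] + slope * (marker_value - lo)


# Illustrative per-band mapping: marker value -> hearing threshold (dB HL)
band_mapping = {0: 0.0, 2: 20.0, 4: 40.0}
```

Under these assumptions, a marker value of 3 (absent from the table) would be interpolated to 30 dB HL, and a value of 5 extrapolated to 50 dB HL.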
  • Processor(s) 302 may use the mapping to identify a hearing threshold value that corresponds to the adjusted marker value (1008). In some examples, processor(s) 302 may identify a plurality of hearing threshold values that correspond to a single adjusted marker value. For example, processor(s) 302 may interpolate or extrapolate hearing threshold values for frequencies of a particular frequency band. In another example, processor(s) 302 may determine a plurality of hearing threshold values for a plurality of adjusted markers 406. In a non-limiting example with reference to FIG. 5A, processor(s) 302 may identify from the mapping a first hearing threshold value that corresponds to the adjusted position of marker 406B or the adjusted marker value.
  • processor(s) 302 may identify from the mapping a second, a third, and a fourth hearing threshold value based on the adjusted positions of marker 406N. In some examples, processor(s) 302 may identify a default hearing threshold value that corresponds to an unadjusted position of marker 406A. In such examples, processor(s) 302 may select the default hearing threshold based on programming instructions stored on storage device(s) 202 or storage device(s) 316.
  • a device manufacturer or user 104 may generate and store such programming instructions based on the results of a listening task (e.g., in which user 104 selects from a certain number of pre-configured defaults (e.g., 3 or 4)), answers to certain survey questions (e.g., “Do you currently use hearing instruments?”), pooled historical data for other individuals who have already determined their profile settings, or some combination thereof.
  • processor(s) 302 may identify from the mapping a hearing threshold value that corresponds to the adjusted position of one of markers 406 based on the frequency band or frequency to which the particular marker corresponds.
  • the mapping may compartmentalize hearing threshold values for each frequency band or frequency region, such that only certain compartments may be accessed, referenced, or utilized based on which of markers 406 are adjusted.
  • processor(s) 302 may register an adjustment to marker 406A and as such, may access, reference, or utilize a mapping for a particular frequency band that corresponds to marker 406A (e.g., low frequency band).
  • Processor(s) 302 may use the one or more hearing threshold values to determine a profile setting for hearing instrument 200 (1010). In some examples, processor(s) 302 may determine one or more settings to configure hearing instrument 200 based at least in part on the hearing threshold. Processor(s) 302 may use a standard fit formula to convert the identified hearing threshold to a setting for hearing instrument 200. In some examples, the setting includes a combined setting for the hearing thresholds at the various frequency bands. In some examples, processor(s) 302 may determine multiple settings. For example, in response to receiving an indication to further adjust marker 406A, processor(s) 302 may determine a second adjusted marker value for marker 406A or for markers 406 as a whole. Processor(s) 302 may then update the setting based at least in part on the second adjusted marker value.
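As an illustration of converting identified hearing thresholds to a setting, the sketch below uses a simplified half-gain rule in place of whatever standard fit formula an actual implementation would apply; the band labels and the gain-only output are assumptions.

```python
def fit_gain(threshold_db_hl):
    """Simplified half-gain rule: prescribe gain of roughly half the loss."""
    return max(0.0, threshold_db_hl / 2.0)


def determine_setting(band_thresholds):
    """band_thresholds: band label -> identified hearing threshold (dB HL).
    Returns a combined setting: band label -> prescribed gain (dB)."""
    return {band: fit_gain(t) for band, t in band_thresholds.items()}
```

For instance, thresholds of 30, 40, and 60 dB HL in three bands would yield a combined setting of 15, 20, and 30 dB of gain under this rule.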
  • processor(s) 302 may transmit the hearing threshold values to another computing device 300 or to hearing instrument 200 for processing.
  • processor(s) 302 may transmit the hearing threshold values to hearing instrument 200 or to a remote server, where processor(s) 208 or the remote server may determine the setting for hearing instrument 200 based on the received hearing threshold values.
  • computing device 300 may determine the setting and transmit instructions to implement the setting to one or more of hearing instruments 102.
  • computing device 300 may transmit instructions to another computing device 300, such as a smart television or vehicle, that may be paired to hearing instruments 102.
  • Computing device 300 may implement the setting on computing device 300 and pre-condition audio signals before transmitting the audio signals to hearing instruments 102.
  • the device receiving the setting for hearing instrument 200 may subsequently store the setting for hearing instrument 200 to a memory device.
  • any one of devices 106 or hearing instruments 102 may receive the setting for hearing instrument 200 and subsequently store the setting for one or more of hearing instruments 102 to a memory device, such as storage device(s) 202 or storage device(s) 316.
  • processor(s) 302 may determine multiple settings for one of hearing instruments 102 and subsequently cause any one of devices 106 or hearing instruments 102 to store a first set of the multiple settings to a first memory device, such as storage device(s) 316.
  • processor(s) 302 may cause one of devices 106 or hearing instruments 102 to store a second set of multiple settings to a second memory device, such as storage device(s) 202.
  • storing a setting may include storing a profile setting to cache memory or some other RAM or may include storing a profile setting to ROM depending on processing instructions.
  • processor(s) 302 may generate instructions that cause preliminary profile settings to be stored in RAM and final profile settings to be stored in ROM or transferred from RAM to ROM.
  • processor(s) 302 may generate instructions that cause the profile setting to be stored in a cloud storage device, either exclusively or as another copy of the profile setting, for subsequent access.
  • FIG. 11 is a flowchart illustrating an example operation 1100 of this disclosure.
  • hearing instrument 200 or device 300 may receive information transmitted from another device 106.
  • the other device 106 may be configured to present UI 400 to user 104.
  • device 106 may have a configuration application as one of the application module(s) 322 that is configured to present UI 400 to user 104.
  • Hearing instrument 200 or device 300 may receive a set of marker values from device 106 (1100).
  • the marker values correspond to the position of markers 406 along control indicators 402.
  • Hearing instrument 200 or device 300 may access the mapping of hearing thresholds (1102).
  • the mapping may be stored on storage device(s) 202 or on storage device(s) 316.
  • Hearing instrument 200 or device 300 may identify the hearing threshold that corresponds to the marker values (1104). This may be done for any number of control indicators 402, control indicators 802, or markers 914. For example, a hearing threshold may be determined for each adjusted or non-adjusted marker value for each of control indicators 402, control indicators 802, or markers 914.
  • Hearing instrument 200 or device 300 may determine one or more profile settings based on the determined hearing thresholds (1106). The profile setting may be a combination of gain, compression and frequency response parameters for a particular frequency band. In some examples, hearing instrument 200 or device 300 may individually determine multiple profile settings with respect to each of control indicators 402 (e.g., one profile setting for each frequency band).
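The per-band combination of gain, compression, and frequency-response parameters might be sketched as below. The specific parameter rules (half-gain, the compression-ratio growth, the 25 dB HL cutoff) are invented purely for illustration.

```python
def band_profile(threshold_db_hl):
    """Build one profile setting (gain, compression, frequency response)
    for a single frequency band from its hearing threshold."""
    return {
        "gain_db": threshold_db_hl / 2.0,                    # half-gain (assumed)
        "compression_ratio": 1.0 + threshold_db_hl / 100.0,  # gentle growth (assumed)
        "freq_response": "flat" if threshold_db_hl < 25.0 else "shaped",
    }


def determine_profile_settings(band_thresholds):
    """One profile setting per control indicator / frequency band."""
    return {band: band_profile(t) for band, t in band_thresholds.items()}
```

Structuring the result as one parameter bundle per band mirrors the disclosure's option of determining a separate profile setting for each control indicator.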
  • when processor(s) 302 detect an adjustment to markers 406, processor(s) 302 cause an adjustment to a plurality of sound parameters of hearing instruments 102 within a single frequency region or band. For example, adjusting a first marker results in processor(s) 302 adjusting the sound parameters for multiple frequencies. In another example, adjusting a first marker may result in processor(s) 302 determining an adjustment to a plurality of sound parameters within a single frequency region or band.
  • processor(s) 302 may detect an adjusted position of marker 406A. In response to detecting the adjusted position, processor(s) 302 may control or adjust sound parameters for the frequency region or band that corresponds to control indicator 402A, as well as control sound parameters for frequencies that are adjacent or related to the frequency region or band that corresponds to control indicator 402A (e.g., through extrapolation techniques). In some examples, the sound parameters include one or more of gain, frequency response, compression, etc. for hearing instruments 102.
  • device 300 may transmit the profile settings to the hearing instrument 200 (1108).
  • Hearing instrument 200 may implement the profile settings, for example, by activating the profile settings so that audio signals received through the hearing instrument 200 are conditioned based on the determined profile settings.
  • if the hearing instrument 200 itself determines the profile settings, then transmitting the profile settings to hearing instrument 200 is unnecessary. In some examples, the profile settings may be transmitted externally to another computing device 300 (e.g., a cloud server) for storage and later retrieval (e.g., from a network).
  • the settings may be stored and accessed at-will or automatically depending on the circumstances and environment surrounding device 300 or hearing instrument 200.
  • processor(s) 302 may use marker values to determine profile settings directly, without intermediately identifying hearing thresholds.
  • processor(s) 302 may identify a relationship between marker values and hearing instrument settings (e.g., gains).
  • Processor(s) 302 may be configured to identify the relationship (e.g., through ML models) or may receive such relationship information from an external source (e.g., a manufacturer, programmer, etc.). In some instances, the relationship may be based on processor(s) 302 observing how marker values map to hearing thresholds, which then map to profile settings, and once identifying that a relationship has been established between marker values and profile settings, processor(s) 302 may directly map marker values to profile settings.
  • a particular fitting formula may prescribe different settings (e.g., different gain) for different frequencies, regardless of whether the hearing thresholds are the same for those different frequencies.
  • processor(s) 302 may determine a frequency-specific mapping between marker values and profile settings, once identifying that a relationship has been established between marker values and profile settings.
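A direct, frequency-specific marker-to-setting mapping of the kind described might look like the following sketch, where the per-band gain increments are made-up values standing in for a learned or externally supplied relationship.

```python
# Hypothetical direct table: gain (dB) contributed per marker step, per band.
GAIN_DB_PER_STEP = {"500Hz": 4.0, "1kHz": 5.0, "2kHz": 6.0}


def direct_setting(band, marker_value):
    """Map a marker value straight to a gain setting for one frequency band,
    skipping the intermediate hearing-threshold lookup."""
    return GAIN_DB_PER_STEP[band] * marker_value
```

Giving each band its own increment reflects the point above that a fitting formula may prescribe different gains at different frequencies even for identical marker values.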
  • Example 1 A method including: providing a user interface by a device configured to interface with a hearing instrument, the user interface including a plurality of control indicators that each correspond to a frequency band, the control indicators each including markers that are individually positioned along the control indicators to indicate marker values; determining an initial marker value for a first control indicator based at least in part on an initial position of a first marker; determining that a change in state has occurred with respect to the initial marker value; determining a first adjusted marker value for the first control indicator based at least in part on an adjusted position of the first marker; accessing a mapping that identifies the one or more relationships between marker values and hearing thresholds; identifying, from the mapping, a hearing threshold that corresponds to the first adjusted marker value; determining one or more settings to configure the hearing instrument based at least in part on the hearing threshold; and storing the one or more settings for the hearing instrument to a memory device.
  • Example 2 A method according to Example 1, further including: adjusting a number of control indicators based on input received from a user.
  • Example 3 A method according to Example 2, wherein the input received from the user includes an indication that sound corresponding to at least one of the frequency bands is not satisfactory.
  • Example 4 A method according to any combination of Examples 1 through 3, further including: registering a gesture that indicates that a scale of the control indicator is to be adjusted.
  • Example 5 A method according to any combination of Examples 1 through 4, further including: registering the adjusted marker value as a fractional change from the initial marker value.
  • Example 6 A method according to any combination of Examples 1 through 5, wherein the control indicators include interactive graphical units presented to a user via the user interface, wherein the markers are configured to be slid along the control indicators.
  • Example 7 A method according to any combination of Examples 1 through 6, wherein the mapping links marker values to hearing thresholds with respect to the frequency band that corresponds to the first control indicator.
  • Example 8 A method according to any combination of Examples 1 through 7, wherein the mapping links marker values to hearing thresholds with respect to each of the frequency bands that correspond to the plurality of control indicators.
  • Example 9 A method according to any combination of Examples 1 through 8, wherein identifying the hearing threshold further includes estimating, from the mapping, a hearing threshold data point from one or more other hearing threshold data points.
  • Example 10 A method according to any combination of Examples 1 through 9, further including: in response to receiving an indication to further adjust the first marker, determining a second adjusted marker value; and updating the one or more settings based at least in part on the second adjusted marker value.
  • Example 11 A method according to any combination of Examples 1 through 10, further including: receiving, via the user interface, an indication that the one or more settings are ready to be finalized; and generating an instruction to implement the one or more settings.
  • Example 12 A method according to any combination of Examples 1 through 11, wherein the hearing instrument includes a left hearing instrument, and wherein a second of one or more settings is determined for a right hearing instrument.
  • Example 13 A method according to any combination of Examples 1 through 12, wherein the hearing threshold corresponds to a minimum setting at which a user can perceive sound with respect to a particular frequency band.
  • Example 14 A method according to any combination of Examples 1 through 13, wherein adjusting the first marker results in an adjustment to a plurality of sound parameters of the hearing instrument within a single frequency region or band.
  • Example 15 A method according to Example 14, wherein the sound parameters include one or more of: gain, frequency response, and compression for the hearing instrument.
  • Example 16 A method according to any combination of Examples 1 through 15, further including: providing a second user interface identifying at least one additional control indicator and at least one additional marker; detecting the adjustment to the at least one additional marker; and updating the one or more settings in response to detecting the adjustment to the at least one additional marker.
  • Example 17 A method according to any combination of Examples 1 through 16, further including: determining a hearing threshold step-size that corresponds to a particular marker value adjustment; and detecting an additional adjustment to the first marker that corresponds to a change in hearing threshold that is less than, equal to, or greater than the hearing threshold step-size.
  • Example 18 A method according to any combination of Examples 1 through 17, wherein the initial marker value is zero and wherein the first adjusted marker value corresponds to a X-decibel change in hearing threshold, where “X” corresponds to a predetermined decibel value.
  • Example 19 A method according to any combination of Examples 1 through 18, further including: applying a transfer function to the one or more settings to determine a second set of one or more settings.
  • Example 20 A method according to Example 19, further including: determining one or more characteristics of incoming audio, wherein the transfer function is based at least in part on the one or more characteristics.
  • Example 21 A method according to any combination of Examples 1 through 20, wherein the frequency bands are delimited as frequencies of sounds that a user is likely to encounter in a particular environment.
  • Example 22 A method according to any combination of Examples 1 through 21, wherein the method is performed by a personal computing device.
  • Example 23 A method according to any combination of Examples 1 through 22, wherein the personal computing device includes the memory device that stores the one or more settings.
  • Example 24 A method according to any of Examples 22 or 23, wherein the personal computing device includes the mapping.
  • Example 25 A method according to any combination of Examples 1 through 24, further including: receiving media data from another device.
  • Example 26 A method according to any combination of Examples 1 through 25, further including: transmitting the one or more settings.
  • Example 27 A device configured to determine hearing instrument settings, the device including: a memory configured to store a mapping that identifies one or more relationships between marker values and hearing thresholds; and one or more processors coupled to the memory, and configured to: provide a user interface including a plurality of control indicators that each correspond to a frequency band, the control indicators each including markers that are individually positioned along the control indicators to indicate marker values; determine an initial marker value for a first control indicator based at least in part on an initial position of a first marker; determine that a change in state has occurred with respect to the initial marker value; determine a first adjusted marker value for the first control indicator based at least in part on an adjusted position of the first marker; access the mapping that identifies the one or more relationships between marker values and hearing thresholds; identify, from the mapping, a hearing threshold that corresponds to the first adjusted marker value; and determine one or more settings to configure the hearing instrument based at least in part on the hearing threshold.
  • Example 28 A device according to Example 27, wherein the device is further configured to adjust a number of control indicators based on input received from a user.
  • Example 29 A device according to Example 28, wherein the input received from the user includes an indication that sound corresponding to at least one of the frequency bands is not satisfactory.
  • Example 30 A device according to any combination of Examples 27 through 29, wherein the device is further configured to register a gesture that indicates that a scale of the control indicators is to be adjusted.
  • Example 31 A device according to any combination of Examples 27 through 30, wherein the adjusted marker value is registered as a fractional change from the initial marker value.
  • Example 32 A device according to Example 31, wherein the device is further configured to store the one or more settings for the hearing instrument.
  • Example 33 A device according to any combination of Examples 27 through 32, wherein the device is a personal computing device.
  • Example 34 A device according to any combination of Examples 27 through 33, wherein the device is further configured to transmit the mapping.
  • Example 35 A device according to any combination of Examples 27 through 34, wherein the device is further configured to receive media data from another device.
  • Example 36 A device according to any combination of Examples 27 through 35, wherein the device is further configured to transmit the one or more settings.
  • Example 37 A device according to any combination of Examples 27 through 36, wherein the device is further configured to: provide a second user interface identifying at least one additional control indicator and at least one additional marker; detect an adjustment to the at least one additional marker; and update the one or more settings in response to detecting the adjustment to the at least one additional marker.
  • Example 38 A device according to any combination of Examples 27 through 37, wherein the device is further configured to: determine a hearing threshold step-size that corresponds to a particular marker value adjustment; and detect an additional adjustment to the first marker that corresponds to a change in hearing threshold that is less than, equal to, or greater than the hearing threshold step-size.
  • Example 39 A device according to any combination of Examples 27 through 38, wherein the initial marker value is zero and wherein the first adjusted marker value corresponds to a X-decibel change in hearing threshold, where “X” corresponds to a predetermined decibel value.
  • Example 40 A device according to any combination of Examples 27 through 39, wherein the device is further configured to apply a transfer function to the one or more settings to determine a second set of one or more settings.
  • Example 41 A device according to Example 40, wherein the device is further configured to determine one or more characteristics of incoming audio, wherein the transfer function is based at least in part on the one or more characteristics.
  • Example 42 A device according to any combination of Examples 27 through 41, wherein the frequency bands are delimited as frequencies of sounds that a user is likely to encounter in a particular environment.
  • Example 43 A device according to any combination of Examples 27 through 42, wherein the control indicators take the form of interactive graphical units presented to a user via the user interface, wherein the markers are configured to be slid along the control indicators.
  • Example 44 A device according to any combination of Examples 27 through 43, wherein the mapping links marker values to hearing thresholds with respect to the frequency band that corresponds to the first control indicator.
  • Example 45 A device according to any combination of Examples 27 through 44, wherein the mapping links marker values to hearing thresholds with respect to each of the frequency bands that correspond to the plurality of control indicators.
  • Example 46 A device according to any combination of Examples 27 through 45, wherein the device is further configured to: estimate, from the mapping, a hearing threshold data point from one or more other hearing threshold data points.
  • Example 47 A device according to any combination of Examples 27 through 46, wherein the device is further configured to: in response to receiving an indication to further adjust the first marker, determine a second adjusted marker value; and update the one or more settings based at least in part on the second adjusted marker value.
  • Example 48 A device according to any combination of Examples 27 through 47, wherein the device is further configured to: receive, via the user interface, an indication that the one or more settings are ready to be finalized; and generate an instruction to implement the setting.
  • Example 49 A device according to any combination of Examples 27 through 48, wherein the hearing instrument includes a left hearing instrument, and wherein a second of one or more settings is determined for a right hearing instrument.
  • Example 50 A device according to any combination of Examples 27 through 49, wherein the hearing threshold corresponds to a minimum setting at which a user can perceive sound with respect to a particular frequency band.
  • Example 51 A device according to any combination of Examples 27 through 50, wherein adjusting the first marker results in an adjustment to a plurality of sound parameters of the hearing instrument within a single frequency region or band.
  • Example 52 A device according to Example 51, wherein the sound parameters include: gain, frequency response, and/or compression for the hearing instrument.
  • Example 53 A method including: providing a user interface by a device configured to interface with a hearing instrument, the user interface including a control indicator that corresponds to a frequency band, the control indicator including a marker positioned along the control indicator to indicate a marker value; determining an initial marker value for the control indicator based at least in part on an initial position of the marker; determining an adjusted marker value for the control indicator based at least in part on an adjusted position of the marker; accessing a mapping that identifies one or more relationships between marker values and hearing thresholds; identifying, from the mapping, a hearing threshold that corresponds to the adjusted marker value; determining one or more settings to configure the hearing instrument based at least in part on the hearing threshold; and storing the one or more settings for the hearing instrument to a memory device.
  • Example 54 A method according to Example 53, further including: adjusting a number of control indicators to include a plurality of control indicators based on input received from a user.
  • Example 55 A method according to Example 54, wherein the input received from the user includes an indication that sound corresponding to the frequency band is not satisfactory.
  • Example 56 A method according to any combination of Examples 53 through 55, further including: registering a gesture that indicates that a scale of the control indicator is to be adjusted.
  • Example 57 A method according to any combination of Examples 53 through 56, further including: registering the adjusted marker value as a fractional change from the initial marker value.
  • Example 58 A method according to any combination of Examples 53 through 57, wherein the control indicator is represented by an interactive graphical unit presented to a user via the user interface, wherein the marker is configured to be slid along the control indicator.
  • Example 59 A method according to any combination of Examples 53 through 58, wherein the mapping links marker values to hearing thresholds with respect to the frequency band.
  • Example 60 A method according to any combination of Examples 53 through 59, wherein identifying the hearing threshold further includes estimating, from the mapping, a hearing threshold data point from one or more other hearing threshold data points.
  • Example 61 A method according to any combination of Examples 53 through 60, further including: in response to receiving an indication to further adjust the first marker, determining a second adjusted marker value; and updating the one or more settings based at least in part on the second adjusted marker value.
  • Example 62 A method according to any combination of Examples 53 through 61, further including: receiving, via the user interface, an indication that the one or more settings are ready to be finalized; and generating an instruction to implement the one or more settings with respect to the hearing instrument.
  • Example 63 A method according to any combination of Examples 53 through 62, wherein the hearing instrument includes a left hearing instrument, and wherein one or more right hearing instrument settings are determined for a right hearing instrument.
  • Example 64 A method according to any combination of Examples 53 through 63, wherein the hearing threshold corresponds to a minimum setting at which a user can perceive sound with respect to the frequency band.
  • Example 65 A method according to any combination of Examples 53 through 64, wherein adjusting the first marker results in an adjustment to a plurality of sound parameters of the hearing instrument within the frequency band.
  • Example 66 A method according to Example 65, wherein the sound parameters include one or more of: gain, frequency response, and/or compression for the hearing instrument.
  • Example 67 A method according to any combination of Examples 53 through 66, further including: providing a second user interface identifying at least one additional control indicator and at least one additional marker; detecting the adjustment to the at least one additional marker; and updating the one or more settings in response to detecting the adjustment to the at least one additional marker.
  • Example 68 A method according to any combination of Examples 53 through 67, further including: determining a hearing threshold step-size that corresponds to a particular marker value adjustment; and detecting an additional adjustment to the first marker that corresponds to a change in hearing threshold that is less than, equal to, or greater than the hearing threshold step-size.
  • Example 69 A method according to any combination of Examples 53 through 68, wherein the initial marker value is zero and wherein the first adjusted marker value corresponds to an X-decibel change in hearing threshold, where “X” corresponds to a predetermined decibel value.
  • Example 70 A method according to any combination of Examples 53 through 69, further including: applying a transfer function to the one or more settings to determine a second set of one or more settings.
  • Example 71 A method according to Example 70, further including: determining one or more characteristics of incoming audio, wherein the transfer function is based at least in part on the one or more characteristics.
  • Example 72 A method according to any combination of Examples 53 through 71, wherein the frequency band is delimited as frequencies of sounds that a user is likely to encounter in a particular environment.
  • Example 73 A method according to any combination of Examples 53 through 72, wherein the method is performed by a personal computing device.
  • Example 74 A method according to Example 73, wherein the personal computing device includes the memory device that stores the one or more settings.
  • Example 75 A method according to any of Examples 73 or 74, wherein the personal computing device includes the mapping.
  • Example 76 A method according to any combination of Examples 53 through 75, further including: receiving media data from another device.
  • Example 77 A method according to any combination of Examples 53 through 76, further including: transmitting the one or more settings.
  • Example 78 A device configured to determine hearing instrument settings, the device including: a memory configured to store a mapping that identifies one or more relationships between marker values and hearing thresholds; and one or more processors coupled to the memory, and configured to: provide a user interface including a control indicator that corresponds to a frequency band, the control indicator including a marker that is positioned along the control indicator to indicate a marker value; determine an initial marker value for the control indicator based at least in part on an initial position of the marker; determine that a change in state has occurred with respect to the initial marker value; determine an adjusted marker value for the control indicator based at least in part on an adjusted position of the marker; access the mapping that identifies the relationship between marker values and hearing thresholds; identify, from the mapping, a hearing threshold that corresponds to the adjusted marker value; and determine one or more settings to configure the hearing instrument based at least in part on the hearing threshold.
  • Example 79 A device according to Example 78, wherein the device is further configured to adjust a number of control indicators based on input received from a user.
  • Example 80 A device according to Example 79, wherein the input received from the user includes an indication that sound corresponding to the frequency band is not satisfactory.
  • Example 81 A device according to any combination of Examples 78 through 80, wherein the device is further configured to register a gesture that indicates that a scale of the control indicators is to be adjusted.
  • Example 82 A device according to any combination of Examples 78 through 81, wherein the adjusted marker value is registered as a fractional change from the initial marker value.
  • Example 83 A device according to Example 82, wherein the device is further configured to store the one or more settings for the hearing instrument.
  • Example 84 A device according to any combination of Examples 78 through 83, wherein the device is a personal computing device.
  • Example 85 A device according to any combination of Examples 78 through 84, wherein the device is further configured to transmit the mapping.
  • Example 86 A device according to any combination of Examples 78 through 85, wherein the device is further configured to receive media data from another device.
  • Example 87 A device according to Example 86, wherein the device is further configured to transmit the one or more settings.
  • Example 88 A device according to any combination of Examples 78 through 87, wherein the device is further configured to: provide a second user interface identifying at least one additional control indicator and at least one additional marker; detect an adjustment to the at least one additional marker; and update the one or more settings in response to detecting the adjustment to the at least one additional marker.
  • Example 89 A device according to any combination of Examples 78 through 88, wherein the device is further configured to: determine a hearing threshold step-size that corresponds to a particular marker value adjustment; and detect an additional adjustment to the first marker that corresponds to a change in hearing threshold that is less than, equal to, or greater than the hearing threshold step-size.
  • Example 90 A device according to any combination of Examples 78 through 89, wherein the initial marker value is zero and wherein the first adjusted marker value corresponds to an X-decibel change in hearing threshold, where “X” corresponds to a predetermined decibel value.
  • Example 91 A device according to any combination of Examples 78 through 90, wherein the device is further configured to apply a transfer function to the one or more settings to determine a second set of one or more settings.
  • Example 92 A device according to Example 91, wherein the device is further configured to determine one or more characteristics of incoming audio, wherein the transfer function is based at least in part on the one or more characteristics.
  • Example 93 A device according to any combination of Examples 78 through 92, wherein the frequency band is delimited as frequencies of sounds that a user is likely to encounter in a particular environment.
  • Example 94 A device according to any combination of Examples 78 through 93, wherein the control indicator is represented as an interactive graphical unit presented to a user via the user interface, wherein the marker is configured to be slid along the control indicator.
  • Example 95 A device according to any combination of Examples 78 through 94, wherein the mapping links marker values to hearing thresholds with respect to the frequency band.
  • Example 96 A device according to any combination of Examples 78 through 95, wherein the device is further configured to: estimate, from the mapping, a hearing threshold data point from one or more other hearing threshold data points to identify the hearing threshold.
  • Example 97 A device according to any combination of Examples 78 through 96, wherein the device is further configured to: in response to receiving an indication to further adjust the first marker, determine a second adjusted marker value; and update the one or more settings based at least in part on the second adjusted marker value.
  • Example 98 A device according to any combination of Examples 78 through 97, wherein the device is further configured to: receive, via the user interface, an indication that the one or more settings are ready to be finalized; and generate an instruction to implement the one or more settings with respect to the hearing instrument.
  • Example 99 A device according to any combination of Examples 78 through 98, wherein the hearing instrument includes a left hearing instrument, and wherein one or more right hearing instrument settings are determined for a right hearing instrument.
  • Example 100 A device according to any combination of Examples 78 through 99, wherein the hearing threshold corresponds to a minimum setting at which a user can perceive sound with respect to the frequency band.
  • Example 101 A device according to any combination of Examples 78 through 100, wherein adjusting the first marker results in an adjustment to a plurality of sound parameters of the hearing instrument within the frequency band.
  • Example 102 A device according to Example 101, wherein the sound parameters include one or more of: gain, frequency response, and/or compression for the hearing instrument.
  • Ordinal terms such as “first,” “second,” “third,” and so on, are not necessarily indicators of positions within an order, but rather may be used to distinguish different instances of the same thing. Examples provided in this disclosure may be used together, separately, or in various combinations. Furthermore, with respect to examples that involve personal data regarding user 104, it may be required that such personal data only be used with the permission of user 104.
  • Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol.
  • Computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processing circuits to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure.
  • A computer program product may include a computer-readable medium.
  • Such computer-readable storage media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, cache memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
  • Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • Processing circuitry may include one or more processors, such as one or more DSPs, processing systems, general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
  • Processing circuits may be coupled to other components in various ways. For example, a processing circuit may be coupled to other components via an internal device interconnect, a wired or wireless network connection, or another communication medium.
  • Any of the other devices described herein may perform or share in the performance of some or all aspects of the disclosed technology.
  • Hearing instrument 200, a separate computing device 300, a computing system, or a combination thereof, may perform some or all of the techniques or actions described herein.
  • Some or all of the techniques described herein may be performed by personal computing device 300.
  • Personal computing device 300 may store the one or more settings (e.g., profile settings) for subsequent use, retrieval, finalization, and/or implementation.
  • Personal computing device 300 may transmit data to another device to perform the mapping.
  • Personal computing device 300 may receive the hearing threshold mapping from another device, such as a remote server, and perform the mapping on personal computing device 300.
  • Personal computing device 300 may already include the mapping (e.g., stored in a memory device, such as storage device(s) 316 of personal computing device 300, as shown in FIG. 3).
  • A hearing instrument 200 may be designed to be interchangeable between a left and right ear. In such instances, multiple profile settings may be stored for a right ear and a left ear. In any case, the configuration process may be performed separately for the right and left ear of user 104.
  • The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set).
  • Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
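The flow recited in Examples 53 through 77 — read a marker position, map it to a hearing threshold, estimate any missing band by interpolation (Examples 46 and 60), and derive per-band settings — can be sketched in a few lines. Everything below is an illustrative assumption, not the patented fitting logic: the linear 5 dB-per-step mapping, the band frequencies, and the "half-gain" rule are placeholders for whatever mapping table and prescriptive formula an actual implementation would use.

```python
# Illustrative sketch of the marker-to-threshold flow of Example 53.
# The 5 dB-per-step mapping, the band frequencies, and the half-gain
# rule are demonstration assumptions, not the claimed fitting formula.

def marker_to_threshold(marker_value, db_per_step=5.0, base_db=0.0):
    """Map a slider marker value to a hearing threshold in dB HL.

    Each marker step is assumed to equal a fixed X-decibel change in
    threshold (cf. Examples 69 and 90, with X = 5 here).
    """
    return base_db + marker_value * db_per_step


def interpolate_threshold(band_hz, known):
    """Estimate one band's threshold from neighboring data points
    (cf. Examples 46 and 60), via linear interpolation over frequency."""
    pts = sorted(known.items())
    if band_hz <= pts[0][0]:          # below the lowest known band
        return pts[0][1]
    if band_hz >= pts[-1][0]:         # above the highest known band
        return pts[-1][1]
    for (f0, t0), (f1, t1) in zip(pts, pts[1:]):
        if f0 <= band_hz <= f1:
            return t0 + (band_hz - f0) / (f1 - f0) * (t1 - t0)


def derive_settings(thresholds):
    """Derive a per-band gain setting from thresholds. The 'half-gain'
    heuristic is a stand-in for a real prescriptive fitting formula."""
    return {band: round(0.5 * thr, 1) for band, thr in thresholds.items()}


# The user positions markers on two control indicators; a third band's
# threshold is estimated from its neighbors, then settings are derived.
markers = {500: 4, 2000: 10}          # marker positions per band (Hz)
thresholds = {f: marker_to_threshold(m) for f, m in markers.items()}
thresholds[1000] = interpolate_threshold(1000, thresholds)
settings = derive_settings(thresholds)  # per-band gain, in dB
```

With these assumptions, the markers above yield thresholds of 20, 50, and an interpolated 30 dB HL, and per-band gains of 10, 25, and 15 dB; a real device would substitute its own stored mapping and fitting rationale for each step.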

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

A user interface for determining a setting of a hearing instrument is described. The user interface allows a user to configure a hearing instrument using manipulable control indicators and markers. The hearing instruments may reference a mapping that links marker values to hearing thresholds with respect to particular frequency bands. Processors of the hearing instruments, or of computing devices coupled to a hearing instrument, may use a user's hearing thresholds identified across multiple frequency bands to determine a setting for configuring the hearing instruments.
PCT/US2020/044847 2019-08-05 2020-08-04 Interface utilisateur d'ajustement dynamique de réglages d'appareils auditifs WO2021026126A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20764490.7A EP4011098A1 (fr) 2019-08-05 2020-08-04 Interface utilisateur d'ajustement dynamique de réglages d'appareils auditifs

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962882878P 2019-08-05 2019-08-05
US62/882,878 2019-08-05

Publications (1)

Publication Number Publication Date
WO2021026126A1 true WO2021026126A1 (fr) 2021-02-11

Family

ID=72291100

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/044847 WO2021026126A1 (fr) 2019-08-05 2020-08-04 Interface utilisateur d'ajustement dynamique de réglages d'appareils auditifs

Country Status (2)

Country Link
EP (1) EP4011098A1 (fr)
WO (1) WO2021026126A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114554379A (zh) * 2022-02-11 2022-05-27 海沃源声科技(深圳)有限公司 助听器验配方法、装置、充电盒和计算机可读介质
WO2023278062A1 (fr) * 2021-06-27 2023-01-05 Eargo, Inc. Évaluation audiologique in situ et réglage personnalisé de dispositifs auditifs

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5835611A (en) * 1994-05-25 1998-11-10 Siemens Audiologische Technik Gmbh Method for adapting the transmission characteristic of a hearing aid to the hearing impairment of the wearer
WO2010091480A1 (fr) * 2009-02-16 2010-08-19 Peter John Blamey Ajustement automatisé de dispositifs auditifs
JP2012182647A (ja) * 2011-03-01 2012-09-20 Panasonic Corp 補聴器調整装置
US20170046120A1 (en) * 2015-06-29 2017-02-16 Audeara Pty Ltd. Customizable Personal Sound Delivery System

Also Published As

Publication number Publication date
EP4011098A1 (fr) 2022-06-15

Similar Documents

Publication Publication Date Title
EP4125279A1 (fr) Procédé et appareil d'ajustement pour un écouteur auditif
US9930456B2 (en) Method and apparatus for localization of streaming sources in hearing assistance system
US9613028B2 (en) Remotely updating a hearing and profile
KR101779641B1 (ko) 보청기를 가진 개인 통신 장치 및 이를 제공하기 위한 방법
CN106233754B (zh) 听力辅助设备控制
EP2870779B1 (fr) Méthode et dispostif pour l'adaptation des prothèses auditives, pour instruire des personnes à entendre avec des prothèses auditives et/ou pour des tests audiométriques diagnostiques de personnes utilisant des prothèses auditives
CN105392096B (zh) 双耳听力系统及方法
CN108540899B (zh) 包含用户交互式听觉显示器的听觉设备
US9894446B2 (en) Customization of adaptive directionality for hearing aids using a portable device
US9398386B2 (en) Method for remote fitting of a hearing device
US10284943B2 (en) Method and apparatus for adjusting sound field of an earphone and a terminal
JP6193844B2 (ja) 選択可能な知覚空間的な音源の位置決めを備える聴覚装置
US11595766B2 (en) Remotely updating a hearing aid profile
US20220201404A1 (en) Self-fit hearing instruments with self-reported measures of hearing loss and listening
US20230262391A1 (en) Devices and method for hearing device parameter configuration
WO2021026126A1 (fr) Interface utilisateur d'ajustement dynamique de réglages d'appareils auditifs
US20190141462A1 (en) System and method for performing an audiometric test and calibrating a hearing aid
EP4290886A1 (fr) Capture de statistiques de contexte dans des instruments auditifs
US20190090057A1 (en) Audio processing device
CN116367050A (zh) 处理音频信号的方法、存储介质、电子设备和音频设备
CN116208886A (zh) 用于调节扬声器音频的方法、主机以及计算机可读介质
TWM552614U (zh) 智慧型耳機裝置個人化系統

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20764490

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020764490

Country of ref document: EP

Effective date: 20220307