WO2023081219A1 - Normal hearing simulator - Google Patents

Normal hearing simulator

Info

Publication number: WO2023081219A1
Authority: WIPO (PCT)
Prior art keywords: hearing, audio, user, normal, simulator
Application number: PCT/US2022/048715
Other languages: English (en)
Inventors: Vince VAN DE WEIJER, Sjors VAN DE WEIJER
Original Assignee: Eargo, Inc.
Application filed by Eargo, Inc.
Publication of WO2023081219A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00: Circuits for transducers, loudspeakers or microphones
    • H04R 3/04: Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/12: Audiometering
    • A61B 5/121: Audiometering evaluating hearing capacity
    • A61B 5/123: Audiometering evaluating hearing capacity, subjective methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 2205/00: Details of stereophonic arrangements covered by H04R 5/00 but not provided for in any of its subgroups
    • H04R 2205/041: Adaptation of stereophonic signal reproduction for the hearing impaired
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R 25/70: Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting

Definitions

  • Embodiments generally relate to audiology awareness, hearing screening, software applications, etc.
  • An aspect of an embodiment relates to simulating normal hearing so that a person can experience many different hearing scenarios and the effects of manipulating audio in accordance with hearing screening results and/or audiological profiles.
  • An aspect of an embodiment also intentionally collects personal hearing data (e.g., audiogram, speech tests, hearing health questionnaires, etc.), and then manipulates audio files to simulate normal-like hearing.
  • Hearing loss is often a result of sensorineural deterioration in the auditory system and is common among people.
  • Hearing aids are the main treatment for hearing loss, but several factors may prevent people from obtaining solutions for hearing loss. Since hearing loss follows a slow and progressive process, sufferers generally are not aware of the lost capabilities and its impact on their wellbeing. Obtaining a standard hearing test and then, if needed, a hearing aid tailored to compensate for that person’s particular hearing loss can be a long, expensive, and for some people an embarrassing experience.
  • Figure 1 shows side by side diagrams of the results of a standard audiogram for a hearing test for a person experiencing ‘normal hearing’ and a person experiencing a certain degree of mild hearing loss.
  • The test frequencies, measured in Hertz (Hz), usually range from 500 Hz (around the middle of a piano’s scale) up to 6000 or 8000 Hz (a little above the highest note a piano can play). …
  • Your hearing threshold levels are measured in decibels (dB) at different frequencies from low (500 Hz) to high (8000 Hz). …
  • the example diagram (e.g. audiogram) shows test results for a person experiencing ‘normal hearing’ 020 and a person experiencing mild hearing loss 025.
  • A hearing loss of up to 20 decibels below the hearing threshold is still considered to be normal hearing. More severe hearing loss can be described according to severity, as follows: Mild hearing loss: hearing loss of 20 to 40 decibels; Moderate hearing loss: hearing loss of 41 to 60 decibels; Severe hearing loss: hearing loss of 61 to 80 decibels; and Profound hearing loss or deafness: hearing loss of more than 81 decibels. (Source: https://www.ncbi.nlm.nih.gov/books/NBK390300/)
  • Normal hearing is either -20 decibels or -25 decibels from a standard audiogram test for that frequency being tested.
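The severity bands quoted above map directly to a small classifier. The sketch below is illustrative only, not part of the patent:

```python
# Illustrative helper (not from the patent): map a hearing threshold
# level in dB HL to the severity bands quoted above.
def classify_severity(threshold_db):
    """Return the severity label for a hearing threshold (dB HL)."""
    if threshold_db <= 20:
        return "normal"    # up to 20 dB is still considered normal
    if threshold_db <= 40:
        return "mild"      # 20 to 40 dB
    if threshold_db <= 60:
        return "moderate"  # 41 to 60 dB
    if threshold_db <= 80:
        return "severe"    # 61 to 80 dB
    return "profound"      # more than 81 dB
```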
  • the design is directed to hearing simulation.
  • a normal-like hearing simulator may consist of several example modules and other components.
  • A hearing-profile collector module is coded to at least one of i) load a local file of audiometry data associated with a user from a local database, ii) import audiometry data associated with the user through a communications platform module into the normal-like hearing simulator, iii) collect audiometry data from a hearing screening facilitated by a hearing screening module in real-time to determine a hearing capability of the user under test, and/or iv) collect data from a questionnaire, and then to store a profile of the hearing capability of the user in the local database.
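The collector's source priority could be sketched roughly as follows; the function names and the dict-based stand-in for the local database are assumptions for illustration, not the patent's implementation:

```python
# Hypothetical sketch of the collector's fallback order: local database,
# then an external import, then a live hearing screening.
def collect_profile(user_id, local_db, import_fn=None, screening_fn=None):
    """Return an audiometry profile for the user, storing it for re-use."""
    profile = local_db.get(user_id)          # i) local file/database
    if profile is None and import_fn is not None:
        profile = import_fn(user_id)         # ii) import via comms platform
    if profile is None and screening_fn is not None:
        profile = screening_fn(user_id)      # iii) real-time screening
    if profile is not None:
        local_db[user_id] = profile          # store in the local database
    return profile
```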
  • An audio environment selector module is coded to 1 ) determine a combination of at least two audio tracks to be present in an audio environment that the user is exposed to, and 2) to cooperate with a multitrack audio file combinator module to create a multitrack audio file combination of the at least two audio tracks in the audio environment.
  • the audio manipulation module and the multitrack audio-file combinator module are coded to cooperate to manipulate the audio characteristics of one or more of the audio tracks in the multitrack audio file combination that has two or more audio tracks in the audio environment to compensate for audio hearing loss based upon the profile of the hearing capability of the user.
  • An audio playback module is coded to play the audio tracks that have had the audio characteristics manipulated and combined into the multitrack audio file combination through speakers (for example: headphones, earbuds, speaker cabinets, etc.) coupled to the normal-like hearing simulator to allow the user to experience what normal hearing would sound like.
  • Any portions of the audio playback module, the multitrack audio-file combinator module, the audio manipulation module, the hearing screening module, the audio environment selector module, and the hearing-profile collector module coded in software are stored on one or more computing device readable mediums and executed by one or more processors.
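A minimal sketch of the multitrack combination step the modules above cooperate on, assuming (purely for illustration) that tracks are equal-rate lists of float samples in [-1.0, 1.0]:

```python
# Illustrative combinator: sum tracks sample-by-sample into one mix and
# clip to the valid range. Representation is an assumption, not the
# patent's audio format.
def combine_tracks(*tracks):
    """Mix any number of sample lists into a single clipped track."""
    if not tracks:
        return []
    length = min(len(t) for t in tracks)
    mix = []
    for i in range(length):
        s = sum(t[i] for t in tracks)
        mix.append(max(-1.0, min(1.0, s)))  # clip to [-1, 1]
    return mix
```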
  • Figure 1 shows side by side diagrams of the results of a standard audiogram for a hearing test for a person experiencing ‘normal hearing’ and a person experiencing some mild hearing loss.
  • Figure 2 illustrates a diagram of an embodiment of an example normal-like hearing simulator implemented on a computing device (desktop computer, kiosk, cloud server, etc.) with a number of modules and other components.
  • Figure 3 illustrates a diagram of an embodiment of example real-time processing functions and steps ‘performed on the device’ and/or during the testing of the person/user hearing the normal-like hearing simulation with the computing device implementing the normal-like hearing simulator.
  • Figure 4 illustrates a diagram of an embodiment example pre-processing functions and steps ‘performed off the computing device’ and/or not during the testing of the person/user hearing the normal-like hearing simulation.
  • Figure 5 illustrates a diagram of an embodiment of an example communications network for the normal-like hearing simulator
  • Figure 6 illustrates a diagram of an embodiment of an example user interface for the normal-like hearing simulator.
  • the normal-like hearing simulator has many features, and some example features will be discussed below.
  • An aspect of an embodiment also intentionally collects personal hearing data (e.g., audiogram or other hearing health related data), and then manipulates audio files to simulate normal hearing.
  • a hearing assistance device can be a hearing aid, earbuds, headphones, etc., that assists with a hearing capability of an individual.
  • One factor of hearing loss is that the person suffering from it has limited ability to detect the impact of declining hearing health. Since hearing loss follows a slow and progressive process, sufferers generally are not aware of the lost capabilities and their impact on wellbeing.
  • a normal-like hearing simulation system can enable people with hearing loss to experience normal hearing again; thereby, creating awareness for the missed opportunities and stimulating a person suffering from hearing loss to look for solutions.
  • FIG. 2 illustrates a diagram of an embodiment of an example normal-like hearing simulator implemented on a computing device with a number of modules and other components.
  • The normal-like hearing simulator 100 may include a hearing loss pattern module, a unique identifier module, a hearing screening module, a hearing-profile collector module, an audio environment selector module, a multitrack audio-file combinator module, an audio manipulation module, a database, an audio playback module, a communications platform module, a user interface 190, and one or more hearing assistance devices (headset, earbud, etc.).
  • the normal-like hearing simulator 100 and its modules and other components cooperate to simulate normal hearing for a person/user, under test, to experience many different hearing scenarios and effects of manipulating audio in accordance with hearing screening results.
  • the hearing loss pattern module may use, for example, a library of predetermined standard hearing loss patterns.
  • the unique identifier module may create and assign a unique identifier for each user being tested.
  • the hearing screening module may perform a real-time hearing screening on the person/user, under test, during the test to obtain hearing screening results.
  • The hearing-profile collector module is coded with various routines to collect hearing profiles either locally stored in the local database, from external third-party databases through the communications platform module, directly from the user, and/or in some other manner. Additional data sources providing hearing health data can include, for example, 1) an integration with Apple Health (or similar), 2) a photo scan or import function, 3) a manual audiogram input, 4) a PDF import, and 5) other similar data sources.
  • the audio environment selector module is coded to personalize the audio environment (potentially in real-time).
  • the multitrack audio-file combinator module combines multiple audio files to play through the hearing assistance device.
  • simulator audio is played through speakers of for example hearing assistance devices such as headphones, earbuds, etc.
  • the audio manipulation module allows real-time audio manipulation of the multiple audio files.
  • the normal-like hearing simulator 100 implemented on a computing device can cooperate with a hearing platform database, and one or more third party databases through the communication platform module.
  • Simulation of the normal-like hearing can be implemented with, for example, 1) real-time processing functions and steps ‘performed on the device’ and/or during the testing of the person/user hearing the normal-like hearing simulation with the computing device implementing the normal-like hearing simulator, 2) pre-processing functions and steps ‘performed off the computing device’ and/or not during the testing of the person/user hearing the normal-like hearing simulation, or 3) hybrid processing consisting partly of preprocessed manipulations that happened off-device and partly of real-time manipulations applied to sounds on the device during the normal-like hearing simulation.
  • video can be used and the audio in the video is processed, as well as audio manipulation can differ per a target computing device’s hardware (e.g. device types, operating system, headphone types, etc.).
  • the multitrack audio-file combinator module and the audio environment selector module can cooperate to produce multitrack audio files with a preprocessed environment combination and preprocessed manipulation of the multitrack audio, a realtime environment combination and real-time manipulation of the multitrack audio, and any combination of both.
  • Real-time means when the simulator is being used, as opposed to prior to or after the normal-like hearing simulation. In practice, a hearing test could first be administered, and then afterwards the simulator uses this hearing test in real time to manipulate the audio files the user will hear during the normal-like hearing simulation.
  • the audio manipulation module and the multitrack audio-file combinator module cooperate to manipulate a plurality of multitrack audio files/layers to compensate for audio hearing loss based upon a profile of a hearing loss capabilities of the user who will experience the normal-like hearing simulation.
  • The audio manipulation module allows manipulation of audio characteristics including, for example: i) equalization of sound across low frequencies to high frequencies, ii) (multiband) compression of amplitude (e.g. clamping a max amplitude allowed) across low to high frequencies, iii) noise reduction (e.g. algorithms to perform noise cancellation on sounds detected in the environment that are not usually important to an average user, such as a passing car noise, moving air sound in air conditioning, machinery low pitch sounds, etc.), iv) automatic environmental classification and adaptation of what environment is presented, v) speech frequencies amplification, vi) the amplitude of the audio file, and vii) other similar audio characteristics.
  • The audio manipulation module can be coded to modify audio characteristics in real-time (during the testing/execution of the normal-like hearing simulation) or in pre-processing (prior to the test) to manipulate how the plurality of multitrack audio files/layers sound, compensating based upon a direct or estimated profile of the hearing loss capabilities of the user.
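One hedged way to picture the compensation step: derive a gain per audiogram frequency and apply it to samples. The half-gain rule and the broadband application below are illustrative simplifications, not the patent's algorithm:

```python
# Illustrative compensation sketch (assumptions, not the patent's method):
# derive a per-band gain as half the measured loss, then scale samples.
def band_gains_db(audiogram_db):
    """Map {frequency_hz: loss_db} to {frequency_hz: gain_db} (half-gain)."""
    return {freq: loss / 2.0 for freq, loss in audiogram_db.items()}

def apply_gain(samples, gain_db):
    """Scale a list of float samples by a dB gain (broadband, for illustration)."""
    factor = 10 ** (gain_db / 20.0)  # convert dB to linear amplitude factor
    return [s * factor for s in samples]
```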
  • the hearing-profile collector module is coded to at least one of i) load from a local file of audiometry data (e.g. a user’s personal hearing profile and/or a standard hearing loss profile) associated with a user from a local database, ii) import audiometry data (e.g. a measure of a hearing loss pattern and/or importing a personal hearing profile) associated with the user through a communications platform module into the normal-like hearing simulator 100, and/or iii) collect audiometry data (e.g. a personal hearing profile) from a hearing screening facilitated by a hearing screening module in real-time to determine a hearing capability of the user under test, and then to store a profile of a hearing capability of the user in the local database.
  • The hearing-profile collector module is coded to facilitate all of the following: i) load from a local file of audiometry data from the local database, ii) import audiometry data through the communications platform module, and iii) collect audiometry data from a hearing screening during an execution of the normal-like hearing simulation to determine the hearing capabilities of a user, and then store the user’s hearing profile in the local database.
  • the hearing profile collector module may retrieve or import a standard hearing loss pattern track appropriate for the user under test when no user specific hearing profile is available.
  • The hearing profile collector module is coded to at least one of 1) retrieve from the local device database a standard hearing loss pattern audio track appropriate for the user under test, and 2) import through the communications platform module the standard hearing loss pattern audio track appropriate for the user under test, when no user-specific audiometry data from the hearing screening on the hearing capability of the user is available.
  • The hearing profile collector may map the collected hearing profile to the closest Bisgaard Standard Audiogram or similar. Note, the hearing profile can also be defined based on other methods, such as SNR/DIN/SIN tests, as well as questionnaires.
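The "closest standard audiogram" mapping could be sketched as a nearest-neighbor search over shared frequencies; the example profiles used in testing are placeholders, not actual Bisgaard data:

```python
# Illustrative nearest-profile matching (least-squares over frequencies
# both profiles share). `standards` maps profile names to audiograms.
def closest_standard(profile, standards):
    """Return the name of the standard audiogram closest to `profile`."""
    def dist(std):
        shared = set(profile) & set(std)
        return sum((profile[f] - std[f]) ** 2 for f in shared)
    return min(standards, key=lambda name: dist(standards[name]))
```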
  • The audio environment selector module can select the audio tracks to be present in the audio environment, which have been manipulated and preprocessed prior to starting a normal-like hearing simulation. Those audio tracks are pre-matched to the standard hearing loss pattern.
  • the hearing screening module is coded to perform a hearing screening on a user in real-time to obtain the user’s personal hearing profile.
  • the audio environment selector module is coded to tailor the hearing/audio environment to the user either by the selection of tracks and/or allowing the user to upload specific audio files into the computing device implementing the normal-like hearing simulator 100.
  • the audio environment selector module is coded to create/select hearing scenarios/environments incorporating manipulation of audio characteristics based on the user’s personal hearing profile.
  • The audio environment selector module can 1) determine a combination of at least two audio tracks/audio files to be present in an audio environment that the user is exposed to, and 2) cooperate with a multitrack audio file combinator module to create a multitrack audio file combination of the at least two audio tracks in the audio environment.
  • the audio manipulation module and the multitrack audio-file combinator module cooperate to manipulate audio characteristics of one or more of the audio tracks in the multitrack audio file combination in the audio environment to compensate for audio hearing loss based upon the profile of the hearing capability of the user.
  • the audio playback module is coded to play the audio tracks that have had the audio characteristics manipulated and combined into the multitrack audio file combination through a hearing assistance device coupled to the normal-like hearing simulator 100 to allow the user to experience what normal hearing would sound like.
  • Each audio file contains its own audio sounds, which are heard through a hearing assistance device (e.g. headset/speaker/earbuds) in order to let the user experience the difference in hearing the same set of multiple audio files (e.g. the original version of the audio files and the manipulated version of the multiple-track audio files), simulating normal hearing for the person/user to experience many different hearing scenarios and the effects of the manipulation of audio characteristics of the audio files.
  • The audio playback module plays back the manipulated sequence of the multitrack audio files from the audio manipulation module to the user through the hearing assistance device (e.g. headset, earbuds) coupled to the (wired or wireless) audio output of the normal-like hearing simulator 100. It then switches between an original version of the multitrack audio file combination of the at least two audio tracks and the manipulated version of the audio tracks/files in the multitrack audio file combination, so that the user can experience the audio environment i) without audio characteristics manipulations and ii) then with the audio characteristics manipulations that simulate normal hearing, thereby creating an awareness of the user's hearing health and of what the user could experience with a hearing assistance device properly tailored to that user's hearing loss profile.
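The original-versus-manipulated switching described above might be sketched as an alternating A/B sequence; the segment length and the tuple representation are assumptions for illustration:

```python
# Illustrative A/B playback plan: alternate fixed-length segments of the
# original and manipulated mixes so the listener can contrast the two.
def ab_sequence(original, manipulated, segment=4):
    """Return (label, samples) pairs alternating original/manipulated."""
    out = []
    for start in range(0, min(len(original), len(manipulated)), segment):
        out.append(("original", original[start:start + segment]))
        out.append(("manipulated", manipulated[start:start + segment]))
    return out
```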
  • the normal-like hearing simulator 100 implemented on a computing device with the number of modules can have portions coded in software (data and instructions) stored on one or more computing device readable mediums and the software is executed by one or more processors.
  • the normal-like hearing simulator 100 can modify audio files to simulate a restoration to normal hearing (at least improved hearing) to the person suffering from hearing loss.
  • the audio manipulation module of the normal-like hearing simulator 100 manipulates multi-layer audio source files’ characteristics such as frequency, amplitude, compression, etc., based on the user’s personal audiometric results.
  • the normal-like hearing simulator 100 does this by the hearing profile collector module collecting or importing audiometric results (audiogram data), the audio environment selector module determining appropriate multi-layer audio source file combinations, and then the audio manipulation module manipulating these files. This manipulation process can happen in real-time ‘on device’ (and/or in an internet browser) and in another implementation preprocessing can take place ‘off device’.
  • the manipulation process can manipulate these files, store them, and then cooperate with the audio playback module to play those manipulated files at a subsequent time.
  • the normal-like hearing simulator 100 is an awareness-increasing software application to simulate a restoration back to normal-like hearing based on the audiogram data.
  • the normal-like hearing simulator 100 can create an awareness for hearing loss and simulate the difference between a current hearing of a user and what environments used to sound like (e.g., normal hearing) for that user.
  • the hearing-profile collector module can have a routine to import the audiometry data associated with the user through the communications platform module into the normal-like hearing simulator 100 from an external third-party database, and then store the profile of the hearing capability of the user from the external third-party database in the local device database to facilitate re-use of the profile of the hearing capability of the user.
  • the hearing-profile collector module can also have a routine to collect the audiometry data from the hearing screening facilitated by the hearing screening module in real-time to determine the hearing capability of the user under test, and then store the profile of the hearing capability of the user from the hearing screening in the local device database.
  • Figure 4 illustrates a diagram of an embodiment of example pre-processing functions and steps ‘performed off’ the computing device implementing the normal-like hearing simulator, and/or not performed during the testing of the person/user under test.
  • Figure 3 illustrates a diagram of an embodiment of example real-time processing functions and steps ‘performed on’ the computing device implementing the normal-like hearing simulator, and/or performed during the testing of the person/user under test.
  • the method is executable on one or more computing devices to simulate normal hearing, which can include several example steps. Note, except for where logically impossible, these steps can be performed in any order.
  • the diagram of the example real-time processing functions and steps ‘performed on’ and/or during the normal-like hearing simulation 125 in the normal-like hearing simulator 100 can overlap and be similar to the diagram of pre-processing functions and steps ‘performed off’ and/or not during the normal-like hearing simulation 150.
  • The hearing screening module determines the hearing capabilities of the user, and then the hearing profile collector module uses software and/or electronic hardware to collect and/or import a hearing profile by 1) importing audiometry data for this specific user from a database, and/or 2) using results from a real-time hearing screening of this specific user performed by the hearing screening module, and/or 3) importing a standard high-frequency hearing loss pattern (e.g. a generic hearing profile).
  • the hearing loss pattern module and the local database can store audiometry data (including generic hearing profiles and user specific hearing profiles).
  • the hearing-profile collector module can import standard audiometry data (e.g. a generic hearing profile).
  • the hearing-profile collector module can import a typical pattern of a moderate, high-frequency hearing loss.
  • the hearing-profile collector module of the normal-like hearing simulator 100 is coded to collect, import, and/or upload audiometry data (e.g., personal hearing profile).
  • The hearing-profile collector module can collect audiometry data 1) through a real-time hearing screening or 2) through importing retrospective third-party audiometry data.
  • An example of a standard moderate, high-frequency hearing loss pattern is shown in Table 1.
  • Table 1. Generic hearing profile: an example typical pattern of a moderate, high-frequency hearing loss.
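Table 1's numeric values are not reproduced in this text; the dictionary below is an illustrative stand-in for what such a moderate, high-frequency loss pattern could look like (thresholds in dB HL worsening toward high frequencies), and is not the patent's data:

```python
# Illustrative stand-in (NOT the patent's Table 1 values): thresholds in
# dB HL per test frequency in Hz, worsening toward high frequencies,
# which is the shape of a moderate, high-frequency hearing loss.
GENERIC_PROFILE_DB_HL = {
    500: 15,
    1000: 20,
    2000: 40,
    4000: 55,
    8000: 60,
}
```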
  • the patient/user’s personal hearing profile can be collected by the hearing screening module administering a real-time hearing screening and/or collecting audiometry data from this user from a (third party) database through the communications platform module (see Figure 2).
  • A first routine in the hearing profile collector module is coded to determine whether to import a standard high-frequency hearing loss pattern versus conducting a hearing screening or importing audiometry data of this specific user from a database or memory. If the person has already taken a hearing screening prior to this testing, the first routine can collect data from where that personal hearing profile is stored, locally in the local database and/or from third-party databases (see Figure 2).
  • the first routine also allows the personal hearing profile from multiple possible third-party systems as well as locally to be utilized by the normal-like hearing simulator 100.
  • The communications platform module is configured to work with multiple APIs so that the system can import audiometry or hearing screening information from multiple sources, or simply maintain communications links to download or pull such information from these third-party databases; otherwise, the system will use the standard/generic hearing profile (Figure 2).
  • The normal-like hearing simulator 100 can use a standard/generic hearing profile, or the specific hearing profile for the user under test, either imported as a digital file, via an audio sound adjusting system such as https://support.apple.com/en-us/HT211218, from user responses to a questionnaire on the user’s hearing, and/or from a printed audiogram that is scanned in and processed into a digital file.
  • the local database can store the preprocessed and pre-manipulated audio files.
  • the user interface 190 with the routine can import the preprocessed and pre-manipulated audio files in the predetermined standard hearing loss pattern import step in Figure 2. Note, as discussed later below, some variation will occur between Figure 3 and Figure 4.
  • the audio environment selector module purposely uses software and/or electronic hardware to create challenging audio environments (example discussed herein), well-known to people with hearing loss. Thus, these hearing profiles are used to determine relevant and appropriate challenging acoustical environments for that individual.
  • The audio environment selector module is coded to determine at least three relevant audio environments to test a generic hearing profile.
  • The multitrack audio-file combinator module is coded to combine the multiple tracks of audio files, with their audio characteristics manipulated, corresponding to the relevant audio environments pre-selected for the generic hearing profile based upon at least the age and gender of the user under test.
  • the audio environment selector module is likewise coded to determine at least three relevant audio environments associated with the user’s personal hearing profile, and the multitrack audio-file combinator module is coded to combine the multiple tracks of audio files corresponding to the relevant environments.
  • the audio environment selector module is likewise coded to determine at least a combination of at least two or more audio tracks/layers in different hearing/acoustic scenarios/environments.
  • The example audio environments include audio files/tracks/layers as follows: A. Background restaurant noise (e.g., sometimes by the kitchen, sometimes in a bar, sometimes in a person’s home at a party, etc.).
  • Action taken by the audio environment selector module can be, from multi-layer/multitrack audio files, determining a combination of at least two audio files/tracks/layers, and then combining those into environments/scenes/scenarios, for example:
  • The audio environment selector module can present many sounds (the whistling of birds, the rustling of leaves, the clanking of glasses in a club, etc.), and then present these manipulated audio files so that the hearing-impaired user can experience the environment (e.g., nature) like a person with a normal hearing profile would.
  • Inputs: The routine in the audio environment selector module cooperates with the multitrack audio file combinator module to obtain and create multi-layer/multitrack audio files for the audio environment.
  • Outputs: For example, the audio environment selector module then will output three or more challenging environments consisting of audio-layer combinations.
  • A routine is configured with intelligence to select different audio tracks from a database (the local database, an external 3rd-party database, or a source supplied by the user), and then put them in various combinations.
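One possible sketch of that selection logic, pairing a speech track with each background-noise track to form candidate challenging environments; the pairing rule, track names, and `count` parameter are assumptions, not the patent's routine:

```python
# Hypothetical environment builder: cross speech tracks with noise
# tracks and keep the first `count` combinations as environments.
def build_environments(speech_tracks, noise_tracks, count=3):
    """Return up to `count` (speech, noise) environment pairs."""
    combos = [(speech, noise)
              for speech in speech_tracks
              for noise in noise_tracks]
    return combos[:count]
```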
  • the audio files for each of the audio tracks may come from sound booth recordings of the audio sounds and/or multiple speakers, each speaker outputting its own audio track, and then the combined audio being recorded.
  • an audio manipulation module of the normal-like hearing simulator 100 calculates the manipulation of the audio characteristics for each audio track/layer making up the audio environment that is appropriate for the patient’s/user’s hearing profile, and then manipulates the audio layers, either preprocessed for the standard hearing pattern or in real-time for the collected personal audiometry data.
  • the audiometric hearing profile determines the algorithmic changes that are required to be made to the layers of the audio files.
  • the software is coded to run algorithms to perform different types of manipulation of audio characteristics in tracks of audio such as audio equalization - adjusting the volume of specific/different frequency bands within an audio signal rather than uniformly adjust across all frequency bands, (multi band) audio compression - splitting the audio frequency band into two or more bands and applying different types of compression on each of the two or more formed frequency bands, audio noise reduction - acoustic echo cancellation and audio quality enhancement, automatic environmental classification and adaptation to eliminate specific background noise sources such as car noises, machinery frequencies, etc.
  • Software algorithms can automatically manipulate these audio characteristics in tracks of audio based on the audio profile with the goal of producing a manipulated version that simulates what the user would experience if they had normal hearing (e.g., restored to normal hearing with a hearing assistance device).
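The per-band manipulation described above can be sketched, in a very simplified form, as deriving a gain for each frequency band from the user’s audiogram. The band centers, the example profile, the "half-gain" heuristic, and the helper names below are illustrative assumptions, not details from this disclosure:

```python
# Illustrative sketch only: derive per-band simulation gains from an audiogram
# using a simple "half-gain" rule (gain = half the hearing loss in dB).
# The band list, the example profile, and the helper names are hypothetical.

AUDIOGRAM_BANDS_HZ = [250, 500, 1000, 2000, 4000, 8000]

def band_gains_db(audiogram_db_hl, half_gain=0.5):
    """Map hearing-loss thresholds (dB HL per band) to per-band gains (dB)."""
    return [round(loss * half_gain, 1) for loss in audiogram_db_hl]

def apply_gain(sample, gain_db):
    """Scale one linear audio sample by a gain expressed in decibels."""
    return sample * 10 ** (gain_db / 20)

# Example: moderate high-frequency loss yields larger gains in the upper bands.
profile = [10, 15, 25, 40, 55, 60]   # dB HL per band, low to high frequency
gains = band_gains_db(profile)       # [5.0, 7.5, 12.5, 20.0, 27.5, 30.0]
```

A real implementation would apply such gains through multiband filtering or FFT-based equalization, with compression per band, rather than a single broadband scale factor.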
  • a user interface 190 is configured to give a professional audiologist running the normal-like hearing simulator 100 (e.g., an operator of an audio kiosk or other computing device administering the test) ways to adjust the output and these audio characteristics on an individual basis.
  • Audio manipulation can also be configured to change audio characteristics in tracks of the audio dependent on the audio performance (dynamic range, etc.) of the audio hardware used: computing device - laptop, desktop, smart phone, kiosk, server, headphones/earbuds, etc.
  • the audio manipulation module and the multitrack audio-file combinator module are configured to cooperate to manipulate the audio characteristics of the at least two audio tracks in the multitrack audio file combination in real-time or in preprocessing. Note, ‘in real-time’ means during the conducting of a normal-like hearing simulation to allow the user to experience what normal hearing would sound like.
  • the audio manipulation module can reference and use premanipulation of a sequence of multitrack audio files to compensate for standard hearing loss.
  • the audio playback module allows through the hearing assistance device (e.g., headset), playback of multitrack sequenced audio files, automatically switching between the original version of the audio multitracks of the audio environment and the manipulated version of the audio multitracks forming the audio environment.
  • the audio manipulation module can reference and use real-time manipulation of audio characteristics in a sequence of multitrack audio files on the computing device to compensate for the user’s personal hearing profile.
  • the audio manipulation module allows real-time ‘on device’ manipulation of audio tracks.
  • Inputs - A routine obtains a hearing profile for the user and two or more multilayer audio environments.
  • the audio environment selector module can output virtual audio environments consisting of manipulated, personalized multitrack audio files that are ready to play through the audio playback module of the computing device.
  • the audio layer manipulation module performs manipulation of the plurality of multitrack audio files, off-device (prior to playback), to compensate for the hearing capabilities of the user (e.g., to compensate for generic hearing profile); and/or in real-time on the device manipulation of the audio files in the computing device (Figure 2) associated with the personal hearing profile.
  • Some example manipulations of audio environments, such as an outdoor cafe or an indoor restaurant: attenuate high frequencies in the background-noise audio layer (A) and amplify the speaking audio layer/track (B).
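That example manipulation, attenuating one layer and amplifying another before combining them, can be sketched as follows; the sample values and gain figures are purely illustrative:

```python
# Minimal sketch of layer-specific manipulation and mixing, assuming each
# track is a list of linear samples. All gain values are illustrative.

def scale(track, gain_db):
    """Apply a gain in dB uniformly to every sample of one layer."""
    factor = 10 ** (gain_db / 20)
    return [s * factor for s in track]

def mix(*tracks):
    """Sum equal-length tracks sample by sample into one combined track."""
    return [sum(samples) for samples in zip(*tracks)]

background = [0.2, 0.2, 0.2]   # layer A: restaurant background noise
speech     = [0.1, 0.3, 0.1]   # layer B: a person speaking

# Attenuate the background by 6 dB and boost speech by 6 dB before mixing.
manipulated = mix(scale(background, -6.0), scale(speech, +6.0))
```

In the disclosed system the background attenuation would further be frequency-dependent (high frequencies reduced more), which a broadband gain like this does not capture.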
  • the ‘preprocessed’ off device manipulation is the alternative for when no personal audiometry details are available to tailor the audio characteristic manipulations for that user’s specific audio profile but rather a standard audio profile typical to that user’s/patient’s age, gender, and other human features.
  • the audio environment selector module can build scenarios/environments based on multiple (e.g., four) elements and audio core elements.
  • the audio manipulation module can cooperate with the audio environment selector module to then use various auditory algorithms/routines to manipulate those scenarios and audio core elements, such as frequency, amplitude, compression, etc., in, for example, background noise, on an individual basis.
  • the audio environment selector module can be coded to both 1) automatically select the audio environment and 2) allow the user, through a user interface 190, to select the audio environment.
  • the audio environment is selected to be used to compare i) a user’s current hearing capability when exposed to the audio environment as compared to ii) the manipulated audio characteristics of one or more of the audio tracks in the multitrack audio file combination of the audio environment, where the audio characteristics of the one or more of the audio tracks were manipulated to simulate what a human with normal hearing would experience in that audio environment.
  • an audio playback module uses software and/or electronic hardware that actually plays and/or presents the patient/subject with the manipulated audio tracks.
  • the modules in the computing device implementing the normal-like hearing simulator 100 manipulate all of the layers/tracks of the audio files and combine them into audio environments.
  • the audio playback module plays back the manipulated sequence of multitrack audio files to the user through a hearing assistance device (e.g., audio transducers, headphones, earbuds, etc.) coupled to the (wired or wireless) audio output of the computing device so they can experience hearing the scenario/environment without audio enhancements and with audio enhancements to simulate normal hearing; thereby, creating awareness for personal hearing health.
  • the normal-like hearing simulator 100 can create an awareness for a user with (decreased) personal hearing health as a start to cause many subsequent (call-to-) actions: e.g., planning a meeting with a PHP, thinking about potential hearing solution providers (such as Eargo’s Eargo MaxTM hearing aid, Eargo’s Eargo Neo HiFiTM hearing aid, etc.), diving deeper in hearing screening results, etc.
  • the audio playback module allows through the hearing assistance device (e.g., headset), playback of multitrack sequenced audio files that automatically (and responsive to manual direction) switches between the original version of the audio multitracks and the manipulated version of the audio multitracks.
  • Action - A routine in the audio playback module plays back the manipulated and sequenced multi-layer audio files (e.g., audio environments) to the user through a headset coupled to a playback computing device.
  • the audio playback module cooperates with the user interface 190 to present icons to playback the manipulated and sequenced multi-layer audio files.
  • the manipulated and sequenced multi-layer audio files may be a preselected, pre-processed, and pre-manipulated ‘off device’ sequence of multitrack audio files (for the generic/standard hearing loss pattern) and/or a real-time ‘on device’ manipulated sequence of multitrack audio files (personalized for the personal hearing profile).
  • the environments are played on the computing device through the headset to the patient/subject, automatically switching between original/source and manipulated audio files.
  • the simulator automatically switches between manipulated and source audio files every, for example, 10 seconds, while showing a visual indicator to indicate to the user whether the current audio is manipulated or not.
  • the user interface 190 of the normal-like hearing simulator 100 presents pause and replay icons (e.g., buttons) to give the user control over listening to these comparisons again and again (see Figure 6).
  • the user interface 190 of the normal-like hearing simulator 100 presents the ability to replay virtual scenarios of preference.
  • the user interface 190 of the normal-like hearing simulator 100 may use other visuals in support of the overall experience (for example: videos).
  • the audio playback module outputs playback through a computing device and headset, automatically switching between manipulated and source files every, for example, 10 seconds while showing a visual indicator, to simulate potential hearing improvements and/or under the control of the user through the user interface 190.
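The automatic switching described above can be sketched as a simple playback schedule; the function name, segment labels, and default interval are assumptions for illustration:

```python
# Sketch of the alternating playback schedule: the simulator switches between
# the original and the manipulated version every `interval_s` seconds and
# labels each segment so a visual indicator can show which one is playing.

def playback_schedule(total_s, interval_s=10):
    """Return (start_time_s, label) pairs alternating original/manipulated."""
    schedule = []
    t, manipulated = 0, False
    while t < total_s:
        schedule.append((t, "manipulated" if manipulated else "original"))
        manipulated = not manipulated
        t += interval_s
    return schedule

# A 35-second environment yields four segments starting at 0s, 10s, 20s, 30s.
print(playback_schedule(35))
```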
  • Block 1 - Real audiometry data for that particular person will be collected through [A] a hearing screening performed in real-time by the hearing screening module or through importing retrospective audiometry data from a third-party database through the communications platform module.
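Importing retrospective audiometry data from a third-party database might look like the following sketch; the payload shape and field names are invented for illustration, since the disclosure does not specify a particular data format:

```python
# Hypothetical example of importing retrospective audiometry data. The JSON
# structure, the "thresholds_db_hl" field, and the subject id are invented;
# a real third-party database would define its own schema.
import json

def parse_audiometry(payload_json):
    """Extract per-frequency thresholds (dB HL) from a third-party record."""
    record = json.loads(payload_json)
    return {int(freq): thr for freq, thr in record["thresholds_db_hl"].items()}

payload = '{"subject_id": "abc123", "thresholds_db_hl": {"500": 20, "2000": 45}}'
profile = parse_audiometry(payload)   # {500: 20, 2000: 45}
```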
  • Blocks 2 to 4 can all be performed by the modules in the computing device in real-time.
  • Each computing device, such as a desktop computer, kiosk, or cloud server, as discussed further below (see Figure 5), will have varying computing power (memory plus CPU cycles), and each user/customer experience can be tailored to the type of computing device that is performing the simulation.
  • the normal-like hearing simulator 100 can have a combined flow of (A) collecting (or importing) a personal hearing profile, (B) determining what multi-layer audio files are combined into environments/scenes and are appropriate for that hearing profile, (C) using predetermined algorithms to manipulate those audio files, and (D) playing back the audio files through the computing device and headset to simulate normal hearing.
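The combined (A)-(D) flow above can be sketched at a high level with stand-in helpers; every function here is a placeholder for the corresponding module, not an actual implementation:

```python
# High-level sketch of the combined flow (A)-(D). All helpers are stand-ins.

def collect_profile():
    """(A) Collect or import a personal hearing profile (values invented)."""
    return {"high_freq_loss_db": 40}

def select_environment(profile):
    """(B) Determine which audio layers are combined into an environment."""
    return ["background_noise", "speech"]

def manipulate(layers, profile):
    """(C) Apply a predetermined algorithm (here, a simple half-gain rule)."""
    gain = profile["high_freq_loss_db"] / 2
    return {layer: gain for layer in layers}

def play(manipulated):
    """(D) Playback stub: report which layers would be rendered."""
    return sorted(manipulated)

profile = collect_profile()
layers = select_environment(profile)
played = play(manipulate(layers, profile))
```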
  • a machine learning tool can automatically decide how to manipulate those audio files.
  • the normal-like hearing simulator 100 can create an emphasis on hearing health awareness.
  • the normal-like hearing simulator 100 can use multi-layer audio files to generate different environments/scenes with real-time on device manipulation of the audio files.
  • the normal-like hearing simulator 100 can use on-device software applications in the hearing profile collector module to determine the user’s personal hearing profile and then use that personal hearing profile in real-time to manipulate multi-layer audio source files based upon this particular user’s personal hearing profile.
  • the normal-like hearing simulator 100 can assist in lowering the barrier to complete a screening or survey.
  • the hearing screenings are automated screenings, directed and performed by the hearing screening module, that can be completed by users/subjects without additional instruction other than following the on-screen instructions.
  • the hearing screenings can be conducted on the user’s own device and from their own home on the Internet-enabled implementation where a server implements the normal-like hearing simulator 100 through a web browser and/or a normal-like hearing simulator 100 application resident and running on the user’s computing device.
  • the normal-like hearing simulator 100 can be used in and/or with a digital audiometer (e.g., hearing screening device).
  • the normal-like hearing simulator 100 can run on a server accessed via a web-browser, as an application on a desktop or mobile computing device, and/or on a calibrated hearing screening device/kiosk, music systems, TVs, cars, etc.
  • the normal-like hearing simulator 100 enables subjects/people to immediately experience to what extent their hearing has worsened by simulating (and approximating) what their hearing used to sound like, thereby creating awareness for potential hearing loss.
  • the normal-like hearing simulator 100 allows people to really experience normal hearing without actually going out to a doctor’s office and getting a hearing aid. Instead, now the person can figure out what normal hearing would be from the comfort of their own home or in a retail space where a digital hearing screening system is located. In an embodiment, this allows the user to become aware of their current state of hearing loss without the need of visiting a professional for a booth audiometry hearing screening.
  • the normal-like hearing simulator 100 then performs actions based on the simulation.
  • the user interface 190 communicates with the user along with potentially other persons.
  • the user interface 190 communicates with the user to possibly tailor the report on the simulation to add additional data on top of the hearing profile and/or screening results, to contact personal hearing professional, to learn more about hearing health, to provide suggestions for purchase, etc.
  • the subject/patient is also provided the ability to take an action.
  • the user interface 190 of the normal-like hearing simulator 100 presents the ability to add a call-to-action (such as plan a consultation with an audiologist, or receive information about products that can help the user in approaching normal hearing health, etc.) and the communications platform module will launch the phone call or email.
  • Figure 6 illustrates a diagram of an embodiment of an example user interface for the normal-like hearing simulator.
  • the example graphical representation of the user interface 190 for simulating normal hearing for people with hearing loss is presented on a display screen of the computing device.
  • the user interface 190 may be implemented in various computing devices, such as digital hearing screening devices, tablets, smartphones, laptops and/or desktop computers (Figure 5), either as standalone software or through an internet browser.
  • the example user interface 190 may be configured to present multiple functions and icons including a now playing..., a current hearing icon, an improved normal hearing icon, an audio environment ‘a’ icon, an audio environment ‘b’ icon, an environment ‘c’ icon, a call-to-action icon, a battery icon, a settings icon, a language switch icon, etc.
  • the normal-like hearing simulator 100 can automatically select, as well as allow a user to select, an audio environment to compare a user’s current hearing of an audio environment to an improved simulated hearing of what a human with normal hearing would experience in that audio environment.
  • the user interface 190 is configured to present different icons with sound controls (play, pause, repeat, forward, reverse, and/or time shift/scroll).
  • the user interface 190 is coded to present different icons with sound controls that provide the user with an ability to replay virtual scenarios of the audio environment that the user hears through the hearing assistance device i) without audio characteristics manipulations and ii) then with the audio characteristics manipulations.
  • the UI also may present options to manipulate settings.
  • the user interface 190 presents an icon with an ability to add a call-to-action.
  • the user interface 190 can be coded to present on a display screen of the normal-like hearing simulator 100 a call-to-action icon and then cooperate with the communications platform module to carry out the call-to-action selected by the user.
  • the call-to-action can be coded to cause different functions such as i) booking a consultation with a hearing health expert, ii) receiving information about hearing health, and iii) receiving information on one or more products that can improve the user’s hearing health.
  • the user is presented a graphical user interface 190, in which at least one button plays a sequence of preselected and manipulated multitrack audio files (both with and without personalized manipulation, switching, for example, every 10 seconds), simulating three or more virtual auditory environments/scenarios (such as a busy restaurant, watching television, and walking and talking outdoors) that are well-known to present challenges from a hearing loss perspective.
  • a progress bar shows the remaining duration of the sequence.
  • the multitrack audio files will be preprocessed and pre-manipulated based on generic/standard hearing profile - for moderate, high-frequency hearing loss - to demonstrate potential improvements by adjusting frequencies, compression, amplification, et cetera.
  • after the initial sequence of multitrack audio files of, for example, three scenarios has been played, the user is able to select one or more environments/scenarios to replay/restart the multitrack audio files for comparison.
  • the user interface 190 is coded to allow for the playback of the unmanipulated hearing scenarios and manipulated normal hearing scenarios, as well as a pause button, replay/restart buttons, and other similar buttons. There may also be buttons to manipulate the environments, for example, to make the speaking voice louder (or softer) and then feed that new setting back into what the system knows about the profile for this particular person’s hearing loss.
  • the buttons and feedback may allow a determination of balance for the speech/signal to noise ratio, which in turn may allow audiologists to get a better understanding of what may be the most convenient speech to noise ratio for this person’s hearing profile.
  • the user interface 190 of the simulator can allow a user to make changes to the signal-to-noise ratio (SNR) that the system can store so that audiologists can do their jobs better when they know what SNR people found convenient.
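Storing the SNR a user found comfortable presupposes computing it from the layers; one conventional way, sketched below with illustrative names and sample values, is an RMS-based ratio of the speech layer to the noise layer:

```python
# Sketch of computing a speech-to-noise ratio from two layers so the user's
# preferred balance can be stored for the audiologist. RMS-based; names and
# example samples are illustrative only.
import math

def rms(track):
    """Root-mean-square level of a list of linear samples."""
    return math.sqrt(sum(s * s for s in track) / len(track))

def snr_db(speech, noise):
    """Speech-to-noise ratio in dB from two equal-length sample lists."""
    return 20 * math.log10(rms(speech) / rms(noise))

speech = [0.2, -0.2, 0.2, -0.2]   # RMS 0.2
noise  = [0.1, -0.1, 0.1, -0.1]   # RMS 0.1
# Speech at twice the noise amplitude is about +6 dB SNR.
```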
  • the user interface 190 can cooperate with the unique identifier module to make an account profile for each user so that the system will fill in information when it already knows that information as well as post a history section for that person so that the person can interactively compare their hearing profile now and sometime earlier in the past or later in the future.
  • the user interface 190 can present the call-to-action icon which allows different functions, for example, book a consultation with a hearing health expert or receive information about hearing health or receive information on one or more products that can benefit the user’s hearing health, additionally providing more detailed information about the product and then cooperate with the communications platform module to carry out the call-to-action.
  • the multitrack audio files comprise a plurality of scenarios/environments, for example: person(s) talking, television sound, outdoor background noise, and busy restaurant background noise.
  • the user undergoing the normal-like hearing simulation can create part of the audio environment by supplying an audio file to be part of the audio tracks being manipulated.
  • the user may supply an audio track such as ‘a musical song’ that is then manipulated.
  • the user can provide his/her own audio file, recorded on their smart phone, of their typical audio environment at work, which is then analyzed using machine learning and then manipulated to show what normal hearing would sound like with hearing assistance devices, such as hearing aids.
  • the user interface 190 is configured to present options for a user and/or operator of the normal-like hearing simulator 100 to combine audio environmental parameters themselves.
  • the user interface 190 is configured to present options for which noises to include in this environment.
  • specific tailoring of the virtual environment to the pre-defined hearing profile can be established.
  • the normal-like hearing simulation can be tailored to the type and version of the hearing aid being simulated. Different types and versions of the hearing aid can produce slightly different acoustic effects, which may or may not be distinguishable by a particular user being tested. Likewise, an older and cheaper version of a hearing aid can produce essentially the same audio improvements as a newer version, and the normal-like hearing simulator 100 will allow a user to test out that comparison.
  • a routine in the simulator can be coded to be more interactive. For example, the more the simulator knows about the user, such as where they typically work and the extent to which they suffer from speech-in-noise loss, the better it can adjust the environments to fit the user’s frame of reference.
  • Another routine allows a user to, for example, experience what their hearing sounded like five years ago or 10 years ago, or will sound like in 10 years, based on what is known about hearing loss progression.
  • the hearing simulator can present the user with a virtual environment in which hearing assistance devices’ performance can be experienced by the user.
  • the normal-like hearing simulator 100 can also reverse the hearing simulation process to be a hearing loss simulator to allow friends and family of a user to experience the hearing loss and what the user is experiencing on a daily basis.
  • the normal-like hearing simulator 100 may allow the user to use their device microphone as a source for the manipulations.
  • FIG. 5 illustrates a diagram of an embodiment of an example communications network for the normal-like hearing simulator.
  • the example communications network 175 for the normal-like hearing simulator 100 may include a third-party database A, a third-party database B, a communications network, a communication platform such as an API, a hearing platform database, example computing devices - a tablet, a smart phone, a laptop / personal computer, a web browser, a digital hearing screening device - and one or more hearing assistance devices - a headset.
  • the example network environment shows that a number of electronic systems and devices can communicate with each other to implement the normal-like hearing simulator 100.
  • the network environment has a communications network.
  • the communications network can include one or more networks selected from an optical network, a cellular network, the Internet, a Local Area Network ("LAN”), a Wide Area Network ("WAN”), a satellite network, a fiber network, a cable network, and combinations thereof.
  • the communications network is the Internet.
  • the communications network can connect one or more database server computing systems selected from at least a first server computing system and a second server computing system to each other and to at least one or more client computing systems as well.
  • the database server computing systems can each optionally include organized data structures such as databases.
  • Each of the one or more server computing systems can have one or more firewalls to protect data integrity.
  • the at least one or more client computing systems can be selected from a first mobile computing device (e.g., smartphone with an Android-based operating system), a second mobile computing device (e.g., smartphone with an iOS-based operating system), a first wearable electronic device (e.g., a smartwatch which works in cooperation with a smart phone and/or laptop computer), a first portable computer (e.g., laptop computer / PC), a third mobile computing device or second portable computer (e.g., tablet with an Android- or iOS-based operating system), a first electric personal transport vehicle, a second electric personal transport vehicle, and the like.
  • the client computing system can include, for example, the software application or the hardware-based system which may be able to exchange communications with the first electric personal transport vehicle and/or the second electric personal transport vehicle.
  • Each of the one or more client computing systems can have one or more firewalls to protect data integrity.
  • The terms “client computing system” and “database server computing system” are intended to indicate the system that generally initiates a communication and the system that generally responds to the communication.
  • a client computing system can generally initiate a communication and a server computing system generally responds to the communication.
  • No hierarchy is implied unless explicitly stated. Both functions can be in a single communicating system or device, in which case the client-server and server-client relationship can be viewed as peer-to-peer.
  • the server computing systems include circuitry and software enabling communication with each other across the network.
  • Any one or more of the server computing systems can be a cloud provider.
  • a cloud provider can install and operate application software in a cloud (e.g., the network such as the Internet) and cloud users can access the application software from one or more of the client computing systems.
  • cloud users that have a cloud-based site in the cloud do not solely manage the cloud infrastructure or platform where the application software runs.
  • the database server computing systems and organized data structures thereof can be shared resources, where each cloud user is given a certain amount of dedicated use of the shared resources, taking privacy aspects into account.
  • Each cloud user's cloud-based site can be given a virtual amount of dedicated space and bandwidth in the cloud.
  • Cloud applications can be different from other applications in their scalability, which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access point.
  • Cloud-based remote access can be coded to utilize a protocol, such as Hypertext Transfer Protocol ("HTTP"), to engage in a request and response cycle with an application on a client computing system such as a web-browser application resident on the client computing system.
  • the cloud-based remote access can be accessed by a smartphone, a desktop computer, a tablet, or any other client computing systems, anytime and/or anywhere.
  • the cloud-based remote access is coded to engage in 1) the request and response cycle from all web browser-based applications, 2) the request and response cycle from a dedicated on-line server, 3) the request and response cycle directly between a native application resident on a client device and the cloud-based remote access to another client computing system, and 4) combinations of these.
  • the database server computing system can include a server engine, a web page management component, a content management component, and a database management component.
  • a resident application on the device rather than the browser may directly call and communicate with the backend server.
  • the server engine can perform basic processing and operating-system level tasks.
  • the web page management component can handle creation and display, or routing of web pages or screens associated with receiving and providing digital content and digital advertisements. Users (e.g., cloud users) can access one or more of the server computing systems by means of a Uniform Resource Locator ("URL") associated therewith.
  • the content management component can handle most of the functions in the embodiments described herein.
  • the database management component can include storage and retrieval tasks with respect to the database, queries to the database, and storage of data.
  • a database server computing system can be configured to display information in a window, a web page, or the like.
  • An application including any program modules, applications, services, processes, and other similar software executable when executed on, for example, the server computing system, can cause the server computing system to display windows and user interface screens in a portion of a display screen space.
  • a web page for example, a user via a browser on the client computing system can interact with the web page, and then supply input to the query/fields and/or service presented by the user interface screens.
  • the web page can be served by a web server, for example, the server computing system, on any Hypertext Markup Language (“HTML”) or Wireless Access Protocol (“WAP”) enabled client computing system (e.g., the client computing system) or any equivalent thereof.
  • the client computing system can host a browser and/or a specific application to interact with the database server computing system to, for example, import audiometry data.
  • Each application has a code scripted to perform the functions that the software component is coded to carry out such as presenting fields to take details of desired information.
  • Algorithms, routines, and engines within, for example, the server computing system can take the information from the presenting fields and put that information into an appropriate storage medium such as a database (e.g., database).
  • a comparison wizard can be scripted to refer to a database and make use of such data.
  • the applications may be hosted on, for example, the server computing system and served to the specific application or browser of, for example, the client computing system.
  • the applications then serve windows or pages that allow entry of details.
  • An example computing device can include components to implement the normal-like hearing simulator 100.
  • a computing system can be, wholly or partially, part of one or more of the server or client computing devices. The computing systems are specifically configured and adapted to carry out the processes discussed herein.
  • Components of the computing system can include, but are not limited to, a processing unit having one or more processing cores, a system memory, and a system bus that couples various system components including the system memory to the processing unit.
  • the system bus may be any of several types of bus structures selected from a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • the normal-like hearing simulator 100 can run on “computing devices” that include a mobile phone, a laptop computer, a desktop computer, etc. that have internet connectivity and a touchscreen and/or other user input mechanism.
  • the computing system typically includes a variety of computing machine- readable media.
  • Computing machine-readable media can be any available media that can be accessed by computing system and includes both volatile and nonvolatile media, and removable and non-removable media.
  • use of computing machine-readable media includes storage of information, such as computer-readable instructions, data structures, other executable software, or other data.
  • Computer-storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible medium which can be used to store the desired information, and which can be accessed by the computing device 900.
  • Transitory media such as wireless channels are not included in the machine-readable media.
  • Communication media typically embody computer-readable instructions, data structures, other executable software, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and include any information delivery media.
  • the system memory includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) and random-access memory (RAM).
  • RAM typically contains data and/or software that are immediately accessible to and/or presently being operated on by the processing unit.
  • the RAM can include a portion of the operating system, application programs, other executable software, and program data.
  • the drives and their associated computer storage media discussed above, provide storage of computer readable instructions, data structures, other executable software, and other data for the computing system.
  • a user may enter commands and information into the computing system through input devices such as a keyboard, touchscreen, or software or hardware input buttons, a microphone, a pointing device and/or scrolling input component, such as a mouse, trackball, or touch pad.
  • the microphone can cooperate with speech recognition software.
  • These and other input devices are often connected to the processing unit through a user input interface that is coupled to the system bus, but can be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB).
  • a display monitor or other type of display screen device is also connected to the system bus via an interface, such as a display interface.
  • computing devices may also include other peripheral output devices such as speakers, a vibrator, lights, and other output devices, which may be connected through an output peripheral interface.
  • the computing system can operate in a networked environment using logical connections to one or more remote computers/client devices, such as a remote computing system.
  • the logical connections can include a personal area network (“PAN”) (e.g., Bluetooth®), a local area network (“LAN”) (e.g., Wi-Fi), and a wide area network (“WAN”) (e.g., cellular network), but may also include other networks.
  • a browser application may be resident on the computing device and stored in the memory.
  • the present design can be carried out on a computing system.
  • the present design can be carried out on a server, a computing device devoted to message handling, or on a distributed system in which different portions of the present design are carried out on different parts of the distributed computing system.
  • the computing device can also include a power supply, such as a DC power supply (e.g., a battery) or an AC adapter circuit.
  • the DC power supply may be a battery, a fuel cell, or similar DC power source that needs to be recharged on a periodic basis.
  • a wireless communication module can employ a Wireless Application Protocol to establish a wireless communication channel.
  • the wireless communication module can implement a wireless networking standard.
  • a machine-readable medium includes any mechanism that stores information in a form readable by a machine (e.g., a computer).
  • a non-transitory machine-readable medium can include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; Digital Versatile Discs (DVDs), EPROMs, EEPROMs, FLASH memory, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
  • an application described herein includes but is not limited to software applications, mobile apps, and programs that are part of an operating system application.
  • Some portions of this description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art.
  • An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated.
  • a module can be implemented in electronic hardware, in software instructions cooperating with one or more memories for storage and one or more computing devices for execution, or in a combination of electronic hardware circuitry cooperating with software.
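The module definition above can be illustrated with a minimal Python sketch. The `GainModule` name, the gain operation, and the `dsp` accelerator hook are hypothetical, introduced only for illustration; they are not from this filing:

```python
# Hypothetical illustration of the module definition above. A module's
# processing can run as software instructions on the host processor, or
# delegate to hardware circuitry when an accelerator is present.
class GainModule:
    def __init__(self, gain, dsp=None):
        self.gain = gain
        self.dsp = dsp  # optional hardware DSP exposing multiply(samples, gain)

    def process(self, samples):
        if self.dsp is not None:
            # hardware path: electronic circuitry cooperating with software
            return self.dsp.multiply(samples, self.gain)
        # software path: plain instructions executed by the processing unit
        return [s * self.gain for s in samples]
```

The same `process` call works whether the module is pure software or delegates to hardware, which is the flexibility the definition above describes.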

Abstract

A normal-like hearing simulator includes several example modules and other components. An audio environment selection module 1) determines a combination of two or more audio tracks to be present in an audio environment to which the user is exposed, and then 2) creates a multi-track audio file combination of those audio tracks in the audio environment. An audio manipulation module manipulates audio characteristics of the audio tracks in the audio environment to compensate for a hearing loss based on the profile of the user's hearing ability. An audio playback module plays back the audio tracks, whose audio characteristics have been manipulated and combined into the multi-track audio file combination, through a hearing assistance device coupled to the normal-like hearing simulator in a normal-like hearing simulation, to allow the user to experience normal hearing.
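The three modules summarized in the abstract (environment selection, audio manipulation, and playback) can be sketched in Python. This is an illustrative sketch under assumed details: the function names, the two-point audiogram profile, the crude FFT-domain band gain, and the peak normalization are all assumptions made for this example, not the implementation disclosed in the application:

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz

def select_environment(tracks, names):
    """Audio environment selection: pick two or more named tracks."""
    assert len(names) >= 2, "an audio environment combines at least two tracks"
    return [tracks[n] for n in names]

def compensate(track, hearing_loss_db):
    """Audio manipulation: boost each frequency band by the user's measured
    hearing loss in dB (a crude FFT-domain gain; real hearing devices use
    filter banks, compression, and output limiting)."""
    spectrum = np.fft.rfft(track)
    freqs = np.fft.rfftfreq(len(track), d=1.0 / SAMPLE_RATE)
    gain = np.ones_like(freqs)
    edges = sorted(hearing_loss_db)  # lower edge of each band, in Hz
    for lo, hi in zip(edges, edges[1:] + [freqs[-1] + 1.0]):
        gain[(freqs >= lo) & (freqs < hi)] = 10.0 ** (hearing_loss_db[lo] / 20.0)
    return np.fft.irfft(spectrum * gain, n=len(track))

def mix_for_playback(tracks):
    """Audio playback: combine the compensated tracks into one signal,
    normalizing only if the mix would clip."""
    mixed = np.sum(tracks, axis=0)
    peak = np.max(np.abs(mixed))
    return mixed / peak if peak > 1.0 else mixed

# Example: a quiet 2 kHz tone (speech band) over a 250 Hz background tone,
# for a user with a hypothetical 20 dB loss above 1 kHz.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tracks = {
    "speech": 0.1 * np.sin(2 * np.pi * 2000 * t),
    "background": 0.1 * np.sin(2 * np.pi * 250 * t),
}
profile = {0: 0.0, 1000: 20.0}  # dB of loss per band lower edge
env = select_environment(tracks, ["speech", "background"])
out = mix_for_playback([compensate(trk, profile) for trk in env])
```

After compensation the 2 kHz component is about ten times stronger relative to the 250 Hz background than before, which is the kind of per-band shaping a user's hearing-ability profile would drive.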
PCT/US2022/048715 2021-11-03 2022-11-02 Simulateur d'ouïe normale WO2023081219A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163275041P 2021-11-03 2021-11-03
US63/275,041 2021-11-03

Publications (1)

Publication Number Publication Date
WO2023081219A1 (fr) 2023-05-11

Family

ID=86241873

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/048715 WO2023081219A1 (fr) 2021-11-03 2022-11-02 Simulateur d'ouïe normale

Country Status (1)

Country Link
WO (1) WO2023081219A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080165978A1 (en) * 2004-06-14 2008-07-10 Johnson & Johnson Consumer Companies, Inc. Hearing Device Sound Simulation System and Method of Using the System
US20110200214A1 (en) * 2010-02-12 2011-08-18 Audiotoniq, Inc. Hearing aid and computing device for providing audio labels
US20150382106A1 (en) * 2014-04-08 2015-12-31 Doppler Labs, Inc. Real-time combination of ambient audio and a secondary audio source
US20170316718A1 (en) * 2011-06-03 2017-11-02 Apple Inc. Converting Audio to Haptic Feedback in an Electronic Device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22890726

Country of ref document: EP

Kind code of ref document: A1