WO2004004414A1 - Method of calibrating an intelligent earphone - Google Patents

Publication number
WO2004004414A1
Authority
WO
WIPO (PCT)
Application number
PCT/DK2003/000442
Other languages
French (fr)
Inventor
Søren Louis PEDERSEN
Original Assignee
Microsound A/S
Application filed by Microsound A/S filed Critical Microsound A/S
Priority to AU2003236811A priority Critical patent/AU2003236811A1/en
Publication of WO2004004414A1 publication Critical patent/WO2004004414A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70 Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting

Definitions

  • the present invention relates to a method of calibrating an intelligent earphone, e.g. a hearing aid, to a user.
  • hearing aids are provided to a user based on initial tests performed for the purpose of determining data representative of the user's hearing.
  • such data may for example be provided by means of an audiometer determining the hearing level (HL), i.e. data representative of the hearing threshold.
  • a problem of the conventional hearing aid tuning methods is that the results apparently are somewhat questionable when evaluating the consumers' satisfaction. Very often, a carefully tuned hearing aid is simply put aside due to the fact that the user feels most comfortable without it. Evidently, such lack of satisfaction by a relatively large group of consumers represents a problem, if not to the manufacturers of the hearing aid, at least to the consumers, who are basically left alone with an annoying uncompensated hearing loss.
  • the invention relates to a method of calibrating an intelligent earphone (17), e.g. hearing aid said method comprising the steps of establishing a series of comparative audio presentations of audio signals (1.1, 1.2, 2.1, 2.2, etc.) to a user by means of a sound transducer,
  • the term “intelligent” is here used to express the idea of using a built-in signal processor for processing of data.
  • An “intelligent earphone” may accordingly comprise a sound reproducing device comprising active filtering or signal processing, e.g. hearing aids, headsets, active ear defenders, personal in-ear monitors, ear phones, mobile phones, handsets, etc.
  • the user selection of preferred audio signals may generally be performed explicitly or implicitly.
  • a user may typically select preferred audio signals directly, but may instead select a signal on the basis of e.g. its discomfort, muddiness or intelligibility, thereby indirectly selecting preferred audio signals.
  • the most complicated tests may be performed in the simplest way.
  • a preferred embodiment of the invention facilitates a direct testing.
  • instead of establishing the signal processing parameters on the basis of a good guess at how the user will react to different complex audio signals, those signals may be applied directly in the calibration or fitting process.
  • the invention features comparative tests.
  • some or all calibration parameters of the earphone to be calibrated are established on the basis of comparative tests, thereby enabling the user of the earphone to react to certain audio inputs - typically and preferably real-life sounds such as speech, etc. - and provide feedback to the calibration routine.
  • the feedback will then, when established on a comparative basis, indicate not only what the user can hear, but also what the user prefers to hear.
  • the duration of a calibration procedure according to a preferred embodiment of the invention is preferably relatively short, e.g. 15 minutes, and is carried out by the user in front of a computer, at home or in a store.
  • the calibration procedure according to a less preferred embodiment of the invention may as well be of much longer duration, e.g. two weeks, and carried out by the user wearing a feedback unit, as e.g. a wristwatch-like device comprising buttons, and letting the series of comparative audio presentations be the user's real sound environments established concurrently with the user experiencing them, and letting the user respond to these audio experiences using the buttons, thereby facilitating the establishment of signal processing parameters on the basis of the user-selected preferred everyday audio experience.
  • this can be regarded as adaptive fitting.
  • the most advantageous comparative test comprises paired tests of two audio signals, due to the fact that the human audio memory is very restricted.
  • When said user-selected preferred audio signals (1.1, 1.2, 2.1, 2.2, etc.) comprise audio signals selected as most comfortable by the user, a further advantageous embodiment of the invention is obtained.
  • When said user-selected preferred audio signals (1.1, 1.2, 2.1, 2.2, etc.) comprise audio signals selected as most clear by the user, a further advantageous embodiment of the invention is obtained.
  • When said comparative audio presentations comprise at least two consecutive audio signals (1.1 and 1.2, 2.1 and 2.2, 2.3 and 2.4, etc.) presented to the user, and the user selects the preferred audio signals, a further advantageous embodiment of the invention is obtained.
  • the user may be presented with a number of consecutive audio signals, preferably two consecutive signals.
  • the user informs which of the consecutive signals feels most comfortable or perceivable.
  • the user may also be able to express a "don't care"-state, i.e. a situation in which the user feels that both or all the presented signals sound equal.
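The paired presentation with a three-way response described above (prefer the first signal, prefer the second, or a "don't care" state) can be sketched as follows; `play` and `ask_user` are hypothetical callbacks standing in for the real audio rendering and user interface, not part of the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PairResult:
    # 0 = first signal preferred, 1 = second, None = "don't care"
    preferred: Optional[int]

def run_paired_test(play, ask_user, pair):
    """Present two audio signals consecutively and record the user's choice.

    `play` renders one signal; `ask_user` returns "first", "second" or
    "equal". Both callables are assumptions for this sketch.
    """
    for signal in pair:
        play(signal)
    answer = ask_user()
    if answer == "first":
        return PairResult(preferred=0)
    if answer == "second":
        return PairResult(preferred=1)
    return PairResult(preferred=None)  # both signals sound equal to the user
```

Keeping the test to exactly two consecutive signals matches the patent's observation that human audio memory is very restricted.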
  • the signal features by which the audio signals differ may comprise psycho-acoustically motivated features such as e.g. clarity, comfort, loudness, harmonicity, pitch, and/or technically based signal differences such as e.g. frequency spectrum, formant structure, fundamental frequency, level, intensity, vowels, semivowels, consonants, nasals, diphthongs and triphthongs, time variance, fricatives, affricatives, laterals, coarticulation and transitions, tone, accent, stress, tempo, rhythm, pausing, intonation, laryngeal settings, phonation type, etc., and/or in addition other parameters such as e.g. cancellation of background noise or prevention of loud sounds, etc.
  • by means of the invention it is possible to obtain a registration of not only the static parameters of the measured hearing; it is moreover possible to test technical compensations for time variant conditions influencing the hearing perception of the user in real life.
  • real-life audio signals are provided to the user during the calibration and at the end of the calibration the user is provided with the preferred settings, thereby enabling the user to establish not only "probably- OK"- settings but rather verified and preferred settings.
  • the invention fully grasps the idea that the user, when it comes to an end, is the person best qualified for determining whether a signal, e.g. speech or other real-life signal is preferred.
  • a user type may here be referred to as a group of persons, which may advantageously be facing the same comparative audio presentations of audio signals.
  • a user type may e.g. comprise children, which may e.g. face other types of audio- signals, certain types of background masking signals, e.g. kindergarten noise etc.
  • the user should preferably face a "temperament-fitting algorithm".
  • speech is a significant audio presentation in the sense that speech signals are what the user needs to hear by means of the calibrated intelligent earphone, e.g. hearing aid.
  • a predefined criterion may e.g. comprise a threshold level, e.g. determined as a difference in amplification between the two ears, e.g. within one or several different frequency bands of the user-selected preferred audio signals.
  • a warning may be delivered in several different ways within the scope of the invention.
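As an illustration of such a threshold criterion, the check below flags frequency bands whose left/right amplification difference exceeds a limit; the per-band gain lists and the 15 dB default are assumptions for this sketch, not values taken from the patent.

```python
def interaural_warning(left_gains_db, right_gains_db, threshold_db=15.0):
    """Return the frequency bands whose left/right amplification difference
    exceeds the threshold; an empty list means no warning is needed.

    The per-band dB representation and the default threshold are
    assumptions made for illustration.
    """
    return [
        band
        for band, (left, right) in enumerate(zip(left_gains_db, right_gains_db))
        if abs(left - right) > threshold_db
    ]
```

A non-empty result would trigger whatever warning mechanism the embodiment uses (on-screen message, audible notice, etc.).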
  • the invention relates to an audio rendering system for rendering of audio signals according to any of the claims.
  • the invention further relates to a method of evaluating the hearing representing parameters of a user comprising the steps of
  • said audio signals comprising time variant audio signals.
  • the invention, also in this aspect, facilitates a "keep-it-simple" approach to the time variant information in the sense that the applied time variant audio signals are typically restricted to audio signals actually relevant to the user.
  • calibration should primarily focus on audio signals relevant to the user instead of conventional calibration of filters based on artificial signals, such as pure tones hardly relevant to the user.
  • Time variant information may e.g. include speech in which the frequency spectrum varies over time.
  • the invention further relates to a method of calibrating an intelligent earphone, e.g. hearing aid to a user, said method comprising the steps of
  • said method being executed by signal processing means under the control of the user.
  • Fig. 1A-1D show four examples of applications targeted by the present invention
  • Fig. 2 illustrates a calibration setup according to the invention
  • Fig. 3 illustrates an overall calibration procedure according to a preferred embodiment of the invention
  • Figs. 4 and 5 illustrate a more detailed calibration and conversion procedure setup according to the invention
  • Fig. 6 illustrates an example of a calibration layer
  • Figs. 7A and 7B illustrate two exemplary audio signals for a comparative test
  • Fig. 8 illustrates an example of a transfer function used to alter acoustic perception parameters of audio signals
  • FIG. 9 illustrates the rendering of audio signals by means of an intelligent earphone according to the invention.
  • Fig. 10A-10E illustrate different calibration menu points applied according to a preferred embodiment of the invention.
  • the calibration method may be applied to any kind of sound reproducing device comprising active filtering, e.g. hearing aids, headsets, active ear defenders, personal in-ear monitors, earphones, mobile phones, etc.
  • the following description will use the term "intelligent earphone" as a reference to such sound reproducing devices.
  • the most preferred embodiment of the invention is provided when calibrating or fitting hearing aids.
  • Examples of different applications performing actively filtered sound reproduction, and thus being target applications of the present invention, are shown in figures 1A to 1D.
  • the examples are given to illustrate that these applications, though having completely different usages, may be implemented quite similarly, and that the calibration method of the present invention thus easily may be adapted to any of these exemplary applications, and similarly to any other actively filtering sound reproduction device.
  • Figure 1A shows a preferred embodiment of a hearing aid in principle. It comprises an active filtering unit 2, a loudspeaker 3 and a microphone 4. A signal from the microphone 4 is input to the filtering unit 2, and an output from the filtering unit 2 is used as input to the loudspeaker.
  • the active filtering unit 2 typically comprises a digital signal processor DSP, but may be any kind of circuit able to apply active filtering to a signal, e.g. an operational amplifier setup, a gate array etc.
  • the filtering unit 2 may also comprise electronics or software to support or enable the filtering means, e.g. analog-to-digital and digital-to-analog conversion, amplification, passive filtering, etc.
  • the loudspeaker 3 is preferably arranged in an earplug adjusted in size and shape so that it fits the opening of the user's auditory canal as accurately as possible, but other kinds of sound reproducers may be used as well.
  • When the active filtering unit 2 is loaded with filter parameters, it will filter a signal received by the microphone 4 and send the filtered signal to the loudspeaker 3, which again converts the filtered signal to sound in the user's ear, i.e. sound pressure.
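One possible realisation of this render path is a plain FIR filter applied to the microphone samples; the patent does not prescribe a particular filter structure, so the following is only a minimal sketch.

```python
def fir_filter(samples, coefficients):
    """Convolve microphone samples with loaded filter coefficients.

    One illustrative realisation of "filter parameters": an FIR filter
    whose output y[n] = sum_k c[k] * x[n-k], computed sample by sample
    as a DSP in the active filtering unit would.
    """
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coefficients):
            if n - k >= 0:
                acc += c * samples[n - k]
        out.append(acc)
    return out
```

In a real hearing aid this runs continuously on the DSP; the coefficients would be the parameters produced by the calibration and loaded via the programming unit.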
  • Different optimization targets are possible, comprising e.g. psycho-acoustic parameters, such as clarity and comfort, specific environmental parameters, such as noise-attenuation, prevention of loud sounds, etc.
  • the filter parameters are preferably optimized to compensate for the hearing loss of the user, meaning that using the hearing aid will make the surrounding sounds sound clearer than if not using the hearing aid.
  • the filter may as well be optimized to the comfort of the user, meaning that using the hearing aid will make the surrounding sounds sound more comfortable than if not using the hearing aid. Examples of calibration of the above-described circuit will be described in fig. 2 and the following figures.
  • Figure 1B shows an alternative embodiment of an active ear defender in principle. It is exactly the same drawing as figure 1A.
  • the filter parameters are preferably optimized to adaptively attenuate any surrounding sounds, preferably as much as possible.
  • the filter may instead be optimized to attenuate specific sounds or characteristic noise, or to attenuate any sounds which exceed a certain amplitude level.
  • the filter may also be optimized to attenuate loud noise when it is present for longer than a predefined time.
  • the filter may also be optimized to attenuate specific sounds or noise and in addition amplify e.g. speech.
  • Figure 1C shows an embodiment of a headset in principle.
  • this embodiment also comprises an active filtering unit 2, a loudspeaker 3 and a microphone 4. Furthermore, it comprises an input signal 5 and an output signal 6.
  • the signal from the microphone 4 is input to the filtering unit 2, possibly filtered by the filtering unit 2, and at last sent to the output 6.
  • the input 5 is sent to the filtering unit 2, possibly filtered by the filtering unit 2, and at last sent to the loudspeaker 3.
  • the input and output signals 5, 6 are preferably connected to an audio renderer, e.g. a mobile phone, a telephone, a radio transceiver, a portable CD-player, an MP3-player, etc.
  • the connection from the input and output 5, 6 to the preferred audio renderer may be established by means of wires, light or radio communication, etc.
  • this embodiment may be built into a mobile phone, a phone handset, any other kind of handset, etc., possibly using the microphone and loudspeaker already present in these devices.
  • a further alternative embodiment omits the microphone, leaving just the active filtering unit 2 and the loudspeaker 3.
  • This embodiment may preferably be connected to an MP3-player, a portable CD-player, a radio or any other audio rendering device.
  • the filter parameters are preferably optimized to the comfort of the user, but may as well be optimized with regards to attenuating background noise from either sent or received signals, enhancing the received signals regarding clarity generally or with respect to a specific sound component, e.g. the speaker, etc.
  • Figure 1D shows an embodiment of a personal in-ear monitor, ear monitor for short, in principle.
  • the ear monitor is used by e.g. musicians doing a live concert, enabling them to have their preferred mixing sent to an earplug.
  • the shown embodiment of an ear-monitor comprises an active filtering unit 2, a loudspeaker 3 and an input signal 5.
  • the input signal 5 is preferably connected to a mixing board, but may be connected to any audio renderer.
  • the connection is preferably established by means of radio communication, but may as well be established by means of light communication or wires, etc.
  • the filter parameters are preferably optimized to the comfort of the user, but may as well be optimized with regards to attenuating background noise from either sent or received signals, enhancing the received signals regarding clarity generally or with respect to a specific sound component, e.g. the user's voice, etc.
  • Fig. 2 shows an embodiment of a calibration setup of the present invention.
  • the setup comprises a calibration unit 1, a person equipped with sound reproducers or sound transducers 10 and a programming unit 16.
  • the calibration unit 1 comprises a processing unit 15, an audio renderer 11 and a monitor 12.
  • the sound reproducers 10 are connected to the audio renderer 11, which again is connected to the processing unit 15. To the processing unit 15 is further connected the monitor 12 and the programming unit 16. To the programming unit may be connected one or more intelligent earphones 17.
  • the sound reproducers 10 are preferably earplugs having the same plug dimensions as the final intelligent earphones 17, but may as well be headphones, speakers, hearing aids or other sound reproducing devices, analog or digital, with the possibility of connecting them to an audio renderer.
  • this requirement may be met if the test earphones applied for the calibration tests are the same as the earphones to be used by the user.
  • connection to the audio renderer 11 is preferably established by means of wires, but may be established by any kind of connection available for analog or digital signal transfer, e.g. by means of infrared communication such as described by the IrDA standard, radio waves such as described by the Bluetooth standard, or any other possible means.
  • the audio renderer 11 is preferably a computer soundcard, able to render data processed by the processing unit 15 to produce audio signals reproducible by the sound reproducers 10.
  • the audio renderer may also be an external unit, e.g. an external soundcard, an MP3-player, a CD-player or any other available audio renderer.
  • the processing unit 15 comprises a digital signal processor DSP 13 and a data storage unit 14.
  • the processing unit is preferably a computer comprising a CPU and persistent as well as non-persistent memory, but the use of any other kind of hardware configuration comprising means for digital signal processing and data storage is within the scope of the invention. Such possible configurations comprise laptop computers, hand-held devices such as Palm Pilots and PDAs, mobile phones and custom configurations comprising e.g. a DSP.
  • the processing unit may also be incorporated in the intelligent earphone.
  • the monitor 12 is preferably a touch screen reachable by the user, but any kind of device able to display messages to the user is within the scope of the invention. If the monitor is not a touch screen other means for giving the processing unit feedback from the user must be established. These may comprise a keyboard, a number of buttons, a joystick or pad, a voice recognition device or any other means for giving input to a processing unit.
  • In an alternative embodiment, the monitor 12 is discarded, as the instructions and feedback otherwise given to the user by means of the monitor are instead given by simple visible means, such as a box with light emitting diodes.
  • In yet another embodiment, the monitor is discarded, as the instructions and feedback otherwise given to the user by means of the monitor are instead given by audible means, such as voice synthesis or prerecorded messages.
  • the programming unit 16 comprises means for transferring data to an intelligent earphone 17 or a pair of intelligent earphones 17.
  • the connection between the programming unit 16 and the intelligent earphones 17 may be wireless, e.g. inductive, infrared or using radio waves, or it may be established by placing the intelligent earphones 17 in a specific way on the programming unit 16, so that a number of contact points on the intelligent earphones 17 are coupled to a number of contact points on the programming unit 16. Further, the connection may be established by plugging a wire from the programming unit into the intelligent earphones 17.
  • the task of the programming unit 16 is to transfer filter data and/or other signal processing parameters produced by the processing unit 15 to the intelligent earphones 17 for optimizing their performance to the user in question.
  • the processing unit 15, the audio renderer 11 and the programming unit 16 are all implemented within the intelligent earphone 17, thus making the user able to go through the calibration procedure guided by the intelligent earphone itself.
  • Feedback to the processing unit 15 from the user may be communicated by speech or by touching the intelligent earphones 17.
  • a box with buttons and possibly a display may be connected to the intelligent earphones 17 for giving and receiving feedback.
  • An advantageous feature of the calibration setup of the present invention is the fact that the user may go through the complete calibration procedure all by himself, without the need for an ear specialist, otologist or trained person.
  • the system is self-explanatory and the user alone is able to make all decisions, which is also the most logical method, as no doctor can measure the hearing abilities of another person to their full extent, including psychological parameters.
  • the user may, however, within the scope of the present invention, be instructed or guided by a trained person, thus enabling the invention to be used by children, mentally or physically handicapped persons, or persons not used to operating computer-like devices.
  • Fig. 3 shows an overview of a process, according to an embodiment of the invention, of screening a person's needs and wishes regarding hearing loss compensation, and thereafter optimizing an intelligent earphone accordingly.
  • the person is assumed to be suffering from hearing loss, and the intelligent earphone would then be an embodiment of a hearing aid.
  • a calibration 31 is performed. This comprises a screening of the hearing loss of the person in question.
  • a conversion 32 is performed. This comprises a conversion of the data collected through the calibration procedure, together with an optimization to use the data with a specific type of hearing aid or other intelligent earphone.
  • the last element of the process is rendering 33. This comprises the use of a hearing aid or other kind of intelligent earphone programmed with the data calculated through the previous steps.
  • Fig. 4 illustrates the calibration process, above referenced as 31, in more detail. It comprises a number N of layers L1 41, Ln 42, Ln+1 43 and LN 44. Within each layer 41, 42, 43, 44 a calibration is performed with respect to one or several specific signal features, such as e.g. clarity, comfort, loudness, frequency spectrum, formant structure, fundamental frequency, intensity, vowels, semivowels, consonants, nasals, diphthongs and triphthongs, time variance, fricatives, affricatives, laterals, coarticulation and transitions, tone, accent, stress, tempo, rhythm, pausing, intonation, laryngeal settings, phonation type, etc.
  • the number of layers preferably equals the number of different acoustic perception parameters for which a calibration is desired. Some acoustic perception parameters may be calibrated at the same time, allowing another embodiment of the invention to have fewer layers than the number of acoustic perception parameters to be calibrated.
  • Each layer is possibly, but not necessarily, dependent on the previous layers, either by needing other parameters to be calibrated already or to be able to suggest similar calibration or a starting point for calibration within the current layer.
  • both ears are individually calibrated for each layer before proceeding with the next layer. In such case the decision-box 45 is omitted. In yet another embodiment of the invention, both ears are calibrated at the same time, which also omits the decision-box 45. And in yet another embodiment of the invention, only one ear is calibrated, thus omitting decision-box 45 and balance adjustment 46.
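The layered flow of Fig. 4 might be orchestrated as sketched below; the `(name, test)` layer representation and all callables are assumptions standing in for the real per-layer comparative tests, and the balance step runs only when both ears have been calibrated.

```python
def run_calibration(layers, ears=("left", "right"), adjust_balance=None):
    """Run each calibration layer per ear, then an optional balance step.

    `layers` is a sequence of (name, test) pairs; each `test(ear, previous)`
    returns the calibrated parameters for that layer and may inspect the
    results of earlier layers for the same ear. This structure is a sketch,
    not the patent's own data model.
    """
    results = {ear: {} for ear in ears}
    for name, layer_test in layers:
        for ear in ears:
            # a layer may depend on this ear's previously calibrated layers
            results[ear][name] = layer_test(ear, results[ear])
    if len(ears) == 2 and adjust_balance is not None:
        results["balance"] = adjust_balance(results)
    return results
```

Calibrating only one ear corresponds to passing a single-element `ears`, which skips the balance adjustment, mirroring the embodiment that omits decision-box 45 and balance adjustment 46.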
  • Fig. 5 illustrates the conversion process in more detail.
  • a filter is defined 51 on the basis of the data collected through the calibration process. This filter is adjusted to the person in question by the adaptive calibration process. Then the filter is optimized 52 according to the specific hearing aids or other intelligent earphone the person will be using. Last the filter is implemented 53 in the hearing aids.
  • Fig. 6 illustrates an exemplary flow chart of a calibration layer 41, 42, 43, 44 according to an embodiment of the invention.
  • the purpose of such a layer is to perform a calibration with respect to specific acoustic perception parameters.
  • this calibration is carried out by exposing the user to a comparative test, comprising the steps of presenting two or more sounds and letting the user choose the one he or she prefers. These steps are repeated with different sounds, sound environments, etc., until the user answers are sufficient to make a conclusion on that layer.
  • info may comprise details on the test sequence, the ear to be tested, the theories behind the test, etc.
  • info may be audible or visual or both.
  • sounds 1.1 and 1.2 are e.g. two speech signals representing the same word, e.g. "two", but being mutually different regarding a certain acoustic perception parameter.
  • the user chooses the audio signal he or she prefers, and is again presented with two audio signals, either sounds 2.1 and 2.2 or sounds 2.3 and 2.4, according to the user's first decision.
  • the sounds 2.1, 2.2, 2.3 and 2.4 preferably represent the same content, e.g. "two", as the sounds 1.1 and 1.2, but with the acoustic perception parameter under test being altered differently for each signal from the previous signals, this alteration carried out in accordance with the user's previous choices.
  • Such altering may for a number of the next signals consist of doing no alteration, as it may be advantageous to compare the same signal to one of the next signals.
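The branching described above, where the user's choice between sounds 1.1 and 1.2 decides whether the pair 2.1/2.2 or 2.3/2.4 is presented next, amounts to walking a binary decision tree of prepared signal pairs; a hypothetical sketch:

```python
def adaptive_pair_walk(tree, choose):
    """Walk a decision tree of signal pairs, guided by the user's choices.

    `tree` is a nested dict (an assumed representation): {"pair": (a, b),
    "next": (left_subtree, right_subtree)}; leaves have "next" set to None.
    `choose(pair)` is the user's selection callback returning 0 or 1.
    Returns the sequence of preferred signals.
    """
    choices = []
    node = tree
    while node is not None:
        picked = choose(node["pair"])
        choices.append(node["pair"][picked])
        # descend into the subtree matching the user's choice
        node = node["next"][picked] if node["next"] else None
    return choices
```

The recorded sequence of preferred signals is what the conversion step would later turn into filter settings.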
  • thereby the filter settings of the user's personal hearing aid, i.e. the transfer function(s), may be established directly.
  • Figures 7A and 7B are graphical representations of an example of two paired audio signals, e.g. sounds 1.1 and 1.2 from above, both being speech signals representing the word "two", but being mutually different regarding a certain acoustic perception parameter.
  • the graphs comprise a horizontal axis representing the time dimension measured in milliseconds (msec), and a vertical axis representing frequency measured in Hertz (Hz).
  • the symbols used to plot the signals into the graph also represent frequencies. Circular points correspond to low frequencies below 1000 Hertz, rectangular points correspond to frequencies above 1000 Hertz, but below 2000 Hertz, and triangular points correspond to frequencies above 2000 Hertz.
  • the dimension of the symbols corresponds to the intensity of the signal, at the time and frequency for a particular symbol. The actual frequency plot is indicated by the centers of the symbols.
  • both the signal of figure 7A and that of figure 7B comprise four main frequency components being a first frequency component between 300 Hz to 400 Hz, a second frequency component shifting from just above 1500 Hz down to 900 Hz, a third frequency component shifting from 2800 Hz down to 2300 Hz, having a dive to 2100 Hz at 1000 msec, and at last a fourth frequency component starting at 3300 Hz, shifting to 3400 Hz at 620 msec, shifting to 2900 Hz at 900 msec, and ending at about 3200 Hz.
  • the two signals are approximately equal with respect to their frequency components. This is because they are both derived from the same recording of the word "two".
  • the audio signal of figure 7B is that of figure 7A, being altered by a transfer function that amplifies high frequencies. This corresponds with the analysis made above.
  • When these two audio signals are presented to a user whose hearing abilities are less sensitive to high frequencies, he or she will hear the high-frequency enhanced signal of figure 7B more clearly than the original signal of figure 7A, and will therefore choose that high-frequency enhanced signal.
  • To a user with normal high-frequency sensitivity, on the other hand, the high-frequency enhanced signal of figure 7B will sound unnatural and uncomfortable compared to the signal of figure 7A, and the user will choose the original signal.
  • Fig. 8 illustrates an example of a transfer function used to alter acoustic perception parameters of audio signals. The example of figure 8 amplifies the high frequencies. Often hearing impaired persons will benefit from having the high frequencies amplified.
  • the figure comprises a horizontal axis representing frequency measured in Hertz (Hz) and a vertical axis representing amplification measured in decibel (dB).
  • the figure shows the transfer function 81 used to alter the signal of figure 7A to become the signal of figure 7B.
  • the transfer function leaves DC-components with no alteration, but otherwise amplifies the signal, the amplification being proportional to the frequency.
  • signal components with frequencies at 3000 Hz are amplified by approximately 20 dB, explaining the huge difference in size of the symbols representing high frequencies (the triangles) in figures 7A and 7B, while the size of the symbols representing low frequencies differs less.
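Taking the description literally (no alteration at DC, gain proportional to frequency, roughly 20 dB at 3000 Hz), the transfer function of Fig. 8 can be approximated as below; the slope constant is read off the figure and is an assumption, not an exact value from the patent.

```python
def gain_db(frequency_hz, db_per_hz=20.0 / 3000.0):
    """Gain in dB at a given frequency: 0 dB at DC, rising linearly
    to about 20 dB at 3000 Hz (slope read off Fig. 8)."""
    return frequency_hz * db_per_hz

def amplitude_factor(frequency_hz):
    """Convert the dB gain at a frequency to a linear amplitude factor."""
    return 10.0 ** (gain_db(frequency_hz) / 20.0)
```

At 3000 Hz the linear factor is about 10, which is consistent with the much larger triangle symbols in figure 7B compared to figure 7A.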
  • transfer functions used to establish the audio signal pairs used for the comparative tests comprise transfer functions related to signal features as e.g. clarity, comfort, loudness, frequency spectrum, formant structure, fundamental frequency, intensity, vowels, semivowel, consonants, nasals, diphthongs and triphthongs, time variance, fricatives, affricatives, laterals, coarticulation and transitions, tone, accent, stress, tempo, rhythm, pausing, intonation, laryngeal settings, phonation type, etc. It should be noted that other transfer functions, though not preferred, are within the scope of the invention as well.
  • Fig. 9 illustrates the principle state of the hearing aid or other kind of intelligent earphone when filter settings establishing the desired transfer function(s) of the hearing aid are controlling the rendering 91 of the hearing aid.
  • FIG. 10A-10E illustrate an exemplary implementation of the menus displayed on the monitor during the calibration process.
  • Figure 10A lets the user adjust the volume before the test begins. This is done to ensure that the user can hear the test sounds, but it may also be used as a calibration parameter itself.
  • the menu comprises three buttons, 711, 712, 713.
  • the first button 711 lets the user decrease the volume
  • the third button 713 lets the user increase volume
  • the second button 712 lets the user accept the volume level when any adjustments are done. All the time this menu is shown, an audio signal is being played.
  • the audio signal preferably comprises sound components easily heard by even heavily hearing impaired persons.
  • This menu and any other menu comprising sound replay, may also comprise a picture, moving or still, e.g. a video recording of the speaker, saying the test words. This may be beneficial to some calibration steps.
  • Figure 10B shows a menu used for the comparative tests. It comprises three buttons 721, 722, 723.
  • the first button 721 lets the user choose the first sound as the clearest or most comfortable.
  • the next button 722 lets the user choose the second sound as the clearest or most comfortable.
  • the third button 723 lets the user request a replay of the sounds. When this menu is shown, two audio signals are being played, and then there is silence. The user may be visually or audibly instructed about which sound is about to be played.
  • this remarkably simple audio presentation, especially when performed with audio sounds to which the user may actually relate, such as words, music, door bells, etc., may increase the knowledge of the combined functionality of user and earphones significantly when compared to traditional tests applying artificial audio signals, such as pure sine tones.
  • a conventional pure-tone response from a user is typically very difficult to translate into meaningful settings of the earphone-related filter circuits.
  • Figure 10C shows a menu used when the comparative tests are finished. It is used for calibrating the balance between the left and right ear. It comprises three buttons 731, 732, 733. The first and third buttons 731 and 733 let the user adjust the balance. The second button 732 lets the user accept the balance, when any adjustments are done. All the time, when this menu is displayed, an audio signal is playing to help the user make the adjustment. The audio signal played is preferably filtered according to the calibration results just achieved through the comparative tests, to give the user the best basis to do the adjustments on. In another embodiment of the invention, the user is only calibrating one ear, and this balance adjustment menu is thus omitted.
  • Figure 10D shows a menu that lets the user calibrate the volume level of sounds. It comprises three buttons 741, 742, 743. The first and third buttons 741, 743 let the user adjust the sound level, and the second button 742 lets the user accept the adjustments made.
  • while this menu is shown, an audio signal is being played. This audio signal is preferably filtered according to the calibration results just achieved through the comparative tests, to give the user the best basis to do the adjustments on.
  • Figure 10E shows a menu for calibrating the sound of the user's own voice. It comprises two buttons 751, 752.
  • the first button 751 lets the user accept the setting, and the second button 752 lets the user request an adjustment.
  • the user is first visually or audibly asked to say a word, e.g. "house". The word is recorded, filtered according to the previous calibrations made, and is then presented to the user. If he or she presses the first button 751, the user is instructed to say another word, e.g. "zero". This is repeated a number of times. If the user presses the second button 752, the filtering of the recorded audio signal is changed, and the signal is then presented to the user again. This is repeated until the user presses the first button 751. It should be noted that the above comparative tests may be applied in numerous different ways within the scope of the invention.
  • the set of test sounds may e.g. vary from user to user, e.g. depending on the language of the user, the environment in which the user wishes to use his intelligent earphone, i.e. with different types of background noise, e.g. kindergarten noise, etc.
  • a user may also wish to obtain certain different settings, i.e. one for work (e.g. with heavy background noise), one setting for listening to music, one setting when going shopping, etc.
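The paired comparison flow behind the Fig. 10B menu can be sketched as a simple tournament over candidate filter settings. This is an illustrative sketch only; the function names, the callbacks and the tournament pairing strategy are assumptions for illustration, not part of the patent's disclosure.

```python
def comparative_test(candidates, play_pair, ask_choice):
    """Tournament-style paired comparison, sketching the Fig. 10B menu:
    two differently filtered versions of the same test sound are played,
    and the user keeps choosing until one filter setting is left.

    candidates: filter settings to compare (hypothetical objects)
    play_pair:  callback that renders the test sound with two settings
    ask_choice: callback returning 1 (first sound), 2 (second sound)
                or 'replay' (corresponding to button 723)
    """
    pool = list(candidates)
    while len(pool) > 1:
        a, b = pool.pop(), pool.pop()
        while True:
            play_pair(a, b)          # first sound, then second, then silence
            choice = ask_choice()    # buttons 721 / 722 / 723
            if choice in (1, 2):
                break                # 'replay' loops and plays the pair again
        # keep the preferred setting for the next round of comparisons
        pool.insert(0, a if choice == 1 else b)
    return pool[0]  # the setting the user preferred overall
```

Only two sounds are ever held in comparison at once, matching the observation elsewhere in the text that human audio memory is very restricted.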

Abstract

The invention relates to a method of calibrating an intelligent earphone, e.g. hearing aid, said method comprising the steps of establishing a series of comparative audio presentations of audio signals (1.1, 1.2, 2.1, 2.2, etc.) to a user by means of a sound transducer, facilitating the user to select a series of preferred audio signals (1.1, 1.2, 2.1, 2.2, etc.), establishing signal processing parameters of the intelligent earphone on the basis of the user selected preferred audio signals (1.1, 1.2, 2.1, 2.2, etc.). According to the invention it is therefore not only possible to measure certain hearing perception descriptive parameters, but it is also possible to establish the technical compensation to that problem at the same time due to the comparative nature of the calibration setup.

Description

METHOD OF CALIBRATING AN INTELLIGENT EARPHONE
Field of the invention
The present invention relates to a method of calibrating an intelligent earphone, e.g. a hearing aid, to a user.
Background of the invention
Numerous methods have been applied for determining the optimal adaptation of an earphone, e.g. a hearing aid to a user of such an apparatus.
Traditionally, for example hearing aids are provided to a user based on initial tests performed for the purpose of determined user hearing representative data. Such data, well described within audiometry, may for example be provided by means of e.g. an audiometer, determining the hearing level (HL), i.e. hearing threshold representative data.
A problem of the conventional hearing aid tuning methods is that the results apparently are somewhat questionable when evaluating the consumers' satisfaction. Very often, a carefully tuned hearing aid is simply put aside due to the fact that the user feels most comfortable without it. Evidently, such lack of satisfaction by a relatively large group of consumers represents a problem, if not to the manufacturers of the hearing aid, at least to the consumers, which are basically left alone with an annoying uncompensated hearing loss.
It is an object of the invention to increase the satisfaction of the users and potential users of earphones, in particular hearing aids.
Summary of the invention
The invention relates to a method of calibrating an intelligent earphone (17), e.g. a hearing aid, said method comprising the steps of establishing a series of comparative audio presentations of audio signals (1.1, 1.2, 2.1, 2.2, etc.) to a user by means of a sound transducer,
facilitating the user to select a series of preferred audio signals (1.1,
1.2, 2.1, 2.2, etc.),
establishing signal processing parameters of the intelligent earphone on the basis of the user-selected preferred audio signals (1.1, 1.2, 2.1, 2.2, etc.).
The term "intelligent" is here used to express the idea of using a built-in signal processor for processing of data. An "intelligent earphone" may accordingly comprise a sound reproducing device comprising active filtering or signal processing, e.g. hearing aids, headsets, active ear defenders, personal in-ear monitors, ear phones, mobile phones, handsets, etc.
The user selection of preferred audio signals may generally be performed explicitly or implicitly. Evidently, a user may typically select preferred audio signals directly, but may else select the signal from e.g. its discomfort or muddiness, or intelligibility and thereby indirectly selecting preferred audio signals.
According to the invention, the most complicated tests may be performed in the simplest way. Thus, instead of mapping evaluated data obtained on the basis of conventional artificial audio stimuli to complex algorithms trying to figure out whether a user is actually able to perceive complex audio signals, e.g. speech or music audio signals, with given signal processing parameters of a certain e.g. hearing aid system, a preferred embodiment of the invention facilitates direct testing. In other words: instead of establishing the signal processing parameters on the basis of a good guess of how the user will react to different complex audio signals, why not apply those signals directly in the calibration or fitting process. According to the invention it is therefore not only possible to measure certain hearing perception descriptive parameters, but it is also possible to establish the technical compensation for that problem at the same time due to the comparative nature of the calibration setup.
It should be noted that the invention features comparative tests. In other words, some or all calibration parameters of the earphone to be calibrated are established on the basis of comparative tests, thereby enabling the user of the earphone to react on certain audio inputs, - typically and preferably real-life sounds such as speech, etc. - and provide feedback to the calibration routine. The feedback will then, when established on a comparative basis, indicate not only what the user can hear, but also what the user prefers to hear.
It should be noted that the duration of a calibration procedure according to a preferred embodiment of the invention is preferably relatively short, e.g. 15 minutes, and is carried out by the user in front of a computer, at home or in a store. However, the calibration procedure according to a less preferred embodiment of the invention may as well be of much longer duration, e.g. two weeks, and carried out by the user wearing a feedback unit, e.g. a wristwatch-like device comprising buttons, and letting the series of comparative audio presentations be the user's real sound environments established concurrently with the user experiencing them, and letting the user respond to these audio experiences using the buttons, thereby facilitating the establishment of signal processing parameters on the basis of the user-selected preferred everyday audio experience. Within the general scope of the invention, this can be regarded as adaptive fitting.
When said comparative audio presentations comprise paired comparative tests, a further advantageous embodiment of the invention has been obtained.
Typically, the most advantageous comparative test comprises paired tests of two audio signals, due to the fact that the human audio memory is very restricted.
When modifying the frequency content of said audio signals (1.1, 1.2, 2.1, 2.2, etc.) on the basis of the user's choice of selected preferred audio signals, a further advantageous embodiment of the invention has been obtained.
According to a preferred embodiment of the invention, slight or significant modifications are presented to the user with the purpose of increasing knowledge of how the user prefers to have certain acoustic perception parameters changed.
When said audio signals (1.1, 1.2, 2.1, 2.2, etc.) are presented to the user by means of an audio rendering system comprising said intelligent earphone (17), e.g. hearing aid or said technical equivalent thereof and a signal processing equipment (1) associated thereto, a further advantageous embodiment of the invention has been obtained.
When modifying the frequency content of said audio signals (1.1, 1.2, 2.1, 2.2, etc.) by establishing filter functions (81), a further advantageous embodiment of the invention has been obtained.
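The frequency-proportional transfer function of Fig. 8 can be expressed as a simple gain curve. The sketch below is illustrative only: the 20 dB reference point at 3000 Hz is taken from the figure description earlier in the text, and the function names are assumptions, not terminology from the patent.

```python
def gain_db(freq_hz, db_at_3000hz=20.0):
    # Amplification proportional to frequency: 0 dB at DC (the transfer
    # function leaves DC components unaltered) and approximately 20 dB at
    # 3000 Hz, as in the Fig. 8 example. The reference point is read off
    # the figure description, not a prescribed value.
    return db_at_3000hz * freq_hz / 3000.0

def gain_linear(freq_hz, db_at_3000hz=20.0):
    # Convert the dB gain to a linear amplitude factor, suitable for
    # scaling the corresponding frequency component of an audio signal.
    return 10.0 ** (gain_db(freq_hz, db_at_3000hz) / 20.0)
```

Applied to the signal of figure 7A, such a curve scales each frequency component by `gain_linear` of its frequency, yielding the signal of figure 7B.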
When said user-selected preferred audio signals (1.1, 1.2, 2.1, 2.2, etc.) comprise audio signals selected to be most comfortable by the user, a further advantageous embodiment of the invention has been obtained.
When said user-selected preferred audio signals (1.1, 1.2, 2.1, 2.2, etc.) comprise audio signals selected to be most clear by the user, a further advantageous embodiment of the invention has been obtained.
When registering the filter setting of the filter functions (81) of said signal processing equipment (1) providing the audio signal selected to be most comfortable to the user and storing of corresponding filter settings to an intelligent earphone (17), e.g. hearing aid, a further advantageous embodiment of the invention has been obtained.
When performing an optimization of said filter settings prior to said storing of corresponding filter settings, a further advantageous embodiment of the invention has been obtained.
According to the invention, it may be necessary to optimize the filter settings to be used with a certain hardware configuration, e.g. intelligent earphone.
When said audio signals are presented to the user by an intelligent earphone (17), e.g. hearing aid and said intelligent earphone is finally calibrated at the end of said series of comparative audio presentations of audio signals (1.1, 1.2, 2.1, 2.2, etc.), a further advantageous embodiment of the invention has been obtained.
When said comparative audio presentations comprise at least two consecutive audio signals (1.1 and 1.2, 2.1 and 2.2, 2.3 and 2.4, etc.) presented to the user and where the user selects the preferred audio signals, a further advantageous embodiment of the invention has been obtained.
According to the invention, the user may be presented with a number of consecutive audio signals, preferably two consecutive signals. As a response, the user informs which of the consecutive signals feels most comfortable or perceivable. In some applications, the user may also be able to express a "don't care"-state, i.e. a situation in which the user feels that both or all the presented signals sound equal.
When the user-selected preferred audio signals are registered and stored in associated calibration processing equipment (15) such as a hearing aid fitting system, a further advantageous embodiment of the invention has been obtained.
When said series of comparative audio presentations of audio signals (1.1, 1.2, 2.1, 2.2, etc.) comprise signals differing by certain well-described parameters, a further advantageous embodiment of the invention has been obtained.
According to the invention, the signal features by which the audio signals differ, may comprise psycho-acoustically motivated features such as e.g. clarity, comfort, loudness, harmonicity, pitch and/or technically based signal differences such as e.g. frequency spectrum, formant structure, fundamental frequency, level, intensity, vowels, semivowel, consonants, nasals, diphthongs and triphthongs, time variance, fricatives, affricatives, laterals, coarticulation and transitions, tone, accent, stress, tempo, rhythm, pausing, intonation, laryngeal settings, phonation type, etc., and/or in addition other parameters such as e.g. cancellation of background noise or prevention of loud sounds, etc.
When said series of comparative audio presentations of audio signals (1.1, 1.2, 2.1, 2.2, etc.) differ in amplification of predetermined frequency bands, a further advantageous embodiment of the invention has been obtained.
When said series of comparative audio presentations of audio signals (1.1, 1.2, 2.1, 2.2, etc.) differ in time response descriptive parameters, a further advantageous embodiment of the invention has been obtained.
According to the invention, it is moreover possible to obtain a registration of not only the static parameters of the measured hearing, but it is moreover possible to test technical compensations for time variant conditions influencing the hearing perception of the user in real-life.
When said audio signals comprise real-life audio signals, a further advantageous embodiment of the invention has been obtained.
According to a very important aspect of one embodiment of the invention, real-life audio signals are provided to the user during the calibration and at the end of the calibration the user is provided with the preferred settings, thereby enabling the user to establish not only "probably- OK"- settings but rather verified and preferred settings. The invention fully grasps the idea that the user, when it comes to an end, is the person best qualified for determining whether a signal, e.g. speech or other real-life signal is preferred.
It should be noted that the comparative presentation of audio signals is even applicable to children and handicapped persons.
When said series of comparative audio presentations of audio signals (1.1, 1.2, 2.1, 2.2, etc.) are adapted pedagogically to the user type, a further advantageous embodiment of the invention has been obtained.
A user type may here be referred to as a group of persons, which may advantageously be facing the same comparative audio presentations of audio signals. A user type may e.g. comprise children, which may e.g. face other types of audio- signals, certain types of background masking signals, e.g. kindergarten noise etc. Moreover, the user should preferably face a "temperament - fitting algorithm".
When said audio signals comprise speech, a further advantageous embodiment of the invention has been obtained.
According to a preferred embodiment of the invention, speech is a significant audio presentation in the sense that speech signals are what the user needs to hear by means of the calibrated intelligent earphone, e.g. hearing aid.
When said audio signals comprise music, a further advantageous embodiment of the invention has been obtained.
When said comparative audio presentation comprises visual test elements, a further advantageous embodiment of the invention has been obtained.
When said visual test elements comprise explanatory guiding to the user of the calibration, a further advantageous embodiment of the invention has been obtained.
When said user controls the calibration process, a further advantageous embodiment of the invention has been obtained.
When said calibration delivers a warning if certain predefined criteria differ between the user's ears to a certain predefined degree, a further advantageous embodiment of the invention has been obtained.
A predefined criterion may e.g. comprise a threshold level, e.g. determined as a difference in amplification between the two ears, e.g. within one or several different frequency bands of the user-selected preferred audio signals.
A warning may be delivered in several different ways within the scope of the invention.
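The warning criterion above can be sketched as a per-band comparison of the amplification chosen for the two ears. This is an illustrative sketch only; the function name, the list-of-bands representation and the 30 dB default threshold are assumptions for illustration, since the text leaves the predefined degree open.

```python
def balance_warning(left_gain_db, right_gain_db, threshold_db=30.0):
    """Flag the frequency bands in which the amplification selected for
    the left and right ear differs by more than a predefined threshold.

    left_gain_db / right_gain_db: per-band gains (dB) resulting from the
    user-selected preferred audio signals for each ear.
    Returns the indices of the bands exceeding the threshold, so a
    warning can be delivered when the result is non-empty.
    """
    return [band
            for band, (l, r) in enumerate(zip(left_gain_db, right_gain_db))
            if abs(l - r) > threshold_db]
```

A non-empty result might trigger e.g. an on-screen message advising the user to consult a specialist, one of the several possible ways of delivering a warning.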
When the comparative test is supplemented by a hearing threshold test, a further advantageous embodiment of the invention has been obtained.
Moreover, the invention relates to an audio rendering system for rendering of audio signals according to any of the claims.
The invention further relates to a method of evaluating the hearing representing parameters of a user comprising the steps of
presenting audio signals (1.1, 1.2, 2.1, 2.2, etc.) to the user,
evaluating hearing representing parameters of the user on the basis of the user's response to the audio signals (1.1, 1.2, 2.1, 2.2, etc.),
said audio signals comprising time variant audio signals.
According to the invention, it is possible to obtain and use knowledge about the time variant perception of a user. Moreover, it should be noted that the invention, also in this aspect, facilitates a "keep-it-simple" approach to the time variant information in the sense that the applied time variant audio signals typically are restricted to audio signals actually relevant to a user. In other words, calibration should primarily focus on audio signals relevant to the user instead of conventional calibration of filters based on artificial signals, such as pure tones hardly relevant to the user.
Time variant information may e.g. include speech in which the frequency spectrum varies over time.
When said audio signals comprise real-life audio signals, a further advantageous embodiment of the invention has been obtained.
When adapting time variant parameters, a further advantageous embodiment of the invention has been obtained.
When said method being performed according to any of the claims, a further advantageous embodiment of the invention has been obtained.
The invention further relates to a method of calibrating an intelligent earphone, e.g. hearing aid to a user, said method comprising the steps of
establishing a set of preferred audio representing parameters, establishing settings of said intelligent earphone on the basis of said preferred audio representing parameters,
said method being executed by signal processing means under the control of the user.
When said settings being approved by the user by providing audio signals to the user, a further advantageous embodiment of the invention has been obtained.
The figures
The invention will now be described in detail with reference to the drawings, in which
Fig. 1A-1D show four examples of applications targeted by the present invention,
Fig. 2 illustrates a calibration setup according to the invention,
Fig. 3 illustrates an overall calibration procedure according to a preferred embodiment of the invention,
Fig. 4 and 5 illustrate a more detailed calibration and conversion procedure setup according to the invention,
Fig. 6 illustrates an example of a calibration layer,
Fig. 7A and 7B illustrate two exemplary audio signals for a comparative test,
Fig. 8 illustrates an example of a transfer function used to alter acoustic perception parameters of audio signals,
Fig. 9 illustrates the rendering of audio signals by means of an intelligent earphone according to the invention and where
Fig. 10A-10E illustrate different calibration menu points applied according to a preferred embodiment of the invention.
Detailed description
As the present invention may be used for tuning any kind of sound reproducing device comprising active filtering, e.g. hearing aids, headsets, active ear defenders, personal in-ear monitors, ear phones, mobile phones, etc., the following description will use the term "intelligent earphone" as a reference to such sound reproducing devices.
The most preferred embodiment of the invention is provided when calibrating or fitting hearing aids.
Examples of different applications performing actively filtered sound reproduction, and thus being target applications to the present invention, are shown in figures 1 A to ID. The examples are given to illustrate that these applications, though having completely different usages, may be implemented quite similarly, and that the calibration method of the present invention thus easily may be adapted to any of these exemplary applications, and similarly to any other actively filtering sound reproduction device.
Figure 1A shows a preferred embodiment of a hearing aid in principle. It comprises an active filtering unit 2, a loudspeaker 3 and a microphone 4. A signal from the microphone 4 is input to the filtering unit 2, and an output from the filtering unit 2 is used as input to the loudspeaker. The active filtering unit 2 typically comprises a digital signal processor (DSP), but may be any kind of circuit able to apply active filtering to a signal, e.g. an operational amplifier setup, a gate array, etc. The filtering unit 2 may also comprise electronics or software to support or enable the filtering means, e.g. analog-to-digital and digital-to-analog conversion, amplification, passive filtering, etc. The loudspeaker 3 is preferably arranged in an earplug adjusted with respect to size and shape so that it fits the opening of the user's auditory canal accurately, but other kinds of sound reproducers may be used as well.
When the active filtering unit 2 is loaded with filter parameters, it will filter a signal received by the microphone 4 and send the filtered signal to the loudspeaker 3, which again converts the filtered signal to sound in the user's ear, i.e. sound pressure. Different optimization targets are possible, comprising e.g. psycho-acoustic parameters, such as clarity and comfort, specific environmental parameters, such as noise-attenuation, prevention of loud sounds, etc.
The filter parameters are preferably optimized to compensate for the hearing loss of the user, meaning that using the hearing aid will make the surrounding sounds sound clearer than if not using the hearing aid. The filter may as well be optimized to the comfort of the user, meaning that using the hearing aid will make the surrounding sounds sound more comfortable than if not using the hearing aid. Examples of calibration of the above-described circuit will be described in fig. 2 and the following figures.
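The role of the loaded filter parameters can be sketched as a plain FIR filtering step between microphone and loudspeaker. This is a minimal illustrative sketch, not the patent's implementation: the function name and the choice of an FIR structure are assumptions; a real hearing aid would run an equivalent computation sample by sample on the DSP of the active filtering unit 2.

```python
def apply_filter(samples, coefficients):
    """Minimal FIR filtering sketch of the active filtering unit 2:
    once loaded with filter parameters (the coefficients), it transforms
    the microphone signal into the signal sent to the loudspeaker 3.

    samples:      microphone input as a list of amplitude values
    coefficients: filter parameters produced by the calibration
    """
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coefficients):
            if n - k >= 0:
                acc += c * samples[n - k]   # convolve input with the filter
        out.append(acc)
    return out
```

Calibration then amounts to choosing the coefficients so that the filtered output is the version the user selected as clearest or most comfortable in the comparative tests.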
Figure 1B shows an alternative embodiment of an active ear defender in principle. It is exactly the same drawing as figure 1A. Here the filter parameters are preferably optimized to adaptively attenuate any surrounding sounds, preferably as much as possible. The filter may instead be optimized to attenuate specific sounds or characteristic noise, or to attenuate any sounds which exceed a certain amplitude level. The filter may also be optimized to attenuate loud noise when it is present for longer than a predefined time. The filter may also be optimized to attenuate specific sounds or noise and in addition amplify e.g. speech.
Figure 1C shows an embodiment of a headset in principle. As the embodiments of a hearing aid and an ear defender, this embodiment also comprises an active filtering unit 2, a loudspeaker 3 and a microphone 4. Furthermore, it comprises an input signal 5 and an output signal 6. The signal from the microphone 4 is input to the filtering unit 2, possibly filtered by the filtering unit 2, and at last sent to the output 6. The input 5 is sent to the filtering unit 2, possibly filtered by the filtering unit 2, and at last sent to the loudspeaker 3. The input and output signals 5, 6 are preferably connected to an audio renderer, e.g. a mobile phone, a telephone, a radio transceiver, a portable CD-player, an MP3-player, etc. The connection from the input and output 5, 6 to the preferred audio renderer may be established by means of wires, light or radio communication, etc.
Alternatively, this embodiment may be built into a mobile phone, a phone handset, any other kind of handset, etc., possibly using the microphone and loudspeaker already present in these devices.
A further alternative embodiment omits the microphone, leaving just the active filtering unit 2 and the loudspeaker 3. This embodiment may preferably be connected to an MP3-player, a portable CD-player, a radio or any other audio rendering device. The filter parameters are preferably optimized to the comfort of the user, but may as well be optimized with regards to attenuating background noise from either sent or received signals, enhancing the received signals regarding clarity generally or with respect to a specific sound component, e.g. the speaker, etc.
Figure ID shows an embodiment of a personal in-ear monitor, ear monitor for short, in principle. The ear monitor is used by e.g. musicians doing a live concert, enabling them to have their preferred mixing sent to an earplug. The shown embodiment of an ear-monitor comprises an active filtering unit 2, a loudspeaker 3 and an input signal 5. The input signal 5 is preferably connected to a mixing board, but may be connected to any audio renderer. The connection is preferably established by means of radio communication, but may as well be established by means of light communication or wires, etc.
The filter parameters are preferably optimized to the comfort of the user, but may as well be optimized with regards to attenuating background noise from either sent or received signals, enhancing the received signals regarding clarity generally or with respect to a specific sound component, e.g. the user's voice, etc.
For any of the above-described application examples and for any further applications to which the invention is targeted, an arbitrary combination of the filter optimization parameters are possible. For example it would be an advantageous improvement to optimize an ear-monitor filter with regards to the hearing loss of the user, as many musicians suffer from such defects.
Fig. 2 shows an embodiment of a calibration setup of the present invention. The setup comprises a calibration unit 1, a person equipped with sound reproducers or sound transducers 10 and a programming unit 16. The calibration unit 1 comprises a processing unit 15, an audio renderer 11 and a monitor 12.
The sound reproducers 10 are connected to the audio renderer 11, which again is connected to the processing unit 15. To the processing unit 15 is further connected the monitor 12 and the programming unit 16. To the programming unit may be connected one or more intelligent earphones 17.
The sound reproducers 10 are preferably earplugs having the same plug dimensions as the final intelligent earphones 17, but may as well be headphones, speakers, hearing aids or other sound reproducing devices, analog or digital, with the possibility of connecting them to an audio renderer.
It is, however, important that the intelligent earphones 17, if they are not the same as the earphones 10 (as in the currently illustrated embodiment), correspond to the earphones applied when calibrating the test earphones 10 to the user. Of course, this criterion may be disregarded if the test earphones applied for the calibration tests are the same as the earphones to be used by the user.
The connection to the audio renderer 11 is preferably established by means of wires, but may be established by any kind of connection available for analog or digital signal transfer, e.g. by means of infrared communication such as described by the IrDA standard, radiowaves such as described by the BlueTooth standard or any other possible means.
The audio renderer 11 is preferably a computer soundcard, able to render data processed by the processing unit 15 to produce audio signals reproducible by the sound reproducers 10. The audio renderer may also be an external unit, e.g. an external soundcard, an MP3-player, a CD-player or any other available audio renderer.
The processing unit 15 comprises a digital signal processor (DSP) 13 and a data storage unit 14. The processing unit is preferably a computer comprising a CPU and persistent as well as non-persistent memory, but the use of any other kind of hardware configuration comprising means for digital signal processing and data storage is within the scope of the invention. Such possible configurations comprise laptop computers, hand-held devices such as Palm Pilots and PDAs, mobile phones and custom configurations comprising e.g. a DSP. The processing unit may also be incorporated in the intelligent earphone.
The monitor 12 is preferably a touch screen reachable by the user, but any kind of device able to display messages to the user is within the scope of the invention. If the monitor is not a touch screen, other means for giving the processing unit feedback from the user must be established. These may comprise a keyboard, a number of buttons, a joystick or pad, a voice recognition device or any other means for giving input to a processing unit. In another embodiment of the invention, the monitor 12 is discarded, as the instructions and feedback given to the user by means of the monitor are now given by means of simple visible methods, such as a box with light emitting diodes. In a further embodiment of the invention, the monitor is discarded, as the instructions and feedback given to the user by means of the monitor are now given by means of audible methods, such as voice synthesis or prerecorded messages.
The programming unit 16 comprises means for transferring data to an intelligent earphone 17 or a pair of intelligent earphones 17. The connection between the programming unit 16 and the intelligent earphones 17 may be wireless, e.g. inductive, infrared or using radiowaves, or it may be established by placing the intelligent earphones 17 in a specific way on the programming unit 16, so that a number of contact points on the intelligent earphones 17 are coupled to a number of contact points on the programming unit 16. Further, the connection may be established by plugging a wire from the programming unit into the intelligent earphones 17. The task of the programming unit 16 is to transfer filter data and/or other signal processing parameters produced by the processing unit 15 to the intelligent earphones 17 for optimizing their performance to the user in question.
In a further embodiment of the invention, the processing unit 15, the audio renderer 11 and the programming unit 16 are all implemented within the intelligent earphone 17, enabling the user to go through the calibration procedure guided by the intelligent earphone itself. Feedback to the processing unit 15 from the user may be communicated by speech or by touching the intelligent earphones 17. Alternatively, a box with buttons, and possibly a display, may be connected to the intelligent earphones 17 for giving and receiving feedback.
An advantageous feature of the calibration setup of the present invention is the fact that the user may go through the complete calibration procedure all by himself, without the need for an ear specialist, otologist or other trained person. The system is self-explanatory and the user alone is able to make all decisions, which is also the most logical method, as no doctor can measure the hearing abilities of another person to their full extent, including psychological parameters. The user may, however, within the scope of the present invention, be instructed or guided by a trained person, thus enabling the invention to be used by children, by mentally or physically handicapped persons, or by persons not used to operating computer-like devices.
For reasons of explanation, the following detailed description of the calibration method of the present invention will use the hearing aid embodiment of an intelligent earphone to concretize the description. It is here emphasized that the calibration method and setup now to be described are in no way limited or restricted to use with hearing aids only; in fact, any of the above-mentioned example applications, and any other actively filtering sound reproducer, may benefit from being optimized in accordance with the present invention, and such use is within the scope of the present invention.
Before going into detail about exemplary calibration methods, it should be noted that the below-described methods feature comparative tests. In other words, some or all calibration parameters of the earphone to be calibrated are established on the basis of comparative tests, thereby enabling the user of the earphone to react to certain audio inputs - typically and preferably real-life sounds such as speech, etc. - and provide feedback to the test routine. The feedback will then, when established on a comparative basis, indicate not only what the user can hear, but also what the user prefers to hear.

Fig. 3 shows an overview of a process, according to an embodiment of the invention, of screening a person's needs and wishes regarding hearing loss compensation, and thereafter optimizing an intelligent earphone accordingly. In the following, the person is assumed to suffer from hearing loss, and the intelligent earphone would then be an embodiment of a hearing aid.
First a calibration 31 is performed. This comprises a screening of the hearing loss of the person in question. Next a conversion 32 is performed. This comprises a conversion of the data collected through the calibration procedure, together with an optimization to use the data with a specific type of hearing aid or other intelligent earphone. The last element of the process is rendering 33. This comprises the use of a hearing aid or other kind of intelligent earphone programmed with the data calculated through the previous steps.
Fig. 4 illustrates the calibration process, above referenced as 31, in more detail. It comprises a number N of layers L1 41, Ln 42, Ln+1 43 and LN 44. Within each layer 41, 42, 43, 44 a calibration is performed with respect to one or several specific signal features, such as e.g. clarity, comfort, loudness, frequency spectrum, formant structure, fundamental frequency, intensity, vowels, semivowels, consonants, nasals, diphthongs and triphthongs, time variance, fricatives, affricatives, laterals, coarticulation and transitions, tone, accent, stress, tempo, rhythm, pausing, intonation, laryngeal settings, phonation type, etc. Calibration with respect to other parameters is also possible, such as e.g. cancellation of background noise. The number of layers preferably equals the number of different acoustic perception parameters for which a calibration is desired. Some acoustic perception parameters may be calibrated at the same time, allowing another embodiment of the invention to have fewer layers than the number of acoustic perception parameters to be calibrated. Each layer is possibly, but not necessarily, dependent on the previous layers, either by needing other parameters to be calibrated already or in order to suggest a similar calibration or a starting point for calibration within the current layer. When all calibration layers have been run through, the process is repeated with the next ear. When both ears have been individually calibrated, the balance between the ears is adjusted 46 and calibration is finished. In another embodiment of the invention, both ears are individually calibrated for each layer before proceeding with the next layer. In such a case the decision-box 45 is omitted. In yet another embodiment of the invention, both ears are calibrated at the same time, which also omits the decision-box 45. And in yet another embodiment of the invention, only one ear is calibrated, thus omitting decision-box 45 and balance adjustment 46.
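The per-ear, per-layer flow of Fig. 4 can be sketched as follows. This is an illustrative outline only: the layer names, the `run_layer` placeholder and the returned data shape are assumptions for the sketch, not the patent's prescribed implementation.

```python
# Hedged sketch of the Fig. 4 flow: run every calibration layer for each
# ear, then (if two ears were calibrated) perform the balance step 46.
LAYERS = ["loudness", "clarity", "comfort", "frequency spectrum"]

def run_layer(layer, ear):
    # Placeholder: a real layer would run the comparative tests of Fig. 6.
    return {"layer": layer, "ear": ear, "setting": 0.5}

def calibrate(ears=("left", "right")):
    results = []
    for ear in ears:                  # repeat all layers for each ear
        for layer in LAYERS:
            results.append(run_layer(layer, ear))
    # Step 46: balance adjustment only applies when both ears are calibrated.
    balance = 0.0 if len(ears) == 2 else None
    return {"settings": results, "balance": balance}

profile = calibrate()
print(len(profile["settings"]))   # 4 layers x 2 ears = 8
```

The single-ear embodiment mentioned above corresponds to calling `calibrate(("left",))`, in which case the balance step is skipped.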
Fig. 5 illustrates the conversion process in more detail. First a filter is defined 51 on the basis of the data collected through the calibration process. This filter is adjusted to the person in question by the adaptive calibration process. Then the filter is optimized 52 according to the specific hearing aids or other intelligent earphone the person will be using. Finally, the filter is implemented 53 in the hearing aids.
Fig. 6 illustrates an exemplary flow chart of a calibration layer 41, 42, 43, 44 according to an embodiment of the invention. As described above, the purpose of such a layer is to perform a calibration with respect to specific acoustic perception parameters. In general, this calibration is carried out by exposing the user to a comparative test, comprising the steps of presenting two or more sounds and letting the user choose the one he or she prefers. These steps are repeated with different sounds, sound environments, etc., until the user's answers are sufficient to draw a conclusion for that layer.
In figure 6, the user is first provided with info. This info may comprise details on the test sequence, the ear to be tested, the theories behind the test, etc. The info may be audible or visual or both.
Thereafter the user is presented with two audio signals, sounds 1.1 and 1.2, e.g. two speech signals representing the same word, e.g. "two", but being mutually different regarding a certain acoustic perception parameter. The user chooses the audio signal he or she prefers, and is again presented with two audio signals, either sounds 2.1 and 2.2 or sounds 2.3 and 2.4, according to the user's first decision. The sounds 2.1, 2.2, 2.3 and 2.4 preferably represent the same content, e.g. "two", as the sounds 1.1 and 1.2, but with the acoustic perception parameter under test altered, differently for each signal, relative to the previous signals; this alteration is carried out in accordance with the user's previous choices. For a number of the subsequent signals, such altering may consist of no alteration at all, as it may be advantageous to compare the same signal to one of the next signals.
Depending on the user's choices and the complexity of calibrating a specific acoustic perception parameter, the necessary number of comparisons may vary. In the example of figure 6 a conclusion can thus be made after only two comparisons, if the user first chooses the second sound 1.2 and then chooses the first sound 2.3. When sufficient comparisons have been made, the user is sent to the next calibration layer Ln+1. The dashed connections, being outputs of the "Play sounds 4.1 & 4.2" and "Play sounds 4.3 and 4.4" decision boxes, indicate that more comparisons are necessary for these answer combinations.
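The adaptive paired-comparison scheme just described can be sketched as a simple search over one parameter. This is a hypothetical sketch: the `play` callback, the step-narrowing strategy and the fixed round count are assumptions, not the decision tree of Fig. 6 itself.

```python
def calibrate_layer(play, param_lo=0.0, param_hi=1.0, max_rounds=5):
    """Narrow down one acoustic perception parameter by paired comparisons.

    `play(a, b)` presents two stimuli rendered with parameter values `a`
    and `b`, then returns 1 or 2 for the user's preference.
    """
    lo, hi = param_lo, param_hi
    for _ in range(max_rounds):
        third = (hi - lo) / 3.0
        a, b = lo + third, hi - third      # two candidate settings
        if play(a, b) == 1:
            hi = b                         # user prefers the lower value
        else:
            lo = a                         # user prefers the higher value
    return (lo + hi) / 2.0                 # converged preference

# Simulated user whose preferred setting is near 0.7:
target = 0.7
pick = lambda a, b: 1 if abs(a - target) < abs(b - target) else 2
print(round(calibrate_layer(pick), 2))
```

As in the text, the number of comparisons needed may vary with the user's answers; here it is simply capped by `max_rounds`.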
On the basis of the choices made solely by the user through all calibration layers, different audio characteristics are registered. The comparative tests represent what the user would actually prefer to hear when applying his or her hearing aids or other intelligent earphone. Therefore, the filter settings establishing the user's personal hearing aid transfer function(s) may be established directly.
Figures 7A and 7B are graphical representations of an example of two paired audio signals, e.g. sounds 1.1 and 1.2 from above, both being speech signals representing the word "two", but being mutually different regarding a certain acoustic perception parameter.
The graphs comprise a horizontal axis representing the time dimension measured in milliseconds (msec), and a vertical axis representing frequency measured in Hertz (Hz). The symbols used to plot the signals into the graph also represent frequencies. Circular points correspond to low frequencies below 1000 Hertz, rectangular points correspond to frequencies above 1000 Hertz but below 2000 Hertz, and triangular points correspond to frequencies above 2000 Hertz. The size of the symbols corresponds to the intensity of the signal at the time and frequency of a particular symbol. The actual frequency plot is indicated by the centers of the symbols.
According to the above, it can be seen that both the signal of figure 7A and that of figure 7B comprise four main frequency components: a first frequency component between 300 Hz and 400 Hz, a second frequency component shifting from just above 1500 Hz down to 900 Hz, a third frequency component shifting from 2800 Hz down to 2300 Hz, with a dive to 2100 Hz at 1000 msec, and finally a fourth frequency component starting at 3300 Hz, shifting to 3400 Hz at 620 msec, shifting to 2900 Hz at 900 msec, and ending at about 3200 Hz. As seen, the two signals are approximately equal with respect to their frequency components. This is because they are both derived from the same recording of the word "two".
On the other hand, it can be seen from the signal graphs that the two signals differ with respect to intensity, indicated by the size of the symbols. Taking the first frequency component, the one with the lowest frequencies, the intensities are almost the same. But the other three frequency components all have increased intensity for the signal shown in figure 7B. The difference in intensity between the two signals increases with frequency.
The audio signal of figure 7B is that of figure 7A altered by a transfer function that amplifies high frequencies. This corresponds with the analysis made above. When these two audio signals are presented to a user whose hearing abilities are less sensitive to high frequencies, he or she will hear the high-frequency enhanced signal of figure 7B more clearly than the original signal of figure 7A, and will therefore choose that high-frequency enhanced signal. To a user with normal hearing abilities, the high-frequency enhanced signal of figure 7B will sound unnatural and uncomfortable compared to the signal of figure 7A, and the user will choose the original signal. If the example just described was the first comparative test within the layer of calibrating according to high frequencies, a following comparison may present signals altered in such a way that a determination of other parameters with respect to high-frequency amplification is achieved.

Fig. 8 illustrates an example of a transfer function used to alter acoustic perception parameters of audio signals. The example of figure 8 amplifies the high frequencies. Often, hearing impaired persons will benefit from having the high frequencies amplified.
The figure comprises a horizontal axis representing frequency measured in Hertz (Hz) and a vertical axis representing amplification measured in decibels (dB). In the graph is plotted the transfer function 81 used to alter the signal of figure 7A to become the signal of figure 7B. As seen, the transfer function leaves DC-components unaltered, but otherwise amplifies the signal, the amplification being proportional to the frequency. Thus, signal components with frequencies at 3000 Hz are amplified by approximately 20 dB, explaining the huge difference in size of the symbols representing high frequencies (the triangles) in figures 7A and 7B, while the size of the symbols representing low frequencies differs less.
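A transfer function of the kind shown in figure 8 can be expressed numerically. The sketch below assumes the slope read off the description (0 dB at DC, roughly 20 dB at 3000 Hz, linear in frequency); the constant and function names are illustrative, not values taken from the patent's figure data.

```python
# Sketch of a figure-8-style transfer function: gain in dB rises linearly
# with frequency, leaving DC unaltered and boosting 3 kHz by ~20 dB.
DB_PER_HZ = 20.0 / 3000.0   # assumed slope: ~20 dB of boost at 3000 Hz

def gain_db(freq_hz):
    """Gain in dB at a given frequency; 0 dB at DC."""
    return DB_PER_HZ * freq_hz

def gain_linear(freq_hz):
    """The same gain as a linear amplitude factor."""
    return 10.0 ** (gain_db(freq_hz) / 20.0)

# Approximate centers of the four components described for the word "two":
for f in (350, 1200, 2500, 3300):
    print(f"{f:5d} Hz: +{gain_db(f):5.1f} dB (x{gain_linear(f):5.2f})")
```

Evaluating the curve at the four component frequencies shows why the high-frequency symbols of figure 7B grow so much more than the low-frequency ones.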
Other preferred transfer functions used to establish the audio signal pairs used for the comparative tests comprise transfer functions related to signal features as e.g. clarity, comfort, loudness, frequency spectrum, formant structure, fundamental frequency, intensity, vowels, semivowel, consonants, nasals, diphthongs and triphthongs, time variance, fricatives, affricatives, laterals, coarticulation and transitions, tone, accent, stress, tempo, rhythm, pausing, intonation, laryngeal settings, phonation type, etc. It should be noted that other transfer functions, though not preferred, are within the scope of the invention as well.
Fig. 9 illustrates the principal state of the hearing aid or other kind of intelligent earphone, in which the filter settings establishing the desired transfer function(s) of the hearing aid control the rendering 91 of the hearing aid.
Figs. 10A-10E illustrate an exemplary implementation of the menus displayed on the monitor during the calibration process. Figure 10A lets the user adjust the volume before the test begins. This is done to ensure that the user can hear the test sounds, but it may also be used as a calibration parameter itself. The menu comprises three buttons 711, 712, 713. The first button 711 lets the user decrease the volume, the third button 713 lets the user increase the volume, and the second button 712 lets the user accept the volume level when any adjustments are done. All the time this menu is shown, an audio signal is played. The audio signal preferably comprises sound components easily heard even by heavily hearing impaired persons.
This menu, and any other menu comprising sound replay, may also comprise a picture, moving or still, e.g. a video recording of the speaker, saying the test words. This may be beneficial to some calibration steps.
Figure 10B shows a menu used for the comparative tests. It comprises three buttons 721, 722, 723. The first button 721 lets the user choose the first sound as the clearest or most comfortable. The next button 722 lets the user choose the second sound as the clearest or most comfortable. The third button 723 lets the user request a replay of the sounds. When this menu is shown, two audio signals are being played, and then there is silence. The user may be visually or audibly instructed about which sound is about to be played.
It should be noted that this remarkably simple audio presentation, especially when performed with audio sounds to which the user may actually relate, such as words, music, door bells, etc., may increase the knowledge of the combined functionality of user and earphones significantly when compared to traditional tests applying artificial audio signals, such as pure sine tones. Moreover, a conventional pure-tone response from a user is typically very difficult to transform into meaningful settings of the earphone-related filter circuits.
It should also be mentioned that the user may be presented with further feedback options, e.g. '1' and '2' supplemented by a 'don't care'. Other comparative feedback may also be given within the scope of the invention, such as further comparative test signals, although two signals, 1 and 2, as illustrated in fig. 10B, represent the most preferred comparative test.
Figure 10C shows a menu used when the comparative tests are finished. It is used for calibrating the balance between the left and right ear. It comprises three buttons 731, 732, 733. The first and third buttons 731 and 733 let the user adjust the balance. The second button 732 lets the user accept the balance when any adjustments are done. While this menu is displayed, an audio signal is played to help the user make the adjustment. The audio signal played is preferably filtered according to the calibration results just achieved through the comparative tests, to give the user the best basis for making the adjustments. In another embodiment of the invention, the user only calibrates one ear, and this balance adjustment menu is thus omitted.
Figure 10D shows a menu that lets the user calibrate the volume level of sounds. It comprises three buttons 741, 742, 743. The first and third buttons 741, 743 let the user adjust the sound level, and the second button 742 lets the user accept the adjustments made. When this menu is shown, an audio signal is played. This audio signal is preferably filtered according to the calibration results just achieved through the comparative tests, to give the user the best basis for making the adjustments.
Figure 10E shows a menu for calibrating the sound of the user's own voice. It comprises two buttons 751, 752. The first button 751 lets the user accept the setting, and the second button 752 lets the user request an adjustment. When this menu appears, the user is first visually or audibly asked to say a word, e.g. "house". The word is recorded, filtered according to the previous calibrations made, and then presented to the user. If he or she presses the first button 751, the user is instructed to say another word, e.g. "zero". This is repeated a number of times. If the user presses the second button 752, the filtering of the recorded audio signal is changed, and the result is presented to the user again. This is repeated until the user presses the first button 751.

It should be noted that the above comparative tests may be applied in numerous different ways within the scope of the invention.
Hence, the set of test sounds may e.g. vary from user to user, e.g. depending on the language of the user or the environment in which the user wishes to use his intelligent earphone, i.e. with different types of background noise, e.g. kindergarten noise, etc. Moreover, a user may also wish to obtain several different settings, e.g. one for work (e.g. with heavy background noise), one setting for listening to music, one setting for going shopping, etc.
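The own-voice loop of figure 10E above can be sketched as a small state machine. All helper names (`record`, `apply_filter`, `playback`, `ask_user`) and the word list are hypothetical stand-ins for whatever the concrete system provides.

```python
# Illustrative sketch of figure 10E: record a word, filter it with the
# current calibration, replay it, and let the user either accept
# (button 751, next word) or request an adjustment (button 752, refilter).
WORDS = ["house", "zero", "two"]

def own_voice_calibration(record, apply_filter, playback, ask_user):
    adjustment = 0
    for word in WORDS:
        while True:
            signal = record(word)                 # user says the word
            playback(apply_filter(signal, adjustment))
            if ask_user() == "accept":            # button 751
                break
            adjustment += 1                       # button 752: change filter
    return adjustment

# Simulated session: the user asks for one adjustment on the first word.
answers = iter(["adjust", "accept", "accept", "accept"])
final = own_voice_calibration(
    record=lambda w: w,
    apply_filter=lambda s, a: (s, a),
    playback=lambda s: None,
    ask_user=lambda: next(answers),
)
print(final)
```

The accumulated `adjustment` stands in for whatever filter change the real system would apply between replays.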

Claims
1. Method of calibrating an intelligent earphone (17), e.g. a hearing aid, said method comprising the steps of establishing a series of comparative audio presentations of audio signals (1.1, 1.2, 2.1, 2.2, etc.) to a user by means of a sound transducer,
facilitating the user to select a series of preferred audio signals (1.1, 1.2, 2.1, 2.2, etc.),
establishing signal processing parameters of the intelligent earphone on the basis of the user selected preferred audio signals (1.1, 1.2, 2.1, 2.2, etc.).
2. Method of calibrating an intelligent earphone according to claim 1, whereby said comparative audio presentations comprise paired comparative tests.
3. Method of calibrating an intelligent earphone according to claim 1 or 2, whereby the frequency content of said audio signals (1.1, 1.2, 2.1, 2.2, etc.) is modified on the basis of the user's choice of selected preferred audio signals.
4. Method of calibrating an intelligent earphone according to any of the claims 1-3, whereby said audio signals (1.1, 1.2, 2.1, 2.2, etc.) are presented to the user by means of an audio rendering system comprising said intelligent earphone (17), e.g. hearing aid or said technical equivalent thereof and a signal processing equipment (1) associated thereto.
5. Method of calibrating an intelligent earphone according to any of the claims 1-4, whereby said sound transducer comprises said intelligent earphones or technical equivalent.
6. Method of calibrating an intelligent earphone according to any of the claims 1-5, whereby said user-selected preferred audio signals (1.1, 1.2, 2.1, 2.2, etc.) comprise audio signals selected to be most comfortable or clear or intelligible by the user.
7. Method of calibrating an intelligent earphone according to any of the claims 1-6, whereby said audio signals are presented to the user by an intelligent earphone (17), e.g. a hearing aid, and whereby said intelligent earphone is finally calibrated at the end of said series of comparative audio presentations of audio signals (1.1, 1.2, 2.1, 2.2, etc.).
8. Method of calibrating an intelligent earphone according to any of the claims 1-7, whereby said comparative audio presentations comprises at least two consecutive audio signals (1.1 and 1.2, 2.1 and 2.2, 2.3 and 2.4, etc.) presented to the user and where the user selects the preferred audio signals.
9. Method of calibrating an intelligent earphone according to any of the claims 1-8, whereby the user-selected signals are registered and stored in associated calibration processing equipment (1).
10. Method of calibrating an intelligent earphone according to any of the claims 1-9, whereby said series of comparative audio presentations of audio signals (1.1, 1.2, 2.1, 2.2, etc.) differ in amplification of predetermined frequency bands.
11. Method of calibrating an intelligent earphone according to any of the claims 1-10, whereby said series of comparative audio presentations of audio signals (1.1, 1.2, 2.1, 2.2, etc.) are adapted pedagogically to the user type.
12. Method of calibrating an intelligent earphone according to any of the claims 1-11, whereby said comparative audio presentation comprises visual test elements.
13. Method of calibrating an intelligent earphone according to any of the claims 1-12, whereby said visual test elements comprise explanatory guidance to the user of the calibration.
14. Method of calibrating an intelligent earphone according to any of the claims 1-13, whereby said user controls the calibration process.
15. Method of calibrating an intelligent earphone according to any of the claims 1-14, whereby said calibration delivers a warning if certain predefined criteria differ between the user's ears to a certain predefined degree.
16. Method of calibrating an intelligent earphone according to any of the claims 1-15, whereby the comparative test is supplemented by a hearing threshold test.
17. Audio rendering system for rendering of audio signals according to any of the claims 1-16.
18. Method of calibrating an intelligent earphone, e.g. a hearing aid, to a user, said method comprising the steps of
establishing a set of preferred audio representing parameters,
establishing settings of said intelligent earphone on the basis of said preferred audio representing parameters,
said method being executed by signal processing means under the control of the user.
19. Method of calibrating an intelligent earphone according to claim 18, whereby said settings are approved by the user by providing audio signals to the user.
20. Method of calibrating an intelligent earphone according to claim 18 or 19, whereby said method is performed according to any of the claims 1-17.
PCT/DK2003/000442 2002-06-28 2003-06-26 Method of calibrating an intelligent earphone WO2004004414A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003236811A AU2003236811A1 (en) 2002-06-28 2003-06-26 Method of calibrating an intelligent earphone

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DKPA200201022 2002-06-28
DKPA200201022 2002-06-28

Publications (1)

Publication Number Publication Date
WO2004004414A1 true WO2004004414A1 (en) 2004-01-08

Family

ID=29797023

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DK2003/000442 WO2004004414A1 (en) 2002-06-28 2003-06-26 Method of calibrating an intelligent earphone

Country Status (2)

Country Link
AU (1) AU2003236811A1 (en)
WO (1) WO2004004414A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5825894A (en) * 1994-08-17 1998-10-20 Decibel Instruments, Inc. Spatialization for hearing evaluation
EP1073314A1 (en) * 1999-07-27 2001-01-31 Siemens Audiologische Technik GmbH Method for fitting a hearing aid and hearing aid
WO2001069970A2 (en) * 2000-03-13 2001-09-20 Sarnoff Corporation Hearing aid format selector
WO2001093627A2 (en) * 2000-06-01 2001-12-06 Otologics, Llc Method and apparatus measuring hearing aid performance
EP1261235A2 (en) * 2001-05-21 2002-11-27 Seiko Epson Corporation An IC chip for a hearing aid, a hearing aid and a system for adjusting a hearing aid


Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL1030038C2 (en) * 2004-11-02 2007-07-10 Samsung Electronics Co Ltd Earphone`s frequency response characteristic compensating method for e.g. MP3 players, involves extracting filter coefficient by comparing measured response characteristic with reference frequency characteristic of target curve
NL1030541C2 (en) * 2004-12-28 2007-06-12 Samsung Electronics Co Ltd Audio frequency response characteristic compensation method for portable sound system, involves generating acoustic compensation curve based on acoustic and frequency characteristic target curves
US8059833B2 (en) 2004-12-28 2011-11-15 Samsung Electronics Co., Ltd. Method of compensating audio frequency response characteristics in real-time and a sound system using the same
AU2007349196B2 (en) * 2006-03-01 2013-04-04 3M Innovative Properties Company Wireless interface for audiometers
US8939031B2 (en) 2006-03-01 2015-01-27 3M Innovative Properties Company Wireless interface for audiometers
WO2008127221A1 (en) * 2006-03-01 2008-10-23 Cabot Safety Intermediate Corporation Wireless interface for audiometers
EP2316337A1 (en) * 2006-03-01 2011-05-04 3M Innovative Properties Company Wireless interface for audiometers
US8196470B2 (en) 2006-03-01 2012-06-12 3M Innovative Properties Company Wireless interface for audiometers
AU2007349196A9 (en) * 2006-03-01 2013-03-28 3M Innovative Properties Company Wireless interface for audiometers
WO2008009142A1 (en) * 2006-07-20 2008-01-24 Phonak Ag Learning by provocation
US7970146B2 (en) 2006-07-20 2011-06-28 Phonak Ag Learning by provocation
EP2109934A4 (en) * 2007-01-04 2013-08-21 Sound Id Personalized sound system hearing profile selection process
EP2109934A1 (en) * 2007-01-04 2009-10-21 Sound ID Personalized sound system hearing profile selection process
WO2008086112A1 (en) 2007-01-04 2008-07-17 Sound Id Personalized sound system hearing profile selection process
EP2109934B1 (en) 2007-01-04 2016-04-27 Cvf, Llc Personalized sound system hearing profile selection
WO2016071221A1 (en) * 2014-11-04 2016-05-12 Jacoti Bvba Method for calibrating headphones
JP2019004458A (en) * 2017-06-06 2019-01-10 ジーエヌ ヒアリング エー/エスGN Hearing A/S Trial-listening of setting of hearing device, related system, and hearing device
CN109005486A (en) * 2017-06-06 2018-12-14 大北欧听力公司 The audition of hearing device setting, related system and hearing device
EP3413585A1 (en) * 2017-06-06 2018-12-12 GN Hearing A/S Audition of hearing device settings, associated system and hearing device
US11032656B2 (en) 2017-06-06 2021-06-08 Gn Hearing A/S Audition of hearing device settings, associated system and hearing device
US11882412B2 (en) 2017-06-06 2024-01-23 Gn Hearing A/S Audition of hearing device settings, associated system and hearing device
EP3493555A1 (en) * 2017-11-29 2019-06-05 GN Hearing A/S Hearing device and method for tuning hearing device parameters
US10735877B2 (en) 2017-11-29 2020-08-04 Gn Hearing A/S Hearing device and method for tuning hearing device parameters
US11146899 2021-10-12 Gn Hearing A/S Hearing device and method for tuning hearing device parameters
US11418894B2 (en) 2019-06-01 2022-08-16 Apple Inc. Media system and method of amplifying audio signal using audio filter corresponding to hearing loss profile
AU2021204971B2 (en) * 2019-06-01 2023-01-19 Apple Inc. Media system and method of accommodating hearing loss
WO2022079476A1 (en) * 2020-10-15 2022-04-21 Palti Yoram Prof Telecommunication device that provides improved understanding of speech in noisy environments

Also Published As

Publication number Publication date
AU2003236811A1 (en) 2004-01-19

Similar Documents

Publication Publication Date Title
CN107615651B (en) System and method for improved audio perception
US7933419B2 (en) In-situ-fitted hearing device
US7564979B2 (en) Listener specific audio reproduction system
EP2640095B2 (en) Method for fitting a hearing aid device with active occlusion control to a user
US6913578B2 (en) Method for customizing audio systems for hearing impaired
JP6374529B2 (en) Coordinated audio processing between headset and sound source
EP2566193A1 (en) System and method for fitting of a hearing device
US8867764B1 (en) Calibrated hearing aid tuning appliance
EP1617705B1 (en) In-situ-fitted hearing device
US20090285406A1 (en) Method of fitting a portable communication device to a hearing impaired user
JP2007508751A (en) Auditory adjustment device for electronic audio equipment
US20180098720A1 (en) A Method and Device for Conducting a Self-Administered Hearing Test
US20070223721A1 (en) Self-testing programmable listening system and method
US20230190141A1 (en) Method to estimate hearing impairment compensation function
JP2008042787A (en) Apparatus and method for adapting hearing
WO2004004414A1 (en) Method of calibrating an intelligent earphone
JP3482465B2 (en) Mobile fitting system
JP6950226B2 (en) Audio equipment, optimization processing methods and programs for audio equipment
US11653137B2 (en) Method at an electronic device involving a hearing device
US20240064487A1 (en) Customized selective attenuation of game audio
JP2011527158A (en) Hearing enhancement and hearing protection method and system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP