EP3614695A1 - A hearing instrument system and a method performed in such system - Google Patents

A hearing instrument system and a method performed in such system

Info

Publication number
EP3614695A1
Authority
EP
European Patent Office
Prior art keywords
hearing instrument
electronic communication
hearing
communication device
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP18190144.8A
Other languages
German (de)
French (fr)
Inventor
Sergi Rotger Griful
Ariane LAPLANTE-LÉVESQUE
Eline Borch Petersen
Lars Bramsløw
Thomas Lunner
Claus Nielsen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oticon AS
Original Assignee
Oticon AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oticon AS filed Critical Oticon AS
Priority to EP18190144.8A priority Critical patent/EP3614695A1/en
Publication of EP3614695A1 publication Critical patent/EP3614695A1/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/558Remote control, e.g. of amplification, frequency
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/39Aspects relating to automatic logging of sound environment parameters and the performance of the hearing aid during use, e.g. histogram logging, or of user selected programs or settings in the hearing aid, e.g. usage logging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/30Monitoring or testing of hearing aids, e.g. functioning, settings, battery power
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting


Abstract

A method performed in a hearing instrument system (10) including at least one hearing instrument (11) and an electronic communication device (22), e.g. a smartphone, for adjusting the settings of the hearing instrument (11). The method (40) comprises the steps of triggering (S41) the hearing instrument system (10) when the hearing instrument (11) makes a decision that affects the performance of the hearing instrument; collecting (S42) feedback information by using the electronic communication device (22) and/or the hearing instrument (11); processing (S43) the feedback information and transferring the processed information to a hearing instrument algorithm (12); presenting the listening preferences on the display (29) of the electronic communication device (22) as graphical objects representing the processed information; and adjusting the settings of the hearing instrument (11) based on a chosen listening preference.

Description

    TECHNICAL FIELD
  • The present disclosure relates to a hearing instrument system and a method performed in such a hearing instrument system.
  • BACKGROUND ART
  • It is known to change the signal-processing algorithm in a hearing aid to help the end-user cope with the listening situation. This changing process is currently unsupervised and operates on previously unseen scenarios. The decision algorithms have no way of knowing whether or not the end-user of the device is satisfied with the performance of the device, i.e. with the choice of algorithm.
  • The hearing device cannot learn based on relative end-user performance or opinion. Therefore, without feedback from the end-user, the hearing device algorithms have a limited performance, which in turn limits end-user satisfaction.
  • The lack of feedback to the hearing device can be critical in classifier algorithms, like the analysis of the sound environment that consequently triggers a change in the current configuration of the hearing device. If these classifiers provide a wrong output, the hearing device will be running in a suboptimal configuration. With current hearing device algorithms, there is no way to know whether or not the classifiers have made a false-positive detection, i.e. a wrong decision.
  • Therefore, there is a need to provide a solution that addresses at least some of the above-mentioned problems.
  • SUMMARY
  • An object of the present invention is therefore to provide a hearing aid which overcomes the problem stated above. In particular, an object of the present invention is to provide a hearing aid that adapts to its user based on the user's interactions with the hearing aid as well as on the acoustic environment presented to the user.
  • An aspect of the present invention relates to a method performed in a hearing instrument system including at least one hearing instrument and an electronic communication device, e.g. a smartphone, for adjusting the settings of the hearing instrument. The hearing instrument comprises an electronic communication circuitry configured to communicate with the electronic communication device. The electronic communication device comprises a display unit and an electronic communication circuitry configured to communicate with the hearing instrument and to communicate with at least one external communication device. The method comprises the steps of triggering the hearing instrument system when the hearing instrument makes a decision that affects the performance of the hearing instrument; collecting feedback information by using the electronic communication device and/or the hearing instrument; processing the feedback information and transferring the processed information to a hearing instrument algorithm; presenting the listening preferences on the display of the electronic communication device as graphical objects representing the processed information; and adjusting the settings of the hearing instrument based on a chosen listening preference.
  • In one aspect, the step of collecting the feedback information comprises collecting self-reported input by a user of the hearing instrument; collecting objective data from at least one sensor worn by the user or embedded in the hearing instrument; or collecting monitored behavioral changes in the user's voice.
  • In one aspect, the self-reported input can be obtained by interacting with the electronic communication device; voice interacting with a microphone of the hearing device or interacting with the hearing device.
  • In one aspect, the feedback information is any of: the cognitive state and mood of the user, gathered from hearing instrument sensors and/or from the electronic communication device; the audiogram of the user and the insertion gain in soft, moderate and loud environments; the user's past behavioral patterns and preferences related to the above parameters; and crowdsourced behavioral patterns based on collected audiograms and insertion gains.
  • In one aspect, the listening preferences in the hearing instrument are set directly by the user 15 interacting with the electronic communication device 22; directly by the user 15 accepting recommendations presented on the display 29 of the electronic communication device 22; or directly by the electronic communication device 22 without user confirmation, once user preferences have been learned and approved by the user 15.
  • According to another aspect, a hearing instrument system is provided, including at least one hearing instrument and an electronic communication device, for adjusting the settings of the hearing instrument. The hearing instrument comprises an electronic communication circuitry configured to communicate with the electronic communication device. The electronic communication device comprises a display unit and an electronic communication circuitry configured to communicate with the hearing instrument and to communicate with at least one external communication device. The system is configured to, when the hearing instrument makes a decision that affects the performance of the hearing instrument: collect feedback information by using the electronic communication device and/or the hearing instrument; process the feedback information and transfer the processed information to a hearing instrument algorithm; present the listening preferences on the display of the electronic communication device as graphical objects representing the processed information; and adjust the settings of the hearing instrument based on a chosen listening preference.
  • In one aspect, the hearing instrument 11 is or comprises a hearing aid, a hearing device or a headset.
  • BRIEF DESCRIPTION OF DRAWINGS
  • Further objects, features and advantages of the present invention will appear from the following detailed description of the invention, wherein embodiments of the invention will be described in more detail with reference to accompanying drawings, in which:
    • Figure 1 shows a block diagram according to an embodiment of the present invention;
    • Figure 2 shows a block diagram according to an embodiment of the present invention; and
    • Figure 3 shows a flowchart of a method according to an embodiment of the present invention.
    DETAILED DESCRIPTION
  • The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the system and method are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as "elements"). Depending upon particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer program, or any combination thereof.
  • The electronic hardware may include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • A hearing device may be or include a hearing aid that is adapted to improve or augment the hearing capability of a user by receiving an acoustic signal from a user's surroundings, generating a corresponding audio signal, possibly modifying the audio signal and providing the possibly modified audio signal as an audible signal to at least one of the user's ears. The "hearing device" may further refer to a device such as a hearable, an earphone or a headset adapted to receive an audio signal electronically, possibly modifying the audio signal and providing the possibly modified audio signal as an audible signal to at least one of the user's ears. Such audible signals may be provided in the form of an acoustic signal radiated into the user's outer ear, an acoustic signal transferred as mechanical vibrations to the user's inner ears through the bone structure of the user's head and/or through parts of the middle ear of the user, or electric signals transferred directly or indirectly to the cochlear nerve and/or to the auditory cortex of the user.
  • The hearing device is adapted to be worn in any known way. This may include
    i) arranging a unit of the hearing device behind the ear with a tube leading air-borne acoustic signals into the ear canal or with a receiver/ loudspeaker arranged close to or in the ear canal such as in a Behind-the-Ear type hearing aid, and/ or
    ii) arranging the hearing device entirely or partly in the pinna and/ or in the ear canal of the user such as in an In-the-Ear type hearing aid or In-the-Canal/ Completely-in-Canal type hearing aid, or
    iii) arranging a unit of the hearing device attached to a fixture implanted into the skull bone such as in a Bone Anchored Hearing Aid or Cochlear Implant, or
    iv) arranging a unit of the hearing device as an entirely or partly implanted unit such as in a Bone Anchored Hearing Aid or Cochlear Implant.
  • A "hearing system" refers to a system comprising one or two hearing devices, and a "binaural hearing system" refers to a system comprising two hearing devices where the devices are adapted to cooperatively provide audible signals to both of the user's ears. The hearing system or binaural hearing system may further include auxiliary device(s) that communicate with at least one hearing device, the auxiliary device affecting the operation of the hearing devices and/or benefitting from the functioning of the hearing devices. A wired or wireless communication link between the at least one hearing device and the auxiliary device is established that allows for exchanging information (e.g. control and status signals, possibly audio signals) between the at least one hearing device and the auxiliary device. Such auxiliary devices may include at least one of remote controls, remote microphones, audio gateway devices, mobile phones, public-address systems, car audio systems or music players, or a combination thereof. The audio gateway is adapted to receive a multitude of audio signals, such as from an entertainment device like a TV or a music player, a telephone apparatus like a mobile telephone, or a computer such as a PC. The audio gateway is further adapted to select and/or combine an appropriate one of the received audio signals (or combination of signals) for transmission to the at least one hearing device. The remote control is adapted to control functionality and operation of the at least one hearing device. The function of the remote control may be implemented in a Smartphone or other electronic device, the Smartphone/electronic device possibly running an application that controls functionality of the at least one hearing device.
  • In general, a hearing device includes
    i) an input unit such as a microphone for receiving an acoustic signal from a user's surroundings and providing a corresponding input audio signal, and/or
    ii) a receiving unit for electronically receiving an input audio signal.
  • The hearing device further includes a signal processing unit for processing the input audio signal and an output unit for providing an audible signal to the user in dependence on the processed audio signal.
  • The input unit may include multiple input microphones, e.g. for providing direction-dependent audio signal processing. Such a directional microphone system is adapted to enhance a target acoustic source among a multitude of acoustic sources in the user's environment. In one aspect, the directional system is adapted to detect (such as adaptively detect) from which direction a particular part of the microphone signal originates. This may be achieved by using conventionally known methods. The signal processing unit may include an amplifier that is adapted to apply a frequency-dependent gain to the input audio signal. The signal processing unit may further be adapted to provide other relevant functionality such as compression, noise reduction, etc. The output unit may include an output transducer, such as a loudspeaker/receiver for providing an air-borne acoustic signal, or a vibrator for providing a structure-borne or liquid-borne acoustic signal transferred transcutaneously or percutaneously to the skull bone. In some hearing devices, the output unit may include one or more output electrodes for providing electric signals, such as in a Cochlear Implant.
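  • By way of illustration only, the frequency-dependent gain mentioned above can be sketched as a per-band weighting in the frequency domain. This is a minimal sketch, not the signal path of the disclosed hearing device; the band edges, gain values, sampling rate and function name are assumptions made for the example.

```python
import numpy as np

def apply_frequency_dependent_gain(x, fs, band_edges_hz, band_gains_db):
    """Apply a per-band gain to a mono signal via FFT-domain weighting.

    band_edges_hz : ascending band edges in Hz, e.g. [0, 1000, 4000, 8000]
    band_gains_db : one gain per band in dB, len(band_gains_db) == len(band_edges_hz) - 1
    """
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    gains = np.ones_like(freqs)
    for lo, hi, g_db in zip(band_edges_hz[:-1], band_edges_hz[1:], band_gains_db):
        gains[(freqs >= lo) & (freqs < hi)] = 10.0 ** (g_db / 20.0)
    return np.fft.irfft(spectrum * gains, n=len(x))

# Illustrative use: boost the mid and high bands of a 1 s test signal.
fs = 16000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.1 * np.sin(2 * np.pi * 4000 * t)
y = apply_frequency_dependent_gain(x, fs, [0, 1000, 4000, 8000], [0.0, 6.0, 12.0])
```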
  • A system and a method are provided that apply supervised learning to improve end-user satisfaction with the automatic hearing-device decisions.
  • To circumvent the problem of potentially making the wrong choice of hearing-device algorithm, a system and method are provided that enable collecting feedback from the end-user on the algorithm decisions that the hearing device makes. This user feedback can be objective (e.g. physiological measures obtained through biosensor monitoring), behavioral, self-reported, or a combination thereof.
  • Figure 1 shows an overview of the provided system 10 and Figure 3 shows the provided method. The system 10 and method 40 function as follows:
    1. The system 10 is triggered S41 when the hearing device 11 makes a decision that affects the configuration/performance of the hearing device, e.g. based on the detected sound environment type.
    2. Once triggered, the hearing device 11 attempts to collect S42 feedback from the end-user 15 on whether or not the change of settings was good, i.e. a binary feedback outcome. The end-user feedback 14a, 14b, 14c can be of three types:
      • Self-reported S421 end-user input 14a: This is feedback that the end-user 15 consciously provides. This can be obtained in different ways:
        • ∘ Interaction S4211 with the smartphone 22 via a question, e.g. "How is the sound now? Better?", to which the end-user can reply through the touch screen with a simple Yes/No. This could be via the visual display 29 or through voice control using the smartphone microphones, e.g. a Siri-like interface.
        • ∘ Voice interaction S4212: The end-user can simply say Yes/No and the hearing device microphone captures the reply.
        • ∘ Other interactions S4213: This could be many different interactions, like touching the left hearing device for good decisions and right hearing device for wrong decisions or nodding/shaking the head to indicate whether the change in algorithm was helpful.
      • Objective S422 end-user input 14b: This feedback is still provided by the end-user 15, but there is no need for dedicated end-user interaction; the input is collected without intervention from the end-user. Examples of objective measures include physiological measures related to hearing performance, like heart rate variability and ECG, pupillometry, EEG, EOG, body and head movement, etc. The acquisition of this feedback signal can be done either from external sensors that the end-user is wearing, e.g. a smart watch, or from sensors embedded in the hearing device 11.
      • Behavioral S423 end-user input 14c: This feedback includes an aided speech identification test or other forms of tests that inform on the performance of the hearing device algorithm 12. Another possibility is to monitor changes in the own-voice utterances of the end-user: if an algorithm change is unfavorable for the end-user, changes in pitch and loudness (the Lombard effect) might be detected in the speech of the end-user. The turn-taking behavior of the user can also be analyzed via own-voice detection, e.g. how much the user is speaking and how much the user is quiet.
    3. Once the feedback has been provided S43, the feedback information is processed and sent back to the hearing device algorithms 12 with an indication of whether the feedback was positive or negative. Both positive and negative feedback are used by the hearing device algorithm 12 in a supervised fashion to improve its accuracy. By adding end-user feedback 14a, 14b, 14c, the hearing device algorithm 12 is tuned and its accuracy improved over time, i.e. it chooses more suitable hearing-device settings, enabling personalization of the algorithms so that no two hearing device algorithms are the same.
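  • As an illustration of steps 1-3 above, the binary end-user feedback can be used as a supervision signal for the decision algorithm. The sketch below is a toy example only: the logistic-model update, the two-dimensional feature vector, the initial weights and the class name are assumptions for the sketch and are not part of the disclosed hearing device algorithm 12.

```python
import numpy as np

class EnvironmentClassifier:
    """Toy stand-in for a decision algorithm: should aggressive noise
    reduction be enabled for the current sound environment?"""

    def __init__(self, init_weights, learning_rate=0.1):
        self.w = np.asarray(init_weights, dtype=float)
        self.b = 0.0
        self.lr = learning_rate

    def _prob(self, features):
        return 1.0 / (1.0 + np.exp(-(self.w @ features + self.b)))

    def decide(self, features):
        # S41: a decision that changes the configuration of the hearing device
        return bool(self._prob(features) > 0.5)

    def give_feedback(self, features, decision, feedback_good):
        # S42/S43: a "bad" rating means the correct label is the opposite of the
        # decision that was made; update with one log-loss gradient step.
        target = float(decision) if feedback_good else float(not decision)
        grad = self._prob(features) - target
        self.w -= self.lr * grad * np.asarray(features, dtype=float)
        self.b -= self.lr * grad

# Illustrative run: in a noisy bar the classifier turns on noise reduction,
# the end-user repeatedly rates that change as "bad", and the classifier
# becomes less likely to trigger it in similar environments.
clf = EnvironmentClassifier(init_weights=[0.5, 0.5])
bar = np.array([0.9, 0.8])            # e.g. high level, low estimated SNR
print(clf.decide(bar))                # True: change triggered
for _ in range(20):
    clf.give_feedback(bar, decision=True, feedback_good=False)
print(clf.decide(bar))                # False: change no longer triggered
```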
  • This hearing device tuning 40 is logged in the hearing aid 11 and is accessible by the hearing care professional. Thus the hearing care professional can monitor the amount of negative feedback and make configuration changes in the hearing device algorithms 12 if need be. All these data can be stored in a cloud service 23, as shown in Fig. 2, and enable data analytics, e.g. group-learning of algorithm settings/states.
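  • The tuning log mentioned above could, for example, be kept as simple structured records that the hearing care professional (or the cloud service 23) aggregates. The record fields and the helper function below are illustrative assumptions, not a format defined by the disclosure.

```python
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class TuningEvent:
    """One logged decision/feedback pair stored in the hearing aid 11 (assumed format)."""
    timestamp: str     # e.g. ISO 8601 time of the decision
    environment: str   # classifier output, e.g. "noisy_bar"
    action: str        # e.g. "aggressive_noise_reduction_on"
    feedback: str      # "positive" or "negative"

def negative_feedback_rate(log: List[TuningEvent]) -> float:
    """Share of negative ratings, i.e. what the hearing care professional monitors."""
    if not log:
        return 0.0
    return sum(event.feedback == "negative" for event in log) / len(log)

log = [
    TuningEvent("2018-08-22T19:05", "noisy_bar", "aggressive_noise_reduction_on", "negative"),
    TuningEvent("2018-08-23T08:10", "quiet_office", "omni_mode", "positive"),
]
print(negative_feedback_rate(log))       # 0.5
print([asdict(event) for event in log])  # plain dicts suitable for upload to a cloud service
```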
  • Figure 3 illustrates a method 40 performed in a hearing instrument system 10 including at least one hearing instrument 11 and an electronic communication device 22, e.g. a smartphone, for adjusting the settings of the hearing instrument 11. The hearing instrument 11 comprises an electronic communication circuitry configured to communicate with the electronic communication device 22. The electronic communication device 22 comprises a display unit 29 and an electronic communication circuitry configured to communicate with the hearing instrument 11 and to communicate with at least one external communication device 30. The method 40 comprises the steps of triggering S41 the hearing instrument system 10 when the hearing instrument 11 makes a decision that affects the performance of the hearing instrument; collecting S42 feedback information by using the electronic communication device 22 and/or the hearing instrument 11; processing S43 the feedback information and transferring the processed information to a hearing instrument algorithm 12; presenting the listening preferences on the display 29 of the electronic communication device 22 as graphical objects representing the processed information; and adjusting the settings of the hearing instrument 11 based on a chosen listening preference.
  • In one aspect, the step of collecting the feedback information comprises collecting self-reported S421 input by a user 15 of the hearing instrument 11; collecting objective data S422 from at least one sensor worn by the user 15 or embedded in the hearing instrument 11; or collecting monitored behavioral S423 changes in the user's voice.
  • In one aspect, the self-reported input can be obtained by interacting S4211 with the electronic communication device 22; voice interacting S4212 with a microphone of the hearing device 11 or interacting S4213 with the hearing device 11.
  • In one aspect, the feedback information is any of: the cognitive state and mood of the user, gathered from hearing instrument sensors and/or from the electronic communication device; the audiogram of the user and the insertion gain in soft, moderate and loud environments; the user's past behavioral patterns and preferences related to the above parameters; and crowdsourced behavioral patterns based on collected audiograms and insertion gains.
  • In one aspect, the listening preferences in the hearing instrument are set directly by the user 15 interacting with the electronic communication device 22; directly by the user 15 accepting recommendations presented on the display 29 of the electronic communication device 22; or directly by the electronic communication device 22 without user confirmation, once user preferences have been learned and approved by the user 15.
  • Figures 1 and 2 illustrate a hearing instrument system 10 including at least one hearing instrument 11 and an electronic communication device 22, for adjusting the settings of the hearing instrument 11. The hearing instrument 11 comprises an electronic communication circuitry configured to communicate with the electronic communication device 22. The electronic communication device 22 comprises a display unit 29 and an electronic communication circuitry configured to communicate with the hearing instrument 11 and to communicate with at least one external communication device 30. The system is configured to, when the hearing instrument makes a decision that affects the performance of the hearing instrument: collect feedback information by using the electronic communication device 22 and/or the hearing instrument 11; process the feedback information and transfer the processed information to a hearing instrument algorithm 12; present the listening preferences on the display 29 of the electronic communication device 22 as graphical objects representing the processed information; and adjust the settings of the hearing instrument 11 based on a chosen listening preference.
  • In one aspect, the hearing instrument 11 is or comprises a hearing aid, a hearing device or a headset.
  • In one embodiment, where a wrong sound environment detection is presented, the user 15 is wearing the hearing aids 11 and enters a bar to meet some friends. The hearing aids 11 detect a change of sound environment and trigger a different configuration of the hearing aids: a more aggressive noise reduction scheme is turned on. When this happens, the user 15 receives a message from an app on the smartphone asking if the change was good or bad, as shown in Fig. 2. The user 15 has personalized how often these prompts should occur. The user 15 replies that it was a bad change, as the user cannot hear his or her friends. This self-reported feedback is provided to the hearing device algorithm 12, which puts a "false positive" label on the sound environment detection. With this feedback, the user's hearing devices 11 are less prone to make similar mistakes in the future, when the user 15 is in similar sound environments.
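  • One possible way of making the hearing devices 11 less prone to repeat such mistakes is to remember the sound environments in which a change was labelled a false positive and to suppress the same change when a similar environment is detected again. The sketch below is only an illustration of that idea under assumptions of its own (the Euclidean similarity measure, the threshold and the class name are not taken from the disclosure).

```python
import math

class FalsePositiveMemory:
    """Remember environments where a configuration change was rated "bad"
    and suppress the same change in sufficiently similar environments."""

    def __init__(self, similarity_threshold=0.2):
        self.bad_environments = []          # feature vectors labelled "false positive"
        self.threshold = similarity_threshold

    def label_false_positive(self, features):
        self.bad_environments.append(list(features))

    def allow_change(self, features):
        for known in self.bad_environments:
            if math.dist(features, known) < self.threshold:
                return False                # too close to a known bad decision
        return True

memory = FalsePositiveMemory()
memory.label_false_positive([0.9, 0.8])     # the bar scene rated "bad" by the user 15
print(memory.allow_change([0.88, 0.82]))    # a similar bar: change suppressed (False)
print(memory.allow_change([0.2, 0.9]))      # a different scene: change allowed (True)
```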
  • The users 15 of hearing aids 11 are exposed to many inputs during the day: visual, tactile, and auditory. It is well known that we get tired during the day and need rest. During the day, we constantly process incoming information. Throughout the day, we often experience that we turn up the sound level of our radio, television, or headphones to experience the same level of satisfaction with the loudness. This phenomenon can arise from the fact that most information is processed by our working memory, which has only a limited capacity, in contrast to long-term memory, which has much more capacity. In order to refuel our working memory capacity, we need to rest, i.e. sleep. If that is not possible, we can try to ease the condition and lower the listening effort, e.g. by turning up the volume of the car radio, the cell phone or the television, to lower the concentration needed to hear. At the same time, the mechanical properties of the inner hair cells are such that they become less reactive when exposed to sound, a mechanism that protects the hearing from being damaged by loud sounds. However, after a full day of listening, your hearing becomes less sensitive and a higher air pressure, i.e. more sound, is needed to obtain the same loudness perception as earlier in the day. Several researchers have found adaptation in the auditory system, i.e. as a function of time the system will adapt to repetitive stimuli like speech but maintain responsiveness to stimuli with different physical characteristics. If the auditory system has some sort of memory that is reset during sleep, the system will start off with full responsiveness for all signals in the morning.
  • During a long night of rest, the auditory system 10, 20 and the cognitive factors depending on working memory processing and the mechanical properties of the inner hair cells are 'reset'. The next morning, when we turn on the radio, the cell phone rings, or we turn on the television, we often find the sound level to be too loud. This is typically because we have rested and no longer need the extra volume to ease the listening effort and make the inner hair cells respond. This phenomenon is known as hearing exhaustion, listening fatigue, or ear fatigue. In short, the volume we listen to often feels too low in the evening and too loud in the morning. The solution described below is primarily intended to be used with headphones and headsets, but could also be applicable for streaming of smartphone and television sounds directly into hearing devices.
  • In one embodiment, a level regulation or loudness control algorithm 27 is provided, as shown in Fig. 2, which is based on three different types of data: subjective user input, big-data collection, and physiological measurements using built-in sensors.
    • Subjective user-input: Using an app, such as the application (app) Captune used together with premium headsets like MB660, PCX 550 and similar, it would be possible to obtain a subjective loudness rating of the volume of a sound. The loudness rating could be carried out several times during the day to find the preferred listening level. Once that is completed, the headset will be able to present a pre-set volume that takes the time of day into consideration. See paths with reference 24 in Figure 2.
    • Big-data collection: Like the approach mentioned above, advanced learning algorithms could be used to monitor how the user adjusts the sound level across the day and later employ this knowledge to automatically adjust the loudness. Besides loudness level, the learning algorithm could take factors such as subjective user-input (see above) and the current auditory sound presented into account. It is likely that the auditory input type, e.g. music vs. computer gaming, requires different sound levels to be perceived as 'loud enough' for the user. The learning algorithm could potentially benefit from other inputs such as the environmental background noise, GPS-position, etc. This type of data may show global trends across many subjects, such that loudness regulation could be based on general observations (crowd-learning) across a large population of headset/device users. See paths with reference 25 in Figure 2.
    • Physiological measures: Physical reactions to the incoming auditory input can also be monitored using electrodes incorporated into the headset/device. On a gaming headset, it will be possible to make use of electrodes placed on the headband and the ear cups to pick up EEG signals (electroencephalography) of the user. Characteristic changes in the EEG, such as the event-related potential, can indicate that an uncomfortably loud sound is being heard, and the loudness can be turned down accordingly. Besides changes in the EEG, uncomfortably loud sounds could also elicit other physiological changes such as irregular/fast eye or head movements. Such reactions to loud sounds could be captured using sensors built into the headset/device for monitoring changes in the eye gaze/movement (EOG), muscle activity (EMG), heart rate and pulse (ECG, PEP), and general body movement (IMUs). See paths with reference 26 in Figure 2.
  • Figure 2 shows a schematic outline of the proposed loudness control algorithm 27.
    • Using data obtained from subjective user-input ratings of loudness satisfaction, collected e.g. via a smartphone app 24, collection of big data such as time-of-day, auditory input characteristics, user input etc. 25, and physiological data measured using sensors built into the headset 26, a cloud-based learning algorithm 23 is trained.
    • Based on the output of this learning algorithm, the loudness control algorithm 27 of the headset is adapted, potentially through a connected smartphone 22.
    • For an immediate adaptation of the loudness control algorithm 27 using the (electro)physiological measurements, the headset/device 11 should also incorporate an algorithm for detecting uncomfortably loud sounds 28 which automatically reduces the loudness.
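  • A minimal sketch of how the loudness control algorithm 27 and the detector of uncomfortably loud sounds 28 could cooperate is given below. The time-of-day preference table, the Boolean "too loud" flag standing in for the physiological detector, and the 3 dB step are illustrative assumptions only; the disclosure does not prescribe these values or interfaces.

```python
from datetime import datetime

class LoudnessController:
    """Toy stand-in for loudness control algorithm 27: a learned time-of-day
    preset (refs 24/25, trained e.g. in the cloud 23) combined with an
    immediate override from an uncomfortable-loudness detector (ref 28)."""

    def __init__(self, preferred_level_by_hour, step_db=3.0):
        self.preferred = preferred_level_by_hour   # hour of day -> preferred level in dB
        self.step_db = step_db

    def preset_level(self, now=None):
        hour = (now or datetime.now()).hour
        return self.preferred.get(hour, 65.0)      # fall back to a nominal level

    def update(self, current_level_db, too_loud_detected):
        """Return the level to apply for the next block of audio."""
        if too_loud_detected:                      # e.g. an EEG/EOG marker of discomfort
            return current_level_db - self.step_db
        return current_level_db

# Illustrative run: slightly louder preset in the evening than in the morning,
# then an immediate reduction when the detector 28 flags an uncomfortable sound.
controller = LoudnessController({8: 60.0, 20: 66.0})
level = controller.preset_level(datetime(2018, 8, 22, 20, 30))   # 66.0 dB evening preset
level = controller.update(level, too_loud_detected=True)         # reduced to 63.0 dB
print(level)
```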
    A computer readable medium
  • In an aspect, the functions may be stored on or encoded as one or more instructions or code on a tangible computer-readable medium. The computer-readable medium includes computer storage media adapted to store a computer program comprising program code, which when run on a processing system causes the data processing system to perform at least some (such as a majority or all) of the steps of the method described above and in the claims.
  • By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Combinations of the above should also be included within the scope of computer-readable media. In addition to being stored on a tangible medium, the computer program can also be transmitted via a transmission medium such as a wired or wireless link or a network, e.g. the Internet, and loaded into a data processing system for being executed at a location different from that of the tangible medium.
  • A data processing system
  • In an aspect, a data processing system is provided, comprising a processor adapted to execute the computer program for causing the processor to perform at least some (such as a majority or all) of the steps of the method described above and in the claims.
  • It is intended that the structural features of the devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process.
  • As used, the singular forms "a," "an," and "the" are intended to include the plural forms as well (i.e. to have the meaning "at least one"), unless expressly stated otherwise. It will be further understood that the terms "includes," "comprises," "including," and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, but an intervening element may also be present, unless expressly stated otherwise. Furthermore, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.
  • It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" or "an aspect" or features included as "may" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects.
  • The claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more.
  • Accordingly, the scope should be judged in terms of the claims that follow.

Claims (7)

  1. A method (40) performed in a hearing instrument system (10) including at least one hearing instrument (11) and an electronic communication device (22), for adjusting the settings of the hearing instrument (11), wherein said hearing instrument (11) comprises:
    • an electronic communication circuitry configured to communicate with the electronic communication device (22);
    wherein the electronic communication device (22) comprises:
    • a display unit (29); and
    • an electronic communication circuitry configured to communicate with the hearing instrument (11) and to communicate with at least one external communication device (30),
    wherein said method (40) comprises the steps of:
    • triggering (S41) the hearing instrument system (10) when the hearing instrument (11) makes a decision that affects the performance of the hearing instrument;
    • collecting (S42) feedback information by using the electronic communication device (22) and/or the hearing instrument (11);
    • processing (S43) the feedback information and transferring the processed information to a hearing instrument algorithm (12);
    • presenting the listening preferences on the display (29) of the electronic communication device (22) as graphical objects representing the processed information; and
    • adjusting the settings of the hearing instrument (11) based on a chosen listening preference.
  2. The method according to claim 1, wherein the step of collecting the feedback information comprises:
    • collecting self-reported (S421) input by a user (15) of the hearing instrument (11);
    • collecting objective data (S422) from at least one sensor worn by the user (15) or embedded in the hearing instrument (11); or
    • collecting monitored behavioral (S423) changes in the voice of the user (15).
  3. The method according to claim 2, wherein the self-reported input can be obtained by:
    • interacting (S4211) with the electronic communication device (22);
    • voice interacting (S4212) with a microphone of the hearing device (11); or
    • interacting (S4213) with the hearing device (11).
  4. The method according to claim 1, wherein the feedback information is any of:
    • cognitive state and mood of the user, gathered from hearing instrument sensors and/or from the electronic communication device;
    • rating of loudness of the volume of a sound;
    • sound level combined with the time of the day;
    • audiogram of the user and insertion gain in soft, moderate and loud environments;
    • the user's past behavioral patterns and preferences related to the above parameters; and
    • crowdsourced behavioral patterns based on collected audiograms and insertion gains.
  5. The method according to claim 1, wherein the listening preferences in the hearing instrument (11) are set:
    • directly by the user (15) interacting with the electronic communication device (22);
    • directly by the user (15), accepting recommendations presented on the display (29) of the electronic communication device (22); or
    • directly by the electronic communication device (22) without user confirmation, once user preferences have been learned and approved by the user (15).
  6. A hearing instrument system (10) including at least one hearing instrument (11) and an electronic communication device (22), for adjusting the settings of the hearing instrument (11), wherein said hearing instrument (11) comprises:
    • an electronic communication circuitry configured to communicate with the electronic communication device (22);
    wherein the electronic communication device (22) comprises:
    • a display unit (29); and
    • an electronic communication circuitry configured to communicate with the hearing instrument (11) and to communicate with at least one external communication device (30),
    wherein said system is configured to, when the hearing instrument makes a decision that affects the performance of the hearing instrument:
    • collect feedback information by using the electronic communication device (22) and/or the hearing instrument (11);
    • process the feedback information and transfer the processed information to a hearing instrument algorithm (12);
    • present the listening preferences on the display (29) of the electronic communication device (22) as graphical objects representing the processed information; and
    • adjust the settings of the hearing instrument (11) based on a chosen listening preference.
  7. A hearing instrument (11) according to any one of claims 1 to 6, wherein the hearing instrument is or comprises a hearing aid, a hearing device or a headset.
EP18190144.8A 2018-08-22 2018-08-22 A hearing instrument system and a method performed in such system Withdrawn EP3614695A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP18190144.8A EP3614695A1 (en) 2018-08-22 2018-08-22 A hearing instrument system and a method performed in such system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP18190144.8A EP3614695A1 (en) 2018-08-22 2018-08-22 A hearing instrument system and a method performed in such system

Publications (1)

Publication Number Publication Date
EP3614695A1 true EP3614695A1 (en) 2020-02-26

Family

ID=63363921

Family Applications (1)

Application Number Title Priority Date Filing Date
EP18190144.8A Withdrawn EP3614695A1 (en) 2018-08-22 2018-08-22 A hearing instrument system and a method performed in such system

Country Status (1)

Country Link
EP (1) EP3614695A1 (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2736273A1 (en) * 2012-11-23 2014-05-28 Oticon A/s Listening device comprising an interface to signal communication quality and/or wearer load to surroundings
WO2017139218A1 (en) * 2016-02-08 2017-08-17 Nar Special Global, Llc. Hearing augmentation systems and methods
US20180063653A1 (en) * 2016-08-25 2018-03-01 Sivantos Pte. Ltd. Method and a device for adjusting a hearing aid device
US20180213339A1 (en) * 2017-01-23 2018-07-26 Intel Corporation Adapting hearing aids to different environments

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3873110A1 (en) * 2020-02-28 2021-09-01 Oticon A/s Hearing aid determining turn-taking
US11375322B2 (en) 2020-02-28 2022-06-28 Oticon A/S Hearing aid determining turn-taking
US11863938B2 (en) 2020-02-28 2024-01-02 Oticon A/S Hearing aid determining turn-taking
JP2022143174A (en) * 2021-03-17 2022-10-03 ソフトバンク株式会社 Hearing aid, voice control method, and voice control program
US11425516B1 (en) 2021-12-06 2022-08-23 Audiocare Technologies Ltd. System and method for personalized fitting of hearing aids
US11882413B2 (en) 2021-12-06 2024-01-23 Tuned Ltd. System and method for personalized fitting of hearing aids

Similar Documents

Publication Publication Date Title
US10542355B2 (en) Hearing aid system
JP6716647B2 (en) Hearing aid operating method and hearing aid
EP2071875B1 (en) System for customizing hearing assistance devices
US20170374477A1 (en) Control of a hearing device
US20130343585A1 (en) Multisensor hearing assist device for health
US9749753B2 (en) Hearing device with low-energy warning
US11671769B2 (en) Personalization of algorithm parameters of a hearing device
US11477583B2 (en) Stress and hearing device performance
US11601765B2 (en) Method for adapting a hearing instrument and hearing system therefor
EP3614695A1 (en) A hearing instrument system and a method performed in such system
US20220201404A1 (en) Self-fit hearing instruments with self-reported measures of hearing loss and listening
US11785404B2 (en) Method and system of fitting a hearing device
US20220053278A1 (en) Systems and methods for adjustment of auditory prostheses based on tactile response
EP3809725A2 (en) Hearing aid system configured to evaluate cognitive load
EP3142388A1 (en) Method for increasing battery lifetime in a hearing device
WO2020084342A1 (en) Systems and methods for customizing auditory devices
US20220192541A1 (en) Hearing assessment using a hearing instrument
CN115278492A (en) Hearing aid with hands-free control
EP4290886A1 (en) Capture of context statistics in hearing instruments
US11528566B2 (en) Battery life estimation for hearing instruments
EP4290885A1 (en) Context-based situational awareness for hearing instruments
US20220353625A1 (en) Electronic hearing device and method
EP4007309A1 (en) Method for calculating gain in a heraing aid
JP2022087062A (en) Spectro-temporal modulation test unit

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

17P Request for examination filed

Effective date: 20200826

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20200827