EP4068805A1 - Method, computer program and computer-readable medium for configuring a hearing device, controller for operating a hearing device, and hearing system - Google Patents


Info

Publication number
EP4068805A1
EP4068805A1
Authority
EP
European Patent Office
Prior art keywords
sound
user
hearing device
program
hearing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP21166351.3A
Other languages
German (de)
English (en)
Inventor
Stephan Müller
Stefan Klockgether
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonova Holding AG
Original Assignee
Sonova AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonova AG filed Critical Sonova AG
Priority to EP21166351.3A priority Critical patent/EP4068805A1/fr
Publication of EP4068805A1 publication Critical patent/EP4068805A1/fr
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • H04R 25/505: Customised settings for obtaining desired overall acoustical characteristics, using digital signal processing (under H04R 25/00: deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; electric tinnitus maskers providing an auditory perception)
    • H04R 2225/41: Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest (details of deaf aids covered by H04R 25/00)
    • H04R 25/558: Remote control, e.g. of amplification, frequency (hearing aids using an external connection, either wireless or wired)

Definitions

  • the invention relates to a method, a computer program, and a computer-readable medium, in which the computer program is stored, for configuring a hearing device. Furthermore, the invention relates to a controller for operating the hearing device, and to a hearing system comprising at least the one hearing device and optionally a connected user device, such as a smartphone.
  • Hearing devices are generally small and complex devices. Hearing devices can include a processor, a microphone as a sound input component, an integrated loudspeaker as a sound output component, a memory, a housing, and other electronical and mechanical components. Some example hearing devices are Behind-The-Ear (BTE), Receiver-In-Canal (RIC), In-The-Ear (ITE), Completely-In-Canal (CIC), and Invisible-In-The-Canal (IIC) devices. A user can prefer one of these hearing devices compared to another device based on hearing loss, aesthetic preferences, lifestyle needs, and budget.
  • the accuracy of the classification system may be limited, which may lead to a misclassification of the situation.
  • furthermore, the hearing intention of the user may not be considered by the classifier, e.g. the user wants to communicate at a concert, but the hearing aid adapts to the music instead of to the conversation partner.
  • a common way to consider the user's intention is to provide him or her with manual programs with predefined sets of feature parameters, between which the user can switch by pressing a button on the hearing instrument.
  • a modern approach is to allow the user to adjust the single parameters directly via a mobile application.
  • these solutions require a certain degree of understanding of how the features affect the listening impression.
  • the benefit of many features comes with compromises (e.g. stronger noise reduction leads to reduced sound quality). Understanding these compromises is a complex matter for users without a technical affinity. It may also take too long until the user has the paired smartphone available or finds the right manual program on the hearing aid, so the experience may not be convenient for the user wearing the device. Further, since manual programs have to be set up in advance, an adequate manual program for a specific situation might simply not be quickly available.
  • a first aspect of the invention relates to a method for configuring a hearing device.
  • the hearing device comprises at least one sound input component, at least one sound output component, and a sound processor, which is coupled to the sound output component and which is configured in accordance with a first sound program for modifying a sound output of the hearing device.
  • the method comprises: receiving an audio signal from the at least one sound input component and/or a sensor signal from the at least one further sensor; determining at least one classification value characterizing the sound input by evaluating the audio signal and/or the sensor signal; determining a second sound program, which is different from the first sound program and which is adapted in accordance with the determined classification value; configuring the sound processor in accordance with the second sound program such that the sound output is modified according to the second sound program; receiving a predetermined user input indicating that a user listening to the sound output does not agree with the configuration in accordance with the second sound program; and reconfiguring the sound processor in accordance with the first sound program.
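The method steps listed above can be sketched as a small control loop. The following Python sketch is illustrative only: the class names, parameter fields, and callbacks (`classify`, `determine_program`, `user_rejects`) are invented stand-ins, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class SoundProgram:
    """A named set of sound-processing parameters (hypothetical structure)."""
    name: str
    noise_canceller: float      # 0.0 (off) .. 1.0 (maximum)
    beamformer_strength: float  # 0.0 (off) .. 1.0 (maximum)

class SoundProcessor:
    """Minimal stand-in for the hearing device's sound processor."""
    def __init__(self, program: SoundProgram):
        self.program = program

    def configure(self, program: SoundProgram) -> None:
        self.program = program

def run_classification_cycle(processor, classify, determine_program, user_rejects):
    """One automatic program change with the claimed revert option.

    classify() yields a classification value from the audio/sensor signals,
    determine_program() maps it to a sound program, and user_rejects()
    reports the predetermined 'disagree' user input.
    """
    first_program = processor.program
    classification_value = classify()
    second_program = determine_program(classification_value)
    if second_program.name != first_program.name:
        processor.configure(second_program)     # automatic change
        if user_rejects():
            processor.configure(first_program)  # revert function
    return processor.program
```

When `user_rejects()` reports the predetermined user input, the processor is left on the first program, which is the revert behaviour described in the claim.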
  • the method may be a computer-implemented method, which may be performed automatically by a hearing system, part of which the user's hearing device is.
  • the hearing system may, for instance, comprise one or two hearing devices used by the same user. One or both of the hearing devices may be worn on and/or in an ear of the user.
  • a hearing device may be a hearing aid, which may be adapted for compensating a hearing loss of the user.
  • a cochlear implant may be a hearing device.
  • the hearing system may optionally further comprise at least one connected user device, such as a smartphone, smartwatch or other devices carried by the user and/or a personal computer etc.
  • the further sensor(s) may be any type(s) of physical sensor(s), e.g. an accelerometer, an optical sensor and/or a temperature sensor.
  • the first and/or second sound program may be referred to as a sound processing feature.
  • the sound processing feature may for example be a Noise Canceller or a Beamformer Strength.
  • the sound input may correspond to the user's speaking activity and/or the user's acoustic environment.
  • the reconfiguration of the sound processor in accordance with the first sound program provides a revert function that allows the user to immediately return to the previous automatic setting, i.e. the first sound program.
  • the revert function empowers the user to revert automatic changes that are not in agreement with his hearing intention.
  • when the user notices an undesired change to the acoustics of his surroundings, he provides the user input indicating that he does not agree with the configuration in accordance with the second sound program, in order to return to the preferred previous setting.
  • the major advantage of the above revert function over common interfaces is that the user can make changes to the hearing system, in particular the hearing device, without needing knowledge of the technical details. The user only expresses his disagreement with the classification of his environment, which is a considerable simplification compared to common methods of interacting with the hearing instrument.
  • a determination algorithm for determining whether the first sound program is adapted to the determined classification value may be adapted depending on the user's feedback, represented by the predetermined user input, such that the hearing device is able to learn the preferences of the user and to consider them in future determinations.
  • the revert function may also deliver real-life feedback data on how satisfied the user is with the current classifier system comprising the determination algorithm, and on the situations for which the determination algorithm and the corresponding automatic sound program steering procedures may need to be adapted.
  • the adaptation of the determination algorithm may only be carried out if the predetermined user input has been given a predetermined number of times under a similar speaking activity and/or acoustic environment.
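That threshold rule could look like the following sketch. The class name, the (context, program) keying, and the suppression strategy are assumptions made for illustration, not taken from the patent.

```python
from collections import defaultdict

class RevertFeedbackLearner:
    """Adapt the determination algorithm only after repeated reverts.

    Counts the user's 'disagree' inputs per (context, program) pair and
    suppresses an automatic program choice once a threshold is reached.
    """

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.revert_counts = defaultdict(int)
        self.suppressed = set()  # (context, program) pairs no longer auto-applied

    def record_revert(self, context: str, program_name: str) -> None:
        key = (context, program_name)
        self.revert_counts[key] += 1
        if self.revert_counts[key] >= self.threshold:
            self.suppressed.add(key)

    def allows(self, context: str, program_name: str) -> bool:
        return (context, program_name) not in self.suppressed
```

Gating the adaptation behind a count keeps a single accidental tap from permanently changing the automatic behaviour.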
  • the predetermined user input is input via input means of the hearing device, an application of a mobile device, and/or a gesture detection.
  • the gesture detection may be carried out by the hearing device, e.g. by a tap control with an accelerometer or pressure sensor of the hearing device. Alternatively, the gesture detection may be carried out by the connected user device.
  • if the predetermined user input is given by the user although the sound program has not been changed, the hearing device provides a predetermined output informing the user that the sound program has not been changed. For example, the acoustic environment of the user changes and the user perceives a change in his listening experience. The user may then believe that this change was induced by an automatic change of the sound program and may provide the predetermined user input indicating that he does not agree with this alleged change. In this case, the predetermined output tells the user that the sound program has not been changed automatically, so the user knows that the change in listening experience has an external cause and was not induced by an internal change of the hearing device. The predetermined output thus enables a differentiation between internal and external changes.
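This internal-versus-external differentiation can be expressed as a small branch. The function and the `notify` callback are hypothetical names, and the predetermined output is reduced to a text message for the sketch.

```python
def handle_disagree_input(current_program, previous_program, program_was_changed, notify):
    """React to the predetermined 'disagree' user input.

    If an automatic program change actually happened (internal change),
    revert to the previous program; otherwise emit the predetermined
    output so the user learns the cause was external.
    """
    if program_was_changed:
        return previous_program, "reverted"
    notify("sound program unchanged")  # predetermined output, e.g. an acoustic cue
    return current_program, "notified"
```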
  • the at least one classification value is determined by characterizing the user's speaking activity and/or the user's acoustic environment.
  • the at least one classification value is determined by identifying a predetermined state characterizing the user's speaking activity and/or the user's acoustic environment by evaluating the audio signal, and by determining the at least one classification value depending on the identified state.
  • the one or more predetermined states are one or more of the following: Speech In Quiet; Speech In Noise; Being In Car; Reverberant Speech; Noise; Music; Quiet; Speech In Loud Noise.
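A toy classifier over these states might look as follows. The input features and every threshold are invented for illustration and are not taken from the patent.

```python
def classify_state(level_db, speech_ratio, music_ratio):
    """Map coarse audio features to one of the predetermined states named
    above: overall level in dB SPL, plus estimated fractions of speech
    and music energy in the signal (all hypothetical features)."""
    if music_ratio > 0.5:
        return "Music"
    if speech_ratio > 0.5:
        if level_db < 50:
            return "SpeechInQuiet"
        if level_db < 75:
            return "SpeechInNoise"
        return "SpeechInLoudNoise"
    return "Quiet" if level_db < 40 else "Noise"
```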
  • two or more classification values characterizing the user's speaking activity and/or the user's acoustic environment by evaluating the audio signal and/or the sensor signal are determined; and the second sound program is adapted to the corresponding determined classification values.
  • the one or more predetermined user activity values are identified based on the audio signal from the at least one sound input component and/or the sensor signal from the at least one further sensor received over a predetermined time interval.
  • the one or more predetermined user activity values are identified based on the audio signal from the at least one sound input component and/or the sensor signal from the at least one further sensor received over two identical predetermined time intervals separated by a predetermined pause interval.
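One possible reading of the two-interval variant, sketched with an invented function name and sample-based lengths:

```python
def paired_interval_slices(signal, interval_len, pause_len):
    """Return the two most recent identical analysis intervals,
    separated by a pause interval (all lengths in samples)."""
    needed = 2 * interval_len + pause_len
    if len(signal) < needed:
        raise ValueError("signal too short for the configured intervals")
    recent = signal[-needed:]
    first = recent[:interval_len]
    second = recent[interval_len + pause_len:]
    return first, second
```

Comparing features over two separated intervals is a common way to check that an acoustic situation is stable before acting on it.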
  • the computer program may be executed in a processor of a hearing device which, for example, is carried by the person behind the ear.
  • the computer-readable medium may be a memory of this hearing device.
  • the computer program also may be executed by a processor of a connected user device, such as a smartphone or any other type of mobile device, which may be a part of the hearing system, and the computer-readable medium may be a memory of the connected user device. It also may be that some steps of the method are performed by the hearing device and other steps of the method are performed by the connected user device.
  • the computer-readable medium may be a floppy disk, a hard disk, a USB (Universal Serial Bus) storage device, a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable Read Only Memory) or a FLASH memory.
  • the computer-readable medium may also be a data communication network, e.g. the Internet, which allows downloading a program code.
  • the computer-readable medium may be a non-transitory or transitory medium.
  • a further aspect of the invention relates to a controller for operating the hearing device, the controller comprising a processor, which is adapted to carry out the steps of the above method.
  • a further aspect of the invention relates to a hearing system comprising the hearing device worn by the hearing device user and a connected user device, wherein the hearing system comprises: a sound input component; a processor for processing a signal from the sound input component; a sound output component for outputting the processed signal to an ear of the user of the hearing device; a transceiver for exchanging data with the connected user device; at least one classifier configured to identify one or more predetermined classification values based on a signal from the at least one sound input component and/or from at least one further sensor; and wherein the hearing system is adapted for performing the above method.
  • the hearing system may further include, by way of example, a second hearing device worn by the same user and/or a connected user device, such as a smartphone or other mobile device or personal computer, used by the same user.
  • a connected user device such as a smartphone or other mobile device or personal computer
  • the hearing system further comprises a mobile device, which includes the classifier.
  • Fig. 1 schematically shows a hearing device 12 according to an embodiment of the invention.
  • the hearing device 12 is formed as a behind-the-ear device carried by a hearing device user (not shown). It has to be noted that the hearing device 12 is a specific embodiment and that the method described herein also may be performed with other types of hearing devices, such as an in-the-ear device.
  • the hearing device 12 comprises a part 15 behind the ear and a part 16 to be put in the ear canal of the user.
  • the part 15 and the part 16 are connected by a tube 18.
  • at least one sound input component 20, e.g. a microphone, a sound processor 22 and a sound output component 24, such as a loudspeaker, are provided in the part 15.
  • the sound input component 20 may acquire environmental sound of the user and may generate a sound signal.
  • the sound processor 22 may amplify the sound signal.
  • the sound output component 24 may generate sound from the amplified sound signal, and the sound may be guided through the tube 18 and the in-the-ear part 16 into the ear canal of the user.
  • the hearing device 12 may comprise a processor 26 which is adapted for adjusting parameters of the sound processor 22, e.g. such that an output volume of the sound signal is adjusted based on an input volume.
  • These parameters may be determined by a computer program which is referred to as a sound program run in the processor 26.
  • a user may select a modifier (such as bass, treble, noise suppression, dynamic volume, etc.) together with levels and/or values of these modifiers. From this selection, an adjustment command may be created and processed as described above and below.
  • processing parameters may be determined based on the adjustment command and based on this, for example, the frequency dependent gain and the dynamic volume of the sound processor 22 may be changed. All these functions may be implemented as different sound programs stored in a memory 30 of the hearing device 12, which sound programs may be executed by the processor 22.
  • the hearing device 12 further comprises a transceiver 32 which may be adapted for wireless data communication with a transceiver 34 of a connected user device 70 (see figure 2 ).
  • the hearing device 12 further comprises at least one classifier 48 configured to identify one or more predetermined classification values based on a signal from the sound input device 20 and/or from at least one further sensor 50 (see figure 2 ), e.g. an accelerometer and/or an optical and/or temperature sensor.
  • the classification value may be used to determine a sound program, which may be automatically used by the hearing device 12, in particular depending on a sound input received via the sound input component 20 and/or the sensor 50.
  • the sound input may correspond to a speaking activity and/or acoustic environment of the user.
  • the hearing device 12 is configured for performing a method for configuring the hearing device 12 according to the present invention.
  • Fig. 2 schematically shows a hearing system 60 according to an embodiment of the invention.
  • the hearing system 60 includes a hearing device, e.g. the above hearing device 12 and a connected user device 70, such as a smartphone or a tablet computer.
  • the connected user device 70 may comprise the transceiver 34, a processor 36, a memory 38, a graphical user interface 40 and a display 42.
  • the connected user device 70 may comprise the classifier 48 or a further classifier 48.
  • in the hearing system 60, it is possible that the above-mentioned modifiers and their levels and/or values are adjusted with the connected user device 70 and/or that the adjustment command is generated with the connected user device 70.
  • This may be performed with a computer program run in the processor 36 of the connected user device 70 and stored in the memory 38 of the connected user device 70.
  • the computer program may provide the graphical user interface 40 on the display 42 of the connected user device 70.
  • the graphical user interface 40 may comprise a control element 44, such as a slider.
  • an adjustment command may be generated, which will change the sound processing of the hearing device 12 as described above and below.
  • the user may adjust the modifier with the hearing device 12 itself, for example via the input means 28.
  • Fig. 3 shows an example for a flow diagram of a method for configuring a hearing device, according to an embodiment of the invention.
  • the method may be a computer-implemented method performed automatically in the hearing device 12 and/or the hearing system 60 of Fig. 1 .
  • in step S2 of the method, in case the hearing device 12 currently provides a sound output to the user, the sound output may be modified in accordance with a first sound program.
  • as mentioned above, the first and/or second sound program may be referred to as a sound processing feature, for example a Noise Canceller or a Beamformer Strength.
  • in step S4 of the method, an audio signal from the at least one sound input component 20 and/or a sensor signal from the at least one further sensor 50 is received, e.g. by the sound processor 22 and the processor 26 of the hearing device 12.
  • in step S6 of the method, the signal(s) received in step S4 are evaluated by the one or more classifiers 48 implemented in the hearing device 12 and/or the connected user device 70 so as to identify a state corresponding to the user's speaking activity and/or the user's acoustic environment; at least one classification value is then determined depending on the identified state.
  • the one or more classification values characterize the identified state.
  • the identified classification value(s) may be, for example, output by one of the classifiers 48 to one or both of the processors 26, 36. It also may be that at least one of the classifiers 48 is implemented in the corresponding processor 26, 36 itself or is stored as a program module in the memory 30, 38 so as to be performed by the corresponding processor 26, 36. As already mentioned herein above, all or some of the steps of the method are performed by the processor 26 of the hearing device 12 and/or by the processor 36 of the connected user device 70.
  • the identified state may be one or more of the group of Speech In Quiet, Speech In Noise, Being In Car, Reverberant Speech, Noise, Music, Quiet, and Speech In Loud Noise.
  • two or more classification values characterizing the user's speaking activity and/or the user's acoustic environment may be determined by evaluating the audio signal and/or the sensor signal.
  • the second sound program may then be adapted to the corresponding determined classification values.
  • the one or more predetermined classification values may be identified based on the audio signal from the at least one sound input component 20 and/or the sensor signal from the at least one further sensor 50 received over one or more predetermined time intervals, e.g. over two identical predetermined time intervals separated by a predetermined pause interval.
  • a second sound program is determined.
  • the second sound program is different from the first sound program and is adapted in accordance with the determined classification value in order to provide an optimal listening experience based on the identified speaking activity and/or acoustic environment of the user. For example, in the second sound program the setting of the Noise Canceller and/or the Beamformer Strength is different than in the first sound program.
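Such a mapping from classification value to program can be pictured as a lookup table. The parameter names follow the features mentioned in the text, but every value below is invented for illustration; the patent does not specify any numbers.

```python
# Hypothetical program table keyed by classifier state.
SOUND_PROGRAMS = {
    "SpeechInQuiet":     {"noise_canceller": 0.1, "beamformer_strength": 0.2},
    "SpeechInNoise":     {"noise_canceller": 0.6, "beamformer_strength": 0.7},
    "SpeechInLoudNoise": {"noise_canceller": 0.9, "beamformer_strength": 0.9},
    "Music":             {"noise_canceller": 0.0, "beamformer_strength": 0.0},
}

def determine_second_program(classification_value, first_program_name):
    """Pick a program adapted to the classification value, falling back
    to the first program when no better fit is known."""
    if classification_value in SOUND_PROGRAMS:
        name = classification_value
    else:
        name = first_program_name
    return name, SOUND_PROGRAMS.get(name)
```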
  • in step S10 of the method, the sound processor 22 is configured in accordance with the second sound program.
  • a predetermined user input is received.
  • the predetermined user input indicates that the user listening to the sound output does not agree with the configuration in accordance with the second sound program.
  • the predetermined user input may be input via the input means 28 of the hearing device 12, an application of the connected user device 70, and/or a gesture detection, which may be carried out by the hearing device 12 and/or the connected user device 70.
  • in step S14 of the method, the sound processor 22 is reconfigured in accordance with the first sound program.
  • the hearing device 12 may provide a predetermined output to the user, which informs the user that the sound program has not been changed.
  • a determination algorithm for determining whether the first sound program is adapted to the determined classification value may be adapted depending on the user's feedback, represented by the predetermined user input, such that the hearing system 60 is able to learn the preferences of the user and to consider them in future determinations.
  • an artificial intelligence may be integrated in the hearing system 60, which learns the preferences of the user in order to provide the optimal listening experience to the user.

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)
EP21166351.3A 2021-03-31 2021-03-31 Method, computer program and computer-readable medium for configuring a hearing device, controller for operating a hearing device, and hearing system Withdrawn EP4068805A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21166351.3A EP4068805A1 (fr) 2021-03-31 2021-03-31 Method, computer program and computer-readable medium for configuring a hearing device, controller for operating a hearing device, and hearing system


Publications (1)

Publication Number Publication Date
EP4068805A1 (fr) 2022-10-05

Family

ID=75339578


Country Status (1)

Country Link
EP (1) EP4068805A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2255548A2 * 2008-03-27 2010-12-01 Phonak AG Method for operating a hearing aid
EP3120578A1 * 2014-03-19 2017-01-25 Bose Corporation Crowd-sourced recommendations for hearing assistance devices
US20200314525A1 (en) * 2019-03-28 2020-10-01 Sonova Ag Tap detection
US20200380979A1 (en) * 2016-09-30 2020-12-03 Dolby Laboratories Licensing Corporation Context aware hearing optimization engine


Similar Documents

Publication Publication Date Title
US11641556B2 (en) Hearing device with user driven settings adjustment
US20200107139A1 (en) Method for processing microphone signals in a hearing system and hearing system
US11343618B2 (en) Intelligent, online hearing device performance management
US20220369048A1 (en) Ear-worn electronic device employing acoustic environment adaptation
CN113395647A Hearing system having at least one hearing device, and method for operating the hearing system
CN113473341A Hearing aid device configured for audio classification, comprising an active vent, and method of operating the same
US20220201404A1 (en) Self-fit hearing instruments with self-reported measures of hearing loss and listening
US8139779B2 (en) Method for the operational control of a hearing device and corresponding hearing device
EP3641344A1 Method for operating a hearing instrument and hearing system comprising a hearing instrument
EP4035420A1 Method of operating an ear level audio system and ear level audio system
EP4068805A1 Method, computer program and computer-readable medium for configuring a hearing device, controller for operating a hearing device, and hearing system
CN111279721B Hearing device system and method for dynamically presenting hearing device modification proposals
US11882413B2 (en) System and method for personalized fitting of hearing aids
EP2688067B1 System for training and improvement of noise reduction in hearing assistance devices
EP3843427B1 Hearing device fitting with user support
US11758341B2 (en) Coached fitting in the field
EP3941092A1 Hearing device adjustment depending on program activity
EP4178228A1 Method and computer program for operating a hearing system, hearing system, and computer-readable medium
US11996812B2 (en) Method of operating an ear level audio system and an ear level audio system
EP3996390A1 Method for selecting a listening program in a hearing device, based on detection of the user's voice
CN113228710B Sound source separation in a hearing device and related methods
US11323809B2 (en) Method for controlling a sound output of a hearing device
US20230156410A1 (en) Hearing system containing a hearing instrument and a method for operating the hearing instrument

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20230406