EP3236673A1 - Adjusting a hearing aid based on user interaction scenarios - Google Patents


Publication number
EP3236673A1
Authority
EP
European Patent Office
Prior art keywords
hearing aid
user
user interaction
hearing
scenario
Prior art date
Legal status
Withdrawn
Application number
EP16165711.9A
Other languages
German (de)
French (fr)
Inventor
Philipp Schneider
Francois JULITA
Aliaksei TSITOVICH
Tim Roth
Current Assignee
Sonova Holding AG
Original Assignee
Sonova Holding AG
Priority date
Filing date
Publication date
Application filed by Sonova Holding AG filed Critical Sonova Holding AG
Priority to EP16165711.9A
Publication of EP3236673A1
Legal status: Withdrawn

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70 Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41 Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
    • H04R2225/55 Communication between hearing aids and external devices via a network for data exchange

Abstract

A method for adjusting a hearing aid (10) comprises: creating a user profile (72) of the hearing aid user in a server device (34), the user profile (72) comprising at least a user identification (74) and hearing aid information (76) specifying the hearing aid (10); preparing a list (82) of user interaction scenarios (40) in the server device (34), based on the user profile (72); selecting a user interaction scenario (40) from the list (82) with a mobile device (36) of the user communicatively interconnected with the server device (34); determining one or more hearing aid adjustment parameters (64) based on the selected user interaction scenario (40); displaying one or more input controls (62, 66) for adjusting the determined one or more hearing aid adjustment parameters (64) with the mobile device (36); after adjustment of at least one of the hearing aid adjustment parameters (64) with the one or more input controls (62, 66): deriving sound processing parameters (30) for the hearing aid (10) based on the adjusted one or more hearing aid adjustment parameters (64); and applying the derived sound processing parameters (30) in the hearing aid (10), such that the hearing aid (10) is adapted for generating optimized sound signals based on the applied sound processing parameters (30).

Description

    FIELD OF THE INVENTION
  • The invention relates to a method, a computer program and a computer-readable medium for adjusting a hearing aid. Furthermore, the invention relates to a fitting system for adjusting a hearing aid.
  • BACKGROUND OF THE INVENTION
  • Today, the dispensing or fitting process for hearing aids usually happens at special places called fitting rooms (located in a shop or an audiologist's office) and is usually performed by a hearing care professional. The fitting process may be a lengthy, complex process, supported by specific tools primarily designed for highly trained hearing care professionals.
  • In such a situation, to improve his real-world hearing performance, a hearing aid user has to bring his real-world problems into the fitting room, where they are then solved by the hearing care professional. However, the hearing aid user has to remember the problematic situation carefully, such that he can reconstruct it for the hearing care professional, which is often difficult.
  • WO 2015/009564 A1 relates to an online hearing and fitting system for non-expert users, who can interactively adjust a hearing aid based on acoustic test signals. WO 2013/117214 A1 relates to a hearing aid fitting system adapted for remote fitting of a hearing aid system.
  • WO 00/22874 A1 relates to a fitting system for hearing devices with a mobile telephone as input device.
  • EP 1 353 530 A1 relates to a system for training of hearing aid users, which comprises training units and individual user profiles stored in a database.
  • DESCRIPTION OF THE INVENTION
  • Objectives of the invention are to simplify the fitting process of a hearing aid, to provide a fitting process that may be performed by an untrained user of a hearing aid, and to provide a fitting process which results in a more individual adjustment of the hearing aid.
  • These objectives are achieved by the subject-matter of the independent claims. Further exemplary embodiments are evident from the dependent claims and the following description.
  • An aspect of the invention relates to a method for adjusting a hearing aid. A hearing aid may be a device, which is adapted for being carried by a user at least partially in or on the ear. In addition, a hearing aid also may be a Cochlear implant device, with parts implanted inside the head.
  • The hearing aid may be adapted for processing sound signals based on sound processing parameters stored in the hearing aid, for example such that hearing deficiencies of a user of the hearing aid are compensated. The sound signals may be generated by a microphone of the hearing aid and/or may be received via another input of the hearing aid such as a T-coil or other interface, like a radio receiver. The sound processing parameters, which may be stored in a memory of the hearing aid, may be parameters for a frequency dependent amplifier of the hearing aid and/or may encode how the sound signal is converted into a signal provided to the hearing sense of the user.
  • According to an embodiment of the invention, the method comprises: creating a user profile of the hearing aid user in a server device, the user profile comprising at least a user identification and hearing aid information specifying the hearing aid; and preparing a list of user interaction scenarios in the server device, based on the user profile. For example, the user profile may be created by a hearing care professional with a fitting application executed in a fitting device. This may be performed during a first fitting process of the hearing aid, in which also the list of user interaction scenarios may be defined, which may be supported with a fitting application.
  • The user profile may be a data structure that may be stored in a database provided by the server device. For example, the fitting device may be interconnected via Internet with the server device and the fitting application may cause the server device to create (automatically) the user profile and/or the list of user interaction scenarios in the database. The user profile may contain information about the user of the hearing aid and about the hearing aid itself. For example, a unique user identification may be used for identifying the data stored for the user in the database. The hearing aid information may comprise a type number and/or serial number of the hearing aid, a list of features of the hearing aid, and/or configuration data of the hearing aid, such as the actual sound processing parameters currently stored in the hearing aid.
  • A user interaction scenario also may be represented with a data structure stored in the database in the server device. A user interaction scenario may comprise a data structure, in which information relating to a scenario and/or situation, in which the user may interact with the hearing aid, is stored. A user interaction scenario may be seen as a container for information relating to a specific task, the user should perform with the hearing aid, and/or for information about the functionality of the hearing aid that is used, changed, adjusted, etc. during this task. For example, a user interaction scenario may comprise information about a specific place, typical location and/or situation (such as a restaurant), how the hearing aid may be used there and/or how the hearing aid functionality may be improved there.
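    The user profile and user interaction scenario are only described abstractly above. As a purely illustrative sketch (all class and field names are invented; the patent prescribes no concrete representation), such data structures could be modelled as follows:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """One possible shape of a server-side user profile (illustrative only)."""
    user_id: str                      # unique user identification
    hearing_aid_model: str            # type number specifying the hearing aid
    hearing_aid_serial: str           # serial number of the hearing aid
    features: list = field(default_factory=list)        # e.g. ["t_coil"]
    sound_processing_parameters: dict = field(default_factory=dict)

@dataclass
class UserInteractionScenario:
    """Container for one situation/place in which the user interacts
    with the hearing aid (field names are assumptions)."""
    scenario_id: str
    place: str                        # e.g. "restaurant"
    handling_information: str         # text shown to the user on the mobile device
    adjustment_parameters: list = field(default_factory=list)  # e.g. ["clarity"]

profile = UserProfile("user-001", "HA-Model-X", "SN12345", features=["t_coil"])
scenario = UserInteractionScenario("sc-restaurant", "restaurant",
                                   "Improve speech clarity in a noisy restaurant",
                                   ["clarity", "directivity"])
```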
  • Also, the list of user interaction scenarios may be stored in the server device. This list may be prepared with the fitting application and/or may be automatically generated by the server device and/or the fitting device. Every user may have an individual list of user interaction scenarios.
  • It has to be noted that the server device need not be a single device but also may be provided by a system of devices, for example, a cloud computing facility. The fitting device may be a PC or other computing device located at the site of the hearing aid professional.
  • According to an embodiment of the invention, the method comprises: selecting a user interaction scenario from the list of user interaction scenarios with a mobile device of the user communicatively interconnected with the server device; determining one or more hearing aid adjustment parameters based on the selected user interaction scenario; and displaying one or more input controls for adjusting the determined one or more hearing aid adjustment parameters with the mobile device. Communicatively interconnected may mean that the respective two devices are adapted to exchange data with each other via a communication protocol.
  • The selection of the user interaction scenario and/or the adjustment of the one or more hearing aid adjustment parameters may be performed with an application executed in the mobile device and/or with a web browser. When the user is in a situation and/or at a place specified by a specific user interaction scenario, the user may select the corresponding user interaction scenario manually. It also may be possible that the user interaction scenario is selected automatically, for example based on timing information, a timer and/or by automatically detecting the situation and/or place.
    The mobile application may be downloaded from the server device or an app store into the mobile device. It also may be possible that the application is a web-based application mainly executed in the server device, with only its user interface rendered on the mobile device, for example in a web browser. In general, the mobile device may be connected via the Internet with the server device, and the application may communicate with the server device via the Internet. For example, the application may send the user identification to the server device, and the server device may send the list of user interaction scenarios to the application.
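    The patent does not specify a wire format for this exchange (user identification sent to the server, scenario list returned). Purely as an illustration, assuming a hypothetical JSON message shape, the two sides of the exchange might look like:

```python
import json

def build_scenario_request(user_id):
    """Request payload the mobile application might send to the server
    device; the JSON shape is an assumption, not taken from the patent."""
    return json.dumps({"action": "list_scenarios", "user_id": user_id})

def parse_scenario_response(payload):
    """Extract the user's list of interaction scenarios from the reply."""
    return json.loads(payload)["scenarios"]
```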
  • When a specific user interaction scenario has been selected, the mobile device determines the hearing aid adjustment parameters associated with the user interaction scenario. Hearing aid adjustment parameters must be distinguished from the sound processing parameters: hearing aid adjustment parameters may encode how the user may adjust the hearing aid, whereas sound processing parameters encode how the hearing aid (and in particular its amplifier and/or further elements inside the hearing aid) processes the input sound data into output sound data. The one or more hearing aid adjustment parameters of a specific user interaction scenario may be stored in that user interaction scenario.
  • When the one or more hearing aid adjustment parameters are determined, the mobile device may display an input control for the hearing aid adjustment parameter. Every hearing aid adjustment parameter may be of a specific type (such as yes/no, range, list of values, etc.) and based on this type, the input control (such as a toggle button, a slider, a selection list, etc.) may be selected and/or presented. It also may be possible that the type of input control to be used is stored together with the corresponding hearing aid adjustment parameter.
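    Choosing the input control from the parameter type, as described above, amounts to a simple mapping. A minimal sketch (the type and control names are invented for illustration):

```python
def pick_input_control(param_type):
    """Map a hearing aid adjustment parameter's value type to a UI input
    control, as the embodiment above suggests (names are assumptions)."""
    controls = {
        "boolean": "toggle_button",   # yes/no parameters
        "range": "slider",            # a value from a continuous range
        "enum": "selection_list",     # a choice from a list of values
    }
    return controls.get(param_type, "text_field")  # fallback for unknown types
```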
  • For example, the user may have the option to adjust the one or more hearing aid adjustment parameters associated with a user interaction scenario, when he is in the situation and/or at the place, which is specified in the user interaction scenario.
  • According to an embodiment of the invention, the method comprises: after adjustment of at least one of the hearing aid adjustment parameters with the one or more input controls: deriving sound processing parameters for the hearing aid based on the adjusted one or more hearing aid adjustment parameters; and optionally transferring and/or applying the derived sound processing parameters in the hearing aid, such that the hearing aid is adapted for generating optimized sound signals based on the applied sound processing parameters.
  • When the user has adjusted at least one of the hearing aid adjustment parameters, for example by toggling a button, selecting an entry from a list or moving a slider, sound processing parameters are derived from the changed/adjusted one or more hearing aid adjustment parameters. For example, the adjusted hearing aid parameters may be transmitted to the server device, which based on the hearing aid information in the user profile, determines the corresponding sound processing parameters. It also may be possible that the sound processing parameters are determined by the mobile device. For example, the sound processing parameters stored in the hearing aid may be retrieved from the hearing aid, altered based on the one or more adjusted hearing aid adjustment parameters (in the mobile device and/or the server device) and may be stored back to the hearing aid. Another possibility is that the one or more adjusted hearing aid adjustment parameters are sent to the hearing aid, which itself derives the corresponding sound processing parameters.
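    The patent leaves the derivation rule open (it may run in the mobile device, the server device or the hearing aid). As a minimal sketch of one such rule, assuming an invented additive mapping from a user-level "volume" adjustment onto frequency-band gains:

```python
def derive_sound_processing_parameters(base_params, adjustments):
    """Illustrative derivation: apply a user-level 'volume' adjustment (dB)
    to the frequency-dependent gain table retrieved from the hearing aid.
    The additive rule and all parameter names are assumptions, not the
    patented method."""
    derived = dict(base_params)          # do not mutate the stored parameters
    volume_offset = adjustments.get("volume", 0.0)
    for band in ("gain_low", "gain_mid", "gain_high"):
        derived[band] = derived.get(band, 0.0) + volume_offset
    return derived
```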
  • With the method, the hearing aid may be fitted by a user in specific situations and/or at specific places without the need to adjust complicated sound processing parameters. The user simply may adjust some input controls associated with the specific situation and/or place and/or may be guided by the mobile device. Furthermore, it is no longer necessary for the user to remember a hearing problem until the hearing aid may be fitted by a hearing care specialist in a fitting session. The user can directly influence the hearing aid and receives direct feedback about the changed/adjusted behaviour of the hearing aid in a specific situation and/or at a specific place.
  • According to an embodiment of the invention, the user profile comprises first additional, diagnostic information specifying hearing deficiencies of the user. This diagnostic information (such as a hearing profile of the user and/or characteristics of tinnitus) may be input by the hearing care specialist into the fitting application, which then updates the user profile in the server device. The diagnostic information may be used for at least partially automatically creating the list of user interaction scenarios.
  • According to an embodiment of the invention, the user profile comprises second additional, lifestyle information specifying hearing scenarios to which the user is exposed. For example, the lifestyle information may comprise information on the gender, the age, the family situation, etc. of the user. The lifestyle information also may specify to which situations and/or places a user may be exposed (such as a church, a restaurant, the inside of a car, TV watching, etc.). The lifestyle information may be input by the hearing care specialist into the fitting application, which then updates the user profile in the server device. The lifestyle information may be used for at least partially automatically creating the list of user interaction scenarios.
  • According to an embodiment of the invention, user profiles of a plurality of users are stored in the server device. It has to be noted that one server device may administrate a plurality of users and their data. In particular, the server device may be connected to a plurality of mobile devices and/or a plurality of fitting devices.
  • According to an embodiment of the invention, selecting the user interaction scenario from a list of possible user interaction scenarios is carried out automatically based on the user profile. For example, a list of possible user interaction scenarios may be stored in the server device, and/or the preparation of the list of user interaction scenarios of a user may be based on automatically selecting user interaction scenarios from the list of possible user interaction scenarios. The list of possible user interaction scenarios may be configured for a plurality of possible situations, places, events and/or hearing aid types. This list may be provided for all users. When the user profile for a specific user is generated and/or modified, the server device may select only those possible user interaction scenarios which fit the user profile.
  • According to an embodiment of the invention, a possible user interaction scenario is automatically selected for a user, when the user profile of the user comprises hearing aid information, diagnostic information and/or lifestyle information assigned with the possible user interaction scenario. In general, the possible user interaction scenarios may be associated/assigned with specific entries and/or values that may be set in user profiles. For example, user interaction scenarios for a specific feature of a hearing aid only may be selected for users having a hearing aid with that feature. Furthermore, user interaction scenarios relating to tinnitus only may be selected for users having tinnitus. These selections may be performed by the server device and/or the fitting device.
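    The matching rule described above (select a possible scenario only when the profile carries the assigned hearing aid feature or diagnosis) can be sketched as a simple filter; the profile keys and scenario fields below are invented for illustration:

```python
def prepare_scenario_list(profile, possible_scenarios):
    """Keep only those possible scenarios whose requirements fit the user
    profile, as the embodiment above describes (field names are assumptions)."""
    selected = []
    for sc in possible_scenarios:
        feature = sc.get("required_feature")
        if feature and feature not in profile["features"]:
            continue  # e.g. a T-coil scenario only for aids with a T-coil
        condition = sc.get("required_condition")
        if condition and condition not in profile["diagnostics"]:
            continue  # e.g. a tinnitus scenario only for users with tinnitus
        selected.append(sc)
    return selected
```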
  • According to an embodiment of the invention, at least some of the user interaction scenarios are displayed by the mobile device to be selected by the user. As already mentioned, the list of user interaction scenarios, or at least a part of this list, may be presented to the user by a user interface of an application executed in the mobile device. The user then may select the user interaction scenario he wishes to deal with via the user interface.
  • According to an embodiment of the invention, handling information provided by a user interaction scenario is displayed by the mobile device. Besides the adjustment of the hearing aid adjustment parameters, an application of the mobile device also may provide further information to the user via its user interface, for example, how the hearing aid adjustment parameters may influence the hearing aid and/or how the hearing aid directly may be adjusted (for example by turning on and off specific features of the hearing aid with switches and/or knobs provided by the hearing aid).
  • According to an embodiment of the invention, the user interaction scenario is selected automatically based on timing information stored in the user interaction scenario. A further possibility for selecting a user interaction scenario may be based on a date and/or time or time offset assigned to the user interaction scenario. For example, a specific user interaction scenario (such as a weekly or monthly hearing test) may be automatically provided to the user.
  • According to an embodiment of the invention, the user interaction scenario is selected automatically based on another associated user interaction scenario, which has been completed. It may be possible that there is a chain of user interaction scenarios that may be selected (and interacted with) one after the other.
  • According to an embodiment of the invention, the user interaction scenario is selected automatically based on a history of interaction scenarios, which have been completed.
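    The three automatic selection rules above (timing information, a chained predecessor scenario, and the completion history) could be combined roughly as follows; the rules and field names (`follows`, `repeat_days`) are illustrative assumptions, not taken from the patent:

```python
import datetime

def select_next_scenario(scenarios, history, now):
    """Pick the next user interaction scenario automatically (sketch):
    first a scenario chained after a completed one, then a recurring
    scenario (e.g. a weekly hearing test) whose interval has elapsed."""
    completed = {h["scenario_id"] for h in history}
    # Rule 1: a scenario chained after an already completed scenario.
    for sc in scenarios:
        if sc.get("follows") in completed and sc["id"] not in completed:
            return sc
    # Rule 2: a recurring scenario whose repeat interval has elapsed.
    for sc in scenarios:
        interval = sc.get("repeat_days")
        if interval is None:
            continue
        last = max((h["date"] for h in history if h["scenario_id"] == sc["id"]),
                   default=None)
        if last is None or (now - last).days >= interval:
            return sc
    return None
```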
  • According to an embodiment of the invention, the adjustment of at least one of the hearing aid adjustment parameters is based on an ambient sound signal, for example from sound generated in the surroundings of the user. Before and/or after the adjustment of at least one of the hearing aid adjustment parameters, the hearing aid may process an ambient sound signal. An ambient sound signal may be a signal that is generated with a microphone of the hearing aid or the mobile device, i.e. which may be based on sound generated in the vicinity of the user. In particular, the ambient sound signal may not be stored in the mobile device or the server device. Thus, the user interaction scenario may be based on real sound and not on sound that was recorded before the user interaction scenario was selected.
  • According to an embodiment of the invention, the user interaction scenario is associated with one or more hearing scenarios; an ambient sound signal is analyzed for determining a hearing scenario, and the selection of the user interaction scenario is based on the determined hearing scenario. In other words, an ambient sound signal may be analyzed for determining a hearing scenario, wherein the user interaction scenario is selected automatically based on the determined hearing scenario associated with it. It may be possible that the hearing aid and/or the mobile device records ambient sound and that this sound is regularly analyzed to determine whether a specific hearing scenario for the user is present. For example, it may be determined whether the user is in a room with strong echo, whether the user listens to a radio or to music, and/or whether the user is inside a car. Some user interaction scenarios may be associated with specific characteristics of ambient sound and may be selected when these characteristics are determined from the ambient sound signal. In other words, the user interaction scenario may be selected in dependence on the acoustic environment of the user.
  • According to an embodiment of the invention, the ambient sound signal is processed and/or recorded by the hearing aid and/or by the mobile device, and/or the ambient sound signal is analyzed by the mobile device and/or the server device. It may be possible that the ambient sound signal is recorded by a microphone of the hearing aid, processed by the hearing aid and then sent to the mobile device, or via the mobile device to the server device. On the server device or on the mobile device, the ambient sound signal may be analyzed to determine whether it contains characteristics associated with a user interaction scenario.
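    The patent does not say how the hearing scenario is determined from the ambient sound signal. A toy sketch, assuming the signal has already been reduced to three invented features (relative speech, noise and music energies) and using made-up thresholds:

```python
def classify_hearing_scenario(ambient_levels):
    """Toy classifier mapping simple features of an ambient sound signal to
    a hearing scenario label; features, thresholds and labels are all
    invented for illustration, not the patented analysis."""
    speech, noise, music = ambient_levels  # relative energies in [0, 1]
    if music > max(speech, noise):
        return "music"
    if speech > 0.5 and noise > 0.5:
        return "speech_in_noise"   # e.g. a conversation in a restaurant
    if speech > 0.5:
        return "speech"
    return "quiet"
```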
  • According to an embodiment of the invention, the sound processing parameters are derived from the one or more hearing aid adjustment parameters in the mobile device. After the user has adjusted the one or more hearing aid adjustment parameters with the mobile device, the sound processing parameters may be directly determined in the mobile device and then sent to the hearing aid, where they are applied.
  • According to an embodiment of the invention, the sound processing parameters are derived from the one or more hearing aid adjustment parameters in the hearing aid. After the user has adjusted the one or more hearing aid adjustment parameters with the mobile device, the one or more hearing aid adjustment parameters may be sent to the hearing aid, where the sound processing parameters are determined and applied.
  • According to an embodiment of the invention, the sound processing parameters are derived from the one or more hearing aid adjustment parameters in the server device. After the user has adjusted the one or more hearing aid adjustment parameters with the mobile device, the one or more hearing aid adjustment parameters may be sent to the server device, where the sound processing parameters may be determined. The sound processing parameters then may be sent via the mobile device to the hearing aid, where they are applied.
  • According to an embodiment of the invention, an audible notification indicating an operation mode of the hearing aid is assigned to the user interaction scenario. The hearing aid may generate specific audible notifications (such as specific beeps), which may indicate to the user that the hearing aid has switched into a specific mode or that it may be possible (or reasonable) for the hearing aid to switch into a specific mode. For example, the hearing aid may generate a beep when the user enters a magnetic field that may be processed by a T-coil. Specific user interaction scenarios, which may relate to scenarios in which these operation modes of the hearing aid may be employed, may also be assigned such audible indications.
  • An input control for triggering the audible notification in the hearing aid with the mobile device may be displayed by the mobile device when such a user interaction scenario is selected. With such a trigger, a user may learn which audible notification relates to which operation mode.
  • According to an embodiment of the invention, an input control for providing a rating and/or further user input related to the user interaction scenario is displayed by the mobile device, wherein the rating and/or further user input is stored in the user profile. After the user has completed a user interaction scenario, for example has selected the user interaction scenario and has adjusted the hearing aid adjustment parameters until he is content (or not) with the result, the user may rate the user interaction scenario, for example whether it was helpful for him or not. Furthermore, the user may input further information, such as a personal note. The rating and/or further information input to the mobile device, for example the selection of yes or no, a selection from a list or a text input, may be sent from the mobile device to the server device and/or may be stored in the user profile.
  • Thus, the user may provide feedback to the server device, which also may store a history of feedback for the specific user and/or for a specific user interaction scenario with respect to different users.
  • The feedback or the rating of a user interaction scenario also may be provided by a so-called significant other (for example a relative of the user) and/or a hearing care professional. The rating of specific user interaction scenarios by a hearing care professional also may be performed with an application executed in the fitting device.
  • According to an embodiment of the invention, the one or more hearing aid adjustment parameters comprise at least one of: volume, gain, clarity, sound input type (such as a microphone, a T-coil for processing a signal from an induction loop, an audio stream from a TV, a telephone connection), directivity, tonal balance, dynamic compression, loudness, a frequency transposition parameter describing for example a frequency shift between an input signal and an output signal and/or a frequency compression rate, and hearing program type (speech, speech in noise, music). The hearing aid adjustment parameters may be defined as psychoacoustic parameters. The hearing aid adjustment parameters may be adjusted by selection from a list or by selection of a value from a range of values.
  • According to an embodiment of the invention, the sound processing parameters encode at least one of: a frequency-dependent amplification of sound signals, and a mixing of sound signals from more than one sound source, such as mixing of sound signals picked up by microphones and/or sound signals picked up through a wired or wireless connection. The sound processing parameters may be defined as technical parameters or control parameters of the hearing aid, which may directly control the behavior of the hearing aid. The sound processing parameters may be derived from the hearing aid adjustment parameters based on formulas and/or lookup tables.
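    A lookup table, as mentioned above, is one way to translate a psychoacoustic adjustment parameter into technical control parameters. The following sketch for the "clarity" parameter uses entirely invented levels and values, purely to illustrate the table-based derivation:

```python
# Illustrative lookup table translating the psychoacoustic "clarity"
# adjustment (a value the user selects from a list) into technical control
# parameters of the hearing aid. All values here are invented.
CLARITY_TABLE = {
    "low":    {"high_freq_gain_db": 0.0, "compression_ratio": 1.5},
    "medium": {"high_freq_gain_db": 3.0, "compression_ratio": 2.0},
    "high":   {"high_freq_gain_db": 6.0, "compression_ratio": 2.5},
}

def clarity_to_control(level):
    """Derive sound processing parameters from one adjustment parameter
    via a lookup table, as the embodiment above allows."""
    return CLARITY_TABLE[level]
```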
  • According to an embodiment of the invention, the method comprises: generating a user profile and/or modifying a user profile with a fitting application executed in a fitting device communicatively interconnected with the server device. As already mentioned, a hearing care professional may generate the user profile with a fitting application at his site, which also may be used for fitting the hearing aid in a more complicated way. For example, the sound processing parameters may be directly adjusted with the fitting application.
  • It also may be possible that the user profile is modified with the fitting application, for example, during a further visit of the hearing aid user at the hearing care professional. In such a case, it may be possible that a modified user profile (such as modified lifestyle information) results in the generation of further user interaction scenarios for the list of the user.
  • According to an embodiment of the invention, the method comprises: defining and/or modifying a user interaction scenario in the list of user interaction scenarios for a user with the fitting application. Another possibility is that one or more user interaction scenarios are manually defined with the fitting application and stored in the list of the user. It also may be possible that such manually generated user interaction scenarios are copied into the list of possible user interaction scenarios.
  • In general, the list of user interaction scenarios and/or one or more user interaction scenarios (which may have been generated automatically) may be customized by a hearing care professional with the fitting application.
  • Further aspects of the invention relate to a computer program for adjusting a hearing aid, which, when being executed by a processor, is adapted to carry out the steps of the method as described in the above and in the following, and to a computer-readable medium in which such a computer program is stored.
  • For example, the computer program may have different parts run in the mobile device, the server device and/or the fitting device. A computer-readable medium may be a floppy disk, a hard disk, a USB (Universal Serial Bus) storage device, a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable Read Only Memory) or a FLASH memory. A computer-readable medium may also be a data communication network, e.g. the Internet, which allows downloading a program code. In general, the computer-readable medium may be a non-transitory or transitory medium.
  • A further aspect of the invention relates to a fitting system for adjusting a hearing aid, the fitting system comprising a mobile device communicatively interconnected with the hearing aid and adapted for transferring sound processing parameters to the hearing aid, and/or a server device communicatively interconnected with the mobile device. The fitting system may be adapted for performing the method as described in the above and in the following.
  • According to an embodiment of the invention, the fitting system further comprises a fitting device communicatively interconnected with the server device. The fitting device may be adapted for defining a user profile and/or for defining user interaction scenarios for a specific user.
  • It has to be understood that features of the method as described in the above and in the following may be features of the computer program, the computer-readable medium and the fitting system as described in the above and in the following, and vice versa.
  • These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Below, embodiments of the present invention are described in more detail with reference to the attached drawings.
    • Fig. 1 schematically shows a hearing aid of a fitting system according to an embodiment of the invention.
    • Fig. 2 schematically shows a fitting system according to an embodiment of the invention.
    • Fig. 3 schematically shows a user interface for a mobile device of a fitting system according to an embodiment of the invention.
    • Fig. 4 schematically shows a structure of a database used in a fitting system according to an embodiment of the invention.
    • Fig. 5 shows a flow diagram for a method for adjusting a hearing aid according to an embodiment of the invention.
  • The reference symbols used in the drawings, and their meanings, are listed in summary form in the list of reference symbols. In principle, identical parts are provided with the same reference symbols in the figures.
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • Fig. 1 schematically shows a hearing aid 10. A hearing aid 10 may be a device that may be put at least partially into an ear of a user to at least partially compensate for an auditory defect of the user. In general, a hearing aid 10 may be worn near the ear, at least partially in the ear channel and/or carried on the ear. It has to be noted that a hearing aid 10 may comprise two separate devices, one for each ear of the user. It also may be possible that the hearing aid 10 comprises a cochlear implant, which may be inside the head of the user.
  • In general, the hearing aid 10 comprises an input 12 for receiving sound data and an output 14 for generating signals stimulating the hearing sense of the user. The input 12 may comprise a microphone 16 and a sender/receiver 18 for digital signals, which may be transferred via infrared and/or electromagnetic waves (such as radio waves, for example Bluetooth®). A further receiver for electromagnetic waves also may be a so-called T-coil or telecoil 20. The output 14 may comprise a loudspeaker 22 in the ear channel or a stimulation device inside the cochlea.
  • An analog signal from one of the inputs may be transformed by a corresponding transducer 24 into a digital signal. For example, the microphone 16 and/or the T-coil 20 may generate analog sound signals, which may then be transduced into digital sound signals or sound data that may be processed by an amplifier 26. The amplifier transforms input sound data into optimized output sound data for the output 14. For example, the optimized sound data then may be transduced by a further transducer 24 into an analog output signal for the loudspeaker 22.
  • The amplifier 26 may comprise a processor (or at least a digital electronic circuit), which may perform the transformation of the sound data.
  • The hearing aid 10 furthermore comprises a controller 28 with a memory and a processor, which controls the operation of the amplifier 26. It may be possible that the amplifier 26 is a module of the controller 28. In the memory of the controller 28, sound processing parameters 30 are stored, which parametrize the control of the amplifier 26. For example, the sound processing parameters 30 may encode a frequency-dependent amplification of sound data in the amplifier 26 and/or a mixing of sound data from more than one source 16, 18, 20. In general, the sound processing parameters also may control a processing of sound data inside the hearing aid 10.
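  • The two parametrized operations mentioned above can be sketched as follows. This is a purely illustrative sketch: the patent does not specify a parameter format, so the layout ("band_gains_db", "mix_weights") and the function names are assumptions.

```python
# Hypothetical layout for the sound processing parameters 30 that
# parametrize the amplifier 26; all names are illustrative assumptions.

def amplify(band_samples, params):
    """Apply a frequency-dependent gain (given in dB per band) to sound data."""
    gains = [10 ** (g / 20.0) for g in params["band_gains_db"]]  # dB -> linear
    return [s * g for s, g in zip(band_samples, gains)]

def mix(sources, params):
    """Mix sound data from several inputs (microphone 16, receiver 18, T-coil 20)."""
    weights = params["mix_weights"]
    length = len(sources[0])
    return [sum(w * src[i] for w, src in zip(weights, sources))
            for i in range(length)]
```

A gain of about 6 dB doubles the amplitude of the corresponding band, while the mixing weights blend the available sources into one output stream.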
  • It furthermore may be possible that sound signals are received by the receiver 18 (which, for example, may have been transferred via Bluetooth®), evaluated by the controller 28 of the hearing aid and input into the amplifier 26.
  • Via the receiver 18 (for example via Bluetooth®), the controller 28 also may receive control signals. For example, the controller 28 may receive modified sound processing parameters 30 or may receive commands to send the currently applied sound processing parameters via the receiver/sender 18 to a further device. Furthermore, the controller 28 may receive commands for modifying the sound processing parameters and/or for switching into another operation mode.
  • Fig. 2 shows a fitting system 32 for fitting the hearing aid 10. The fitting system 32 comprises a server device 34 and a plurality of mobile devices 36, which are communicatively connected with the server device, for example via Internet 38. Every mobile device 36 may be associated with a user who wears a hearing aid 10.
  • The mobile device 36, which may be a Smartphone, a tablet computer or similar device, may communicate with the hearing aid 10 via its sender/receiver 18, for example via Bluetooth®. Also, the mobile device 36 may communicate with the server device 34 via Internet. As will be described in detail below, the server device 34 may send the mobile device specific data structures called user interaction scenarios 40, which, inter alia, may comprise information about the usage of the hearing aid 10 in a specific scenario/situation and how the hearing aid 10 may be adjusted in such a situation. Based on the user interaction scenario 40, the mobile device 36 may generate modified sound processing parameters 30 and may send them to the hearing aid 10.
  • The visualization of the user interaction scenario 40 and the communication with the server device 34 and the hearing aid 10 may be performed by an application 42 running in the mobile device 36. For example, the application may be downloaded from the server device 34 (or another source) and/or may be a web application with parts running in the server device 34.
  • The server device 34 may be a virtual device, for example executed in a cloud computing facility. In the server device 34, a database 44 is provided, in which a plurality of user profiles and further data relating to the users and/or their hearing aids is stored. This database 44 also may store the user interaction scenarios 40 for each user.
  • The fitting system 32 furthermore may comprise one or more fitting devices 46. Each fitting device 46 may be a computer, for example a PC or a tablet computer, situated in a shop/office of a hearing care specialist. In the fitting device 46, a fitting application 48 is executed that may be used for fitting a hearing aid 10. When the user is in the shop/office of the hearing care specialist, the hearing care specialist may use the fitting device 46 and/or the fitting application 48 for directly adjusting the hearing aid 10 by modifying the sound processing parameters 30 (which, however, may not be transferred via the mobile device 36, but via a direct connection).
  • The fitting device 46 may be communicatively connected with the server device 34, for example via Internet 38. During the fitting at his store/office, the hearing care specialist may input information about the user of the hearing aid, for example about his life situation, into the fitting application, which will then send this information to the server device 34, which will create a user profile of the user in the database and which may create user interaction scenarios 40 for the user based on this information.
  • In general, the fitting process of the hearing aid 10 that may be performed with the fitting system 32 may be divided into parts that are performed at the store/office of the hearing care specialist (with the aid of the fitting application 48) and parts that may be performed with the application 42, for example by the user. For example, a first fitting may be performed by the hearing care specialist and further improvements of the fitting may be performed by the user in real-life situations at home or at other places. This real-life fitting is supported by the application 42.
  • The real-life fitting part performed with the application 42 may be divided into individual tasks or user interaction scenarios 40. With every user interaction scenario 40, the user may try to improve his hearing experience by adjusting the hearing aid 10 with his mobile device 36. It has to be noted that the term "user interaction scenario" may refer both to the situation/scenario in which the user performs the fitting of the hearing aid 10 himself, and to the data structure that supports this fitting and that may be stored in the server device 34, sent to the mobile device 36 and/or visualized by the application 42.
  • The user interaction scenarios 40 for a user may be defined by the hearing care professional, for example, together with the hearing aid user, and may be stored as a list in the server device 34 in the database 44. The user interaction scenarios 40 of a user (or at least some of them) may be sent from the server device 34 to the mobile device 36, for example, when requested by the application 42.
  • Fig. 3 schematically shows a user interface 50 of the mobile device 36. The application 42 may provide the user the possibility to select a user interaction scenario 40 from a list, for example, with a list input control 52 on the user interface 50.
  • For example, a user interaction scenario 40 may relate to "family gathering", "car driving", "TV watching", "church", "T-coil usage", etc. These names 54 of the user interaction scenarios 40 may be displayed on the user interface 50 (for example with the list input control 52) and based upon this, the user may select one of the user interaction scenarios 40.
  • When a user interaction scenario 40 has been selected, the application 42 shows controls and/or information on its user interface 50 relating to the selected user interaction scenario 40.
  • For example, the user interaction scenario 40 may comprise handling information 56, which is displayed on the user interface 50, for example in a textbox 58. For example, such handling information 56 may explain how to switch on a TV-link of the hearing aid 10 in a user interaction scenario 40 relating to TV, or how to disable a phone option in a user interaction scenario 40 relating to a church visit.
  • It is also possible that an audible notification is associated with the user interaction scenario 40. For example, when the user enters an environment, in which a T-coil 20 receives input signals, a hearing aid 10 usually generates a specific beep (audible notification), which may trigger the user to turn on the T-coil 20.
  • When a user interaction scenario 40 is associated with an audible notification, the user interface 50 may comprise an input control 60 for triggering the audible notification, such as a button. In such a way, the user may learn which audible notification relates to which operation mode/function of his hearing aid 10.
  • In such a case, it also may be possible that the user interaction scenario 40 is automatically selected by the application 42, which receives an indication from the hearing aid 10 that an area with T-coil reception has been entered.
  • The user interaction scenario 40 also may comprise one or more hearing aid adjustment parameters 64, which may be adjusted by the user for the user interaction scenario 40 with further input controls, such as a slider input control 62 or toggle input control 66.
  • For example, a hearing aid adjustment parameter 64 may be volume, gain, clarity (that may be adjusted within a range of values) or may be a hearing program type (that may be selected from a list). When a user modifies a hearing aid adjustment parameter 64, this hearing aid adjustment parameter 64 will be immediately used for deriving changed sound processing parameters 30 for the hearing aid 10 and for implementing them in the hearing aid 10. In such a way, a user may immediately sense the changed behavior of the hearing aid 10, in particular in a real-life scenario/situation.
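  • The immediate-update behavior described above can be sketched as follows. The mapping from adjustment parameters 64 to sound processing parameters 30, and all class and function names, are illustrative assumptions; the patent does not prescribe them.

```python
# Sketch: each change of a hearing aid adjustment parameter 64 is mapped to
# new sound processing parameters 30 and pushed to the hearing aid 10 at once.

def derive_sound_processing_parameters(adjustments):
    # Hypothetical mapping: a 0..10 "volume" adjustment becomes a broadband
    # gain in dB (2 dB per step).
    return {"broadband_gain_db": 2.0 * adjustments.get("volume", 0)}

class HearingAidLink:
    """Stand-in for the wireless connection to the hearing aid 10."""
    def __init__(self):
        self.applied = None

    def send(self, params):
        # The hearing aid applies the received parameters immediately, so the
        # user hears the effect of the adjustment right away.
        self.applied = params

def on_adjustment_changed(adjustments, link):
    link.send(derive_sound_processing_parameters(adjustments))
```

Because the parameters are applied at once, every slider movement in the application 42 is audible in the current real-life situation.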
  • Furthermore, a user interaction scenario 40 also may be rated by the user. The interface 50 may comprise an input control 68 for rating the user interaction scenario 40, which may be a simple toggle button for indicating whether the user interaction scenario 40 has been helpful for the user.
  • Fig. 4 schematically shows a structure of the database 44 stored in the server device 34.
  • The database 44 comprises a list 70 of user profiles 72. Every user profile 72 is associated with a user of a hearing aid 10. For example, during the first fitting, the hearing care specialist may input information about the user into the fitting application and the server device 34 may create a user profile 72 for this user based on the input information.
  • In general, a user profile 72 may comprise a user identification 74, which may be unique for the user, hearing aid information 76 specifying the hearing aid 10, diagnostic information 78 specifying hearing deficiencies of the user and/or lifestyle information 80 specifying hearing scenarios to which the user is exposed.
  • For example, the user identification 74 may be used for interrelating the user profile 72 with a mobile device 36, which is requesting information from the server device 34. Further information about the user, such as his name, may also be stored in his user profile 72.
  • The hearing aid information 76 may comprise information about the hearing aid 10, such as the type of the hearing aid 10, features of the hearing aid 10 and/or operation modes of the hearing aid 10. It also may be possible that the sound processing parameters 30 currently applied in the hearing aid, or a history thereof, are stored in the hearing aid information 76.
  • The diagnostic information 78 may comprise information about the hearing deficiencies of the user. For example, the diagnostic information 78 may comprise a hearing curve recorded by the hearing care specialist and/or whether the user has tinnitus.
  • The lifestyle information 80 may comprise information about the user, which may be relevant for fitting the hearing aid. For example, the lifestyle information 80 may comprise the gender and the age of the user. The lifestyle information 80 may furthermore indicate whether the user is often in particular environments such as restaurants, churches, cars, etc.
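  • The fields of the user profile 72 described above could be modeled, for instance, as follows. This is a hypothetical data model; the field names mirror the reference numerals but the concrete representation (dictionaries, strings) is an assumption.

```python
from dataclasses import dataclass, field

# Illustrative data model for the user profile 72 stored in the database 44.

@dataclass
class UserProfile:
    user_identification: str   # 74, unique per user
    hearing_aid_info: dict     # 76, e.g. {"type": ..., "features": [...]}
    diagnostic_info: dict      # 78, e.g. {"hearing_curve": [...], "tinnitus": False}
    lifestyle_info: dict       # 80, e.g. {"age": 70, "places": ["church", "car"]}
    scenarios: list = field(default_factory=list)  # 82, user interaction scenarios 40
```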
  • The user profile 72 also comprises a list 82 of user interaction scenarios 40 for the user. A user interaction scenario 40 may comprise a name 54, handling information 56, one or more hearing aid adjustment parameters 64 and feedback information 83. For every hearing aid adjustment parameter 64, its current value and/or a history of adjusted values may also be stored. Furthermore, a history of the feedback information may be stored in the user interaction scenario 40.
  • A user interaction scenario 40 also may comprise a link 84 to one or more other user interaction scenarios 40, for example, user interaction scenarios 40 that may be selected, when a user has completed the user interaction scenario 40 with the link 84.
  • A user interaction scenario 40 also may comprise an identifier 85 for an audible notification. For example, when the application should display a button 60 as described above, the corresponding sound or an identifier for the sound may be stored in the user interaction scenario 40.
  • A user interaction scenario 40 also may comprise a time trigger 86. For example, the user interaction scenario may be automatically selected by the application 42, when a specific date and/or time has been reached or when a specific time duration has been completed.
  • A user interaction scenario 40 also may comprise an ambient sound trigger 87. It may be possible that a user interaction scenario 40 is selected automatically, when a specific ambient sound situation is detected. For example, the ambient sound trigger 87 may specify that a specific user interaction scenario 40 is selected, when music, car sound or a room with large echo is detected.
  • Furthermore, the database 44 may comprise a list 88 of possible user interaction scenarios 90. The possible user interaction scenarios 90 may have the same structure as the user interaction scenarios 40 of a user profile 72. Additionally, a possible user interaction scenario may be associated with specific hearing aid information 76, diagnostic information 78 and/or lifestyle information 80.
  • The list 82 of user interaction scenarios 40 of a specific user may be generated with the list 88 of possible user interaction scenarios 90 by selecting those possible user interaction scenarios 90 that fit with the information 76, 78, 80 of the user profile 72.
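  • The selection of fitting scenarios from the list 88 can be sketched as a simple filter over the user profile. The "requires" attribute and the flattening of the profile information 76, 78, 80 into tags are illustrative assumptions.

```python
# Sketch: copy every possible user interaction scenario 90 whose requirements
# are covered by the user profile 72 into the user's list 82.

def prepare_scenario_list(possible_scenarios, profile):
    user_facts = set(profile.get("hearing_aid_features", [])) \
        | set(profile.get("lifestyle_tags", []))
    return [s for s in possible_scenarios
            if set(s.get("requires", [])) <= user_facts]
```

For example, a "T-coil usage" scenario would only be copied for users whose hearing aid information lists a T-coil among its features.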
  • Fig. 5 shows a flow diagram for a method for adjusting the hearing aid 10 with the fitting system 32.
  • In step S10, a user profile 72 of the hearing aid user is created in the server device 34. For example, the user profile 72 is generated with the fitting application. As already mentioned, a hearing care specialist may input information about the user into the fitting application 48, which then may trigger the server device 34 to generate a corresponding user profile 72. It also may be possible that the user profile 72 is modified with the fitting application, for example during a next visit of the user at the hearing care specialist.
  • In step S12, the list 82 of user interaction scenarios 40 is prepared based on the user profile 72.
  • For example, the list 82 is prepared automatically by the server device 34. The preparation of the list 82 of user interaction scenarios 40 of a specific user may be based on automatically selecting (i.e. copying) user interaction scenarios 40 from the list 88 of possible user interaction scenarios 90. A possible user interaction scenario 90 may be automatically selected for the specific user, when the user profile 72 of the user comprises hearing aid information 76, diagnostic information 78 and/or lifestyle information 80 associated with the possible user interaction scenario 90. For example, user interaction scenarios 40 for a specific feature of a hearing aid 10 may only be copied when the hearing aid 10 of the specific user has this feature.
  • Additionally or alternatively, it may be possible that user interaction scenarios 40 in the list 82 of a specific user are defined and/or modified with the fitting application 48. For example, the hearing care specialist may customize the automatically prepared list 82 for the specific user.
  • After that, the application 42 may be installed in the mobile device 36 and/or the application 42 may register in the server device 34. The list 82 of user interaction scenarios 40 may then be loaded at least partially into the mobile device 36.
  • In step S14, a user interaction scenario 40 from the list 82 is selected with the application 42. After that, the user interaction scenario may be displayed as described with respect to Fig. 3.
  • The selection may be performed in several ways. For example, the user interaction scenario 40 may be selected manually by the user with the application 42. Another possibility is that the user interaction scenario 40 is selected automatically based on the time trigger 86 stored in the user interaction scenario 40. For example, some user interaction scenarios 40 may be selected regularly or at specific times.
  • A user interaction scenario 40 also may be selected automatically based on another associated user interaction scenario 40, which has been completed. For example, when the user has provided feedback for a user interaction scenario 40 he has just interacted with, and this user interaction scenario 40 has a link 84 to a further user interaction scenario 40, this further user interaction scenario 40 may be displayed.
  • It also may be possible that a user interaction scenario 40 is selected based upon an ambient sound situation of the user. In particular, an ambient sound signal may be analysed for determining a hearing scenario.
  • For example, the mobile device 36 (which also may comprise a microphone) and in particular the application may continuously record and analyse ambient sound. It also may be possible that the ambient sound signal is recorded by the hearing aid 10 and transmitted to the mobile device 36, where it is analysed. The ambient sound signal also may be transmitted to the server device 34, where it is analysed.
  • When it is determined that a specific hearing scenario is present, for example, it has been determined that the user is inside a car or that the user listens to music, a user interaction scenario 40, which via an ambient sound trigger 87 is associated with the hearing scenario, is selected automatically. For example, the ambient sound trigger may comprise an identifier for a hearing scenario, which may be selected from a list of hearing scenarios.
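  • The automatic selection via the ambient sound trigger 87 can be sketched as follows. The classification step is only indicated as a placeholder, since the patent does not specify a particular audio analysis method; all names are assumptions.

```python
# Sketch: a classifier maps the ambient sound signal to a hearing scenario
# identifier, which is then matched against each scenario's trigger 87.

def classify_hearing_scenario(ambient_signal):
    # Placeholder for real audio classification (music, car sound,
    # a room with large echo, ...) running on the mobile device 36,
    # the hearing aid 10 or the server device 34.
    raise NotImplementedError

def select_by_ambient_sound(scenarios, hearing_scenario_id):
    """Return the first scenario whose ambient sound trigger matches, if any."""
    for scenario in scenarios:
        if scenario.get("ambient_sound_trigger") == hearing_scenario_id:
            return scenario
    return None
```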
  • In step S16, when the user interaction scenario 40 has been selected, one or more hearing aid adjustment parameters 64 based on the selected user interaction scenario 40 are determined. The type and the current value of the hearing aid adjustment parameters 64 may be stored in the selected user interaction scenario 40.
  • The application 42 then may display one or more input controls 62, 66 for adjusting the determined one or more hearing aid adjustment parameters 64. Also the type of input control 62, 66 may be stored in the selected user interaction scenario 40.
  • Furthermore, handling information 56 may be displayed by the application 42. The user can then read the handling information 56 and actuate the input controls 62, 66 to adjust the one or more hearing aid adjustment parameters 64.
  • In step S18, after adjustment of at least one of the hearing aid adjustment parameters 64 with the one or more input controls 62, 66, sound processing parameters 30 for the hearing aid 10 are derived based on the adjusted one or more hearing aid adjustment parameters 64.
  • The derivation of the sound processing parameters 30 may be performed in several ways. Sound processing parameters 30 may be calculated from the hearing aid adjustment parameters 64 based on an algorithm, which is adapted to modify the currently applied sound processing parameters 30 in the hearing aid 10 correspondingly. Also, sound processing parameters 30 may be selected from predefined sound processing parameters 30, which, for example, may be stored in the server device 34 and/or the mobile device 36.
  • Furthermore, these calculations/selections may be performed in the hearing aid 10, the mobile device 36 and/or the server device 34.
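  • The two derivation variants for step S18 can be sketched as follows. The concrete mapping (2 dB per volume step) and the parameter layout are illustrative assumptions, not part of the patent.

```python
# Sketch of the two derivation variants for step S18:
# (a) calculate modified parameters from the currently applied ones,
# (b) select a predefined parameter set.

def derive_by_calculation(current_params, adjustments):
    params = dict(current_params)
    # Hypothetical rule: a "volume" adjustment of +1 raises every band gain by 2 dB.
    delta = 2.0 * adjustments.get("volume", 0)
    params["band_gains_db"] = [g + delta for g in params["band_gains_db"]]
    return params

def derive_by_selection(predefined_sets, adjustments):
    # e.g. a "hearing program type" adjustment picks a stored parameter set.
    return predefined_sets[adjustments["program"]]
```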
  • In step S20, the derived sound processing parameters 30 may be applied in the hearing aid 10, such that the hearing aid 10 is adapted for generating optimized sound signals based on the applied sound processing parameters 30.
  • Steps S18 and S20 may be performed directly every time after the user has adjusted a hearing aid adjustment parameter 64. In such a way, the user can directly hear the effect of his adjustment. It has to be noted that before and/or after the adjustment of the hearing aid adjustment parameters 64 and the application of the sound processing parameters 30, the hearing aid 10 may process an ambient sound signal based on sound generated in the surroundings of the user. Thus, the user may adjust his hearing aid 10 directly in the scenario/situation, to which the hearing aid 10 should be adapted in a better way.
  • In step S22, when the user wants to finish the user interaction scenario 40, he may rate the user interaction scenario 40 with the application 42. For example, the user may write a text regarding his experience with the user interaction scenario. The rating or feedback information 83 may be input into the mobile application 42 and then may be stored in the user profile 72.
  • While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art and practising the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or controller or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.
  • LIST OF REFERENCE SYMBOLS
  • 10
    hearing aid
    12
    input of hearing aid
    14
    output of hearing aid
    16
    microphone
    18
    sender/receiver for digital signals
    20
    T-coil
    22
    loudspeaker
    24
    transducer
    26
    amplifier
    28
    controller
    30
    sound processing parameters
    32
    fitting system
    34
    server device
    36
    mobile device
    38
    internet
    40
    user interaction scenario
    42
    application in mobile device
    44
    database
    46
    fitting device
    48
    fitting application
    50
    user interface
    52
    selection input control
    54
    name of user interaction scenario
    56
    handling information of user interaction scenario
    58
    text box
    60
    button input control
    62
    slider input control
    64
    hearing aid adjustment parameter
    66
    toggle button input control
    68
    feedback input control
    70
    list of user profiles
    72
    user profile
    74
    user identification
    76
    hearing aid information
    78
    diagnostic information
    80
    lifestyle information
    82
    list of user interaction scenarios
    83
    feedback information
    84
    link to other user interaction scenario
    85
    identifier for audible notation
    86
    time trigger
    87
    ambient sound trigger
    88
    list of possible user interaction scenarios
    90
    possible user interaction scenarios

Claims (17)

  1. A method of adjusting a hearing aid (10), which hearing aid (10) is adapted for processing sound signals based on sound processing parameters (30) stored in the hearing aid (10), the method comprising:
    creating a user profile (72) of the hearing aid user in a server device (34), the user profile (72) comprising at least a user identification (74) and hearing aid information (76) specifying the hearing aid (10);
    preparing a list (82) of user interaction scenarios (40) in the server device (34), based on the user profile (72);
    selecting a user interaction scenario (40) from the list (82) with a mobile device (36) of the user communicatively interconnected with the server device (34);
    determining one or more hearing aid adjustment parameters (64) based on the selected user interaction scenario (40);
    displaying one or more input controls (62, 66) for adjusting the determined one or more hearing aid adjustment parameters (64) with the mobile device (36);
    after adjustment of at least one of the hearing aid adjustment parameters (64) with the one or more input controls (62, 66): deriving sound processing parameters (30) for the hearing aid (10) based on the adjusted one or more hearing aid adjustment parameters (64);
    applying the derived sound processing parameters (30) in the hearing aid (10), such that the hearing aid (10) is adapted for generating optimized sound signals based on the applied sound processing parameters (30).
  2. The method of claim 1,
    wherein the user profile (72) additionally comprises diagnostic information (78) specifying hearing deficiencies of the user; and/or
    wherein the user profile (72) additionally comprises lifestyle information (80) specifying hearing scenarios to which the user is exposed.
  3. The method of claim 1 or 2, wherein selecting the user interaction scenario (40) from a list (88) of possible user interaction scenarios (90) is carried out automatically based on the user profile (72).
  4. The method of one of the preceding claims,
    wherein at least some of the user interaction scenarios (40) are displayed by the mobile device (36) to be selected by the user; and/or
    wherein handling information (56) provided by a user interaction scenario (40) is displayed by the mobile device (36).
  5. The method of one of the preceding claims,
    wherein the user interaction scenario (40) is selected automatically based on a timing information (86) stored in the user interaction scenario (40); and/or
    wherein the user interaction scenario (40) is selected automatically based on a history of user interaction scenarios (40) which have been completed.
  6. The method of one of the preceding claims, wherein the adjustment of at least one of the hearing aid adjustment parameters (64) is based on an ambient sound signal of the user.
  7. The method of one of the preceding claims,
    wherein the user interaction scenario is associated with one or more hearing scenarios and wherein an ambient sound signal is analyzed for determining a hearing scenario;
    wherein the selection of the user interaction scenario (40) is based on the determined hearing scenario.
  8. The method of claim 7, wherein the ambient sound signal is processed and/or recorded by the hearing aid (10) and/or by the mobile device (36).
  9. The method of one of the preceding claims,
    wherein the sound processing parameters (30) are derived from the one or more hearing aid adjustment parameters (64) in the mobile device (36); or
    wherein the sound processing parameters (30) are derived from the one or more hearing aid adjustment parameters (64) in the hearing aid (10); or
    wherein the sound processing parameters (30) are derived from the one or more hearing aid adjustment parameters (64) in the server device (34).
  10. The method of one of the preceding claims,
    wherein an audible notification (85) indicating an operation mode of the hearing aid is assigned with the user interaction scenario (40);
    wherein an input control (60) for triggering the audible notification with the mobile device (36) in the hearing aid (10) is displayed by the mobile device (36).
  11. The method of one of the preceding claims,
    wherein an input control (68) for providing rating and/or further user input related to the user interaction scenario (40) is displayed by the mobile device (36);
    wherein the rating and/or further user input is stored in the user profile (72).
  12. The method of one of the preceding claims,
    wherein the one or more hearing aid adjustment parameters (64) comprises at least one of: volume, gain, clarity, sound input type, directivity, tonal balance, dynamic compression, loudness, a frequency transposition parameter, hearing program type; and/or
    wherein the sound processing parameters (30) encode at least one of:
    a frequency-dependent amplification of sound signals,
    a mixing of sound signals from more than one sound source (16, 18, 20).
  13. The method of one of the preceding claims, further comprising:
    generating a user profile (72) and/or modifying a user profile (72) with a fitting application (48) executed in a fitting device (46) communicatively interconnected with the server device (34); and/or
    defining and/or modifying a user interaction scenario (40) in the list (82) of user interaction scenarios (40) for a user with the fitting application (48).
  14. A computer program (42, 48) for adjusting a hearing aid (10), which, when being executed by a processor, is adapted to carry out the steps of the method of one of the previous claims.
  15. A computer-readable medium, in which a computer program according to claim 14 is stored.
  16. A fitting system (32) for adjusting a hearing aid (10), the fitting system comprising a mobile device (36) communicatively interconnected with the hearing aid (10);
    wherein the fitting system (32) is adapted for carrying out the method of one of claims 1 to 13.
  17. The fitting system (32) of claim 16, further comprising:
    a fitting device (46) communicatively interconnected with the server device (34), the fitting device being adapted for defining a user profile (72) and/or for defining user interaction scenarios (40) for a specific user.
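The derivation recited in claims 9 and 12, in which low-level sound processing parameters (30) such as frequency-dependent gains are derived from user-facing hearing aid adjustment parameters (64) like volume, clarity, and tonal balance, can be illustrated with a minimal sketch. The band layout, value ranges, and mapping formulas below are hypothetical assumptions for illustration and are not taken from the patent:

```python
# Hypothetical mapping from user-facing hearing aid adjustment parameters
# (claim 12: volume, clarity, tonal balance, ...) to frequency-dependent
# amplification parameters. All names, ranges, and formulas are assumed.

FREQUENCY_BANDS_HZ = [250, 500, 1000, 2000, 4000, 8000]

def derive_sound_processing_parameters(adjustment):
    """Map high-level adjustments to per-band gains in dB (illustrative)."""
    volume = adjustment.get("volume", 0.0)                # -10..+10 dB overall
    clarity = adjustment.get("clarity", 0.0)              # 0..1, lifts high bands
    tonal_balance = adjustment.get("tonal_balance", 0.0)  # -1 (bass)..+1 (treble)

    gains = {}
    for i, band in enumerate(FREQUENCY_BANDS_HZ):
        # Spectral tilt from -1 (lowest band) to +1 (highest band).
        tilt = (i / (len(FREQUENCY_BANDS_HZ) - 1)) * 2 - 1
        gain = volume
        gain += clarity * 6.0 * max(tilt, 0.0)  # clarity boosts high frequencies
        gain += tonal_balance * 3.0 * tilt      # balance tilts the whole spectrum
        gains[band] = round(gain, 1)
    return gains

params = derive_sound_processing_parameters(
    {"volume": 2.0, "clarity": 0.5, "tonal_balance": 0.0})
```

Under claim 9, such a mapping could run either on the hearing aid itself or on the server device, with only the resulting sound processing parameters transferred to the device.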
EP16165711.9A 2016-04-18 2016-04-18 Adjusting a hearing aid based on user interaction scenarios Withdrawn EP3236673A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP16165711.9A EP3236673A1 (en) 2016-04-18 2016-04-18 Adjusting a hearing aid based on user interaction scenarios

Publications (1)

Publication Number Publication Date
EP3236673A1 true EP3236673A1 (en) 2017-10-25

Family

ID=55910735

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16165711.9A Withdrawn EP3236673A1 (en) 2016-04-18 2016-04-18 Adjusting a hearing aid based on user interaction scenarios

Country Status (1)

Country Link
EP (1) EP3236673A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017106359A1 (en) 2017-03-24 2018-09-27 Sennheiser Electronic Gmbh & Co. Kg Apparatus and method for processing audio signals to improve speech intelligibility

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000022874A2 (en) 2000-02-18 2000-04-20 Phonak Ag Fitting system
EP1353530A1 (en) 2002-04-12 2003-10-15 Siemens Audiologische Technik GmbH Individual hearing training for hearing aid carriers
US20110200214A1 (en) * 2010-02-12 2011-08-18 Audiotoniq, Inc. Hearing aid and computing device for providing audio labels
US20110235835A1 (en) * 2008-12-12 2011-09-29 Widex A/S Method for fine tuning a hearing aid
US20110293123A1 (en) * 2010-05-25 2011-12-01 Audiotoniq, Inc. Data Storage System, Hearing Aid, and Method of Selectively Applying Sound Filters
EP2549397A1 (en) * 2012-07-02 2013-01-23 Oticon A/s Method for customizing a hearing aid
WO2013117214A1 (en) 2012-02-07 2013-08-15 Widex A/S Hearing aid fitting system and a method of fitting a hearing aid system
WO2015009564A1 (en) 2013-07-16 2015-01-22 iHear Medical, Inc. Online hearing aid fitting system and methods for non-expert user
EP2876899A1 (en) * 2013-11-22 2015-05-27 Oticon A/s Adjustable hearing aid device

Similar Documents

Publication Publication Date Title
US9744357B2 (en) Optimizing operational control of a hearing prosthesis
US9107016B2 (en) Interactive hearing aid fitting system and methods
US20160234606A1 (en) Method for augmenting hearing
US9918171B2 (en) Online hearing aid fitting
US10187733B2 (en) Method for controlling and/or configuring a user-specific hearing system via a communication network
US9319814B2 (en) Method for fitting a hearing aid device with active occlusion control to a user
US9532152B2 (en) Self-fitting of a hearing device
CN103052012B (en) Automatic real-time hearing aid fitting based on auditory evoked potential
CN106233754B (en) Hearing assistance devices control
EP2892247A1 (en) An earplug for selectively providing sound to a user
US20130345594A1 (en) Calibrated digital headset and audiometric test methods therewith
KR101725986B1 (en) Hearing aid fitting system and a method of fitting a hearing aid system
US8761421B2 (en) Portable electronic device and computer-readable medium for remote hearing aid profile storage
AU2004300976B2 (en) Speech-based optimization of digital hearing devices
JP2016511648A (en) Method and system for enhancing self-managed voice
US6944474B2 (en) Sound enhancement for mobile phones and other products producing personalized audio for users
US8447042B2 (en) System and method for audiometric assessment and user-specific audio enhancement
US9918159B2 (en) Time heuristic audio control
KR101779641B1 (en) Personal communication device with hearing support and method for providing the same
US20180115841A1 (en) System and method for remote hearing aid adjustment and hearing testing by a hearing health professional
US9992586B2 (en) Method of optimizing parameters in a hearing aid system and a hearing aid system
US6522988B1 (en) Method and system for on-line hearing examination using calibrated local machine
US20170223471A1 (en) Remotely updating a hearing aid profile
DK1453357T3 (en) Apparatus and method for adjusting a hearing aid
EP2191662B1 (en) Hearing system with a user preference control and method for operating a hearing system

Legal Events

Date Code Title Description
AK Designated contracting states:

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AV Request for validation of the european patent in

Extension state: MA MD

AX Request for extension of the european patent to

Extension state: BA ME

18D Deemed to be withdrawn

Effective date: 20180426