EP3809724A1 - Method for operating a hearing aid and hearing aid - Google Patents

Method for operating a hearing aid and hearing aid

Info

Publication number
EP3809724A1
EP3809724A1 EP20201989.9A EP20201989A EP3809724A1 EP 3809724 A1 EP3809724 A1 EP 3809724A1 EP 20201989 A EP20201989 A EP 20201989A EP 3809724 A1 EP3809724 A1 EP 3809724A1
Authority
EP
European Patent Office
Prior art keywords
setting
user
feedback
hearing aid
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP20201989.9A
Other languages
German (de)
English (en)
Other versions
EP3809724B1 (fr)
Inventor
Gerard Loosschilder
Matthias Fröhlich
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sivantos Pte Ltd
Original Assignee
Sivantos Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sivantos Pte Ltd filed Critical Sivantos Pte Ltd
Publication of EP3809724A1 publication Critical patent/EP3809724A1/fr
Application granted granted Critical
Publication of EP3809724B1 publication Critical patent/EP3809724B1/fr
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/50Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H04R25/507Customised settings for obtaining desired overall acoustical characteristics using digital signal processing implemented by neural network or fuzzy logic
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/39Aspects relating to automatic logging of sound environment parameters and the performance of the hearing aid during use, e.g. histogram logging, or of user selected programs or settings in the hearing aid, e.g. usage logging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest

Definitions

  • the invention relates to a method for operating a hearing aid and a corresponding hearing aid.
  • a hearing aid is used to supply a typically hearing-impaired user.
  • the hearing aid has a microphone which picks up sound signals from the user's surroundings and converts them into an electrical input signal.
  • This is modified in a signal processing of the hearing aid, in particular on the basis of an audiogram of the user.
  • the signal processing generates an electrical output signal which is fed to a receiver of the hearing aid, which then converts the electrical output signal into an output sound signal and outputs it to the user.
  • the modification within the signal processing takes place depending on one or more parameters, more precisely signal processing parameters. These are each set to a specific value so that each parameter has a specific setting at a given point in time. The respective setting and correspondingly the associated value are expediently selected depending on the situation.
  • the hearing aid has, for example, a classifier which determines a current situation on the basis of the electrical input signal and then adjusts the signal processing parameters appropriately depending on the current situation.
  • A hearing aid is described in which a classifier extracts a plurality of features from an input signal and generates a classifier output signal, by means of which a parameter of a transfer function of the signal processing can be adapted.
  • the classifier output signal is dependent on a weighting which is updated by means of a feedback from a user.
  • a semi-supervised learning process with a passive update scheme is also described. It is assumed that feedback is only given if the setting of the classifier has to be changed. If there is no feedback, however, the current settings are retained.
  • The object of the invention is to improve the operation of a hearing aid, i.e. to specify an improved method for operating a hearing aid.
  • The aim is to improve the learning of settings for the hearing aid that are as optimal as possible.
  • An improved hearing aid is also to be specified.
  • the method is used to operate a hearing aid, so it is an operating method for the hearing aid.
  • the hearing aid has signal processing which has at least one adjustable parameter which has a given setting at a given point in time.
  • the setting is in particular a specific value for the parameter, for example a specific gain or volume or a width of a directional lobe for directional hearing with the hearing aid.
  • a user of the hearing aid wears it in or on the ear when used as intended.
  • the hearing aid is preferably used to supply a hearing impaired user.
  • the hearing aid preferably has at least one microphone for picking up ambient noise and a receiver for outputting noise to the user.
  • The microphone generates an electrical input signal from the ambient noise, which is passed on to the signal processing and is then modified by the signal processing depending on the parameter, for example amplified. As a result, a modified input signal is generated, which then forms an electrical output signal and is passed on to the receiver for output.
  • the input signal is modified by the signal processing as a function of an individual audiogram, which is stored in the hearing aid in particular.
  • the signal processing preferably has a modification unit which modifies the input signal as a function of the parameter.
  • the parameter is set as a function of the situation by selecting a setting for the parameter as a function of a current environmental situation and by means of a learning machine.
  • the parameter is preferably set repeatedly depending on the situation.
  • the situation-dependent setting of the parameter takes place in particular automatically by the signal processing and as part of the operation of the hearing aid.
  • the parameter can expediently also be set in another way, for example manually by the user.
  • For the situation-dependent setting, the current environmental situation is first recognized. A specific setting is assigned to this environmental situation in accordance with an assignment rule; this setting is then selected so that the parameter is set accordingly.
  • the learning machine has a classifier by means of which the surrounding situation is recognized.
  • The learning machine, especially the classifier, analyzes in particular the input signal generated by the microphone and assigns a class to the current environmental situation, for example speech, music or noise. Depending on the class, the parameter is then set, i.e. a suitable setting is selected for the parameter. Using the learning machine, the hearing aid learns over time which setting is most suitable in which environmental situation and then selects it.
  • The assignment of a respective setting to a respective environmental situation is therefore not static in the present case, but is adapted dynamically by the learning machine. In other words, the assignment rule between settings and environmental situations is continuously adapted by the learning machine.
  • a current setting of the parameter can be assessed by feedback from a user of the hearing aid.
  • the current setting is the setting that is currently set.
  • the user can rate this setting with feedback.
  • The feedback generally includes a request or demand from the user to the hearing aid to change the current setting, that is to say to set the parameter differently.
  • the feedback generally takes place via an input element of the hearing aid, e.g. a button for manual input or a microphone for voice input or another sensor for detecting user input. With the feedback, the user expresses his satisfaction with the current setting.
  • A rating is then assigned to each setting of the parameter, e.g. in the form of a counter. The rating is changed depending on the feedback and thus generally indicates the satisfaction of the user with this setting (a minimal sketch of such a rating table is given after this list).
  • the setting is assigned to a specific class and thus to a specific environmental situation, so that the evaluation therefore indicates the satisfaction of the user with this setting for the assigned environmental situation.
  • It is also possible for several different settings to be assigned to a single class, or for several different classes to be assigned to a single setting, or both.
  • a single setting can receive and have different ratings for several classes.
  • the learning machine is passively trained by negative feedback in a first training session, in that feedback from the user is assessed as dissatisfaction with the current setting and in that the satisfaction of the user with the current setting is assumed as long as there is no feedback.
  • the method thus includes a learning process for the learning machine.
  • The first training is a passive training. This means that in the course of the first training, feedback from the user is not explicitly requested or queried; rather, feedback given voluntarily by the user is used. Instead of actively asking the user about satisfaction with a setting, this satisfaction is derived from the behavior of the user. If the user gives feedback, it is assumed that the setting at the time of the feedback is unsatisfactory and that the feedback was therefore given. In contrast, if there is no feedback, it is assumed that the current setting is satisfactory (a minimal sketch of this passive rating rule follows after this list).
  • The learning machine is also trained in a second training, in that the current setting is changed independently of feedback from the user and despite an assumed satisfaction with the current setting, so that the user is presented with a different setting, which can then be assessed by feedback.
  • the user is accordingly offered deviating settings, without being asked, in order to receive additional ratings for these settings, although the current setting is assumed to be satisfactory per se.
  • the current setting of the parameter is changed when the ambient situation remains the same, in order to test different settings for the same ambient situation.
  • experiments are therefore carried out with deviating settings, so that the second training session is also referred to as experimental training.
  • the current setting is changed in the second training session if no automatic or no manual change has taken place over a certain period of time, or both.
  • the current setting is preferably changed if no situation-dependent change has taken place over a certain period of time.
  • The period of time is generally preferably between 5 minutes and 15 minutes.
  • Alternatively or additionally, the current setting is changed in the second training if the current setting is rated as satisfactory.
  • The invention is initially based on the observation that active training of the learning machine is usually annoying for the user, since it requires regular feedback, possibly even without the user being able to choose the time for it himself. Under certain circumstances, the use of the hearing aid even becomes emotionally negatively charged for the user. In active training, the user is offered various settings, which the user should then evaluate with corresponding feedback. In contrast, passive training of the learning machine, in which such active feedback is not required, is significantly more advantageous. Such passive training meets with significantly higher acceptance when the hearing aid is used as intended. Active training, in which the user is actively consulted, however, has the advantage that typically more feedback is available and can also be generated as required, so that satisfactory settings can be learned by the learning machine significantly faster than with passive training.
  • a very special advantage results from the combination of the first, passive training with the second, experimental training, whereby overall a faster learning is achieved than with a passive training alone.
  • the experimental training potentially provokes additional feedback and thus potentially generates additional evaluations, but the advantage of passive training is retained, namely the reduced user interaction compared to active training.
  • The mechanism of the first training basically continues to be used and is used to check, by intentionally changing the setting, whether another setting apart from the current setting is still satisfactory for the user. This other setting is then fed in unprompted, so to speak, during operation and as an alternative to the current setting (see the sketch of this experimental step after this list).
  • the second training increases the range of values for the parameter made available to a passive evaluation by the user. Overall, this significantly accelerates the convergence of the overall system, especially the learning machine, towards the best possible settings for the respective user. The learning of optimal settings is thus accelerated and improved accordingly.
  • The terms first training and second training are used in the present case to distinguish the two levels of learning in a preferred embodiment of the learning machine, namely passive training, which is simple in itself, on the one hand, and experimenting with and testing additional settings on the other.
  • both training sessions run at the same time.
  • The combination of the first and second training simply corresponds to a modified, passive training. Since additional settings are fed in without being asked, this form of training is also referred to as "injected learning". Since feedback from the user is not actively requested when other settings are also fed in, this training is basically still passive.
  • the second training of the learning machine is passive in that feedback from the user is not actively requested. Accordingly, as with the first training, feedback from the user is preferably not actively requested in the second training either, but it is sufficient that the other setting can be evaluated. The user can evaluate this other setting, but does not necessarily have to do so. In other words: feedback from the user is assessed as dissatisfaction with the current setting and satisfaction from the user with the current setting is assumed as long as there is no feedback. In fact, the same mechanism is preferably used for evaluating the other setting as for the first training session. In any case, the learning machine evaluates a feedback as dissatisfaction with the setting immediately before the feedback or at the time of the feedback and not as satisfaction with the setting immediately after the feedback, if the user has changed the setting as part of the feedback.
  • In the case of satisfaction with a setting, the learning machine increases a rating of this setting, and in the case of dissatisfaction it reduces the rating.
  • This concept is based on the idea of storing the suitability of the individual settings in the form of an evaluation in order to then select the optimal setting in each case when the parameter is set as a function of the situation when the hearing aid is in operation. If there is a change in the surrounding situation, the new surrounding situation is recognized and the setting that has the highest rating for this surrounding situation is then selected. If the ambient situation remains the same, other settings, which are fundamentally rated worse, are set and tested to that extent. The user can then rate an initially badly rated setting as actually worse via a negative feedback. In a suitable further training, if feedback is not given, satisfaction with the poorly rated setting is assumed and its rating is then increased.
  • the learning machine automatically assumes that the user is satisfied with the current setting if there has been no feedback over a certain period of time.
  • This approach supports the general passive approach to training.
  • an embodiment is generally advantageous in which a feedback that includes a change in the parameter by the user is rated as satisfaction with the setting newly selected by the user.
  • This is not mandatory per se and in any case still requires feedback from the user in order to generate a positive rating, i.e. to increase the rating of a setting.
  • a positive evaluation is possible without active user interaction, whereby the convergence of the training is further improved.
  • the period of time which is waited until the satisfaction with the current setting is assumed is preferably between 5 minutes and 15 minutes.
  • the evaluation of the current setting is then expediently only increased when the surrounding situation is the same during the period, that is to say has not changed.
  • The other setting, which is presented to the user unprompted during the experimental training, can in principle be selected arbitrarily or at random, but expediently a specific selection is made.
  • the other setting is selected in the second training session as a function of a previous evaluation of this setting in comparison to other settings.
  • a setting is selected which has a lower number of evaluations at least for the current environmental situation than the current setting in order to then potentially receive further evaluations.
  • the other setting is expediently selected depending on its similarity to the current setting.
  • the other setting differs in the second training session by at most 10% from the current setting, i.e. it is similar.
  • the parameter is a volume and the setting is a value for this volume, which is then varied by the experimental training within a range of +/- 10%.
  • With the selection of a similar setting, the learning machine advantageously attempts to expand the acceptable range of values for the parameter by testing slightly different settings. If the user expresses dissatisfaction with the new setting through feedback, it is rated negatively. Otherwise, the new setting is automatically assessed as positive, especially after a certain period of time as described above, that is to say its rating is increased. Overall, other settings, apart from the setting selected from the outset depending on the situation, are thus passively checked for their suitability without actively requesting user interaction.
  • the other setting is expediently selected depending on its evaluation by other users.
  • the selection is preferably further restricted by only taking into account the ratings of those other users who are similar to the user, for example have a similar audiogram or belong to a similar population group or are of a similar age.
  • the modified, passive training described can also be combined with active training.
  • the learning machine is then additionally actively trained in a third training session by requesting feedback from the user in order to evaluate the current setting.
  • the active training takes place depending on the time or situation or is initiated by the user himself. For example, active training is carried out at certain times or after a certain time interval has elapsed or when the environmental situation changes.
  • the modified, passive training advantageously reduces the need for active training so that it is carried out much less frequently.
  • the feedback from the user consists in the user changing the parameter, for example manually.
  • the hearing aid or an additional device which is connected to the hearing aid has an input element as already described above.
  • the parameter can be set by the user himself, i.e. manually set, in contrast to the automatic situation-dependent setting.
  • the user can therefore change the parameter and thus its setting.
  • the learning machine evaluates this as dissatisfaction with the setting made immediately before the feedback and reduces its evaluation accordingly.
  • a new setting is then set by the feedback.
  • It is assumed that this new setting is satisfactory for the user, since the user has specifically chosen this setting, i.e. satisfaction with the new setting is assumed and its rating is increased accordingly (a sketch of this feedback handling follows after this list).
  • the feedback suitably comprises one of the following actions by the user: changing a volume of the hearing aid, changing a program of the hearing aid, changing a focus of the hearing aid.
  • other actions are also conceivable and suitable.
  • the first and second training sessions are preferably carried out during normal operation of the hearing aid, i.e. while the hearing aid is being worn and used by the user and not just in a fitting session with the acoustician or in a special training situation.
  • the modified, passive training of the learning machine is therefore preferably carried out online while the hearing aid is in operation.
  • the learning machine is, for example, a neural network, a support vector machine or the like.
  • the learning machine is suitably designed as an integrated circuit, in particular in terms of programming, e.g. as a microcontroller, or in terms of circuitry, e.g. as an ASIC.
  • the learning machine is preferably integrated into the hearing aid, in particular together with or as part of the signal processing.
  • an embodiment is also suitable in which the learning machine is relocated to an additional device which is connected to the hearing device, preferably wirelessly.
  • the object is also achieved, independently of the hearing aid and the method for its operation, in particular by a learning machine as described above, which is suitable for use with a hearing aid as described.
  • Fig. 1 shows a hearing aid 2 with a signal processing 4, which has at least one adjustable parameter P; this parameter has a given setting E at a given point in time, i.e. a specific value for the parameter P, for example a specific gain or volume.
  • a user of the hearing aid 2 wears it in or on the ear when it is used as intended.
  • the hearing aid 2 has at least one microphone 6 for picking up ambient noises and a receiver 8 for outputting noises to the user.
  • the microphone 6 generates an electrical input signal from the ambient noise, which is passed on to the signal processing 4 and is modified by the latter as a function of the parameter P, for example is amplified.
  • As a result, a modified input signal is generated, which then forms an electrical output signal and is passed on to the receiver 8 for output.
  • the signal processing unit 4 has a modification unit 9 which modifies the input signal as a function of the parameter P.
  • the parameter P is set as a function of the situation in that a setting E that is as suitable as possible for the parameter P is selected as a function of a current environmental situation and by means of a learning machine 10. This takes place repeatedly and automatically by the signal processing 4 and as part of the operation of the hearing aid 2.
  • the parameter P in the present case can also be set manually by the user via an input element 12.
  • In Fig. 2, an embodiment of the method is shown.
  • the current environmental situation is first recognized in a first step S1.
  • This environmental situation is assigned a specific setting E in accordance with an assignment rule, which setting is then selected in a second step S2 so that the parameter P is set accordingly.
  • In step S1, the surrounding situation is recognized by means of a classifier 14 of the learning machine 10.
  • the classifier 14 analyzes the input signal generated by the microphone and assigns a class to the current environmental situation.
  • Depending on the class, the parameter P is then set in the second step S2.
  • the hearing aid 2 learns over time which setting E is most suitable in which environmental situation and then selects it.
  • The learning takes place in a third step S3, parallel to the two steps S1 and S2, and influences the selection of the setting E for the parameter P in the second step S2, as shown in Fig. 2 (a combined sketch of these steps follows after this list).
  • the assignment of a respective setting E to a respective environmental situation is therefore not static in the present case, but is adapted dynamically by the learning machine 10.
  • a current setting E of the parameter P can be assessed by a feedback R from a user of the hearing aid 2.
  • the current setting E is the setting E that is set at the current time.
  • the user can evaluate this setting E in a fourth step S4 by means of a feedback R.
  • The feedback R generally includes a request or demand from the user to the hearing aid 2 to change the current setting E.
  • the feedback R occurs in the present case via the input element 12 of the hearing aid 2, e.g. a button for manual input or a microphone, e.g. the microphone 6, for voice input or another sensor for detecting a user input.
  • the user expresses his satisfaction with the current setting E via the feedback R.
  • An evaluation is then assigned to each setting E of the parameter P, e.g. in the form of a counter.
  • the evaluation is then changed as a function of the feedback R and indicates the satisfaction of the user with a respective setting E for the assigned environmental situation.
  • the method includes a learning method for the learning machine 10.
  • An exemplary embodiment of this is explained below with reference to Fig. 3.
  • The learning machine 10 is passively trained by negative feedback R, in that a feedback R from the user is assessed in a step B- as dissatisfaction with the current setting E, and in that, in a step B+, satisfaction of the user with the current setting E is assumed as long as no feedback R is received.
  • Feedback R from the user is not explicitly requested or queried, but voluntarily given feedback R from the user is used.
  • The learning machine 10 is additionally trained in a second training, in that the current setting E is changed in a fifth step S5 independently of feedback R from the user and despite an assumed satisfaction with the current setting E, so that a different setting E is presented to the user, which can then be evaluated accordingly by a feedback R.
  • the user is accordingly offered deviating settings E, without being asked, in order to receive additional evaluations in steps B-, B + for these settings E, although the current setting E is assumed to be satisfactory per se.
  • The current setting E of the parameter P is therefore changed when the ambient situation remains the same, in order to test different settings E for the same ambient situation.
  • In other words, the learning machine 10 experiments with different settings E, so that the second training session is also referred to as experimental training.
  • the experimental training by means of the fifth step S5 potentially provokes additional feedback R and thus potentially additional evaluations are then generated in steps B-, B +, but the advantage of passive training is retained, namely the reduced user interaction compared to active training.
  • the second training of the learning machine 10 is also passive, in that feedback R from the user is not actively requested. Accordingly, feedback R from the user is not actively requested during the second training either, but it is already sufficient that the other setting E can be evaluated. The user can rate this other setting E, but does not necessarily have to do so.
  • the same mechanism is used for the evaluation as for the first training.
  • The learning machine 10 evaluates a feedback R as dissatisfaction with the setting E immediately before the feedback R or at the time of the feedback R, and not as satisfaction with the setting E immediately after the feedback R, if the user has changed the setting E as part of the feedback R.
  • If the user is satisfied with a setting E, the learning machine 10 increases a rating of this setting E, and if the user is dissatisfied, it reduces the rating.
  • the suitability of the individual settings E is stored in the form of a respective evaluation in order to then select the optimal setting E in each case when the parameter P is set in the second step S2 as a function of the situation. If there is a change in the surrounding situation, the new surrounding situation is recognized and that setting E is selected which has the highest evaluation for this surrounding situation. If the ambient situation remains the same, then other settings E, which are fundamentally rated worse, are set and to that extent tested.
  • The learning machine 10 automatically assumes that the user is satisfied with the current setting E if no feedback R has been received over a certain period t. This is also the case in the embodiment of Fig. 3. With this automatic assumption of the satisfaction of the user after a certain period t without a change of the setting E by the user, a positive evaluation is realized without active user interaction.
  • the time period t which is waited for is between 5 minutes and 15 minutes, for example.
  • The other setting E, which is presented to the user unprompted during the experimental training, can in principle be selected arbitrarily or at random, but in the present case a specific selection is made.
  • the other setting E is selected in the present case as a function of a previous evaluation of this setting E in comparison to other settings E. For example, a setting E is selected which has a lower number of evaluations at least for the current environmental situation than the current setting E in order to then potentially receive further evaluations.
  • The other setting E is selected depending on its similarity to the current setting E and differs, for example, by at most 10% from the current setting E, i.e. it is similar.
  • the parameter P is a volume and the setting E is a value for this volume, which is then varied by the experimental training within a range of +/- 10%.
  • the other setting E is selected depending on its evaluation by other users.
  • the selection is further restricted by only taking into account the ratings of those other users who are similar to the user, e.g. have a similar audiogram or belong to a similar population group or are of a similar age.
  • the learning machine 10 is then additionally actively trained by requesting feedback R from the user to evaluate the current setting E.
  • the active training takes place as a function of time or situation or is initiated by the user himself. For example, active training is carried out at certain times or after a certain time interval has elapsed or when the environmental situation changes.
  • The feedback R from the user consists in the fact that the user changes the parameter P manually using the input element 12.
  • In a variant, the input element 12 is not, as shown in Fig. 1, a part of the hearing aid 2, but a part of an additional device which is connected to the hearing aid 2 for data transmission.
  • the additional device is, for example, a remote control for the hearing device 2 or a smartphone or the like.
  • The manual setting E of the parameter P by means of the input element 12 is also shown in Fig. 3. The user can therefore change the parameter P if he is dissatisfied with the setting E. This is then evaluated by the learning machine 10 as dissatisfaction with the setting E set immediately before the feedback R, and its evaluation is reduced accordingly in step B-.
  • a new setting E is then set by the feedback R.
  • It is assumed that this new setting E is satisfactory for the user, since the user has specifically chosen this setting E, i.e. satisfaction with the new setting E is assumed and its evaluation is increased accordingly in a step B+.
  • This variant is not explicitly shown in Fig. 3.
  • the feedback R includes, for example, one of the following actions by the user: changing a volume of the hearing device 2, changing a program of the hearing device 2, changing a focus of the hearing device 2.
  • other actions are also conceivable and suitable.
  • the learning machine 10 is, for example, a neural network, a support vector machine or the like.
  • the learning machine 10 is designed as an integrated circuit, e.g. in terms of programming as a microcontroller or in terms of circuitry as an ASIC.
  • the learning machine 10 is integrated into the hearing aid 2, in the exemplary embodiment shown even as part of the signal processing 4.
  • An embodiment (not shown) is also suitable in which the learning machine 10 is outsourced to an additional device, e.g. as described above, which is connected to the hearing aid 2, e.g. wirelessly.
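
The rating mechanism referred to above can be illustrated by a minimal sketch in Python: a counter-style rating is kept per combination of environment class and setting, increased on (assumed) satisfaction and decreased on dissatisfaction, and the situation-dependent selection simply picks the best-rated setting for the recognized class. The class name RatingTable, the method names and the numeric values are illustrative assumptions and are not taken from the patent.

```python
from collections import defaultdict

class RatingTable:
    """Counter-style ratings per (environment class, setting) pair."""

    def __init__(self):
        # rating[(env_class, setting)] -> integer; higher means more assumed satisfaction
        self.rating = defaultdict(int)

    def update(self, env_class, setting, satisfied):
        """Increase the rating on (assumed) satisfaction, decrease it on dissatisfaction."""
        self.rating[(env_class, setting)] += 1 if satisfied else -1

    def best_setting(self, env_class, candidate_settings):
        """Situation-dependent selection: the best-rated candidate for this class."""
        return max(candidate_settings, key=lambda s: self.rating[(env_class, s)])

# Usage example: two volume settings rated separately for the class "speech".
table = RatingTable()
table.update("speech", 6.0, satisfied=True)
table.update("speech", 4.0, satisfied=False)
print(table.best_setting("speech", [4.0, 6.0]))  # -> 6.0
```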
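
The passive (first) training rule sketched below interprets user feedback as dissatisfaction with the setting active at that moment, while the absence of feedback over a waiting period with an unchanged environmental situation (the description mentions 5 to 15 minutes) is interpreted as satisfaction. The PassiveTrainer class, its method names and the concrete waiting time are illustrative assumptions; the table argument is expected to behave like the RatingTable sketched above.

```python
import time

WAIT_SECONDS = 10 * 60  # assumed value within the 5 to 15 minute range mentioned above

class PassiveTrainer:
    def __init__(self, table):
        self.table = table                  # any object with update(env_class, setting, satisfied)
        self.last_event = time.monotonic()  # time of the last feedback or setting change

    def on_feedback(self, env_class, current_setting):
        # Feedback is assessed as dissatisfaction with the current setting (step B-).
        self.table.update(env_class, current_setting, satisfied=False)
        self.last_event = time.monotonic()

    def on_tick(self, env_class, current_setting, environment_unchanged):
        # No feedback for the whole waiting period and an unchanged environment:
        # satisfaction with the current setting is assumed (step B+).
        if environment_unchanged and time.monotonic() - self.last_event >= WAIT_SECONDS:
            self.table.update(env_class, current_setting, satisfied=True)
            self.last_event = time.monotonic()
```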
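
The experimental (second) training needs a deviating setting to inject. The sketch below picks one possible candidate: a value similar to the current setting (here within plus or minus 10% of a numeric value such as a volume) and, among the candidates, preferably one that has received fewer evaluations so far. The function name, the candidate grid and the random tie-breaking are illustrative assumptions.

```python
import random

def propose_experimental_setting(current_value, evaluation_counts, steps=5):
    """Pick a similar candidate value, favouring candidates rated least often so far."""
    lo, hi = 0.9 * current_value, 1.1 * current_value            # deviate by at most 10%
    raw = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    candidates = [round(c, 2) for c in raw if abs(c - current_value) > 1e-6]
    random.shuffle(candidates)                                   # random tie-breaking
    return min(candidates, key=lambda c: evaluation_counts.get(c, 0))

# Usage example: a volume of 6.0 whose neighbours have been rated rarely so far.
print(propose_experimental_setting(6.0, {5.4: 3, 6.6: 0}))
```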
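
Feedback that consists of a manual change of the parameter can be booked as sketched below: the setting active immediately before the feedback is rated down (step B-), and, in the optional variant described above, the setting newly chosen by the user is rated up (step B+). Function and variable names are illustrative assumptions.

```python
from collections import defaultdict

def apply_manual_change(rating, env_class, old_setting, new_setting):
    """Rate the previous setting down (B-) and the user's newly chosen setting up (B+)."""
    rating[(env_class, old_setting)] -= 1
    rating[(env_class, new_setting)] += 1
    return new_setting  # the user's choice becomes the current setting

# Usage example with illustrative values.
rating = defaultdict(int)
current = apply_manual_change(rating, "music", old_setting=6.0, new_setting=5.0)
print(current, dict(rating))  # 5.0 {('music', 6.0): -1, ('music', 5.0): 1}
```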
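
Finally, a compact, self-contained sketch of how the described steps could interact: S1 classify the situation, S2 select the best-rated setting for that situation, S5 occasionally inject a deviating setting, and apply the B-/B+ rules to the feedback. The classifier is a stub, each call is taken to stand for one waiting interval in an unchanged situation, and all names, settings and probabilities are illustrative assumptions.

```python
import random
from collections import defaultdict

def classify(signal_frame):
    # Step S1 (stub): a real classifier would analyze the microphone input signal.
    return random.choice(["speech", "music", "noise"])

rating = defaultdict(int)
settings = {"speech": [4.0, 5.0, 6.0], "music": [5.0, 6.0], "noise": [3.0, 4.0]}

def run_interval(signal_frame, user_choice=None, inject=False):
    """One interval: each call stands for one waiting period in the same situation."""
    env = classify(signal_frame)                                   # step S1
    current = max(settings[env], key=lambda s: rating[(env, s)])   # step S2
    if inject:                                                     # step S5: experimental training
        current = random.choice([s for s in settings[env] if s != current])
    if user_choice is not None:          # feedback: dissatisfaction with current setting (B-)
        rating[(env, current)] -= 1
        current = user_choice            # the user's manual choice becomes current
        rating[(env, current)] += 1      # optional variant: rate the new choice up (B+)
    else:                                # no feedback over the interval: satisfaction (B+)
        rating[(env, current)] += 1
    return env, current

# Usage example: simulate a few intervals, injecting a deviating setting now and then.
for _ in range(20):
    run_interval(signal_frame=None, inject=random.random() < 0.3)
print(dict(rating))
```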

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Otolaryngology (AREA)
  • Neurosurgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Fuzzy Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • User Interface Of Digital Computer (AREA)
  • Electrically Operated Instructional Devices (AREA)
EP20201989.9A 2019-10-18 2020-10-15 Procédé de fonctionnement d'un appareil auditif ainsi qu'appareil auditif Active EP3809724B1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
DE102019216100.6A DE102019216100A1 (de) 2019-10-18 2019-10-18 Verfahren zum Betrieb eines Hörgeräts sowie Hörgerät

Publications (2)

Publication Number Publication Date
EP3809724A1 true EP3809724A1 (fr) 2021-04-21
EP3809724B1 EP3809724B1 (fr) 2021-10-27

Family

ID=72915777

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20201989.9A Active EP3809724B1 (fr) 2019-10-18 2020-10-15 Procédé de fonctionnement d'un appareil auditif ainsi qu'appareil auditif

Country Status (5)

Country Link
US (1) US11375325B2 (fr)
EP (1) EP3809724B1 (fr)
CN (1) CN112689230A (fr)
DE (1) DE102019216100A1 (fr)
DK (1) DK3809724T3 (fr)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11849288B2 (en) * 2021-01-04 2023-12-19 Gn Hearing A/S Usability and satisfaction of a hearing aid
DE102021204974A1 (de) 2021-05-17 2022-11-17 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung eingetragener Verein Vorrichtung und Verfahren zum Bestimmen von Audio-Verarbeitungsparametern
US11849286B1 (en) 2021-10-25 2023-12-19 Chromatic Inc. Ear-worn device configured for over-the-counter and prescription use
US11832061B2 (en) * 2022-01-14 2023-11-28 Chromatic Inc. Method, apparatus and system for neural network hearing aid
US20230306982A1 (en) 2022-01-14 2023-09-28 Chromatic Inc. System and method for enhancing speech of target speaker from audio signal in an ear-worn device using voice signatures
US11818547B2 (en) * 2022-01-14 2023-11-14 Chromatic Inc. Method, apparatus and system for neural network hearing aid
US11950056B2 (en) 2022-01-14 2024-04-02 Chromatic Inc. Method, apparatus and system for neural network hearing aid
US11902747B1 (en) 2022-08-09 2024-02-13 Chromatic Inc. Hearing loss amplification that amplifies speech and noise subsignals differently

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10347211A1 (de) * 2003-10-10 2005-05-25 Siemens Audiologische Technik Gmbh Verfahren zum Nachtrainieren und Betreiben eines Hörgeräts und entsprechendes Hörgerät
EP1906700B1 (fr) * 2006-09-29 2013-01-23 Siemens Audiologische Technik GmbH Procédé de réglage commandé dans le temps d'un dispositif auditif et dispositif auditif
EP2255548B1 (fr) 2008-03-27 2013-05-08 Phonak AG Procédé pour faire fonctionner une prothèse auditive
DE102013205357A1 (de) * 2013-03-26 2014-10-02 Siemens Ag Verfahren zum automatischen Einstellen eines Geräts und Klassifikator

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7970146B2 (en) * 2006-07-20 2011-06-28 Phonak Ag Learning by provocation
DK2098097T3 (da) * 2006-12-21 2019-08-26 Gn Hearing As Høreinstrument med brugergrænseflade
US20110313315A1 (en) * 2009-02-02 2011-12-22 Joseph Attias Auditory diagnosis and training system apparatus and method
EP2596647B1 (fr) * 2010-07-23 2016-01-06 Sonova AG Système auditif et procédé d'exploitation d'un système auditif
EP2566193A1 (fr) * 2011-08-30 2013-03-06 TWO PI Signal Processing Application GmbH Système et procédé d'adaptation d'un appareil auditif
DK2731356T3 (en) * 2012-11-07 2016-05-09 Oticon As Body worn control device for hearing aids
JP6190351B2 (ja) * 2013-12-13 2017-08-30 ジーエヌ ヒアリング エー/エスGN Hearing A/S 学習型補聴器
US10575103B2 (en) * 2015-04-10 2020-02-25 Starkey Laboratories, Inc. Neural network-driven frequency translation
DE102016216054A1 (de) * 2016-08-25 2018-03-01 Sivantos Pte. Ltd. Verfahren und Einrichtung zur Einstellung eines Hörhilfegeräts
US9886954B1 (en) * 2016-09-30 2018-02-06 Doppler Labs, Inc. Context aware hearing optimization engine
DE102017214164B3 (de) 2017-08-14 2019-01-17 Sivantos Pte. Ltd. Verfahren zum Betrieb eines Hörgeräts und Hörgerät
CN111512646B (zh) * 2017-09-12 2021-09-07 维思博Ai公司 低延迟音频增强的方法和设备
WO2019099699A1 (fr) * 2017-11-15 2019-05-23 Starkey Laboratories, Inc. Système interactif pour dispositifs auditifs
US10194259B1 (en) * 2018-02-28 2019-01-29 Bose Corporation Directional audio selection
CN109256122A (zh) * 2018-09-05 2019-01-22 深圳追科技有限公司 机器学习方法、装置、设备及存储介质
US11503413B2 (en) * 2018-10-26 2022-11-15 Cochlear Limited Systems and methods for customizing auditory devices
WO2020198023A1 (fr) * 2019-03-22 2020-10-01 Lantos Technologies, Inc. Système et procédé de conception et de fabrication basées sur l'apprentissage automatique de dispositifs de logement à l'oreille
US11601765B2 (en) * 2019-12-20 2023-03-07 Sivantos Pte. Ltd. Method for adapting a hearing instrument and hearing system therefor

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10347211A1 (de) * 2003-10-10 2005-05-25 Siemens Audiologische Technik Gmbh Verfahren zum Nachtrainieren und Betreiben eines Hörgeräts und entsprechendes Hörgerät
EP1906700B1 (fr) * 2006-09-29 2013-01-23 Siemens Audiologische Technik GmbH Procédé de réglage commandé dans le temps d'un dispositif auditif et dispositif auditif
EP2255548B1 (fr) 2008-03-27 2013-05-08 Phonak AG Procédé pour faire fonctionner une prothèse auditive
DE102013205357A1 (de) * 2013-03-26 2014-10-02 Siemens Ag Verfahren zum automatischen Einstellen eines Geräts und Klassifikator

Also Published As

Publication number Publication date
US20210120349A1 (en) 2021-04-22
CN112689230A (zh) 2021-04-20
DK3809724T3 (da) 2022-01-24
US11375325B2 (en) 2022-06-28
EP3809724B1 (fr) 2021-10-27
DE102019216100A1 (de) 2021-04-22

Similar Documents

Publication Publication Date Title
EP3809724B1 (fr) Procédé de fonctionnement d'un appareil auditif ainsi qu'appareil auditif
EP1523219B1 (fr) Procédé pour l'apprentissage et le fonctionnement d'une prothèse auditive et prothèse auditive correspondente
EP2081406A1 (fr) Procédé et dispositif de configuration de possibilités de réglage sur un appareil auditif
DE102019206743A1 (de) Hörgeräte-System und Verfahren zur Verarbeitung von Audiosignalen
DE102016216054A1 (de) Verfahren und Einrichtung zur Einstellung eines Hörhilfegeräts
EP1453356B1 (fr) Méthode pour ajuster un système auditif et système auditif correspondant
EP1906700B1 (fr) Procédé de réglage commandé dans le temps d'un dispositif auditif et dispositif auditif
EP3840418A1 (fr) Procédé d'ajustement d'un instrument auditif et système auditif associé
EP3062249A1 (fr) Procede de determination de donnees de rendement specifiques au porteur d'un appareil auditif, procede d'adaptation de reglages d'appareil auditif, systeme d'appareil auditif et unite de reglage pour un systeme d'appareil auditif
EP1453358A2 (fr) Appareil et procédé pour ajuster une prothèse auditive
EP2239963B1 (fr) Procédé et dispositif auditif destinés au réglage d'un appareil auditif doté de dotées enregistrées dans une unité externe
DE69829770T2 (de) Neurofuzzy-vorrichtung für programmierbare hörhilfen
DE102019218808B3 (de) Verfahren zum Trainieren eines Hörsituationen-Klassifikators für ein Hörgerät
DE102019203786A1 (de) Hörgerätesystem
DE102012203349B4 (de) Verfahren zum Anpassen einer Hörvorrichtung anhand des Sensory Memory und Anpassvorrichtung
DE102016207936A1 (de) Verfahren zum Betrieb eines Hörgeräts
WO2011103934A1 (fr) Procédé d'entraînement à la compréhension du discours et dispositif d'entraînement
EP2262282A2 (fr) Procédé de détermination d'une réponse de fréquence d'un dispositif auditif et dispositif auditif correspondant
DE102011083672B4 (de) Lernalgorithmus für eine Kompression
EP3944635B1 (fr) Procédé de fonctionnement d'un système auditif, système auditif, appareil auditif
EP0765103A2 (fr) Procédé pour l'adaptation de prothèses auditives utilisant la logique floue
WO2024104945A1 (fr) Procédé de fonctionnement d'une prothèse auditive, et prothèse auditive
DE60211264T2 (de) Adaptieve Navigation in einer Sprachantwortsystem
EP2590437B1 (fr) Adaptation périodique d'un dispositif de suppression de l'effet Larsen
DE102021208643B4 (de) Verfahren zur Anpassung eines digitalen Hörgerätes, Hörgerät und Computerprogrammprodukt

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210416

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20210601

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 502020000307

Country of ref document: DE

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1442979

Country of ref document: AT

Kind code of ref document: T

Effective date: 20211115

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20220118

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20211027

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220127

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220227

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220228

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220127

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220128

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 502020000307

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20220728

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20221031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221015

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221015

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20231023

Year of fee payment: 4

Ref country code: DK

Payment date: 20231025

Year of fee payment: 4

Ref country code: DE

Payment date: 20231018

Year of fee payment: 4

Ref country code: CH

Payment date: 20231102

Year of fee payment: 4

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211027