US11310608B2 - Method for training a listening situation classifier for a hearing aid and hearing system - Google Patents

Method for training a listening situation classifier for a hearing aid and hearing system

Info

Publication number
US11310608B2
Authority
US
United States
Prior art keywords
situation
user
signal
recording
listening
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US17/110,509
Other languages
English (en)
Other versions
US20210168535A1 (en)
Inventor
Sven Schoen
Christoph Kukla
Andreas Bollmann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sivantos Pte Ltd
Original Assignee
Sivantos Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sivantos Pte Ltd
Assigned to Sivantos Pte. Ltd. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KUKLA, CHRISTOPH; SCHOEN, SVEN; BOLLMANN, ANDREAS
Publication of US20210168535A1
Application granted
Publication of US11310608B2
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/40Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
    • G06F18/41Interactive pattern learning with a human teacher
    • G06K9/6267
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/06Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063Training
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/16Speech classification or search using artificial neural networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/50Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H04R25/507Customised settings for obtaining desired overall acoustical characteristics using digital signal processing implemented by neural network or fuzzy logic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest

Definitions

  • The invention relates to a method for training a listening situation classifier for a hearing aid. Furthermore, the invention relates to a hearing system which is designed in particular to carry out this method.
  • Hearing aids serve to compensate, at least in part, for the hearing impairment of hearing-impaired persons.
  • Conventional hearing aids usually have at least one microphone to pick up sounds from the environment, as well as a signal processor, which processes the detected sounds and amplifies and/or attenuates them, especially depending on the individual (especially frequency-specific) hearing impairment.
  • The processed microphone signals are then passed from the signal processor to an output transducer—usually a loudspeaker—for auditory output to the person wearing the particular hearing aid.
  • So-called bone-conduction receivers or cochlear implants are also used for the mechanical or electrical stimulation of the sense of hearing.
  • The term "hearing aid" also encompasses other devices, such as earphones, so-called tinnitus maskers, or headsets.
  • Hearing assistance devices often have a so-called listening situation classifier (in short: classifier), which serves to identify the presence of a particular, predefined "listening situation", especially with the help of the detected sounds.
  • Listening situations are usually characterized by a specific signal/noise ratio, the presence of speech, a relatively high tonality, and/or similar factors.
  • The signal processing is then generally altered depending on the identified listening situation. For example, a narrow directional effect of a directional microphone is required in a situation in which the hearing aid wearer is speaking with only one person, whereas an omnidirectional characteristic may be advantageous out in the open with no speech present.
  • A speech recognition algorithm is often used in the classifier to identify a speech situation.
  • Classifiers are furthermore customarily "trained" (or programmed) by means of a database in which known listening examples are stored, such as those for specific listening situations, so that during normal operation as many acoustic situations as possible can be matched up with the correct listening situation. Even so, it may happen that a classifier wrongly matches up an acoustic situation (at least subjectively so for the hearing aid wearer) or cannot match it up at all. In this case, a signal processing which is not satisfactory for the hearing aid wearer may result.
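  • For illustration, the following is a minimal sketch, not taken from the patent, of how such a classifier could be trained from a database of labeled listening examples, here with a simple nearest-centroid rule over the features named above (signal/noise ratio, presence of speech, tonality); all names, feature values, and situation labels are illustrative assumptions.

```python
# Minimal sketch: nearest-centroid listening situation classifier.
# Features per example: (signal/noise ratio in dB, speech probability, tonality).
# All example data and names are illustrative, not from the patent.
from collections import defaultdict
from math import dist

# Database of known listening examples: feature vector -> listening situation.
TRAINING_DB = [
    ((12.0, 0.9, 0.3), "one-on-one conversation"),
    ((10.0, 0.8, 0.4), "one-on-one conversation"),
    ((-2.0, 0.1, 0.9), "music"),
    ((0.0, 0.2, 0.8), "music"),
    ((-5.0, 0.1, 0.2), "noise outdoors"),
    ((-4.0, 0.0, 0.1), "noise outdoors"),
]

def train(db):
    """Compute one centroid per listening situation from the example database."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0, 0])
    for features, label in db:
        acc = sums[label]
        for i, x in enumerate(features):
            acc[i] += x
        acc[3] += 1
    return {label: tuple(s / acc[3] for s in acc[:3]) for label, acc in sums.items()}

def classify(centroids, features):
    """Return the best-matching situation and a crude confidence in (0.5, 1]."""
    ranked = sorted(centroids.items(), key=lambda kv: dist(kv[1], features))
    best, runner_up = ranked[0], ranked[1]
    confidence = dist(runner_up[1], features) / (
        dist(best[1], features) + dist(runner_up[1], features)
    )
    return best[0], confidence

centroids = train(TRAINING_DB)
print(classify(centroids, (11.0, 0.85, 0.35)))  # -> conversation, high confidence
```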
  • The problem which the invention proposes to solve is to make possible a better matching of a listening situation to an acoustic situation.
  • The method according to the invention serves for training a listening situation classifier (in short: "classifier") for a hearing aid, especially a hearing assistance device.
  • In the method, a user, especially a user of a terminal device, is presented with a number of acoustic signals by means of the terminal device.
  • The user is then prompted to indicate the signal source of the presented signal, or of each respectively presented signal.
  • Preferably, the user is prompted to make an indication after each individual presentation, before the next presentation occurs.
  • The training data for the listening situation classifier is adapted depending on the user's indication for the presented signal (or for one of the possibly several signals presented), and the listening situation classifier is updated by means of the (adapted) training data, as sketched below.
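  • Continuing the sketch above, the core adaptation step could look as follows; extract_features is a hypothetical stand-in for the hearing aid's feature analysis.

```python
# Sketch (continuing the example above): adapt the training data with a user's
# indication for a presented signal and retrain the classifier.

def extract_features(recording):
    # Hypothetical stand-in for the hearing aid's feature analysis;
    # here it simply expects the recording to already be a feature tuple.
    return recording

def on_user_indication(db, recording, indicated_source):
    """Add the presented signal as a new labeled example and retrain."""
    db.append((extract_features(recording), indicated_source))
    return train(db)  # updated centroids, i.e., the updated classifier
```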
  • The hearing system according to the invention is designed to carry out the method described here and in the following (preferably automatically, i.e., on its own, but in particular also in interaction with the user).
  • The hearing system contains at least one terminal device of the aforementioned kind; the classifier, which is preferably formed by an algorithm, especially a neural network, self-learning software, or the like; and a controller.
  • The controller preferably serves to carry out at least parts of the method (such as the adaptation of the training data).
  • The user is in particular the user of the hearing aid, also called the hearing aid wearer in the following.
  • The method and the hearing system share the same benefits as described here and in the following.
  • The method also preferably makes use of the physical features provided by the hearing system and/or other data provided by it.
  • The terminal device is preferably a mobile device, having an associated processor and also preferably an interface to an (optionally mobile) data network.
  • In one variant, the terminal device is the hearing aid itself (especially the hearing assistance device of the hearing aid wearer), preferably having a wireless interface for connection to an (optionally mobile) data network, or at least for connection to an intermediate device which is connected to such a data network, such as a smartphone, a tablet, a smartwatch, a laptop, or the like.
  • In another variant, the terminal device is one such intermediate device, especially a smartphone, a tablet, or the like.
  • The user may also be independent of the hearing aid wearer, and thus, for example, not wear any hearing aid at all. In principle, however, the user can also be a hearing aid wearer.
  • The acoustic presentation of the respective signal takes place via a loudspeaker or another output transducer of the hearing aid—optionally controlled by the intermediate device forming the terminal device (i.e., by the smartphone, for example)—or by the terminal device itself (i.e., especially by a loudspeaker of the smartphone, etc.).
  • The indication is preferably a manual input by the user, e.g., on a touch screen of the smartphone, etc.
  • The terminal device may also be formed by a digital assistant, which is provided and designed to accept voice commands from the user.
  • The presented signal (or at least one of the possibly several signals presented) forms a characteristic signal for an acoustic situation not known to the classifier, i.e., it is characteristic of such an "unknown" acoustic situation.
  • "Unknown" means, in this context and in the following, in particular that the classifier cannot match this acoustic situation to any known (in particular, any trained) listening situation, or has matched it to a listening situation whose associated signal-processing settings result in an unsatisfactory listening experience for the user, or at least for one user (especially for one of optionally multiple hearing aid wearers).
  • The user's indication for this characteristic signal of the unknown acoustic situation can advantageously be utilized to adapt the training data for the classifier, so that in future the same, or preferably also a comparable, acoustic situation can be identified or matched up with adequate precision.
  • Preferably, the characteristic signal for the unknown listening situation contains a recording of a real (acoustic) situation.
  • This signal is preferably formed by such a recording.
  • In this way, the classifier can advantageously be adapted to real situations not (yet) modeled or contained in the training data.
  • Expediently, the above-described recording of the real (acoustic) situation is produced when at least one hearing aid wearer provides an input characteristic of a setting produced in unsatisfactory manner by the classifier, and/or when the classifier can match the real (acoustic) situation to a trained (i.e., known) listening situation only with a probability value below a limit value.
  • A characteristic input is understood here to be, in particular, the user changing the loudness, for example, or manually switching "programs" (i.e., in particular, switching to signal-processing settings associated with a different listening situation), which is done after a change in the signal-processing settings (especially one made automatically by the hearing aid on the basis of the classification).
  • Such an input can also occur on a remote control for the hearing aid, for example in an associated control application installed, e.g., on a smartphone.
  • The classifier is expediently adapted to generate, during classification, a probability value for the probability with which the current acoustic situation corresponds to a known listening situation.
  • The recording of the current acoustic situation may also be started by another (third) person, such as an audiologist or the like, especially by means of a kind of remote access.
  • The recording is produced by means of at least one of the (optionally multiple) microphones of the hearing aid, or of the other terminal device optionally present (such as the smartphone, etc.).
  • Preferably, recordings are produced continuously, especially in a sliding manner over predetermined "sample" or "snippet" periods, by means of at least one of the (optionally multiple) microphones of the hearing aid, placed in temporary storage, and only retained for further use if one of the two aforementioned events for initiating the recording occurs.
  • This has the advantage that sound impressions (especially noises) which possibly led to the (at least subjectively) wrong classification, or to no recognition at all, and which might no longer be audible by the time the recording is initiated, are nevertheless captured.
  • In other words, the probability is increased that the relevant noises are contained in the recording when it is initiated.
  • If no such event occurs, the preceding recordings are expediently discarded, preferably permanently. A sketch of this sliding temporary recording follows.
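  • A minimal sketch of this sliding temporary recording, assuming a fixed snippet length and the two trigger events described above (a characteristic user input, or a classification probability below a limit value); buffer size and threshold are illustrative.

```python
# Sketch: sliding temporary recording with trigger-based retention.
# Audio frames are continuously written into a bounded buffer; the buffered
# window is only retained as a persistent "snippet" when a trigger occurs.
# (A real implementation would also keep recording briefly after the trigger,
# so that the snippet covers time before and after the event.)
from collections import deque

FRAMES_PER_SNIPPET = 500    # e.g., 5 s at 100 frames/s (illustrative)
PROBABILITY_LIMIT = 0.6     # illustrative limit for the classification probability

buffer = deque(maxlen=FRAMES_PER_SNIPPET)   # old frames fall out automatically

def store_snippet(snippet):
    # Hypothetical stand-in for the upload to the central database.
    print(f"retaining snippet of {len(snippet)} frames")

def on_audio_frame(frame, classification_probability, user_intervened):
    buffer.append(frame)
    if user_intervened or classification_probability < PROBABILITY_LIMIT:
        store_snippet(list(buffer))   # freeze the sliding window as a snippet
    # otherwise older frames are simply discarded as the window slides on
```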
  • The characteristic signal for the unknown acoustic situation (i.e., in particular the above-described recording of the real acoustic situation) is expediently stored on a central database, such as a database provided by a cloud service.
  • In response to the user's indication of the signal source, the unknown acoustic situation is added to the training data as a known listening situation, in particular as an example of a known listening situation, or optionally as a "newly generated" listening situation.
  • Preferably, the characteristic signal for the unknown listening situation is presented to multiple users (or hearing aid wearers), and the unknown acoustic situation is only added to the training data as a known listening situation (or as an example of such) if a predetermined number of users (a percentage or a specific number of the users who were presented with this signal) have agreed in their indication of the signal source for this signal.
  • Expediently, the user, or each respective user, is presented with multiple acoustic signals from known signal sources, and the indications of this user as to the signal source associated with the respective signal, especially in terms of the correctness of these indications, are used to determine a suitability value.
  • The characteristic signal for the unknown listening situation is preferably only presented to the user if the suitability value determined for this user exceeds a given value.
  • This makes it possible that only users who assign the known acoustic situations to the "correct" signal sources with high reliability make an indication for the unknown acoustic situation.
  • This can increase the probability of the most realistic possible matching of the unknown acoustic situation with a signal source (which may also comprise multiple single sources, for example the noise of an electrical device embedded in outdoor noises, or the like).
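  • A minimal sketch of such a suitability value, assuming it is simply the share of correct indications for signals with known sources; the threshold is an illustrative assumption.

```python
# Sketch: suitability value from a user's answers for signals with known sources.
# A user is only shown an "unknown" signal once the value exceeds a given threshold.
SUITABILITY_THRESHOLD = 0.8  # illustrative "given value"

def suitability(answers):
    """answers: list of (indicated_source, true_source) for known signals."""
    if not answers:
        return 0.0
    correct = sum(1 for indicated, true in answers if indicated == true)
    return correct / len(answers)

def may_label_unknown(answers):
    return suitability(answers) > SUITABILITY_THRESHOLD
```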
  • Preferably, the above-described "evaluation" of the particular user in terms of suitability is carried out by means of so-called gamification.
  • Here, elements familiar to users from games (especially a kind of reward system) are used, preferably to increase the motivation of the user to participate through various incentives.
  • For example, acoustic signals are presented to the user as part of a hearing test, a listening training program, or a game (provided, e.g., by the hearing aid maker as a software application for a smartphone or for a digital assistant of the aforementioned kind for the user, especially the hearing aid wearer), which the user needs to match up with a signal source, especially by making a choice from a presented list of signal sources.
  • Expediently, the "distances" (i.e., in particular the differences) between the correct solution and the further possible answer choices vary with the level.
  • For example, the sound of a drum is played, and besides the drum a piano, a flute, and a guitar are offered as further possible choices.
  • If the user makes the correct choice, the distance of the further possible choices from the correct solution is reduced and/or the complexity of the presented signal is increased on respectively "higher levels".
  • For example, for the sound of a marching drum played at a higher level, the further possible choices offered are a kettle drum, a conga drum, and bongo drums. A small sketch of this distractor selection follows.
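  • The shrinking "distance" between the correct solution and the further choices can be modeled, for example, with a small taxonomy of sound sources: at low levels the distractors come from other categories, at higher levels from the same category as the correct solution. The taxonomy and level logic below are illustrative assumptions, not taken from the patent.

```python
# Sketch: choosing answer options whose "distance" to the correct solution
# shrinks at higher levels (illustrative taxonomy, not from the patent).
import random

TAXONOMY = {
    "marching drum": "drums", "kettle drum": "drums",
    "conga drum": "drums", "bongo drums": "drums",
    "piano": "keys", "flute": "winds", "guitar": "strings",
}

def distractors(correct, level, k=3):
    same = [s for s in TAXONOMY if s != correct and TAXONOMY[s] == TAXONOMY[correct]]
    other = [s for s in TAXONOMY if TAXONOMY[s] != TAXONOMY[correct]]
    # low level: distant choices (other categories); high level: close ones
    pool = other if level < 2 else same
    return random.sample(pool, min(k, len(pool)))

print(distractors("marching drum", level=1))  # e.g., piano, flute, guitar
print(distractors("marching drum", level=3))  # kettle drum, conga drum, bongo drums
```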
  • Preferably, an unknown acoustic situation is only added to the training data as a known listening situation (or as an example of such) when a given number of users whose suitability value is greater than a target value have agreed in their indication of the signal source of this signal.
  • This target value is optionally higher than the aforementioned given value at which the user is first presented with the characteristic signal for the unknown listening situation.
  • Optionally, the answers of different users can also be weighted (especially depending on their suitability value), as in the sketch below.
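  • A minimal sketch of such a suitability-weighted agreement test; the minimum number of users and the agreement share are illustrative assumptions.

```python
# Sketch: suitability-weighted agreement on the signal source of an unknown
# situation; the situation is only added to the training data when the
# weighted share of matching answers exceeds a given value (illustrative).
from collections import Counter

AGREEMENT_SHARE = 0.75   # illustrative threshold
MIN_USERS = 100          # illustrative minimum number of qualified users

def consensus(answers):
    """answers: list of (indicated_source, suitability_value)."""
    if len(answers) < MIN_USERS:
        return None
    weights = Counter()
    for source, suitability_value in answers:
        weights[source] += suitability_value
    source, weight = weights.most_common(1)[0]
    if weight / sum(weights.values()) >= AGREEMENT_SHARE:
        return source   # add the recording to the training data under this label
    return None
```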
  • Expediently, the indications of the user, or of each respective user, are transmitted to the central database described above (or to an additional central database) and are evaluated by an algorithm implemented on this database, especially with regard to the suitability value.
  • The determination as to whether the particular user is suitable for the potential identification of the unknown acoustic situation, especially whether his suitability value is ranked high enough, is thus made in a centralized manner.
  • For this purpose, the above-described program (e.g., the game) is expediently executed as an online program (e.g., an "online game").
  • Alternatively, the suitability value is determined "offline", especially by means of the above-described program, for example with the known listening examples incorporated into the program as a package. Then, at a sufficiently high level—optionally after obtaining the consent of the user—the program expediently establishes a connection with the database to retrieve the characteristic signal for an unknown acoustic situation, or at least one such signal, optionally together with proposals for signal sources created by the database with the aid of a first classification.
  • Preferably, the recording (or each respective recording) of the real situation as described above is associated with meta-data.
  • This meta-data contains information about the listening situation matched up by the classifier, a current position of the hearing aid (determined in particular by a position sensor), a background noise level, optionally a signal/noise ratio derived from it, an estimated value for the distance from a sound source, a manufacturing date of the hearing aid and/or of its operating software, and additionally or alternatively the number of microphones used for the recording (and optionally their age).
  • Optionally, this meta-data additionally or alternatively comprises information about noise canceling performed during the recording, own-voice processing, or the like.
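  • For illustration, such meta-data could be bundled in a record like the following; the field names and types are assumptions, not taken from the patent.

```python
# Sketch: meta-data attached to a snippet (field names are illustrative).
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SnippetMetadata:
    classified_situation: str                  # listening situation matched by the classifier
    position: Optional[Tuple[float, float]]    # current position of the hearing aid
    noise_level_db: float                      # background noise level
    snr_db: Optional[float]                    # signal/noise ratio derived from it
    source_distance_m: Optional[float]         # estimated distance from the sound source
    manufacturing_date: str                    # of the hearing aid / its operating software
    microphone_count: int                      # microphones used for the recording
    microphone_age_days: Optional[int]
    noise_canceling_active: bool               # processing applied during the recording
    own_voice_processing: bool
```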
  • Furthermore, the recording (or each respective recording) of the real situation is standardized as a characteristic signal for the unknown acoustic situation prior to its presentation. This makes possible the highest possible uniformity across different presentations, especially in the event that multiple unknown acoustic situations are to be presented to the user. Expediently, however, a recording is also discarded, i.e., in particular erased, if it does not correspond to the given standards or cannot be standardized.
  • For standardization, the recording (or each respective recording) is adjusted to a target format with regard to its length (in time) and/or its data format.
  • The recording is optionally converted, on the corresponding central database, into a data format which can be processed equally by all terminal devices, preferably independently of platform.
  • The length of the respective recording may be, for example, a period of time between 2 and 10 seconds, especially around 3 to 7 seconds.
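  • A minimal sketch of such a standardization step, assuming a target length of 5 seconds (an illustrative choice within the 2 to 10 second range mentioned above) and discarding recordings that cannot be standardized.

```python
# Sketch: standardizing a snippet to a target length before presentation.
TARGET_SECONDS = 5.0
MIN_SECONDS = 2.0

def standardize(samples, sample_rate):
    """Trim or reject a snippet so that all presentations are uniform."""
    duration = len(samples) / sample_rate
    if duration < MIN_SECONDS:
        return None                 # too short, cannot be standardized: discard
    target = int(TARGET_SECONDS * sample_rate)
    return samples[:target]         # trim overly long snippets to the target length
```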
  • Preferably, a quality value is assigned to the recording, or to each respective recording (i.e., the snippet), on the basis of the meta-data (such as the derived signal/noise ratio, the estimated value for the distance from a sound source, the length of the recording, a degree of clogging of the microphone or microphones, the age of the microphones, and the like).
  • The quality value is preferably assigned by means of a self-learning algorithm, which preferably runs on the corresponding central database.
  • Optionally, a selection is made in several steps or "levels" by means of an appropriately adapted or trained self-learning algorithm, sorting out unusable recordings step by step, each time with the aid of "more sensitive" or more precise algorithms.
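  • For illustration, a hand-written stand-in for such a quality rating and stepwise selection (in practice this would be a self-learning algorithm, as stated above); it reuses the SnippetMetadata fields from the sketch further above, and all weights and thresholds are illustrative.

```python
# Sketch: assigning a quality value from the meta-data and sorting out
# unusable snippets in several increasingly strict stages.
def quality_value(meta):
    score = 0.0
    if meta.snr_db is not None:
        score += min(max(meta.snr_db / 20.0, 0.0), 1.0)        # clearer signal: better
    if meta.source_distance_m is not None:
        score += max(1.0 - meta.source_distance_m / 10.0, 0.0) # closer source: better
    if meta.microphone_age_days is not None:
        score += max(1.0 - meta.microphone_age_days / 3650.0, 0.0)
    return score / 3.0

def staged_filter(snippets, thresholds=(0.2, 0.4, 0.6)):
    """Each stage discards more snippets, mimicking 'more sensitive' algorithms.
    Assumes each snippet object carries its meta-data as a `meta` attribute."""
    for threshold in thresholds:
        snippets = [s for s in snippets if quality_value(s.meta) >= threshold]
    return snippets
```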
  • Preferably, multiple recordings are compared with regard to their acoustic content, and recordings with sufficiently similar acoustic content are grouped together.
  • In this way, multiple groups of similar-sounding recordings are preferably formed.
  • This grouping is done, for example, with the aid of the original classification by the hearing aid's classifier.
  • Additionally or alternatively, pattern recognition methods are used (preferably on the corresponding database) which make possible, and preferably also carry out, a comparison of the acoustic content of different recordings.
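  • A minimal sketch of such a grouping, using a crude energy fingerprint and a greedy similarity threshold as a stand-in for the pattern recognition methods mentioned above; all details are illustrative assumptions.

```python
# Sketch: grouping recordings with sufficiently similar acoustic content.
from math import sqrt

def fingerprint(samples, bands=8):
    """Crude energy profile over equal time bands (a real system would use
    spectral features, e.g., filter-bank or MFCC energies)."""
    n = max(len(samples) // bands, 1)
    return [sum(abs(x) for x in samples[i*n:(i+1)*n]) / n for i in range(bands)]

def similarity(a, b):
    dot = sum(x*y for x, y in zip(a, b))
    norm = sqrt(sum(x*x for x in a)) * sqrt(sum(y*y for y in b))
    return dot / norm if norm else 0.0

def group(recordings, threshold=0.95):
    groups = []   # list of (representative fingerprint, [recordings])
    for rec in recordings:
        fp = fingerprint(rec)
        for rep, members in groups:
            if similarity(rep, fp) >= threshold:
                members.append(rec)
                break
        else:
            groups.append((fp, [rec]))
    return [members for _, members in groups]
```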
  • The aforementioned controller is optionally a central microcontroller.
  • Alternatively, the controller is formed by a distributed system, such as one or more databases and the terminal device, wherein the indications of the user are evaluated and the training data for the classifier is compiled, in particular, on the corresponding database.
  • The above-described software application preferably runs on the terminal device, or is at least installed on it and able to run.
  • Preferably, the hearing system (especially as one of several terminal devices) comprises a hearing aid with a signal processor, on which the above-described classifier is preferably implemented, especially in the form of a (self-learning and trained) algorithm.
  • Optionally, the hearing system comprises a central classifier, which is designed in particular as a "cloud classifier" and implemented preferably on the aforementioned central database, or optionally on another central database.
  • In this case, the hearing aid preferably sends a recording (especially together with the meta-data) to the cloud classifier in the above-described manner.
  • The recording is analyzed by this classifier—making use of system resources usually more extensive than those of the hearing aid (especially computing power and/or memory)—and a classification result is sent back to the hearing aid.
  • Expediently, snippets containing speech are furthermore discarded, or optionally the voice component is removed.
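  • For illustration, the exchange with such a cloud classifier could look as follows; the endpoint, payload layout, and field names are assumptions, not a documented interface.

```python
# Sketch: hearing aid / cloud classifier exchange. The URL and payload layout
# are illustrative placeholders, not a real API.
import json
import urllib.request

CLOUD_CLASSIFIER_URL = "https://example.invalid/classify"  # placeholder

def classify_in_cloud(snippet_bytes, metadata_dict):
    payload = json.dumps({
        "snippet": snippet_bytes.hex(),   # audio snippet, hex-encoded
        "metadata": metadata_dict,        # e.g., the SnippetMetadata fields above
    }).encode()
    request = urllib.request.Request(
        CLOUD_CLASSIFIER_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result["listening_situation"]  # classification result sent back
```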
  • FIG. 1 is an illustration of a hearing system
  • FIG. 2 is a schematic flow chart of a method carried out by the hearing system.
  • The hearing system 1 includes multiple individual subsystems 2, each one associated with a hearing aid wearer (or user, not otherwise represented).
  • Each subsystem 2 contains at least one hearing assistance device, referred to in short as "hearing aid 3", and a mobile device, represented here as a smartphone 4.
  • The respective smartphone 4 forms a terminal device of the respective hearing aid wearer and is adapted for bidirectional communication with the respective hearing aid 3. In normal operation, the respective hearing aid 3 is also connected to its associated smartphone 4.
  • A control application (in short: "control app" 6) is installed on the respective smartphone 4, by means of which settings can be made on the hearing aid 3—for example, changing the loudness and/or switching the listening programs.
  • The hearing system 1 furthermore contains at least one central database 8, which is adapted to communicate with the respective smartphone 4 of the respective subsystem 2—especially via the Internet.
  • The database 8 provides information for the respective hearing aid 3, such as firmware updates, and also the control app 6 for download.
  • A listening situation classifier (in short: "classifier") is implemented on the database 8, so that a central classification can be performed for the subsystems 2 of the hearing system 1.
  • To this end, the hearing aids 3 send data on the current acoustic situations to the database 8 via the smartphones 4.
  • The classifier there analyzes this data and sends back to the smartphone 4 a classification result, specifically a recognized listening situation, i.e., one which is stored in memory and "taught" or trained with the aid of examples—which in turn is associated with a listening program stored on the hearing aid 3.
  • The smartphone 4 relays this listening situation to the hearing aid 3, and the latter switches its signal processing to the recognized listening situation (it changes the listening program, e.g., from a "speech" listening program to a "television" listening program).
  • Optionally, the database 8 sends multiple possible "program proposals" for the recognized listening situation to the smartphone 4 (such as "one-on-one conversation", "television", "listening to music"), so that the respective hearing aid wearer can select a listening program which appears suitable to him. This selection is then sent to the hearing aid 3.
  • In an alternative embodiment, the above-described classifier is implemented separately in each hearing aid 3.
  • The classification of the current acoustic situation is thus done locally (or "offline", since no connection to the database 8 is required).
  • Games applications are also provided by the database 8 for download, especially games involving listening itself. These games are provided and designed especially for game-based listening training or for hearing tests.
  • A games application (in short: games app 10) is installed on each of the smartphones 4 shown. This games app 10 is also provided for the listening training.
  • During the course of the game, recordings are played (i.e., acoustic signals are presented) to the respective hearing aid wearer—in the exemplary embodiment shown, by means of the respective hearing aid 3—which the hearing aid wearer has to identify.
  • Solution proposals are also shown to the hearing aid wearer for selection on the display 12 of the corresponding smartphone 4 (specifically, one correct solution and at least one alternative but "wrong" answer). If the hearing aid wearer correctly recognizes the content of the recording (e.g., the sound of a drum), the degree of difficulty is increased in the next "round" or on a higher level (e.g., only after several correct answers). For example, the acoustic content of the recording becomes more complex (e.g., two instruments) and/or the solution proposals become "easier" to confuse with the correct solution.
  • In the event of an unsatisfactory classification of the current acoustic situation for the corresponding hearing aid wearer, or if the classifier cannot match the situation to a stored listening situation with sufficient probability, the method represented in FIG. 2 is carried out by the hearing system 1 in order to enable further learning of such an acoustic situation, hereinafter called an "unknown" situation.
  • In a first method step 20, a recording of the current acoustic situation is triggered.
  • Specifically, a recording of the current acoustic situation is continuously saved temporarily, over sliding time slots, by means of at least one microphone of the corresponding hearing aid 3.
  • If no triggering occurs, the recording is discarded again. But if triggering occurs, the recording is saved as a "snippet" covering the current time slot (i.e., a period of time before and after the triggering) and sent to the database 8 via the smartphone 4, specifically via the control app 6.
  • The particular recording is accompanied by meta-data containing the signal/noise ratio, the age of the hearing aid and of the microphone(s), the number of microphones used for the recording, and the result of the classification.
  • In a following method step 30, the recording (i.e., the snippet) is standardized, i.e., it is transformed if necessary into a data format which can be handled by all hearing aids 3 connected to the hearing system 1, and it is sorted into groups with other recordings by means of the classification relayed with the meta-data. Furthermore, the signal/noise ratio and the age of the microphone(s) are used to determine a quality value. If this quality value falls below a given value (e.g., because there is no useful signal component, or one which can hardly be determined), the recording is discarded from the database.
  • In a further method step 40, the recording is provided as an acoustic signal for retrieval by the games app 10.
  • When a hearing aid wearer plays the games app 10 on his smartphone 4, he is at first presented with an acoustic signal for a known acoustic situation in the manner described above (method step 50) and then, in a method step 60, he is asked to indicate the content of the acoustic signal (i.e., of the recording presented). With increasing degree of difficulty as described above, the method steps 50 and 60 are repeated during the game. As the game proceeds, the games app 10 (or also the database 8) increases a suitability value of this hearing aid wearer.
  • If the suitability value is sufficiently high, the hearing aid wearer is offered a "special level" in a method step 70, in which the hearing aid wearer has a chance to participate actively in the training of the classifiers.
  • If the hearing aid wearer accepts this offer, at least one of the recordings of the unknown acoustic situations is played to him.
  • The hearing aid wearer is then offered a free data-entry mask for his appraisal of the acoustic situation, or alternatively possible answers to choose from.
  • The answer of the hearing aid wearer is saved in the database 8.
  • When a sufficiently high number (specifically, a given number) of hearing aid wearers (such as 100, 500, 1000 or more) achieve a correspondingly high suitability value and then provide an answer for the same unknown acoustic situation, these answers for the respective unknown acoustic situation are compared by the database. If a percentage or an absolute number of matching answers exceeds a given value, in a method step 80 the database saves the corresponding recording as a known acoustic situation for a listening situation newly formed together with the answers, or assigns the recording, on the basis of the consistent answers, to an already existing listening situation as a further training example.
  • In this way, the training data for the classifiers is updated by means of real situations.
  • Such newly determined training examples or new listening situations are then used for training the classifiers with the updated (adapted) training data.
  • The updated training data is then applied, for example, during an update of the classifiers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Fuzzy Systems (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102019218808 2019-12-03
DE102019218808.7A DE102019218808B3 (de) 2019-12-03 2019-12-03 Method for training a listening situation classifier for a hearing aid

Publications (2)

Publication Number Publication Date
US20210168535A1 (en) 2021-06-03
US11310608B2 (en) 2022-04-19

Family

ID=73448795

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/110,509 Active US11310608B2 (en) 2019-12-03 2020-12-03 Method for training a listening situation classifier for a hearing aid and hearing system

Country Status (4)

Country Link
US (1) US11310608B2
EP (1) EP3833052A1
CN (1) CN112911478B
DE (1) DE102019218808B3


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021010997A1 (fr) 2019-07-17 2021-01-21 Systems and methods for verifying trigger keywords in acoustics-based digital assistant applications


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015024585A1 (fr) * 2013-08-20 2015-02-26 Hearing aid having an adaptive classifier
DE102015203288B3 (de) * 2015-02-24 2016-06-02 Method for determining wearer-specific usage data of a hearing aid, method for adapting hearing aid settings of a hearing aid, hearing aid system and setting unit for a hearing aid system
EP3269152B1 (fr) * 2015-03-13 2020-01-08 Method for determining useful characteristics for a hearing device on the basis of recorded sound classification data
DE102017205652B3 (de) * 2017-04-03 2018-06-14 Method for operating a hearing device and hearing device

Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6035050A (en) * 1996-06-21 2000-03-07 Siemens Audiologische Technik Gmbh Programmable hearing aid system and method for determining optimum parameter sets in a hearing aid
US7340231B2 (en) * 2001-10-05 2008-03-04 Oticon A/S Method of programming a communication device and a programmable communication device
US6862359B2 (en) * 2001-12-18 2005-03-01 Gn Resound A/S Hearing prosthesis with automatic classification of the listening environment
US7889879B2 (en) * 2002-05-21 2011-02-15 Cochlear Limited Programmable auditory prosthesis with trainable automatic adaptation to acoustic conditions
US20050105750A1 (en) * 2003-10-10 2005-05-19 Matthias Frohlich Method for retraining and operating a hearing aid
US7319769B2 (en) * 2004-12-09 2008-01-15 Phonak Ag Method to adjust parameters of a transfer function of a hearing device as well as hearing device
US7769702B2 (en) * 2005-02-09 2010-08-03 Bernafon Ag Method and system for training a hearing aid using a self-organising map
US8139778B2 (en) * 2006-09-29 2012-03-20 Siemens Audiologische Technik Gmbh Method for the time-controlled adjustment of a hearing apparatus and corresponding hearing apparatus
US8150044B2 (en) * 2006-12-31 2012-04-03 Personics Holdings Inc. Method and device configured for sound signature detection
US9131324B2 (en) * 2007-06-20 2015-09-08 Cochlear Limited Optimizing operational control of a hearing prosthesis
US8477972B2 (en) * 2008-03-27 2013-07-02 Phonak Ag Method for operating a hearing device
EP2255548B1 (fr) 2008-03-27 2013-05-08 Phonak AG Procédé pour faire fonctionner une prothèse auditive
US20110051963A1 (en) * 2009-08-28 2011-03-03 Siemens Medical Instruments Pte. Ltd. Method for fine-tuning a hearing aid and hearing aid
US20130070928A1 (en) * 2011-09-21 2013-03-21 Daniel P. W. Ellis Methods, systems, and media for mobile audio event recognition
US9031663B2 (en) * 2013-02-22 2015-05-12 Cochlear Limited Genetic algorithm based auditory training
US9191754B2 (en) 2013-03-26 2015-11-17 Sivantos Pte. Ltd. Method for automatically setting a piece of equipment and classifier
DE102013205357A1 (de) 2013-03-26 2014-10-02 Siemens Ag Verfahren zum automatischen Einstellen eines Geräts und Klassifikator
US10390152B2 (en) * 2013-08-20 2019-08-20 Widex A/S Hearing aid having a classifier
US20150271607A1 (en) * 2014-03-19 2015-09-24 Bose Corporation Crowd sourced recommendations for hearing assistance devices
US10560790B2 (en) * 2016-06-27 2020-02-11 Oticon A/S Method and a hearing device for improved separability of target sounds
US20180192208A1 (en) * 2016-12-30 2018-07-05 Starkey Laboratories, Inc. Listening experiences for smart environments using hearing devices
US10609494B2 (en) * 2017-08-14 2020-03-31 Sivantos Pte. Ltd. Method for operating a hearing device and hearing device
US10536786B1 (en) * 2018-06-27 2020-01-14 Google Llc Augmented environmental awareness system
DE102019203786A1 (de) 2019-03-20 2020-02-13 Sivantos Pte. Ltd. Hörgerätesystem
US20210176572A1 (en) * 2019-12-06 2021-06-10 Sivantos Pte. Ltd. Method for the environment-dependent operation of a hearing system and hearing system
US20210211814A1 (en) * 2020-01-07 2021-07-08 Pradeep Ram Tripathi Hearing improvement system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Barchiesi, Daniele; Giannoulis, Dimitrios; Stowell, Dan; Plumbley, Mark D.: "Acoustic Scene Classification: Classifying environments from the sounds they produce", IEEE Signal Processing Magazine, vol. 32, no. 3, May 1, 2015, pp. 16-34, XP011577488, ISSN: 1053-5888, DOI: 10.1109/MSP.2014.2326181 [retrieved Apr. 2, 2015].
Wagener, Kirsten Carola; Hansen, Martin; Ludvigsen, Carl: "Recording and Classification of the Acoustic Environment of Hearing Aid Users", Journal of the American Academy of Audiology, vol. 19, no. 4, Apr. 1, 2008, pp. 348-370, XP055795916, ISSN: 1050-0545, DOI: 10.3766/jaaa.19.4.7, URL: https://www.audiology.org/sites/default/files/journal/JAAA_19_04_06.pdf [retrieved Apr. 15, 2021].

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210195343A1 (en) * 2019-12-20 2021-06-24 Sivantos Pte Ltd. Method for adapting a hearing instrument and hearing system therefor
US11601765B2 (en) * 2019-12-20 2023-03-07 Sivantos Pte. Ltd. Method for adapting a hearing instrument and hearing system therefor

Also Published As

Publication number Publication date
US20210168535A1 (en) 2021-06-03
CN112911478B (zh) 2022-10-21
DE102019218808B3 (de) 2021-03-11
CN112911478A (zh) 2021-06-04
EP3833052A1 (fr) 2021-06-09

Similar Documents

Publication Publication Date Title
CN110072434B (zh) Use of acoustic biomarkers of sound to assist the use of hearing devices
US7853028B2 (en) Hearing aid and method for its adjustment
US11310608B2 (en) Method for training a listening situation classifier for a hearing aid and hearing system
US10643620B2 (en) Speech recognition method and apparatus using device information
US20130158977A1 (en) System and Method for Evaluating Speech Exposure
JP6084654B2 (ja) Speech recognition device, speech recognition system, terminal used in the speech recognition system, and method for generating a speaker identification model
US9952826B2 (en) Audio mixer system
US20100056951A1 (en) System and methods of subject classification based on assessed hearing capabilities
US11095995B2 (en) Hearing assist device fitting method, system, algorithm, software, performance testing and training
CN112689230A (zh) Method for operating a hearing device, and hearing device
CN110602624A (zh) Audio testing method and apparatus, storage medium and electronic device
US11882413B2 (en) System and method for personalized fitting of hearing aids
US10334376B2 (en) Hearing system with user-specific programming
JP6721365B2 (ja) Speech dictionary generation method, speech dictionary generation device, and speech dictionary generation program
WO1999031937A1 (fr) Automatic system for optimizing the settings of a hearing aid
Tait et al. The predictive value of measures of preverbal communicative behaviors in young deaf children with cochlear implants
AU2017202620A1 (en) Method for operating a hearing device
KR100925828B1 (ko) Method and apparatus for quantitative derivation of the sound quality of vehicle sound
KR102451956B1 (ko) Electronic device, method, and computer program for performing auditory training using music
KR20220089043A (ko) Method for training auditory perception ability
WO2009000238A3 (fr) Age verification by means of high-pitched sounds
Kobayashi et al. Performance Evaluation of an Ambient Noise Clustering Method for Objective Speech Intelligibility Estimation
CN112932471A (zh) Method for determining the hearing threshold of a test person
Faulkner Evaluating Speech Intelligibility with Processed Sound
TWI596955B (zh) Hearing aid with test function

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: SIVANTOS PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SCHOEN, SVEN;KUKLA, CHRISTOPH;BOLLMANN, ANDREAS;SIGNING DATES FROM 20201204 TO 20201207;REEL/FRAME:054607/0463

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE