EP4178228B1 - Verfahren und computerprogramm zum betrieb eines hörsystems, hörsystem und computerlesbares medium - Google Patents

Verfahren und computerprogramm zum betrieb eines hörsystems, hörsystem und computerlesbares medium

Info

Publication number
EP4178228B1
EP4178228B1 · Application EP21206983.5A
Authority
EP
European Patent Office
Prior art keywords
audio signal
visual object
sound
user
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP21206983.5A
Other languages
English (en)
French (fr)
Other versions
EP4178228A1 (de)
Inventor
Nicola HILDEBRAND
Sebastian GRIEPENTROG
Daniel VON HOLTEN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonova Holding AG
Original Assignee
Sonova AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonova AG filed Critical Sonova AG
Priority to DK21206983.5T: DK4178228T3 (da)
Priority to EP21206983.5A: EP4178228B1 (de)
Publication of EP4178228A1 (de)
Application granted
Publication of EP4178228B1 (de)
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/558Remote control, e.g. of amplification, frequency
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest

Definitions

  • the invention relates to a method and a computer program for operating a hearing system, to the hearing system, and to a computer-readable medium in which the computer program is stored.
  • Hearing devices are generally small and complex devices.
  • a typical hearing device comprises a processing unit, e.g. including one or more processors, a sound input module, e.g. a microphone, a sound output module, e.g. a loudspeaker, a memory communicatively coupled to the processing unit, a housing, and other electronic and mechanical components.
  • Some example hearing devices are Behind-The-Ear (BTE), Receiver-In-Canal (RIC), In-The-Ear (ITE), Completely-In-Canal (CIC), and Invisible-In-The-Canal (IIC) devices.
  • a user may prefer one of these hearing device types over another based on hearing loss, aesthetic preferences, lifestyle needs, and budget.
  • When hearing device users use their hearing device in everyday life, they may notice that the hearing device does not always support them well enough, be it that the hearing device setting is not correctly set for the current acoustic environment, e.g. with respect to audio sources in the environment and/or a listening activity of the user, or that an automatic classifier does not classify the situation correctly based on an acoustic input only.
  • US 2016/277850 A1 describes a device, which includes a processor, at least one camera accessible to the processor, and memory accessible to the processor.
  • the memory bears instructions executable by the processor to identify, at least in part based on input from the at least one camera, a source of sound.
  • the instructions are also executable to, based at least in part on input from at least one microphone, execute beamforming and provide audio at a hearing aid comprising sound from the source.
  • US 2017/188173 A1 describes a method for presenting to a user of a wearable audio device a modified audio scene together with additional information related to the audio scene, comprising: capturing audio signals with a plurality of microphones; outputting an audio signal with a plurality of acoustical transducers; processing the captured audio signals, the processing comprising filtering, equalization, echoes processing and/or beamforming; separating audio sources from the processed audio signals; selecting at least one separated audio source; classifying at least one said selected audio source; retrieving additional information related to the classified audio source; presenting the additional information to the user.
  • WO 2021/038295 A1 describes a hearing aid system including a wearable camera; a microphone; and a processor.
  • the processor is programmed to receive images captured by the camera; receive audio signals representative of sounds received by the at least one microphone; determine a look direction of the user based on analysis of the images; determine an amplitude of a first audio signal associated with an individual or object in a region associated with the look direction of the user; determine an amplitude of a second audio signal from a region other than the look direction of the user; adjust the second amplitude in accordance with the first amplitude; and cause transmission of the second audio signal at the adjusted amplitude to a hearing interface device configured to provide sound to an ear of the user.
  • a first aspect relates to a method for operating a hearing system.
  • the hearing system comprises a hearing device configured to be worn at an ear of a user, a user device communicatively coupled to the hearing device and comprising a camera and a display, with the hearing device comprising at least one sound input module for generating an audio signal indicative of a sound detected in an environment of the hearing device, a first processing unit for modifying the audio signal, and at least one sound output module for outputting the modified audio signal.
  • the method comprises: receiving image data from the camera, the image data being representative of a scene in front of the camera; receiving an audio signal from the at least one sound input module, the audio signal being representative of the acoustic environment of the hearing device substantially at a time the image data have been captured, wherein the acoustic environment comprises at least one audio source and wherein the audio signal is at least in part representative for a sound from the audio source; and determining at least one visual object as the audio source, within the scene from the image data and the audio signal.
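The claimed steps above — receiving image data, receiving a time-aligned audio signal, and determining a visual object as the audio source — can be illustrated with a minimal sketch. This is purely hypothetical: the patent does not specify a matching strategy, and pairing audio sources with visual objects by estimated direction of arrival is just one plausible approach; all class and function names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class VisualObject:
    label: str
    azimuth_deg: float  # direction of the object within the camera scene

@dataclass
class AudioSource:
    azimuth_deg: float  # direction of arrival estimated from the audio signal

def match_sources_to_objects(objects, sources, max_angle_deg=15.0):
    """Assign each detected audio source to the visual object whose
    direction is closest, if the angular gap is within a tolerance."""
    matches = {}
    for i, src in enumerate(sources):
        best = min(objects, key=lambda o: abs(o.azimuth_deg - src.azimuth_deg))
        if abs(best.azimuth_deg - src.azimuth_deg) <= max_angle_deg:
            matches[i] = best.label
    return matches
```

A source at -18° would thus be assigned to a visual object detected at -20°, while a source with no nearby visual object stays unmatched.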
  • the method may be a computer-implemented method, which may be performed automatically by the hearing system.
  • the step of determining the at least one visual object as the audio source within the scene from the image data and the audio signal may be carried out by an artificial intelligence and/or a neural network.
  • the artificial intelligence or, respectively, the neural network may be trained with a data set comprising a large amount of image data representing different scenes with visual objects, wherein at least some of the visual objects are the audio sources, and a corresponding amount of audio signals associated and/or synchronized with the image data.
  • the hearing system may, for instance, comprise one or two hearing devices used by the same user.
  • One or both of the hearing devices may be worn on or in an ear of the user.
  • a hearing device may be a hearing aid, which may be adapted for compensating a hearing loss of the user.
  • a cochlear implant may be a hearing device or at least a part of it.
  • the hearing system may optionally further comprise at least one connected user device, such as a smartphone, smartwatch, smart glasses, or another device carried by the user or a personal computer of the user etc.
  • the visual objects may be determined from the image data by analysing the image data with the help of a database comprising several different objects and/or object classes.
  • the time the image data have been captured may be encoded in meta data accompanying the image data.
  • the image data may be captured and received in real time such that the time of receiving the image data automatically corresponds to the time the audio signal is captured and received.
  • the visual object may be selected by a touch on the display.
  • the audio signal is representative of the acoustic environment of the hearing device substantially at a time the image data have been captured, wherein “substantially” may mean in this context, that the audio signal is representative of the acoustic environment of the hearing device at the time the image data have been captured, that the audio signal is representative of the acoustic environment of the hearing device during a time interval in which the image data have been captured, or that the audio signal is representative of the acoustic environment of the hearing device during a time interval which overlaps a time interval during which the image data have been captured.
  • the audio signal may comprise meta data representing the time or time interval during which the corresponding sound of the acoustic environment has been captured.
  • the image may comprise meta data representing the time or time interval during which the image data have been captured. The audio signal and the image data may be synchronized by these meta data.
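Synchronizing the audio signal and the image data via their capture-time metadata, as described above, amounts to pairing items whose capture intervals overlap. A minimal sketch (illustrative only; the patent does not prescribe any data layout):

```python
def pair_audio_with_images(audio_segments, image_frames):
    """audio_segments and image_frames are lists of (start, end) capture
    intervals taken from the respective metadata.  Returns (audio index,
    image index) pairs whose intervals overlap in time."""
    pairs = []
    for i, (a_start, a_end) in enumerate(audio_segments):
        for j, (b_start, b_end) in enumerate(image_frames):
            if a_start <= b_end and b_start <= a_end:  # standard interval overlap
                pairs.append((i, j))
    return pairs
```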
  • a second aspect relates to the hearing system.
  • the hearing system comprises: the hearing device configured to be worn at an ear of a user and comprising the at least one sound input module for generating the audio signal, the first processing unit for modifying the audio signal, and the at least one sound output module for outputting the modified audio signal; and the user device communicatively coupled to the hearing device and comprising the display, the camera, and a second processing unit; wherein at least one control unit is coupled to the hearing device and the user device and is configured to carry out the above method.
  • the hearing system may further include, by way of example, a second hearing device worn by the same user. If the user device is a smartphone, the camera may be the camera implemented within the smartphone. Alternatively, the camera may be implemented in smart glasses, if the user device is the smart glasses.
  • a third aspect relates to a computer program for operating the hearing system, which program, when being executed by a processing unit, e.g. the first and/or second processing unit, is adapted to carry out the steps of the above method.
  • a fourth aspect relates to a computer-readable medium, in which the above computer program is stored.
  • a computer-readable medium may be a floppy disk, a hard disk, an USB (Universal Serial Bus) storage device, a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable Read Only Memory) or a FLASH memory.
  • a computer-readable medium may also be a data communication network, e.g. the Internet, which allows downloading a program code.
  • the computer-readable medium may be a non-transitory or transitory medium.
  • the computer program may be executed in the first processing unit of the hearing device, which hearing device, for example, may be carried by the person behind the ear.
  • the computer-readable medium may be a memory of this hearing device.
  • the computer program also may be executed by the second processing unit of the connected user device, such as a smartphone or any other type of mobile device, which may be a part of the hearing system, and the computer-readable medium may be a memory of the connected user device. It also may be that some steps of the method are performed by the hearing device and other steps of the method are performed by the connected user device.
  • Determining the at least one visual object as the audio source within the scene from the image data and the audio signal enables a quick and/or easy in-the-field setting of the hearing system for the user, in particular in complex acoustic environments.
  • in-the-field setting is becoming more and more important, as it has many advantages compared with the conventional hearing device setting in a sound booth.
  • the above method contributes to increasing the trust of a customer as it ensures a quick and easy solution for difficult listening situations, i.e. complex acoustic environments with e.g. several audio and/or noise sources.
  • the method further comprises classifying the acoustic environment from the received audio signal; modifying the audio signal in accordance with the classification; and determining the at least one audio source within the acoustic environment from the modified audio signal.
  • a set of feature parameters is selected as a determined sound program.
  • In an acoustic environment classification, the acoustic situation the wearer is in is classified and consequently categorized in order to automatically adjust the features and/or the values of parameters of these features in accordance with the current acoustic situation.
  • a feature activity may be logged such that it is logged which feature is active at which time.
  • Classifying the acoustic environment and modifying the audio signal in accordance with the classification by the corresponding sound program, including a set of features, contributes to identifying a signal-to-noise ratio, an object loudness, a type of noise or room acoustics, a pitch of the object or other information which may be required to choose the most effective sound cleaner or frequency dependent gain modification.
  • the method further comprises determining at least one object class of the determined visual object and labelling the marked visual object in the scene in accordance with the determined object class.
  • the labels assist the user in identifying the marked visual object.
  • the object class may be at least one of the group of human, animal, instrument, speaker, dishes, newspaper, car, and water. For each object class, auto-adjustments, macro modifications and/or a list of modifiers may be defined, which may have an impact on the sound from the corresponding sound source.
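The per-class auto-adjustments and modifier lists mentioned above could be held in a simple lookup table. The object class names follow the list in the text; the modifier names are invented for illustration:

```python
# Hypothetical mapping from determined object class to predefined
# modifiers; only the class names are taken from the text.
OBJECT_CLASS_MODIFIERS = {
    "human":      ["speech_enhancer", "beamformer"],
    "instrument": ["music_program"],
    "speaker":    ["media_streaming_gain"],
    "dishes":     ["impulse_noise_suppression"],
    "newspaper":  ["impulse_noise_suppression"],
    "car":        ["noise_canceller"],
    "water":      ["broadband_noise_reduction"],
}

def modifiers_for(object_class):
    """Return the predefined list of modifiers for an object class."""
    return OBJECT_CLASS_MODIFIERS.get(object_class, [])
```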
  • the method further comprises providing at least one input field on the display for the input of the user, with the input field being representative for at least one modification of the audio signal with respect to the sound from the audio source assigned to the selected visual object, wherein the audio signal is selectively modified with respect to the sound from the audio source assigned to the selected visual object, if the user activates the input field.
  • the input field may be activated by a direct pressure on the input field or by a gesture above, on or next to the input field.
  • the input field on the display represents an intuitive possibility for the user to quickly and easily set the preferred modification.
  • the input field is provided depending on the classification of the acoustic environment, with the input field being representative for at least one modification in accordance with the classification of the acoustic environment.
  • the input field is provided depending on the object class of the determined visual object, with the input field being representative for at least one modification in accordance with the object class of the determined visual object.
  • Providing the input field depending on the classification enables providing the user with the optimal and/or preferred option for modifying the audio signal in the current acoustic situation.
  • the input is representative for the user wishing to increase the volume of the sound from the selected visual object and/or to decrease the volume or effectuate a dampening of the sound from all other determined visual objects, or to decrease the volume or effectuate a dampening of the sound from the selected visual object and/or to increase the volume of the sound from all other determined visual objects.
  • the audio signal is selectively modified such that the volume of the audio source associated with the selected visual object is increased or, respectively, decreased, or that the volume of the audio sources of all other determined visual objects is decreased or, respectively, increased.
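The selective modification described above — boosting the selected source while dampening all others (or the reverse) — can be sketched as a per-source gain offset. The gain values and the data layout are assumptions, not taken from the patent:

```python
def apply_selective_gain(source_levels_db, selected, boost_db=6.0, cut_db=-6.0):
    """Increase the level of the selected audio source and dampen all
    other determined sources; levels are in dB, keyed by source name."""
    return {name: level + (boost_db if name == selected else cut_db)
            for name, level in source_levels_db.items()}
```

Calling the function with negative `boost_db` and positive `cut_db` would realize the opposite case, dampening the selected source instead.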
  • the method further comprises monitoring the visual object by the camera and stopping the selective modification of the audio signal with respect to the sound from the audio source assigned to the monitored visual object if the visual object disappears from a field of view of the camera.
  • the method further comprises detecting at least one gesture of the user on or above the display; and selecting the marked visual object in accordance with the gesture; and/or selectively modifying the audio signal with respect to the sound from the audio source associated with the selected visual object in accordance with the gesture. Detecting the gesture provides a very intuitive input possibility for the user.
  • the hearing system further comprises a remote server communicatively coupled to the hearing device and/or the user device and being configured to carry out at least a part of the above method.
  • Fig. 1 schematically shows a hearing system 10 according to an embodiment of the invention.
  • the hearing system 10 includes a hearing device 12 and a user device 14 connected to the hearing device 12.
  • the hearing device 12 is formed as a behind-the-ear device carried by a user (not shown) of the hearing device 12.
  • it is noted that the hearing device 12 is a specific embodiment and that the method described herein may also be performed with other types of hearing devices, such as e.g. an in-the-ear device or one or two of the hearing devices 12 mentioned above.
  • the user device 14 may be a smartphone, a tablet computer, and/or smart glasses.
  • the hearing device 12 comprises a part 15 behind the ear and a part 16 to be put in the ear channel of the user.
  • the part 15 and the part 16 are connected by a tube 18.
  • the part 15 comprises at least one sound input module 20, e.g. a microphone or a microphone array, a sound output module 22, such as a loudspeaker, and an input means 24, e.g. a knob, a button, or a touch-sensitive sensor, e.g. a capacitive sensor.
  • the sound input module 20 can detect a sound in the environment of the user and generate an audio signal indicative of the detected sound.
  • the sound output module 22 can output sound based on the audio signal modified by the hearing device 12, wherein the sound from the sound output module 22 is guided through the tube 18 to the part 16.
  • the input means 24 enables an input of the user into the hearing device 12, e.g. in order to power the hearing device 12 on or off, and/or for choosing a sound program or any other modification of the audio signal.
  • the user device 14 comprises a display 30, e.g. a touch-sensitive display, providing a graphical user interface 32 including a control element 34, e.g. a slider, which may be controlled via a touch on the display 30, and a camera 36.
  • the control element 34 may be referred to as input means of the user device 14.
  • the camera 36 may be a photo camera and/or video camera. If the user device 14 is the smart glasses, the user device 14 may comprise a knob or button instead of the display 30 and/or the graphical user interface 32.
  • Fig. 2 shows a block diagram of components of the hearing system 10 according to figure 1 .
  • the hearing device 12 comprises a first processing unit 40.
  • the first processing unit 40 is configured to receive the audio signal generated by the sound input module 20.
  • the hearing device 12 may include a sound processing module 42.
  • the sound processing module 42 may be implemented as a computer program executed by the first processing unit 40.
  • the sound processing module 42 may be configured to modify, in particular increase or decrease a volume of and/or delay, the audio signal generated by the sound input module 20, e.g. some frequencies or frequency ranges of the audio signal depending on parameter values of parameters, which influence the amplification, the damping and/or, respectively, the delay, e.g. in correspondence with a current sound program.
  • the parameter may be one or more of the group of frequency dependent gain, time constant for attack and release times of compressive gain, time constant for noise canceller, time constant for dereverberation algorithms, reverberation compensation, frequency dependent reverberation compensation, mixing ratio of channels, gain compression, gain shape/amplification scheme.
  • a set of one or more of these parameters and parameter values may correspond to a predetermined sound program, wherein different sound programs are characterized by correspondingly different parameters and parameter values.
  • the sound program may comprise a list of sound processing features.
  • the sound processing features may for example be a noise cancelling algorithm or a beamformer, whose strength can be increased to improve speech intelligibility, but at the cost of more and stronger processing artifacts.
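A sound program, as described above, is essentially a named set of parameter values together with a list of sound processing features. A hypothetical sketch — the parameter names, band layout, and values are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class SoundProgram:
    """A predetermined set of parameter values and sound processing
    features characterizing one sound program."""
    name: str
    frequency_dependent_gain_db: dict      # gain per frequency band
    noise_canceller_strength: float        # 0.0 (off) .. 1.0 (max)
    beamformer_strength: float             # 0.0 (off) .. 1.0 (max)
    features: list = field(default_factory=list)

# Hypothetical program associated with a "speech in noise" class
SPEECH_IN_NOISE = SoundProgram(
    name="speech_in_noise",
    frequency_dependent_gain_db={"low": 0.0, "mid": 3.0, "high": 5.0},
    noise_canceller_strength=0.8,
    beamformer_strength=0.85,
    features=["noise_canceller", "beamformer"],
)
```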
  • the sound output module 22 generates sound from the modified audio signal and the sound is guided through the tube 18 and the in-the-ear part 16 into the ear channel of the user.
  • the hearing device 12 may include a control module 44.
  • the control module 44 may be implemented as a computer program executed by the first processing unit 40.
  • the control module 44 may be configured for adjusting the parameters of the sound processing module 42, e.g. such that an output volume of the sound signal is adjusted based on an input volume.
  • the user may select a modifier (such as bass, treble, noise suppression, dynamic volume, etc.) and levels and/or values of the modifiers with the input means 24. From this modifier, an adjustment command may be created and processed as described above and below.
  • processing parameters may be determined based on the adjustment command and based on this, for example, the frequency dependent gain and the dynamic volume of the sound processing module 42 may be changed.
  • the first memory 50 may be implemented by any suitable type of storage medium, in particular a non-transitory computer-readable medium, and can be configured to maintain, e.g. store, data controlled by the first processing unit 40, in particular data generated, accessed, modified and/or otherwise used by the first processing unit 40.
  • the first memory 50 may also be configured to store instructions for operating the hearing device 12 and/or the user device 14 that can be executed by the first processing unit 40, in particular an algorithm and/or a software that can be accessed and executed by the first processing unit 40.
  • a sound source detector 46 may be implemented in a computer program executed by the first processing unit 40.
  • the sound source detector 46 is configured to determine the at least one sound source from the audio signal.
  • the sound source detector 46 may be configured to determine a spatial relationship between the hearing device 12 and the corresponding sound source.
  • the spatial relationship may be given by a direction and/or a distance from the hearing device 12 to the corresponding audio source, wherein the audio signal may be a stereo-signal and the direction and/or distance may be determined by different arrival times of the sound waves from one audio source at two different sound input modules 20 of the hearing device 12 and/or a second hearing device 12 worn by the same user.
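The direction estimation from different arrival times at two sound input modules, described above, follows the standard far-field time-difference-of-arrival geometry: for microphone spacing d, speed of sound c, and time difference Δt, the azimuth is arcsin(Δt·c/d). A sketch with assumed values (a binaural spacing of roughly 16 cm):

```python
import math

def azimuth_from_itd(itd_seconds, mic_distance_m=0.16, speed_of_sound=343.0):
    """Estimate the direction of arrival (degrees, 0 = straight ahead)
    from the inter-microphone time difference, assuming a far-field
    source; spacing and speed of sound are assumed example values."""
    x = itd_seconds * speed_of_sound / mic_distance_m
    x = max(-1.0, min(1.0, x))  # clamp against rounding beyond ±1
    return math.degrees(math.asin(x))
```

A time difference of zero yields a source straight ahead; the maximum possible difference d/c corresponds to a source at ±90°.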
  • a first classifier 48 may be implemented in a computer program executed by the first processing unit 40.
  • the first classifier 48 can be configured to evaluate the audio signal generated by the sound input module 20.
  • the first classifier 48 may be configured to classify the audio signal generated by the sound input module 20 by assigning the audio signal to a class from a plurality of predetermined classes.
  • the first classifier 48 may be configured to determine a characteristic of the audio signal generated by the sound input module 20, wherein the audio signal is assigned to the class depending on the determined characteristic. For instance, the first classifier 48 may be configured to identify one or more predetermined classification values based on the audio signal from the sound input module 20.
  • the classification may be based on a statistical evaluation of the audio signal and/or a machine learning (ML) algorithm that has been trained to classify the ambient sound, e.g. by a training set comprising a large number of audio signals and associated classes of the corresponding acoustic environment. So, the ML algorithm may be trained with several audio signals of acoustic environments, wherein the corresponding classification is known.
  • the first classifier 48 may be configured to identify at least one signal feature in the audio signal generated by the sound input module 20, wherein the characteristic determined from the audio signal corresponds to a presence and/or absence of the signal feature.
  • Exemplary characteristics include, but are not limited to, a mean-squared signal power, a standard deviation of a signal envelope, a mel-frequency cepstrum (MFC), a mel-frequency cepstrum coefficient (MFCC), a delta mel-frequency cepstrum coefficient (delta MFCC), a spectral centroid such as a power spectrum centroid, a standard deviation of the centroid, a spectral entropy such as a power spectrum entropy, a zero crossing rate (ZCR), a standard deviation of the ZCR, a broadband envelope correlation lag and/or peak, and a four-band envelope correlation lag and/or peak.
  • the first classifier 48 may determine the characteristic from the audio signal using one or more algorithms that identify and/or use zero crossing rates, amplitude histograms, auto correlation functions, spectral analysis, amplitude modulation spectrums, spectral centroids, slopes, roll-offs, auto correlation functions, and/or the like.
  • the characteristic determined from the audio signal is characteristic of an ambient noise in an environment of the user, e.g. a noise level, and/or a speech, e.g. a speech level.
  • the first classifier 48 may be configured to divide the audio signal into a number of segments and to determine the characteristic from a particular segment, e.g. by extracting at least one signal feature from the segment. The extracted feature may be processed to assign the audio signal to the corresponding class.
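Two of the characteristics listed above — the mean-squared signal power and the zero crossing rate (ZCR) — are straightforward to compute on such a segment. A minimal sketch on a plain list of samples:

```python
def mean_squared_power(samples):
    """Mean-squared signal power of one audio segment."""
    return sum(s * s for s in samples) / len(samples)

def zero_crossing_rate(samples):
    """Fraction of consecutive sample pairs whose sign differs."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    return crossings / (len(samples) - 1)
```

A segment alternating between positive and negative samples has a ZCR of 1.0; a pure DC segment has a ZCR of 0.0.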
  • the first classifier 48 may be further configured to assign, depending on the determined characteristic, the audio signal generated by the sound input module 20 to a class of at least two predetermined classes.
  • the classes may represent a specific content in the audio signal.
  • the classes may relate to a speaking activity of the user and/or another person and/or an acoustic environment of the user.
  • Exemplary classes include, but are not limited to, low ambient noise, high ambient noise, traffic noise, music, machine noise, babble noise, public area noise, background noise, speech, nonspeech, speech in quiet, speech in babble, speech in noise, speech in loud noise, speech from the user, speech from a significant other, background speech, speech from multiple sources, calm situation and/or the like.
  • a first class representative of a larger speech content may be assigned to the audio signal when the characteristic is above a threshold, and a second class representative of a smaller speech content may be assigned to the audio signal when the characteristic is below the threshold.
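The threshold-based class assignment described above reduces to a single comparison; the class names "speech" and "nonspeech" are taken from the exemplary class list, while the threshold value is an assumption:

```python
def assign_speech_class(characteristic, threshold=0.5):
    """Assign the class with larger speech content when a speech-related
    characteristic exceeds the threshold, else the other class."""
    return "speech" if characteristic > threshold else "nonspeech"
```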
  • At least two of the classes can be associated with different sound programs, in particular with different sound processing parameters, which may be applied by the sound processing module 42 for modifying the audio signal.
  • the class assigned to the audio signal which may correspond to a classification value, may be provided to the control module 44 in order to select the associated audio processing parameters, in particular the associated sound program, which may be stored in the first memory 50.
  • the class assigned to the audio signal may thus be used to determine the sound program, which may be automatically used by the hearing device 12, in particular depending on the audio signal received from the sound input module 20.
  • the hearing device 12 may further comprise a first transceiver 52.
  • the first transceiver 52 may be configured for a wireless data communication with a remote server 72. Additionally or alternatively, the first transceiver 52 may be adapted for a wireless data communication with a second transceiver 64 of the user device 14.
  • the first and/or the second transceiver 52, 64 each may be e.g. a Bluetooth or RFID radio chip.
  • the user device 14 which may be connected to the hearing device 12, may comprise a second processing unit 60, a second memory 62, a second transceiver 64, a second classifier 66 and/or a visual object detector 68.
  • the second classifier 66 may have the same functionality as the first classifier 48 explained above.
  • the second classifier 66 may be arranged alternatively or additionally to the first classifier 48 of the hearing device 12.
  • the second classifier 66 may be configured to classify the acoustic environment of the user and the user device 14 depending on the received audio signal, as explained above with respect to the first classifier 48, wherein the acoustic environment of the user and the user device 14 corresponds to the acoustic environment of the hearing device 12 and wherein the audio signal may be forwarded from the hearing device 12 to the user device 14.
  • a set of adjustable audio sources and corresponding object classes may be predefined, like e.g. human, instrument, speaker, dishes, newspaper, car, water, noise, etc.
  • auto-adjustments, macro modifications and/or a list of modifiers may be predefined, which may have an impact on the corresponding sound object.
  • An exemplary auto-adjustment may be decreasing an overall gain, e.g. by 1.5 dB, increasing a noise reduction strength, e.g. to 0.8, and increasing a beamformer strength, e.g. to 0.85.
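The exemplary auto-adjustment above can be expressed directly as a parameter update; the dictionary layout and parameter names are assumptions, while the numeric values follow the text:

```python
def apply_auto_adjustment(params):
    """Exemplary auto-adjustment: lower the overall gain by 1.5 dB and
    raise the noise reduction and beamformer strengths to at least
    0.8 and 0.85, respectively."""
    out = dict(params)
    out["overall_gain_db"] -= 1.5
    out["noise_reduction_strength"] = max(out["noise_reduction_strength"], 0.8)
    out["beamformer_strength"] = max(out["beamformer_strength"], 0.85)
    return out
```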
  • it may be that the above-mentioned modifiers and their levels and/or values are adjusted with the user device 14 and/or that the adjustment command is generated with the user device 14.
  • This may be performed with a computer program run in the second processing unit 60 and stored in the second memory 62 of the user device 14.
  • This computer program may also provide the graphical user interface 32 on the display 30 of the connected user device 14.
  • the graphical user interface 32 may comprise the control element 34, such as a slider.
  • an adjustment command may be generated, which will change the sound processing of the hearing device 12 as described above and below.
  • the user may adjust the modifier with the hearing device 12 itself, for example via the input means 24.
  • the hearing device 12 and/or the user device 14 may communicate with each other and/or with the remote server 72 via the Internet 70.
  • the method explained below with respect to figures 3 and/or 4 may be carried out at least in part by the remote server 72.
  • processing tasks which require a large amount of processing resources may be outsourced from the hearing device 12 and/or the user device 14 to the remote server 72.
  • the determination of the visual objects from the image data and/or of the audio sources from the audio signal may be outsourced to the remote server 72.
  • the processing units (not shown) of the remote server 72 may be used at least in part as the controller for controlling the hearing device 12 and/or the user device 14.
  • Fig. 3 shows a flow diagram of a method for operating the hearing system 10.
  • the method may be carried out by the first and/or the second processing unit 40, 60 and/or by the remote server 72, wherein some of the steps of the method may be carried out by the first and/or the second processing unit 40, 60 and/or some other steps of the method may be carried out by the remote server 72.
  • image data from the camera 36 are received, e.g. by the first or second processing unit 40.
  • the image data are representative for a scene 80 in front of the user.
  • a picture or video of the area in front of the user may be taken by the camera 36, and the camera 36 generates the image data representing the scene 80 in front of the user.
  • an audio signal from the at least one sound input module 20 may be received, e.g. by the first or second processing unit 40.
  • the audio signal is representative for the acoustic environment of the user at a time the image data have been captured.
  • the acoustic environment comprises at least one audio source and the audio signal is at least in part representative for a sound from the audio source.
  • In a step S8, the scene 80 is displayed on the display 30.
  • Fig. 5 shows an example of the scene 80 including several visual objects 88.
  • the scene 80 comprises four persons 82, wherein two of the persons 82 are seen in the upper left in the background of the scene 80, one person 82 is shown at the right in the background of the scene 80 and one person 82 is shown in the middle in the front of the scene 80.
  • the scene 80 comprises an instrument 84 and a speaker 86 both representing visual objects 88 within the scene 80 and potential audio sources.
  • In a step S10, at least one determined visual object 88 is marked within the scene 80. Further, the user may be prompted to select at least one marked visual object 88 within the scene 80. If one of the visual objects 88 is detected as an adjustable audio source, the user should be made aware of it. One way to indicate a detected audio source is to highlight the corresponding visual object 88 within the scene 80. This may be done by augmenting a boundary around the detected visual object 88 within the scene 80. Another way to indicate the detected visual object 88 acting as an audio source is to show a corresponding object-related modifier directly within the scene 80 (see figures 9 and/or 10).
  • Fig. 6 shows the scene 80 of figure 5 with several marked visual objects 88.
  • the persons 82, the instrument 84 and the speaker 86 are identified and marked as visual objects 88 potentially acting as audio sources.
  • the visual objects 88 are marked by the fine dashed rectangles, as an example.
  • In a step S12, an input of the user is received.
  • the input is representative for the user having selected at least one of the marked visual objects 88.
  • Each marked visual object 88 which has been selected by the user is referred to as a selected visual object 90 in the following.
  • the input is further representative for the user wishing to selectively modify the sound from the audio source associated with the selected visual object 90.
  • the input may be representative for the user wishing to increase the volume of the sound from the selected visual object 90 and/or to decrease the volume or effectuate a dampening of the sound from all other determined visual objects 88.
  • the input may be representative for the user wishing to decrease the volume or effectuate a dampening of the sound from the selected visual object 90 and/or to increase the volume of the sound from all other determined visual objects 88.
  • if the visual objects 88 potentially acting as an audio source are highlighted within the scene 80, e.g. with an augmented boundary around the corresponding visual object 88, the augmented boundary itself may serve as a hitbox for selecting the corresponding visual object 88.
  • if the object-based modifier is shown directly when a visual object 88 acting as an audio source is detected within the scene 80, this step becomes obsolete, because the user automatically selects one of the visual objects 88 when he/she activates the corresponding object-based modifier.
  • Fig. 7 shows the scene of figure 6 with two selected visual objects 90.
  • the instrument 84 and the speaker 86 are marked as selected visual objects 90, e.g. by coarse dashed rectangles.
  • In a step S16, the modified audio signal is outputted to the user.
  • the method further comprises the step of monitoring the visual object(s) 88 by the camera 36 and stopping the selective modification of the audio signal with respect to the sound from the audio source assigned to the monitored visual object 88, if the visual object 88 disappears from a field of view of the camera 36.
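The selection, modification and monitoring steps above can be sketched as a minimal, hypothetical Python model; all class and function names are invented for illustration and do not reflect the actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class VisualObject:
    """A determined visual object 88, identified by an id."""
    object_id: int
    selected: bool = False

@dataclass
class HearingSystemState:
    # object_id -> active modification (here modeled as a gain offset in dB)
    modifications: dict = field(default_factory=dict)

def on_user_selection(state: HearingSystemState, obj: VisualObject,
                      gain_db: float) -> None:
    """Step S12: the user selects a marked object and requests a
    selective modification of the sound from its audio source."""
    obj.selected = True
    state.modifications[obj.object_id] = gain_db

def on_camera_frame(state: HearingSystemState, visible_ids: set) -> None:
    """Monitoring step: stop modifying the sound of objects that have
    disappeared from the camera's field of view."""
    for object_id in list(state.modifications):
        if object_id not in visible_ids:
            del state.modifications[object_id]
```

The active `modifications` table would then be consulted when rendering the modified audio signal in step S16.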
  • the above method may further comprise the steps of classifying the acoustic environment from the received audio signal, modifying the audio signal in accordance with the corresponding classification, and step S6, e.g. determining the at least one audio source within the acoustic environment, may be carried out using the modified audio signal.
  • a combination of a visual and an acoustic classification may be even better able to provide the optimal adjustment parameter. Therefore, the above method may further comprise the steps of determining at least one object class of the determined visual object 88.
  • a list of adjustable visual objects acting as audio sources or corresponding sound object classes may be available in a corresponding database stored within the first and/or second memory 50, 62 and/or within the remote server 72.
  • the corresponding visual and acoustic object detection algorithm may be trained to estimate the most probable and effective sound object class and its adjustment suggestion. This adjustment suggestion may be applied automatically or offered to the user for manual adjustment.
  • Fig. 4 shows a flow diagram of a sub-method of the method of figure 3 .
  • figure 4 shows a sub-routine of the above method in case the method is implemented by a "traditional" algorithm and not by an artificial intelligence and/or a neural network.
  • Step S6, i.e. the step of determining at least one visual object 88 as the audio source within the scene 80 from the image data and the audio signal, may comprise the following steps S18, S20 and S22.
  • In a step S18, at least one visual object 88, which is a potential audio source, is determined within the scene 80 from the image data. For example, step S18 comprises determining a first spatial relationship between the camera 36 and the visual object 88.
  • In a step S20, at least one audio source is determined within the acoustic environment from the audio signal. For example, step S20 comprises determining a second spatial relationship between the hearing device 12 and the audio source.
  • In a step S22, at least one determined visual object 88 is associated with at least one determined audio source.
  • the first spatial relationships of all determined visual objects 88 may be compared with the second spatial relationships of all determined sound sources. Then, that visual object 88 of all determined visual objects 88 may be associated with that audio source of all determined sound sources such that the corresponding first and second spatial relationship fulfil a predetermined requirement.
  • the predetermined requirement may be the "best fit" between the first spatial relationship (between the camera and the visual object) and the second spatial relationship (between the hearing device and the audio source).
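One way to realize such a "best fit" requirement, assuming both spatial relationships are reduced to azimuth angles (an assumption; the description leaves the concrete representation open), is a greedy nearest-direction matching. The function names and the 20° threshold below are illustrative only:

```python
def angular_difference(a_deg: float, b_deg: float) -> float:
    """Smallest absolute difference between two azimuth angles in degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

def associate(visual_azimuths, source_azimuths, max_diff_deg=20.0):
    """Greedy 'best fit': pair each determined visual object with the
    not-yet-used audio source whose estimated direction matches best,
    provided the mismatch stays within max_diff_deg.
    Returns a mapping {visual_index: source_index}."""
    pairs = {}
    used = set()
    for vi, va in enumerate(visual_azimuths):
        best, best_d = None, max_diff_deg
        for si, sa in enumerate(source_azimuths):
            if si in used:
                continue
            d = angular_difference(va, sa)
            if d <= best_d:
                best, best_d = si, d
        if best is not None:
            pairs[vi] = best
            used.add(best)
    return pairs
```

Visual objects whose direction matches no estimated source direction (and vice versa) simply remain unassociated.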
  • Fig. 8 shows another example of a scene 80 including several marked and labelled visual objects 88 in accordance with the present invention, in particular in accordance with the above method.
  • all marked visual objects 88 within the scene 80 are labelled in accordance with the determined object class.
  • the woman in the background in the upper left has the label 92 "Female",
  • the man in the middle in the front also has the label "Male",
  • the man in the background on the right has the label "Male", and
  • the clapping hands of the man in the background on the right have the label "Clapping".
  • the labelling may be carried out within steps S8 or S10 of the above method.
  • one, two or more state indicators 94 may be shown within the scene 80.
  • the state indicators 94 may indicate the classification of the acoustic environment, if the acoustic environment is classified.
  • the state indicators 94 may be representative for a noisy acoustic environment ("Noisy" in figure 8) and/or for the acoustic environment being within a room or house (house-symbol in figure 8).
  • the above method further comprises the step of providing the at least one input field 96 on the display 30 for the input of the user such that the input field 96 is representative for at least one modification of the audio signal with respect to the sound from the audio source assigned to the selected visual object 90, wherein the audio signal is selectively modified with respect to the sound from the audio source assigned to the selected visual object 90, if the user activates the input field 96.
  • an input field 96 may be provided on the display 30 within the scene 80.
  • the input field 96 may comprise one, two or more, e.g. four, buttons 98. The user may use the input field 96 and in particular the buttons 98 for the user input of step S12 of the above method.
  • an overview of the detected visual objects 88 potentially acting as an audio source and/or the corresponding recommended sound adjustments, which may be performed in the background of a sound object-based modifier (see figure 11 ) may be shown to the user on the display 30, e.g. within the scene 80. In this way, the user can be instructed how to self-adjust the sound of the audio source corresponding to certain visual objects 88 properly.
  • Fig. 9 shows the scene of figure 5 and two examples of input fields 96, wherein the input fields 96 of figure 9 may represent object-based modifiers.
  • the input fields 96 each comprise a label 92 and two buttons 98 for increasing and decreasing, respectively, the value of one of the parameters for modifying the audio signal with respect to the sound from the audio source.
  • one or more of the above input fields 96 are provided depending on the classification of the acoustic environment, with the corresponding input field 96 being representative for at least one modification in accordance with the classification of the acoustic environment. Additionally, the corresponding input field 96 is provided depending on the object class of the determined visual object 88, with the input field 96 being representative for at least one modification in accordance with the object class of the determined visual object 88, wherein in this case the input field 96 may be also referred to as object-based modifier.
  • Fig. 10 shows another example of a scene 80 including a labelled and selected visual object 90 and several examples for selecting a proper modification of the corresponding audio source.
  • the above method may further comprise the steps of detecting at least one gesture 102 of the user on or above the display 30, wherein the marked visual object 88 may be selected in accordance with the gesture 102 and/or the audio signal may be selectively modified with respect to the sound from the audio source associated with the selected visual object 90 in accordance with the gesture 102.
  • the gesture 102 may be a movement of the thumb relative to the forefinger, wherein an increase of the distance between the thumb and the forefinger may be representative for an increase of the value of the corresponding parameter for modifying the audio signal with respect to the selected visual object 90 and/or a decrease of the distance between the thumb and the forefinger may be representative for a decrease of the value of the corresponding parameter for modifying the audio signal with respect to the selected visual object 90.
  • a beamformer symbol 104 e.g. a triangle, may be laid over the selected visual object 90 and the user may increase or decrease an upper base of the triangle and as such the value of the corresponding parameter by the gesture 102.
  • the slider 100 may be provided on the display 30 within the scene 80, wherein the slider 100 may be used to increase or decrease the value of one of the parameters for modifying the audio signal.
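A minimal sketch of how the pinch gesture 102 could be mapped to a modification parameter, assuming the user interface framework reports the thumb-forefinger distance (the function name and the sensitivity value are hypothetical):

```python
def pinch_to_parameter(value: float, start_distance: float,
                       current_distance: float, sensitivity: float = 0.5,
                       lo: float = 0.0, hi: float = 1.0) -> float:
    """Map a thumb-forefinger pinch gesture to a parameter change:
    widening the pinch increases the parameter value, narrowing it
    decreases the value; the result is clamped to [lo, hi]."""
    delta = (current_distance - start_distance) * sensitivity
    return max(lo, min(hi, value + delta))
```

The same mapping could drive the upper base of the beamformer symbol 104 or the slider 100, so that all three input variants adjust the same underlying parameter.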
  • FIG. 11 shows an example of a visual object-based audio signal modifier implementation.
  • figure 11 shows an example of a visual object Obj_k acting as an audio source, wherein the visual object Obj_k may be taken from a lookup table.
  • the corresponding acoustical information is analysed and the most suitable sub-object Obj_k,1, Obj_k,2, Obj_k,3 with its set of weights w_a, ..., w_g may be chosen, e.g. depending on the corresponding acoustical context.
  • each visual object Obj_k may have an individual set of weights w_a, ..., w_g, which depends also on the acoustical information of the visual object.
  • each visual object Obj_k may have the same modification possibilities Mod_a, ..., Mod_g.
  • the relevant modification parameters will be chosen.
  • One possible realization could be the set of weights w_a, ..., w_g of the predefined modifiers Mod_a, ..., Mod_g as shown in figure 11, wherein the weights w_a, ..., w_g depend on the detected visual object Obj_k.
  • If the modifier is expected to help for this purpose, the appropriate weights w_a, ..., w_g may get a rather high value close to 1, which means that the visual object-based modifier may have a high impact on this modifier for this specific visual object or object class. If the modifier is expected to not help for this purpose, the appropriate weights w_a, ..., w_g may get a rather low value close to 0, which means that the sound object-based modifier will have a low impact or no impact on this modifier for this specific visual object and/or object class. It is also possible to set the weights w_a, ..., w_g in the mid-range so that there is a moderate impact on this modifier for a specific visual object and/or object class.
  • a level dependency or other properties of the acoustic environment may be added to consider that the impact of each modifier on the same visual object 88 may have a different strength depending on the sound properties of the visual object 88 and the corresponding audio source as well as the acoustic environment.
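The weight-based realization of figure 11 might be sketched as follows; the concrete modifier names and weight values are invented for illustration, and only the structure (shared modifiers Mod_a, ..., Mod_g scaled by per-object-class weights w_a, ..., w_g) comes from the description above:

```python
# Stand-ins for the predefined modifiers Mod_a, ..., Mod_g (hypothetical names).
MODIFIERS = ["gain", "noise_reduction", "beamformer"]

# Per-object-class weight sets w_a, ..., w_g (illustrative values):
# weights close to 1 give the modifier a high impact for that class,
# weights close to 0 give it little or no impact.
WEIGHTS = {
    "speaker":    {"gain": 0.9, "noise_reduction": 0.3, "beamformer": 0.8},
    "instrument": {"gain": 0.7, "noise_reduction": 0.1, "beamformer": 0.5},
}

def effective_modification(object_class: str, user_values: dict) -> dict:
    """Scale the user's modifier settings by the weights of the detected
    object class, yielding the modification actually applied."""
    w = WEIGHTS[object_class]
    return {m: user_values.get(m, 0.0) * w[m] for m in MODIFIERS}
```

A level dependency, as mentioned above, could then be added by making the weight lookup depend on the measured sound level of the associated audio source as well.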

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Circuit For Audible Band Transducer (AREA)

Claims (13)

  1. Method for operating a hearing system (10), wherein the hearing system (10) comprises a hearing device (12) which is configured to be worn at an ear of a user,
    a user device (14) which is communicatively coupled to the hearing device (12) and comprises a camera (36) and a display (30),
    wherein the hearing device (12) comprises at least one sound input module (20) for generating an audio signal indicative of a sound detected in an environment of the hearing device, a first processing unit (40) for modifying the audio signal and at least one sound output module (22) for outputting the modified audio signal,
    wherein the method comprises:
    receiving image data from the camera (36), wherein the image data are representative for a scene (80) in front of the camera (36);
    receiving an audio signal from the at least one sound input module (20), wherein the audio signal is representative for the acoustic environment of the hearing device (12) essentially at the time the image data have been captured, wherein the acoustic environment comprises at least one audio source and wherein the audio signal is at least in part representative for a sound from the audio source;
    determining at least one visual object (88) as the audio source within the scene (80) from the image data and the audio signal;
    displaying the scene (80) on the display (30);
    marking the determined visual object (88) within the scene (80);
    determining at least one object class of the determined visual object (88) and labelling the marked visual object (88) within the scene (80) in accordance with the determined object class;
    providing at least one input field (96) on the display (30) for an input of the user;
    receiving the input of the user for selecting the marked visual object (88), wherein the input field (96) is representative for at least one modification of the audio signal with respect to the sound from the audio source assigned to the selected visual object (90);
    selectively modifying the audio signal from the audio source associated with the selected visual object (90), if the user activates the input field (96);
    outputting the modified audio signal to the user;
    characterized in that the input field (96) is provided depending on the object class of the determined visual object (88), wherein the input field (96) is representative for at least one modification in accordance with the object class of the determined visual object (88).
  2. Method according to claim 1, wherein the step of determining at least one visual object (88) as the audio source within the scene (80) from the image data and the audio signal comprises:
    determining at least one visual object (88), which is a potential audio source, within the scene (80) from the image data;
    determining at least one audio source within the acoustic environment from the audio signal;
    associating at least one determined visual object (88) with at least one determined audio source.
  3. Method according to claim 2, wherein
    the step of determining the at least one visual object (88) being the potential audio source comprises determining a first spatial relationship between the camera (36) and the visual object (88);
    the step of determining the at least one audio source within the acoustic environment comprises determining a second spatial relationship between the hearing device (12) and the audio source; and
    the step of associating the at least one determined visual object (88) with the at least one determined audio source comprises:
    comparing the first spatial relationships of all determined visual objects (88) with the second spatial relationships of all determined sound sources, and
    associating that visual object (88) of all determined visual objects (88) with that audio source of all determined sound sources such that the corresponding first and second spatial relationships fulfil a predetermined requirement.
  4. Method according to one of the preceding claims, further comprising:
    classifying the acoustic environment from the received audio signal;
    modifying the audio signal in accordance with the classification; and
    determining the at least one audio source within the acoustic environment from the modified audio signal.
  5. Method according to claim 4, wherein
    the input field (96) is provided depending on the classification of the acoustic environment, wherein the input field (96) is representative for at least one modification in accordance with the classification of the acoustic environment.
  6. Method according to one of the preceding claims, wherein
    the input is representative for the user wishing to increase the volume of the sound from the selected visual object (90) and/or to decrease the volume of the sound from all other determined visual objects (88) or to effectuate a dampening of the sound from all other determined visual objects (88), or wishing to decrease the volume of the sound from the selected visual object (90) and/or to effectuate a dampening of the sound from the selected visual object (90) and/or to increase the volume of the sound from all other determined visual objects (88), and
    the audio signal is selectively modified such that the volume of the audio source associated with the selected visual object (90) is increased or decreased, respectively, or that the volume of the audio sources of all other determined visual objects (88) is decreased or increased, respectively.
  7. Method according to one of the preceding claims, further comprising:
    monitoring the visual object (88) by the camera (36) and stopping the selective modification of the audio signal with respect to the sound from the audio source assigned to the monitored visual object (88), if the visual object (88) disappears from the field of view of the camera (36).
  8. Method according to one of the preceding claims, further comprising:
    detecting at least one gesture (102) of the user on or above the display (30); and
    selecting the marked visual object (88) in accordance with the gesture (102); and/or
    selectively modifying the audio signal with respect to the sound from the audio source associated with the selected visual object (90) in accordance with the gesture (102).
  9. Hearing system (10), comprising:
    a hearing device (12) which is configured to be worn at an ear of a user and comprises at least one sound input module (20) for generating an audio signal, a first processing unit (40) for modifying the audio signal, and at least one sound output module (24) for outputting the modified audio signal; and
    a user device (14) which is communicatively coupled to the hearing device (12) and comprises a camera (36), a display (30) and a second processing unit (60),
    wherein at least one controller is coupled to the hearing device (12) and the user device (14) and is configured to carry out the method according to one of the preceding claims.
  10. Hearing system (10) according to claim 9, further comprising:
    a remote server (72) which is communicatively coupled to the hearing device (12) and/or the user device (14) and is configured to carry out at least a part of the method according to one of claims 1 to 8.
  11. Hearing system (10) according to one of claims 9 and 10, wherein
    the at least one controller is implemented in the first processing unit (40), the second processing unit (60) or the remote server (72).
  12. Computer program for operating a hearing system (10), wherein the program, when being executed by a processing unit (40, 60), is adapted to carry out the steps of the method according to one of claims 1 to 8.
  13. Computer-readable medium, in which a computer program according to claim 12 is stored.
EP21206983.5A 2021-11-08 2021-11-08 Verfahren und computerprogramm zum betrieb eines hörsystems, hörsystem und computerlesbares medium Active EP4178228B1 (de)

Priority Applications (2)

Application Number Priority Date Filing Date Title
DK21206983.5T DK4178228T3 (da) 2021-11-08 2021-11-08 Fremgangsmåde og computerprogram til drift af et høresystem, høresystem og computerlæsbart medium
EP21206983.5A EP4178228B1 (de) 2021-11-08 2021-11-08 Verfahren und computerprogramm zum betrieb eines hörsystems, hörsystem und computerlesbares medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP21206983.5A EP4178228B1 (de) 2021-11-08 2021-11-08 Verfahren und computerprogramm zum betrieb eines hörsystems, hörsystem und computerlesbares medium

Publications (2)

Publication Number Publication Date
EP4178228A1 EP4178228A1 (de) 2023-05-10
EP4178228B1 true EP4178228B1 (de) 2025-08-27

Family

ID=78592524

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21206983.5A Active EP4178228B1 (de) 2021-11-08 2021-11-08 Verfahren und computerprogramm zum betrieb eines hörsystems, hörsystem und computerlesbares medium

Country Status (2)

Country Link
EP (1) EP4178228B1 (de)
DK (1) DK4178228T3 (de)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10499164B2 (en) * 2015-03-18 2019-12-03 Lenovo (Singapore) Pte. Ltd. Presentation of audio based on source
US9949056B2 (en) * 2015-12-23 2018-04-17 Ecole Polytechnique Federale De Lausanne (Epfl) Method and apparatus for presenting to a user of a wearable apparatus additional information related to an audio scene
US10409548B2 (en) * 2016-09-27 2019-09-10 Grabango Co. System and method for differentially locating and modifying audio sources
US20220312128A1 (en) * 2019-08-26 2022-09-29 Orcam Technologies Ltd. Hearing aid system with differential gain
CN115211144A (zh) * 2020-01-03 2022-10-18 奥康科技有限公司 助听器系统和方法

Also Published As

Publication number Publication date
DK4178228T3 (da) 2025-09-29
EP4178228A1 (de) 2023-05-10


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20231102

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: H04R 25/00 20060101AFI20250506BHEP

INTG Intention to grant announced

Effective date: 20250522

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602021037053

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20250924

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20250827

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20251227

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20251128

Year of fee payment: 5

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20251127

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20251127

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20251229

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20250827

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DK

Payment date: 20251125

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20250827

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20250827

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20251125

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20251128

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20250827

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20250827

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20250827

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20250827

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20251127

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20250827