EP4231667A1 - Method for operating a binaural hearing device system and binaural hearing device system


Info

Publication number
EP4231667A1
Authority
EP
European Patent Office
Prior art keywords
music
sources
hearing
hearing device
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP23154327.3A
Other languages
German (de)
English (en)
Inventor
Cecil Wilson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sivantos Pte Ltd
Original Assignee
Sivantos Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sivantos Pte Ltd filed Critical Sivantos Pte Ltd
Publication of EP4231667A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R25/407 Circuits for combining signals of a plurality of transducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1058 Manufacture or assembly
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R25/405 Arrangements for obtaining a desired directivity characteristic by combining a plurality of transducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/50 Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505 Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/552 Binaural
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/60 Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles
    • H04R25/604 Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles of acoustic or vibrational transducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00 Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/10 Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R1/10 but not provided for in any of its subgroups
    • H04R2201/105 Manufacture of mono- or stereophonic headphone components
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41 Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/43 Signal processing in hearing aids to enhance the speech intelligibility
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/20 Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/15 Aspects of sound capture and related signal processing for recording or reproduction

Definitions

  • The invention relates to a method for operating a binaural hearing device system.
  • The invention also relates to such a binaural hearing device system.
  • Hearing devices are usually used to output an audio signal to the hearing of the wearer of the hearing device.
  • The output takes place by means of an output transducer, usually acoustically via airborne sound using a loudspeaker (also referred to as a "listener" or "receiver").
  • Hearing devices of this type are often used as so-called hearing aid devices (in short: hearing aids).
  • Such hearing devices normally comprise an acoustic input transducer (in particular a microphone) and a signal processor which is set up to process the input signal (also: microphone signal) generated by the input transducer from the ambient sound, using at least one signal processing algorithm that is usually stored with user-specific settings, in such a way that a hearing loss of the wearer of the hearing device is at least partially compensated.
  • In addition to a loudspeaker, the output transducer can alternatively also be a so-called bone conduction receiver or a cochlear implant, which are set up to couple the audio signal mechanically or electrically into the wearer's hearing.
  • The term "hearing devices" also covers, in particular, devices such as so-called tinnitus maskers, headsets, headphones and the like.
  • Common designs include BTE (behind-the-ear) and ITE (in-the-ear) devices.
  • Monaural or binaural fittings can also be considered.
  • The former is regularly the case when only one ear has a hearing impairment.
  • The latter usually occurs when both ears have a hearing loss.
  • In a binaural fitting, data is exchanged between the two hearing devices assigned to the user's ears in order to have more acoustic information available and thus to make the hearing experience even more pleasant, preferably more realistic, for the user.
  • Modern hearing devices often contain a so-called classifier which recognizes certain hearing situations - e.g. a conversation in quiet, a conversation with background noise, music, quiet, driving and the like - mostly by means of pattern recognition, artificial intelligence and the like. Based on these hearing situations, the signal processing can be adjusted to improve the hearing experience in the respective situation. For example, in the case of conversations with background noise, a comparatively narrow directivity can be specified and noise suppression can be used. However, this is less useful for music, where the widest possible directivity or omnidirectionality as well as low or deactivated noise suppression are advantageous in order to "lose" as little "acoustic information" as possible.
  • The classifier can misinterpret the situation - namely, if music is present but the user is not listening or does not want to listen - and the settings intended to improve music listening can then have a negative impact on speech comprehension and the like.
  • In hearing aids, so-called "hearing programs" with comparatively fixed sets of parameters were traditionally used.
  • Modern hearing aids usually use step-by-step adjustment of the individual parameters in order to enable intermediate stages between two hearing situations, gentle crossfading between different settings or the like.
  • The invention is based on the object of further improving the comfort of use of a hearing device system.
  • The method according to the invention serves to operate a binaural hearing device system.
  • The latter has a hearing device assigned or to be assigned (in normal operation) to a left ear and a right ear of a user, respectively.
  • Each of the hearing devices in turn has at least one microphone.
  • According to the method, acoustic information (in particular in the form of ambient noise, preferably in the form of electronic signals representing the ambient noise) is collected using the two hearing devices and evaluated to determine whether it contains music.
  • If the presence of music is detected, it is determined whether two (in particular spatially separate) sources of the music can be detected.
  • For each detected source, a solid angle range in which the respective source of the music is positioned is determined in relation to a viewing direction of the user.
  • If both sources are localized in a front hemisphere, a probability (in particular a probability value) that the user is consciously listening to music is increased.
  • If a predetermined probability limit value is exceeded (i.e. in the case of conscious listening to music), the signal processing is adapted for the reproduction of music.
  • The user's "viewing direction" refers in particular to the direction in which the user's head is directed, regardless of the actual gaze direction of the eyes.
  • The "viewing direction" thus refers here and in the following in particular to a (head) direction also described as "rostral" (possibly also as "nasal"). This designation is based on the fact that the two hearing devices of the binaural hearing device system are worn approximately symmetrically on the head during normal operation (within the anatomical possibilities), with the viewing direction usually corresponding to the direction used as the 0-degree direction in the signal processing of the hearing device system.
  • A "solid angle range" is understood to mean in particular a comparatively small-angled area, preferably opening like a cone and starting from the face of the user and/or the respective hearing device.
  • The use of a range takes into account the fact that spatial localization of a source is regularly associated with comparatively high tolerances, so that an exact position statement is usually not possible.
  • The term "solid angle range" also covers a vector pointing to the localized source.
  • The "front hemisphere" referred to above is understood here and in the following in particular as the space that extends rostrally from a frontal plane of the head, which is preferably positioned at the ears of the user.
  • In other words, the front hemisphere is the one into which the user "looks".
  • "Adaptation of the signal processing" means here and in the following in particular a change of parameters that influence the reproduction of recorded audio signals (in particular the microphone signals recorded by the respective microphone that represent them, or signals derived from them).
  • These parameters are, for example, amplification factors (in particular frequency-dependent ones), settings for what is known as compression, filter settings (which are used, for example, for noise suppression) and the like.
  • Optionally, the probability limit value is specified in such a way that the arrangement of the music sources in the front hemisphere is already sufficient to exceed it.
  • Preferably, it is additionally determined whether the acoustic signals emanating from the two music sources are dissimilar to each other within a range that is typical for music, in particular for a stereo performance of the music, i.e. in particular within given limits.
  • A stereo performance of a piece of music regularly contains comparatively similar signal components on both stereo channels, but also comparatively dissimilar ones in order to convey the stereo impression. If such a difference between the two music sources is detected, in this optional variant of the method a particularly high probability of the existence of a genuine stereo performance, and in particular also of conscious listening to this stereo performance, is assumed (in other words, the probability value described above is further increased).
  • In this case, the signal processing can be adapted more "aggressively", i.e. with comparatively stronger negative effects on speech understanding or the like, towards the (as natural as possible) reproduction of music.
  • Alternatively, the signal processing is only adapted for better (that is to say, as natural as possible) playback of the music if, as above, a situation with a genuine stereo performance is inferred.
  • This determination of whether a genuine stereo performance is present therefore preferably represents a refined criterion for adapting the signal processing.
  • In an expedient variant, a preferably frequency-dependent correlation is determined between the acoustic signals assigned to the two music sources.
  • For the respective frequency-dependent stereo correlation coefficients, limits are preferably given within which each coefficient must lie in order to conclude that there is a dissimilarity that is typical for stereo.
  • The limits mentioned above (in particular the upper and lower limits) for the (in particular respective, frequency-dependent) stereo correlation coefficient are preferably selected in such a way that they are below values that are typical for a mono performance and above those for uncorrelated (or only slightly correlated) noise.
  • For a mono performance, a correlation of 100 percent could theoretically be assumed, but in a normal listening environment - due to, e.g., tolerances of the microphones used, ambient noise, etc. - the correlation coefficient for a mono performance regularly takes lower values (e.g. "only" 90 percent).
  • Correlation values of completely uncorrelated signals are usually above "zero" percent, since this value can only be assumed for white noise, whereas ambient noise (and thus also music from just one music source, or "mono music" from several music sources) is regularly picked up equally by all microphones used.
  • The limits mentioned above are therefore specified, for example, in such a way that they delimit a range between 40 and 90 percent, further for example between 50 and 80 percent or even only up to 70 percent (the latter to allow a sufficient distance from a mono performance).
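As an illustrative sketch (not part of the patent disclosure), the frequency-dependent stereo correlation test described above can be implemented roughly as follows; the FFT-based band split, the band edges and the 40-90 percent window are assumptions chosen to match the example values given in the text:

```python
import numpy as np

def band_correlation(left, right, fs, bands):
    """Frequency-dependent correlation between the two ear signals.

    Returns, per (lo, hi) frequency band, the normalized cross-correlation
    coefficient (0..1) of the band-limited left and right signals.
    """
    n = len(left)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    L, R = np.fft.rfft(left), np.fft.rfft(right)
    coeffs = {}
    for lo, hi in bands:
        m = (freqs >= lo) & (freqs < hi)
        l = np.fft.irfft(np.where(m, L, 0), n)   # band-limited left signal
        r = np.fft.irfft(np.where(m, R, 0), n)   # band-limited right signal
        denom = np.sqrt(np.sum(l * l) * np.sum(r * r))
        coeffs[(lo, hi)] = float(np.abs(np.sum(l * r)) / denom) if denom > 0 else 0.0
    return coeffs

def looks_like_stereo(coeffs, lower=0.4, upper=0.9):
    """True if every band coefficient lies inside the stereo-typical window."""
    return all(lower <= c <= upper for c in coeffs.values())
```

A mono presentation yields coefficients near 1 (above the upper limit), while a partially decorrelated pair of channels falls inside the window.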
  • In a further expedient variant, the situation of the user consciously listening to music is inferred (or at least the probability value that such a situation exists is further increased) if the respective solid angle range of the music sources lies in an angular range of up to about +/- 60 degrees, preferably up to about +/- 45 degrees, relative to the viewing direction.
  • Such a situation indicates conscious listening to a stereo performance with comparatively high probability, since, particularly in private rooms with their limited spatial extent, the stereo loudspeakers are usually located in such an angular range relative to the position of the listener.
  • When consciously listening in stereo, a listener will usually also have their viewing direction, or at least the associated sagittal plane (in particular the median plane), directed at least roughly between the stereo loudspeakers.
  • In an advantageous method variant, each hearing device has two microphones.
  • The respective solid angle range of the two music sources is determined in particular using a time delay of a signal associated with the music, expediently between the two microphones of a hearing device.
  • In other words, a "direction of arrival" is determined. By way of example, reference is made to WO 2019/086435 A1 and WO 2019/086439 A1, the content of which is hereby incorporated in its entirety.
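A minimal sketch of such a delay-based direction-of-arrival estimate (not the method of the cited applications): the delay is taken from the cross-correlation peak of the two microphone signals and converted to an angle via sin(theta) = c * tau / d. A far-field source and free-field geometry (no head shadow) are simplifying assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def doa_from_two_mics(front, rear, fs, mic_distance):
    """Direction-of-arrival estimate (degrees, 0 = broadside) from the
    time delay between a front and a rear microphone signal."""
    n = len(front)
    corr = np.correlate(rear, front, mode="full")
    lag = int(np.argmax(corr)) - (n - 1)     # samples by which rear lags front
    tau = lag / fs                           # delay in seconds
    s = np.clip(SPEED_OF_SOUND * tau / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

With a typical microphone spacing of about 15 mm, one sample of delay at 48 kHz already corresponds to roughly 28 degrees, which illustrates why the result is treated as a solid angle range rather than an exact position.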
  • Optionally, the recognition or detection of the music sources described at the outset takes place by means of a so-called (in particular "blind") source separation.
  • The music sources, in particular the two "stereo sources", are optionally identified before the associated solid angle range is determined.
  • Alternatively, the solid angle range in which a signal source is located can also be determined first, and only then is it determined whether this signal source represents a music source. In the latter case, for example, a solid angle range is assigned to different (in particular separable) sound sources.
  • The source separation described above - for example using frequency bands to which a source type (for example music, speech, natural sounds) is assigned - optionally also takes place in parallel.
  • The information about the localization of the individual sources and about the type of source is then brought together.
  • The type of source can be assigned, for example, by determining whether the frequencies of the source assigned to a level value sufficiently match the frequencies recognized for music, or also whether the level value detected for the music frequency band corresponds sufficiently to the level value assigned to the source. If the levels and/or frequencies match, a probability value that the determined source type can be assigned to this specific source (and thus also to the solid angle range determined for it) is increased. If the probability value is sufficiently high (e.g. based on a threshold value comparison), the source type (i.e. in particular the source type "music") is assigned to the localized source.
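This fusion of localization and classification information can be sketched as a simple per-band level comparison; the band names, the 6 dB match window and the 0.6 probability threshold are illustrative assumptions, not values from the patent:

```python
def fuse_source_type(source_band_levels, music_band_levels,
                     match_db=6.0, threshold=0.6):
    """Decide whether a localized source is the 'music' source.

    Compares the per-band levels (dB) measured for the localized source
    against the per-band levels detected for music by the classifier; the
    fraction of matching bands serves as the probability value.
    Returns (is_music, probability).
    """
    matches = sum(
        1 for band, lv in source_band_levels.items()
        if band in music_band_levels
        and abs(lv - music_band_levels[band]) <= match_db
    )
    probability = matches / max(len(source_band_levels), 1)
    return probability >= threshold, probability
```

If, for example, two of three bands match within the window, the probability value 2/3 exceeds the threshold and the source type "music" is assigned to the localized source.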
  • In a further advantageous variant, each hearing device preferably also has two microphones.
  • In this case, the respective solid angle range of the two sources is determined by means of a kind of scanning using a directional sensitivity, which is formed in particular by means of the two microphones of a hearing device.
  • Alternatively, the directional sensitivity is formed by a binaural combination of both hearing devices. In the latter case, one also speaks of binaural directional microphones. In this case, each hearing device can in principle have only one microphone.
  • Preferably, the front hemisphere is scanned. In particular, in the present case, the space around the head of the user of the hearing device, preferably the front hemisphere, is divided into sectors.
  • A kind of directional lobe or "sensitivity range" of the directional microphone thus formed is directed into each of these sectors.
  • The acoustic intensities (also "levels") recorded for the respective sectors are compared with one another, and intensity or level values that are increased compared to other sectors are used as an indicator that a signal source is arranged in the corresponding sector.
  • A signal source arranged at the edge of a sector or between two sectors can also be detected in this way; specifically, a solid angle area in which it is arranged can be assigned to it.
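The sector comparison described above can be sketched as follows, once a level value has been recorded per sector; the 6 dB margin over the median level is an illustrative threshold, not a value from the patent:

```python
def detect_source_sectors(sector_levels_db, margin_db=6.0):
    """Pick out the sectors that likely contain a signal source.

    `sector_levels_db` maps a sector's center angle (degrees, 0 = viewing
    direction) to the level measured with the directional lobe aimed at
    that sector.  A sector counts as a source sector if its level exceeds
    the median of all sector levels by at least `margin_db`.
    """
    levels = sorted(sector_levels_db.values())
    median = levels[len(levels) // 2]
    return [angle for angle, lv in sorted(sector_levels_db.items())
            if lv >= median + margin_db]
```

For a fan of six sectors over the front hemisphere with two clearly elevated levels, exactly those two sector angles are returned.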
  • Expediently, only sources up to a specified distance from the user (for example up to 8 or only up to 5 meters) are recognized as (music) sources.
  • In a further advantageous variant, the two music sources (i.e. in particular the stereo sources for the music) are each "tracked". This means that a change in the position of the respective source, in particular of the solid angle range in which it was localized, is detected and "followed" (e.g. by aligning a directional effect with it).
  • A movement of the sources relative to the viewing direction can occur, for example, when the user of the hearing device turns his head and/or changes his (body) position in space relative to the music sources. If the music sources are loudspeaker boxes, the two music sources remain constant relative to one another or only move within a comparatively narrow solid angle range.
  • In other words, an angle between the two vectors (emanating from the user) pointing to the two music sources remains constant. For example, if the user bends forward out of an armchair, for example to reach for something to drink, eat or the like, the angle between the two vectors will change, but usually only comparatively slightly (for example by a maximum of 20 degrees). If the two music sources remain within this permissible angular range (e.g. up to 10 or up to 20 degrees), the situation of conscious listening to music is still assumed to exist.
  • If there is a greater movement of the music sources relative to each other, e.g. because the user of the hearing device gives up his position in the room or even leaves the room, it is assumed that the situation of conscious listening to music no longer exists, and in particular the signal processing is reset to the previous settings or a new classification of the hearing situation is carried out.
  • Optionally, a waiting time is started, during which it is monitored whether the user switches back to his previous position relative to the two music sources. This can be useful, for example, if the user only briefly leaves the room, e.g. only to get something (e.g. to drink), but basically wants to continue listening to the music.
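A minimal sketch of this tracking logic follows; the 20-degree tolerance is taken from the example above, while the length of the waiting period and the azimuth-only geometry are illustrative assumptions:

```python
def angle_between(az1_deg, az2_deg):
    """Unsigned angle (degrees) between two source azimuths."""
    d = abs(az1_deg - az2_deg) % 360.0
    return min(d, 360.0 - d)

class StereoSceneTracker:
    """Track the two localized music sources and decide whether the
    'conscious stereo listening' situation is still plausible."""

    def __init__(self, az_left, az_right, tolerance_deg=20.0, grace_s=30.0):
        self.ref = angle_between(az_left, az_right)  # reference opening angle
        self.tol = tolerance_deg
        self.grace = grace_s
        self.lost_since = None                       # start of waiting period

    def update(self, az_left, az_right, now):
        """Return True while the music-listening situation is assumed."""
        drift = abs(angle_between(az_left, az_right) - self.ref)
        if drift <= self.tol:
            self.lost_since = None                   # scene holds, reset wait
            return True
        if self.lost_since is None:
            self.lost_since = now                    # start the waiting time
        return (now - self.lost_since) < self.grace
```

Small drifts (e.g. leaning forward in the armchair) keep the situation active; a large drift starts the waiting time, after which the previous settings would be restored unless the sources return.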
  • Expediently, the existence of the situation of conscious listening to music is ruled out if movement is detected for only one of the two music sources.
  • Such a movement is detected as described above.
  • The fact that only one source is moving can be recognized in particular by the fact that the solid angle range that was detected for the other source remains constant, but changes for the "first" source.
  • Such a case cannot be reconciled with a stereo performance and rather indicates a different situation, for example two music sources that are independent of one another and possibly different.
  • In a further expedient variant, spectral differences of the music detected by means of the respective hearing device and/or for the respective source are determined. Based on these differences, conclusions are then drawn about a type of music.
  • For classical music, in particular orchestral music, a comparatively large spectral difference between the two stereo channels, and thus between the sound emitted by the two (stereo) music sources, is regularly to be expected due to the classical orchestra arrangement that is usually used.
  • A comparatively larger spectral difference can also be expected for recordings of jazz bands.
  • For pop, rock or electronic music, on the other hand, a comparatively small spectral difference is to be expected.
  • Optionally, a further spectral evaluation (for example with regard to an "emphasis" on certain frequencies) and/or a harmonic evaluation can be carried out.
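The per-band spectral difference underlying this genre distinction could, for example, be computed as follows; the linear band layout and the use of FFT power per band are illustrative assumptions, not specified by the patent:

```python
import numpy as np

def spectral_difference(left, right, fs, n_bands=8, fmax=8000.0):
    """Mean absolute per-band level difference (dB) between the signals
    of the two music sources.  Larger values hint at orchestral or jazz
    recordings, smaller ones at pop/rock/electronic mixes."""
    n = len(left)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    L = np.abs(np.fft.rfft(left)) ** 2            # power spectrum, left
    R = np.abs(np.fft.rfft(right)) ** 2           # power spectrum, right
    edges = np.linspace(0.0, fmax, n_bands + 1)
    diffs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (freqs >= lo) & (freqs < hi)
        pl = np.sum(L[m]) + 1e-12                 # floor avoids log(0)
        pr = np.sum(R[m]) + 1e-12
        diffs.append(abs(10.0 * np.log10(pl / pr)))
    return float(np.mean(diffs))
```

Identical channels give a difference of zero, while channels whose energy sits in different bands give a large value.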
  • This embodiment is based on the knowledge that each hearing device primarily (i.e. in particular with a stronger level) detects the acoustic signals in the quarter space assigned to it.
  • The acoustic signals from the other quarter space (i.e. the one assigned to the other half of the face) are detected correspondingly more weakly.
  • The signal processing is then preferably adapted to the type of music, in particular in a further refined manner.
  • For example, the parameters mentioned above are adapted to the type of music in a manner known per se (compare, for example, equalizer presets in audio systems).
  • For classical music, for example with regard to the "trebles" (i.e. high frequencies), the most balanced possible setting is selected, while in hip-hop or pop, for example, the bass is emphasized.
  • In a preferred embodiment, the signal processing is smoothly matched to music playback as more of the criteria described above are met, e.g. as soon as it is additionally determined that a stereo performance is present.
  • In other words, the signal processing is changed less "aggressively" (i.e. with a comparatively small negative influence on other aspects of hearing, especially speech comprehension) if the two music sources are merely localized in the front hemisphere.
  • The more criteria for conscious listening to music are met, the more aggressively the signal processing is adapted in the direction of music reproduction, e.g. by reducing noise suppression and/or directivity and the like.
  • The binaural hearing device system according to the invention has the hearing devices assigned or to be assigned to the left ear and the right ear of the user. These each have at least one microphone.
  • In addition, the hearing device system has a controller that is set up to carry out the method described above automatically or in interaction with the user.
  • The hearing device system thus equally shares, in corresponding embodiments, the physical features described above for the respective method variants.
  • The controller is likewise set up to carry out the measures described in the context of the above method variants in the assigned embodiments.
  • The controller is embodied, for example, in one of the two hearing devices or in a control device assigned to them but separate from them.
  • Optionally, each of the two hearing devices has its own controller (also referred to as a signal processor); these communicate with one another in binaural operation and preferably together form the controller of the hearing device system under a master-slave arrangement.
  • Preferably, the (or the respective) controller is formed at least in its core by a microcontroller with a processor and a data memory, in which the functionality for carrying out the method according to the invention is implemented programmatically in the form of operating software (firmware), so that the method - possibly in interaction with the user - is carried out automatically when the operating software is executed in the microcontroller.
  • Alternatively, the respective controller is formed by an electronic component that is not, or not fully, freely programmable, for example an ASIC, in which the functionality for carrying out the method according to the invention is implemented by circuitry.
  • The hearing device system described above and the method described above also function advantageously with sound systems comprising more than two sound sources, for example a 5.1 system or the like.
  • In this case, the presence of two music sources in the front hemisphere is used as the basic criterion for whether a situation of conscious listening to music is present. If more than these two music sources are present, especially in the rear hemisphere, these are, for example, not recorded or are ignored as not relevant for the assessment of the current (music) listening situation.
  • A binaural hearing device system 1 is shown schematically. It has two hearing devices 2 and 4.
  • In normal operation - shown schematically in FIG. 2 and FIG. 3 - the hearing device 2 is assigned to a left ear 6 of a user 8.
  • The hearing device 4 is correspondingly assigned to the right ear 10 of the user 8.
  • Each hearing device 2, 4 has a front microphone 12 and a rear microphone 14.
  • In addition, both hearing devices 2 and 4 each have a signal processor 16 and a loudspeaker 18, as well as a communication device 20 and an energy source 22.
  • The signal processor 16 is set up to process ambient sound that was detected by the microphones 12 and 14 and converted into microphone signals MS, depending on a hearing impairment of the user 8 - specifically, to filter and amplify it as a function of frequency - and to output it as an output signal AS to the loudspeaker 18. The latter in turn converts the output signal AS into sound for output to the hearing of the user 8.
  • In normal operation, both hearing devices 2 and 4 are in communication with one another.
  • Specifically, both signal processors 16 transmit data to one another by means of the respective communication devices 20 (indicated by a double arrow 24).
  • One of the signal processors 16 forms a "master" and the other a "slave".
  • The two signal processors 16 thus also jointly form a controller of the hearing device system 1.
  • The controller (usually the signal processor 16 acting as master) processes, among other things, the microphone signals MS of both hearing devices 2 and 4 to form a binaural directional microphone signal.
  • In addition, the controller is set up to classify different hearing situations based on the information contained in the microphone signals MS and to change the signal processing of the microphone signals MS depending on the classification, i.e. to adapt signal processing parameters.
  • The signal processors 16, specifically the controller, are set up to carry out an operating method that is described in more detail below.
  • According to the method, the controller determines whether there is music in the ambient noise. However, in order to avoid the signal processing being incorrectly set to music although music is only incidentally contained in the ambient noise, the controller determines whether several sound sources for the music - indicated here by two loudspeaker boxes 26 - are present in the vicinity of the user 8. Specifically, the controller determines whether the two loudspeaker boxes 26 are located in a front hemisphere 28.
  • The front hemisphere 28 represents the area in front of a frontal plane 32 that intersects the two ears 6 and 10, viewed in the viewing direction 30 (see FIG. 2).
  • For the localization, both signal processors 16 use a detection stage 34 (see FIG. 4), which uses the two microphones 12 and 14 in a known manner to determine a so-called direction of arrival for the sound emanating from the two loudspeaker boxes 26.
  • The respective direction of arrival is used as a solid angle range 36 (related to the viewing direction 30 as the zero-degree direction, in particular in the form of a vector) in which the respective loudspeaker box 26 is arranged.
  • In addition, the current hearing situation is classified in a classification stage 38. Here it is determined whether music is present and whether two different sound sources can be detected for it.
  • A fusion stage 40, in which the information from the classification stage 38 and the detection stage 34 is combined, checks whether both sound sources are outputting the same music. If, for both hearing devices 2 and 4, a sound source for the music recognized in the classification stage 38 is determined within a solid angle range 36 arranged in the front hemisphere 28 - which is established on the basis of the communication between the two hearing devices 2 and 4 (cf. FIG. 4) - the controller assumes in the fusion stage 40 that a situation with a stereo performance of the music exists. The controller takes this as an indication to increase a probability value that a situation of conscious listening to music is present.
  • The controller then adapts parameters of a subsequent processing stage 42 for the signal processing of music. For example, the controller sets a so-called linear compression and reduces the noise reduction.
  • the fusion stage 40 is preceded by a stereo detection stage 44, in which it is determined whether both sound sources emit sufficiently similar but not exactly identical sound signals. In a stereo presentation this is the case, unless the output is set to "mono".
  • the probability value is increased further, compared to the variant described above, if such a stereo presentation is recognized.
  • the probability value only reaches, with this "additional" increase, the limit value above which the parameters for the signal processing of music are changed.
  • the probability value is also increased if the two sound sources are not only in the front hemisphere 28 but also within a narrower spatial region of 60 degrees on either side of the viewing direction 30.
  • the controller does not switch the signal processing between two parameter sets when the probability limit value is reached, but instead changes the parameters gradually with increasing probability, so that a situation-dependent, gradually increasing change of the signal processing is implemented.
  • a directional sensitivity of a binaural directional microphone is set in the detection stage 34 in such a way that several sectors 46 with increased sensitivity, compared to the other spatial regions, are distributed in a fan-like manner over the front hemisphere 28.
  • a level value is recorded for each sector 46 and compared with those of the other sectors 46.
  • An increased level value indicates a sound source in the area of sector 46.
  • an interpolation is carried out between the sectors 46, so that the solid angle region 36 of a sound source arranged between two sectors 46 (indicated in FIG 3 by the loudspeaker box 26 shown on the left) can be narrowed down more precisely.
  • the decision as to whether there are two sound sources for the music, and the resulting measures, in particular the decision to change the signal processing parameters, are made by the signal processor 16 acting as the master and transmitted to the signal processor 16 acting as the slave.
  • the signal processing is only changed by the controller when two sound sources for the music, in this case the two loudspeaker boxes 26, are recognized.
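The fusion logic described in the bullet points above can be sketched in Python. This is an illustrative reconstruction under stated assumptions, not the patented implementation: all names (`music_probability`, `blend_params`), thresholds, and parameter values are hypothetical, chosen only to show how the frontal-hemisphere check, the stereo detection, the 60-degree region, and the gradual parameter change could interact.

```python
from dataclasses import dataclass

# Hypothetical angular limits, relative to the viewing direction (0 degrees).
FRONT_HEMISPHERE_DEG = 90.0   # sources within +/-90 degrees lie in the front hemisphere 28
NARROW_REGION_DEG = 60.0      # narrower region that raises the probability further

@dataclass
class ProcessingParams:
    compression_ratio: float
    noise_reduction_db: float

# Hypothetical parameter sets: music processing uses linear compression
# (ratio 1) and reduced noise reduction, as described above.
DEFAULT_PARAMS = ProcessingParams(compression_ratio=2.0, noise_reduction_db=12.0)
MUSIC_PARAMS = ProcessingParams(compression_ratio=1.0, noise_reduction_db=3.0)

def music_probability(music_detected: bool,
                      source_angles_deg: list[float],
                      stereo_detected: bool) -> float:
    """Fuse classification and direction-of-arrival results into a probability
    that the user is consciously listening to music."""
    if not music_detected:
        return 0.0
    # Keep only sources whose direction of arrival lies in the front hemisphere.
    front = [a for a in source_angles_deg if abs(a) <= FRONT_HEMISPHERE_DEG]
    if len(front) < 2:
        return 0.2  # music heard, but no two frontal sources: likely incidental
    p = 0.6
    if stereo_detected:
        p += 0.2  # similar but not identical signals from both sources
    if all(abs(a) <= NARROW_REGION_DEG for a in front):
        p += 0.2  # both sources within 60 degrees of the viewing direction
    return min(p, 1.0)

def blend_params(p: float) -> ProcessingParams:
    """Change the parameters gradually with increasing probability instead of
    hard-switching between two parameter sets."""
    return ProcessingParams(
        compression_ratio=(1 - p) * DEFAULT_PARAMS.compression_ratio
                          + p * MUSIC_PARAMS.compression_ratio,
        noise_reduction_db=(1 - p) * DEFAULT_PARAMS.noise_reduction_db
                           + p * MUSIC_PARAMS.noise_reduction_db,
    )
```

In this sketch, two frontal sources at -30 and +25 degrees with a positive stereo check would drive the probability to its maximum, so the blended parameters coincide with the music settings; with only one frontal source the probability stays low and the default processing is essentially retained.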

EP23154327.3A 2022-02-18 2023-01-31 Method for operating a binaural hearing device system and binaural hearing device system Pending EP4231667A1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
DE102022201706.4A DE102022201706B3 (de) 2022-02-18 2022-02-18 Method for operating a binaural hearing device system and binaural hearing device system

Publications (1)

Publication Number Publication Date
EP4231667A1 true EP4231667A1 (fr) 2023-08-23

Family

ID=85150853

Family Applications (1)

Application Number Title Priority Date Filing Date
EP23154327.3A 2022-02-18 2023-01-31 Method for operating a binaural hearing device system and binaural hearing device system

Country Status (4)

Country Link
US (1) US20230269548A1 (fr)
EP (1) EP4231667A1 (fr)
CN (1) CN116634322A (fr)
DE (1) DE102022201706B3 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040175008A1 (en) * 2003-03-07 2004-09-09 Hans-Ueli Roeck Method for producing control signals, method of controlling signal and a hearing device
EP1858291A1 (fr) * 2006-05-16 2007-11-21 Phonak AG Système auditif et méthode de déterminer information sur un espace sonore
WO2019086435A1 (fr) 2017-10-31 2019-05-09 Widex A/S Procédé de fonctionnement d'un système d'aide auditive et système d'aide auditive
EP3684075A1 (fr) * 2019-02-01 2020-07-22 Sonova AG Systèmes et procédés d'optimisation de traitement basée sur un accéléromètre effectués par un dispositif auditif
WO2021023667A1 (fr) * 2019-08-06 2021-02-11 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Système et procédé d'aide à l'audition sélective

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006047983A1 (de) 2006-10-10 2008-04-24 Siemens Audiologische Technik Gmbh Processing of an input signal in a hearing aid
EP2373062A3 (fr) 2010-03-31 2015-01-14 Siemens Medical Instruments Pte. Ltd. Dual adjustment method for a hearing system
DE102012214081A1 (de) 2012-06-06 2013-12-12 Siemens Medical Instruments Pte. Ltd. Method for focusing a hearing instrument beamformer
DE102016225204B4 (de) 2016-12-15 2021-10-21 Sivantos Pte. Ltd. Method for operating a hearing aid

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040175008A1 (en) * 2003-03-07 2004-09-09 Hans-Ueli Roeck Method for producing control signals, method of controlling signal and a hearing device
EP1858291A1 (fr) * 2006-05-16 2007-11-21 Phonak AG Système auditif et méthode de déterminer information sur un espace sonore
WO2019086435A1 (fr) 2017-10-31 2019-05-09 Widex A/S Procédé de fonctionnement d'un système d'aide auditive et système d'aide auditive
WO2019086439A1 (fr) 2017-10-31 2019-05-09 Widex A/S Procédé de fonctionnement d'un système d'aide auditive et système d'aide auditive
EP3684075A1 (fr) * 2019-02-01 2020-07-22 Sonova AG Systèmes et procédés d'optimisation de traitement basée sur un accéléromètre effectués par un dispositif auditif
WO2021023667A1 (fr) * 2019-08-06 2021-02-11 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Système et procédé d'aide à l'audition sélective

Also Published As

Publication number Publication date
DE102022201706B3 (de) 2023-03-30
US20230269548A1 (en) 2023-08-24
CN116634322A (zh) 2023-08-22

Similar Documents

Publication Publication Date Title
EP3451705B1 Method and device for rapid recognition of one's own voice
EP2603018B1 Hearing device with speaker activity detection and method for operating a hearing device
EP1307072B1 Method for operating a hearing aid and hearing aid
US8249284B2 Hearing system and method for deriving information on an acoustic scene
EP3104627B1 Method for improving a recorded signal in a hearing system
EP2506603B1 Hearing aid system with a directional microphone system and method for operating said hearing aid system with a directional microphone system
EP2672732A2 Method for focusing a beamformer of a hearing instrument
EP2164283B1 Hearing aid and operation of a hearing aid with frequency transposition
EP2226795B1 Hearing device and method for reducing an interfering noise for a hearing device
EP2833651A1 Method for tracking a sound source
EP1489884A2 Method for controlling a hearing aid, and hearing aid with a microphone system in which different directional characteristics can be set
DE102007035174B4 Hearing device controlled by a perceptual model and corresponding method
EP2991379A1 Method and device for improved perception of one's own voice
EP1858291B1 Hearing system and method for deriving information on an acoustic scene
EP3951780A1 Method for operating a hearing aid and hearing aid
EP2373062A2 Dual adjustment method for a hearing system
EP3567874B1 Method for operating a hearing aid and hearing aid
EP2658289B1 Method for controlling a directional characteristic and hearing system
DE102022201706B3 Method for operating a binaural hearing device system and binaural hearing device system
WO2019215200A1 Method for operating a hearing system and hearing system
EP3913618A1 Method for operating a hearing aid and hearing aid
DE102020201615B3 Hearing system with at least one hearing instrument worn in or on the ear of the user and method for operating such a hearing system
EP2592850B1 Automatic activation and deactivation of a binaural hearing system
EP4149121A1 Method for operating a hearing aid
DE102022202266A1 Method for operating a hearing aid

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20240216

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR