EP1858291B1 - Hearing system and method for determining information on an acoustic scene - Google Patents
Hearing system and method for determining information on an acoustic scene
- Publication number
- EP1858291B1 (application EP06015269A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- audio signals
- unit
- sound
- hearing
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/40—Arrangements for obtaining a desired directivity characteristic
- H04R25/405—Arrangements for obtaining a desired directivity characteristic by combining a plurality of transducers
- H04R25/407—Circuits for combining signals of a plurality of transducers
- H04R25/70—Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/41—Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
- H04R2225/43—Signal processing in hearing aids to enhance the speech intelligibility
Definitions
- the invention relates to a hearing system and a method for operating a hearing system, and to a method for deriving information on an acoustic scene and the application of that method in a hearing system.
- the invention furthermore relates to a method for manufacturing signals to be perceived by a user of the hearing system.
- The hearing system comprises at least one hearing device. A "hearing device" is understood to be a device worn adjacent to or in an individual's ear with the object of improving the individual's acoustic perception. Such improvement may also consist in barring acoustic signals from being perceived, in the sense of hearing protection for the individual.
- If the hearing device is tailored so as to improve the perception of a hearing-impaired individual towards the hearing perception of a "standard" individual, it is referred to as a hearing-aid device.
- a hearing device may be applied behind the ear, in the ear, completely in the ear canal or may be implanted.
- monaural and binaural hearing systems are considered.
- a hearing device is a hearing-aid device.
- Modern hearing-aid devices, when employing different hearing programs (typically two to four hearing programs, also termed audiophonic programs), permit adaptation to varying acoustic environments or scenes. The idea is to optimize the effectiveness of the hearing-aid device for the hearing-aid device user in all situations.
- The hearing program can be selected either via a remote control or by means of a selector switch on the hearing-aid device itself. For many users, however, having to switch program settings is a nuisance, difficult, or even impossible. Even for experienced users of hearing-aid devices, it is not always easy to determine which hearing program is suited best at a given point in time and offers optimum speech intelligibility. An automatic recognition of the acoustic scene and a corresponding automatic switching of the program setting in the hearing-aid device is therefore desirable.
- the switch from one hearing program to another can also be considered a change in a transfer function of the hearing device, wherein the transfer function describes signal processing within the hearing system.
- the transfer function may depend on one or more parameters, also referred to as transfer function parameters, and may then be adjusted by assigning values to said parameters.
- a pattern-recognition unit employing a particular algorithm makes a determination as to the attribution of the analyzed signal to a specific acoustic environment.
- EP 1 670 285 A2 published on June 14, 2006, shall be mentioned, which discloses a training mode for classifiers in hearing devices. It is disclosed that in said training mode, a sound source can be separated by narrow beam-forming. This will isolate the targeted source and, as far as said training mode is on, the classifier will be trained for the targeted source, while other sources of sound are suppressed by said narrow beam-forming.
- the training provides the classifier with considerable amounts of data on the class represented by the targeted source. This way, an improved reliability of the classification can be achieved.
- hearing program change based on the classification result provides for an optimum hearing sensation for the user. It would be desirable to provide for an improved basis for choosing a hearing program to switch to and/or for the point in time when to switch hearing programs.
- One object of the invention is to create a hearing system, a method of operating a hearing system, a method for deriving information on an acoustic scene, and method for manufacturing signals to be perceived by a user of the hearing system, which allow for an improved performance, in particular, for an improved automatic adaptation (of a hearing system) to an acoustic environment.
- Another object of the invention is to provide for an improved basis for deciding about changes in an adjustable transfer function of the hearing system.
- Another object of the invention is to more comprehensively recognize acoustic scenes.
- Another object of the invention is to increase the probability that sources of sound are correctly recognized.
- Another object of the invention is to provide for a more precise determination of an acoustic scene.
- The method for operating a hearing system comprising an input unit, an output unit and a transmission unit operationally interconnecting said input unit and said output unit, said transmission unit implementing a transfer function which describes how audio signals generated by said input unit are processed in order to derive audio signals fed to said output unit, and which can be adjusted by one or more transfer function parameters, comprises the steps of
- the hearing system comprises
- the method for deriving information on an acoustic scene comprises the steps of
- the invention also comprises the use of said method for deriving information on an acoustic scene in a hearing system.
- The method for manufacturing signals to be perceived by a user of a hearing system comprising an input unit, an output unit and a transmission unit operationally interconnecting said input unit and said output unit, said transmission unit implementing a transfer function which describes how audio signals generated by said input unit are processed in order to derive audio signals fed to said output unit, and which can be adjusted by one or more transfer function parameters, comprises the steps of
- the invention provides for a link (or for an improved link) between the result of a sound characterization and a direction in space.
- The link between the information on which kinds of sounds are present or, more generally, the sound-characterizing data, and the directional information is realized by evaluating the sound-characterizing data together with data comprising information on the directional characteristic.
- Directional characteristics are typically described in form of polar patterns.
- the invention provides for an improved way for evaluating the acoustic environment. Sound characteristics can be assigned to the direction of arrival of the sound.
- The transfer function of a hearing system describes how input audio signals are processed in order to derive output audio signals.
- input audio signals are audio signals derived, by means of said input unit, from incoming acoustic sound and fed to said transmission unit
- output audio signals are audio signals which are fed (from said transmission unit) to said output unit and which are to be transduced into signals to be perceived by a user of the hearing system.
- the transfer function may comprise filtering, dynamics processing, phase shifting, pitch shifting, noise cancelling, beam steering and various other functions. This is known in the art, in particular in the field of hearing-aid devices.
- the transfer function may depend, e.g., on time, frequency, direction of sound, amplitude. Numerous parameters on which the transfer function may depend (also referred to as "transfer function parameters") can be thought of, like parameters depicting frequencies, e.g., filter cutoff frequencies or knee point levels for dynamics processing, or parameters depicting loudness values or gain values, or parameters depicting the status or functions of units like noise cancellers, beam formers, locators, or a parameter simply indicating a pre-stored hearing program.
- Said input unit usually comprises at least one input transducer.
- An input transducer typically is a mechanical-to-electrical converter, in particular a microphone. It transduces acoustic sound into audio signals.
- Said output unit usually comprises at least one output transducer.
- An output transducer can be an electrical-to-electrical or electrical-to-mechanical converter and typically is a loudspeaker, also referred to as receiver.
- acoustic sound is used in order to indicate that sound in the acoustic sense, i.e., acoustic waves, is meant.
- Said set of sound-characterizing data may be just one number or datum, e.g., a signal-to-noise ratio or a signal pressure level in a certain frequency range, but typically comprises several numbers or data. In particular, it may comprise classification results.
- the sound-characterizing data can be indicative of an acoustic scene.
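As a minimal illustration of a set of sound-characterizing data that is "just one number or datum", a signal-to-noise ratio could be computed as sketched below. This is a generic estimator for illustration only; the patent does not prescribe any particular SNR computation.

```python
import numpy as np

def snr_db(signal_power, noise_power):
    """Return a signal-to-noise ratio in dB from the two power
    estimates (a single number that may already serve as a set of
    sound-characterizing data)."""
    return 10 * np.log10(signal_power / noise_power)

# A signal 100x stronger than the noise corresponds to 20 dB SNR.
print(snr_db(1.0, 0.01))
```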
- Classification (classifying methods, possible features to classify, classes and so on) will be described only roughly here. More details on classification may, e.g., be taken from the above-mentioned publications WO 01/20965 A2 , WO 01/22790 A2 and WO 02/32208 A2 and references therein.
- The set of possible classes according to which the sets of features can be classified may, e.g., comprise acoustic-scene-describing classes like "speech", "noise", "speech in noise", "music" and/or others.
- Directional characteristic, as used in the present application, is understood as a characteristic of amplification or sensitivity in dependence of the direction of arrival of the incoming acoustic sound.
- The direction of arrival is understood as the direction in which an acoustical source (also referred to as a source of sound or sound source) "sees" the center of the user's head.
- Said directional characteristic with which, by means of said input unit, said audio signals are obtained from said incoming acoustic sound typically depends on the polar pattern of the employed transducers (microphones) and on the processing of the so-derived raw audio signals.
- so-called head-related transfer functions (HRTFs) may be considered, in particular their part describing the head shadow, i.e., the direction-dependent damping of sound due to the fact that a hearing device of the hearing system is worn in or near the user's ear.
- the HRTFs may be averaged HRTFs or individually measured.
- Said derived value or values for the transfer function parameters can be considered to form a set of values.
- That set of values may be just said set of sound-characterizing data and said directional information, in which case the evaluation unit merely passes on the data it received; or it may comprise other data derived therefrom, in particular, it may be data indicating at least one direction (typically representing a polar angle or a range of polar angles) and data indicating an estimate about the kind of source of sound located in said direction; or it may be just a number indicating which hearing program to choose.
- Said signals to be perceived by a user of the hearing system may be acoustic sound or, e.g., in the case of a hearing system comprising an implanted hearing device, an electrical and/or mechanical signal or others.
- Said transmission unit may be realized in form of a signal processor, in particular in form of a digital signal processor (DSP).
- said DSP may embody said transmission unit, said characterizing unit, said evaluating unit, a beam former unit, a beam former controller, a localizer, a feature extractor and a classifier, or part of these.
- the various units are described or drawn separately or together merely for reasons of clarity, but they may be realized in a different arrangement; this applies, in particular, also to the examples and embodiments described below.
- a beam former unit is provided.
- A beam former unit, also referred to as a "beam former", is capable of beam forming.
- Beam forming, also referred to as "technical beam forming", means tailoring the amplification of an electrical signal (the audio signals) with respect to an acoustical signal (the acoustic sound) as a function of the direction of arrival of the acoustical signal relative to a predetermined spatial direction.
- the beam characteristic is represented in form of a polar diagram, scaled in dB.
- Beam formers are known in the art.
- One type of beam formers receives audio signals from at least two spaced-apart transducers (typically microphones), which convert incoming acoustic sound into said audio signals, and processes these audio signals, typically by delaying the one audio signals with respect to the other audio signals and adding or subtracting the result.
- new audio signals are derived, which are, with a new, tailored directional characteristic, obtained from said incoming acoustic sound.
- said tailored directional characteristic is tailored such, that acoustic sound originating from a certain direction (typically characterized by a certain polar angle or polar angle range) is either preferred with respect to acoustic sound originating from other directions, or suppressed with respect to acoustic sound originating from other directions.
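The delay-and-add/subtract processing described above can be sketched as follows. This is an illustrative two-microphone, single-sample-delay example; the sampling rate, microphone naming and the one-sample delay are assumptions for the demonstration, not values from the patent.

```python
import numpy as np

def delay_and_subtract(front, rear, delay_samples):
    """First-order differential beam former: delay the rear
    microphone signal and subtract it from the front one, so that
    sound arriving from the rear is suppressed while sound from the
    front is passed (a cardioid-like directional characteristic)."""
    delayed = np.zeros_like(rear)
    delayed[delay_samples:] = rear[:len(rear) - delay_samples]
    return front - delayed

fs = 16000                       # illustrative sampling rate
t = np.arange(fs // 100) / fs
tone = np.sin(2 * np.pi * 440 * t)
tone_delayed = np.concatenate(([0.0], tone[:-1]))  # 1-sample later arrival

# Front-arriving sound: reaches the front mic one sample earlier.
front_arrival = delay_and_subtract(tone, tone_delayed, 1)
# Rear-arriving sound: reaches the rear mic one sample earlier.
rear_arrival = delay_and_subtract(tone_delayed, tone, 1)

# Front-arriving sound keeps far more energy than rear-arriving sound.
print(np.sum(front_arrival**2) > 10 * np.sum(rear_arrival**2))
```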
- a localizer is provided.
- Localizers are known in the art. They receive audio signals from at least two spaced-apart transducers (microphones) and process the audio signals such that, for major sources of sound, the corresponding directions of arrival of sound are detected.
- the directions, from which certain acoustic signals originate, can be determined; sound sources can be localized, at least directionally.
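One common way of detecting a direction of arrival, consistent with the description above, is to estimate the inter-microphone time difference from the peak of the cross-correlation of the two microphone signals. This is a generic sketch, not necessarily the method of the cited publications; signal names are illustrative.

```python
import numpy as np

def estimate_delay(left, right):
    """Estimate the lag (in samples) of the dominant sound source
    between two microphone signals by locating the peak of their
    cross-correlation; the lag can then be mapped to a direction
    of arrival via microphone spacing and the speed of sound."""
    corr = np.correlate(left, right, mode="full")
    return np.argmax(corr) - (len(right) - 1)

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)  # broadband source signal

# Simulated arrival: the sound reaches the right microphone
# 3 samples before the left one.
left = noise[:-3]
right = noise[3:]

print(estimate_delay(left, right))
```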
- The output of the localizer, also referred to as "localizing data", may be used for controlling (steering) a beam former.
- the at least one input transducer can, by itself, provide for several different directional characteristics. This may, e.g., be realized by means of a movable (e.g., rotatable) input transducer or by an input transducer with movable (e.g., rotatable) feedings, through which acoustic sound is fed (guided), so that acoustic sound from various directions (with respect to the arrangement of the hearing system or with respect to the user's head) may be suppressed or be preferably transduced.
- The classification is not a "hard" or discrete-mode classification, in which a current acoustic scene (or, more precisely, the corresponding features) would be classified into exactly one of at least two classes, but a "mixed-mode" classification is used, the output of which comprises similarity values indicative of the similarity (likeness) of said current acoustic scene and each acoustic scene represented by each of said at least two classes.
- a so-obtained similarity vector can be used as a set of values for the transfer function parameters. More details on this type of classification can be taken from the unpublished European patent application with the application number EP 06 114 038.0 of the same applicant, filed on May 16, 2006, and titled "Hearing Device and Method of Operating a Hearing Device".
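A mixed-mode classification producing such a similarity vector can be sketched as below. The feature space, the class prototypes and the softmax-over-distances rule are illustrative assumptions, not the algorithm of the referenced application.

```python
import numpy as np

# Hypothetical class prototypes in a 2-dimensional feature space
# (e.g., [spectral centroid, level]); values are illustrative only.
PROTOTYPES = {
    "speech":          np.array([0.7, 0.5]),
    "noise":           np.array([0.3, 0.8]),
    "speech in noise": np.array([0.5, 0.7]),
    "music":           np.array([0.6, 0.3]),
}

def similarity_vector(features):
    """Mixed-mode classification: instead of picking exactly one
    class, return one similarity value per class (here, a softmax
    over negative distances to the class prototypes)."""
    names = list(PROTOTYPES)
    d = np.array([np.linalg.norm(features - PROTOTYPES[n]) for n in names])
    w = np.exp(-5.0 * d)
    w /= w.sum()                 # similarity values sum to one
    return dict(zip(names, w))

sims = similarity_vector(np.array([0.65, 0.45]))
# The most similar class dominates, but the other classes retain
# non-zero weight instead of being discarded.
print(max(sims, key=sims.get))
```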
- the method of operating a hearing system furthermore comprises the steps of
- acoustic sound from the acoustic environment is converted into audio signals at least twice, each time with a different directional characteristic. This may happen successively (i.e., consecutively) or simultaneously. In the latter case, preferably also the processing (deriving of the sound-characterizing data) takes place simultaneously.
- The hearing system has to provide for a possibility to simultaneously obtain, with different directional characteristics, audio signals from acoustic sound; this may, e.g., be accomplished by means of at least two input transducers (or at least two sets of input transducers), and/or by realizing two simultaneously-available beam formers.
- the processing (deriving of the sound-characterizing data) for each directional characteristic may well take place consecutively, i.e., processing for one directional characteristic first, and then processing for another directional characteristic. This is slower, but reduces the required processing capacity.
- This embodiment may even be realized with one single input transducer capable of changing its directional characteristic, or with a single beam former unit, the latter typically being connected to at least two input transducers.
- Input transducers of the input unit may be distributed among hearing devices of a hearing system, e.g., the input unit may comprise two (or more) input transducers arranged at each of two hearing devices of a binaural hearing system.
- the first directional characteristic may be attributed substantially to the two (or more) input transducers of the left hearing device
- the second directional characteristic may be attributed substantially to the two (or more) input transducers of the right hearing device.
- said two different directional characteristics are significantly different. It can be advantageous to obtain audio signals from acoustic sound with at least two different directional characteristics, because the information on the acoustic scene, which can be gained that way, is very valuable, since the location of sources of sound can be determined; and the transfer function can be better adapted to the acoustic environment. In particular, it is possible to determine both, the location of sources of sound, and the type of sources of sound.
- Fig. 1 schematically shows a block diagram of a hearing system 1.
- the hearing system 1 comprises an input unit 10, a transmission unit 20, an output unit 80, a characterizing unit 40, an evaluation unit 50 and a storage unit 60.
- the input unit 10 is operationally connected to the transmission unit 20, which is operationally connected to the output unit 80, and to the characterizing unit 40, which is operationally connected to the evaluating unit 50.
- the evaluating unit 50 is operationally connected to the storage unit 60 and to the transmission unit 20.
- the input unit 10 receives acoustic sound 6 from the environment and outputs audio signals S1.
- the audio signals S1 are fed to the transmission unit 20 (e.g., a digital signal processor), which implements (embodies) a transfer function G.
- the audio signals are processed (amplified, filtered and so on) according to the transfer function G, thus generating output audio signals 7, which are fed to the output unit 80, which may be a loudspeaker.
- the output unit 80 outputs signals 8 to be perceived by a user of the hearing system 1, which may be acoustic sound (or other signals) derived from the incoming acoustic sound 6.
- the audio signals S1 are also fed to the characterizing unit 40, which derives a set C1 of sound-characterizing data therefrom.
- This set C1 is fed to the evaluating unit 50, and the evaluating unit 50 also receives directional information D1, provided by the storage unit 60.
- the evaluating unit 50 derives, in dependence of the set C1 of sound-characterizing data and the directional information D1, a set of values T for parameters of the transfer function, and that set of values T is fed to the transmission unit 20.
- the transfer function G depends on one or more transfer function parameters. This allows to adjust the transfer function G by assigning different values to at least a part of these transfer function parameters.
- a link between the audio signals S1 (and, accordingly, the picked-up incoming acoustic sound 6) and the directional information D1 is generated, which is very valuable for assigning such values T to parameters of the transfer function G, which result in an optimized hearing sensation for the user in the current acoustical environment.
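The role of the evaluating unit 50 in the data flow of Fig. 1 can be sketched as follows: it combines per-direction sound-characterizing data with directional information and derives a set of values T for transfer function parameters. The function names, class labels and decision rules are assumptions for illustration; the patent leaves the concrete evaluation open.

```python
def evaluate(characterizations):
    """Illustrative evaluating unit.

    characterizations: list of (direction_deg, class_label) pairs,
    i.e., sound-characterizing data linked to directional info.
    Returns a set of transfer-function parameter values: a hearing
    program and a beam-former steering direction."""
    speech_dirs = [d for d, c in characterizations if c == "speech"]
    noise_present = any(c == "noise" for _, c in characterizations)
    if speech_dirs:
        program = "speech in noise" if noise_present else "speech"
        beam_direction = speech_dirs[0]   # steer towards the speech
    else:
        program = "noise" if noise_present else "quiet"
        beam_direction = None             # omnidirectional
    return {"program": program, "beam_direction": beam_direction}

# Speech ahead and noise from behind: choose a speech-in-noise
# program and steer the beam towards 0 degrees (the speech).
print(evaluate([(0, "speech"), (90, "quiet"), (180, "noise"), (270, "quiet")]))
```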
- the storage unit 60 is optional and may, e.g., be realized in form of some computer memory.
- the evaluating unit 50 might as well receive the directional information D1 from elsewhere, e.g., from the input unit 10.
- the directional information D1 is or comprises data related to a directional characteristic, with which the audio signals S1 have been obtained (by means of the input unit 10) from the incoming acoustic sound 6. It may, e.g., comprise data related to a head-related transfer function (HRTF) of the user and/or data related to polar patterns of employed microphones.
- Fig. 2 schematically shows a block diagram of a hearing system with classification and successive (consecutive) obtaining, with different directional characteristics, audio signals from acoustic sound.
- the embodiment is similar to that of Fig. 1 , but the input unit 10 and the characterizing unit 40 are depicted in greater detail.
- The input unit 10 comprises at least two input transducers M1, M2 (e.g., microphones), which derive raw audio signals R1 and R2, respectively, from incoming acoustic sound (not depicted in Fig. 2). Audio signals obtained by means of input transducers M1 and M2, respectively, are obtained with different directional characteristics: the directional characteristic that can be assigned to input transducer M1 is different from the directional characteristic that can be assigned to input transducer M2. This may be due to differences between the transducers themselves, but may also (at least in part) be due to the location at which the respective transducer is arranged, since this provides for different HRTFs.
- By means of the switch 14, one of the raw audio signals R1, R2 can be selected as audio signal S1 or S2, respectively, and fed to the characterizing unit 40.
- the switch 14 symbolizes or indicates a successive (consecutive) obtaining, with different directional characteristics, of audio signals from acoustic sound. The characterization thereof will then usually take place successively.
- the characterizing unit 40 comprises a feature extractor FE1 and a classifier CLF1.
- the feature extractor FE1 extracts features f1a,f1b,f1c from the fed-in audio signal S1, and features f2a,f2b,f2c from the fed-in audio signal S2, respectively.
- These sets of features, which in general may comprise one, two or more (maybe even of the order of ten or forty) features, are fed to the classifier CLF1, in which they are classified into one or more of several possible classes.
- the classification result is the sound-characterizing data C1 and C2, respectively, or is comprised therein.
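The feature-extraction step of the characterizing unit can be illustrated with a small sketch. The patent leaves the concrete features open; the three used here (RMS level, spectral centroid, zero-crossing rate) are common examples from the classification literature, chosen as assumptions for the demonstration.

```python
import numpy as np

def extract_features(audio, fs):
    """Extract a small illustrative feature set from one audio
    frame, as a feature extractor like FE1 might do."""
    rms = np.sqrt(np.mean(audio**2))                    # level
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), 1 / fs)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)  # spectral centroid
    zcr = np.mean(np.abs(np.diff(np.sign(audio)))) / 2      # zero crossings
    return np.array([rms, centroid, zcr])

fs = 16000
t = np.arange(fs // 10) / fs                 # 100 ms frame
f = extract_features(np.sin(2 * np.pi * 440 * t), fs)
# The spectral centroid of a pure 440 Hz tone lies at 440 Hz.
print(round(f[1]))
```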
- For deriving at least a part of the directional information D1, the evaluating unit 50 is operationally connected to the switch 14. Accordingly, the evaluating unit 50 "knows" whether a currently received set of sound-characterizing data is obtained from acoustic sound picked up with transducer M1 or with transducer M2. Besides the information with which of the transducers (M1 or M2) acoustic sound has been picked up, the evaluating unit 50 preferably shall also have information about the directional characteristic assigned to the corresponding transducers. Such information (e.g., on HRTFs and polar patterns) may be obtained from the position of switch 14 or from a storage module in the hearing system (not shown).
- Fig. 2 may be interpreted to represent, e.g., a hearing device of a monaural hearing system.
- Fig. 3 schematically shows a block diagram of a hearing system 1 with a beam former BF1 and classification.
- the input unit 10 comprises a beam former unit BF1 with a beam former controller BFC1, which controls the beam former.
- the beam former unit BF1 receives raw audio signals R1,R2 and can therefrom derive audio signals S1, wherein these audio signals S1 are obtained with a predetermined, adjustable directional characteristic. This is usually accomplished by delaying said raw audio signals R1,R2 with respect to each other and summing or subtracting the result.
- Both raw audio signals R1,R2 will usually be fed also to the transmission unit 20. Additionally or alternatively, said audio signals S1 can be fed to the transmission unit 20, too.
- the beam former can be adjusted to form a desired directional characteristic, i.e., the directional characteristic is set by means of the beam former.
- Data related to that desired directional characteristic are at least a part of the directional information D1 and can be transmitted from the beam former controller BFC1 to the evaluation unit 50.
- the beam former will have a preferred direction, i.e., it will be adjusted such that acoustic sound impinging on the transducers M1,M2 from that preferred direction (or angular range) is picked-up with relatively high sensitivity, while acoustic sound from other directions is damped.
- Audio signals may be obtained first with a first preferred direction (or, more generally, a first directional characteristic) and then with a second preferred direction (or, more generally, a second directional characteristic).
- a common evaluation of the (at least) two corresponding sets of sound-characterizing data and the corresponding directional information will take place.
- Fig. 4 shows an example for that.
- Fig. 4 schematically shows two possible exemplary directional characteristics P1 (solid line) and P2 (dashed line) of a microphone arrangement, e.g., of the two microphones M1, M2 in Fig. 3.
- the commonly used polar-pattern presentation is chosen; the 0°-direction runs along the hearing system user's nose.
- When the hearing system is worn by a user, the microphones M1, M2 will usually be on a side of the user's head, so that the (acoustic) head shadow will deform the cardioids of P1, P2 (deformation not shown). This effect can be considered, and accordingly corrected polar patterns P1, P2 can be obtained by making use of a head-related transfer function (HRTF).
- the two microphones M1,M2 may be worn on the same side of the user's head or on opposite sides.
- It is possible to adjust the beam former such that the acoustic environment is investigated in four quadrants, preferably with center directions at approximately 0°, 90°, 180° and 270°. This can be accomplished by simultaneously or successively adjusting the beam former such that sound originating from a location at 0°, 90°, 180° and 270°, respectively, is amplified more strongly or attenuated less than sound originating from other locations.
- The corresponding directional information can, e.g., be deduced from the four corresponding beam former settings. An evaluation of the corresponding four sets of sound-characterizing data together with their corresponding directional information is preferred.
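The successive four-quadrant investigation can be sketched as follows; `beamform` and `characterize` are hypothetical placeholders for the beam former and the characterizing unit of the hearing system:

```python
def scan_quadrants(r1, r2, beamform, characterize):
    """Successively steer the beam former to the four quadrant centres
    (0, 90, 180, 270 degrees) and pair each direction with the
    sound-characterizing data derived from the beamformed signal.

    beamform(r1, r2, angle) -> audio signal with that directional characteristic
    characterize(signal)    -> set of sound-characterizing data (e.g. a class)
    """
    result = {}
    for angle in (0.0, 90.0, 180.0, 270.0):
        s = beamform(r1, r2, angle)      # audio signal for this direction
        result[angle] = characterize(s)  # data C paired with direction D
    return result
```

The returned mapping is exactly the pairing of directional information with sound-characterizing data that the evaluation step works on.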
- Fig. 5 shows an example for that.
- In Fig. 5, a schematic diagram indicating a possibility for sectioning space with a beam former is shown.
- the front hemisphere and the sides are investigated in 30°-spaced-apart sections (polar angle ranges) ψ1 to ψ7, the width of which may also be about 30°, or a little larger, so that they overlap more strongly.
- the rest (of the back hemisphere) is investigated less precisely, since in most situations, a user looks approximately towards relevant sources of sound.
- Only two slices ψ8 and ψ9 are foreseen there. It would, of course, also be possible to continue in the back hemisphere with finer slices.
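The sectioning of Fig. 5 can be written down as a small table of (centre angle, width) pairs. The concrete centre angles and the two 90° back slices are assumptions made for this sketch; the patent only fixes the roughly 30° front sections and the coarser back hemisphere:

```python
def make_sectors():
    """Sectioning of space as in Fig. 5 (angles in degrees, 0 = nose
    direction): seven 30-degree-spaced slices for the front hemisphere
    and the sides, and two coarse slices for the back hemisphere.
    Widths may be chosen a little larger so neighbouring slices overlap."""
    front = [(c, 30.0) for c in range(-90, 91, 30)]  # psi_1 .. psi_7
    back = [(135.0, 90.0), (225.0, 90.0)]            # psi_8, psi_9
    return front + back
```

Nine sectors result, with the fine resolution concentrated where the user typically looks, i.e., towards the relevant sources of sound.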
- the evaluation unit 50 can provide the beam former controller BFC1 with data for new beam former parameters, so that possibly an improved directional characteristic can be chosen.
- Fig. 6 schematically shows a block diagram of a hearing system with a beam former, a localizer and with classification.
- the beam former controller BFC1 is realized by or comprised in a localizer L1.
- By means of the localizer L1, the directions of major sources of sound can be found, e.g., in a way known in the art, like in one of the above-mentioned publications WO 00/68703 A2 and EP 1326478 A2.
- the beam former controller BFC1 can control the beam former BF1 such that it focuses in such a direction.
- the localizer L1 also derives the approximate angular width of a source of acoustic sound.
- the beam former controller BFC1 controls the beam former BF1 accordingly, i.e., such that the directional characteristic set by means of the beam former BF1 matches not only the direction, but also the angular width of the sound source detected by means of the localizer L1.
- Fig. 7 schematically shows a block diagram of a method of operating a hearing system.
- the hearing system of Fig. 7 comprises a localizer, which functions as a beam former controller, and sound characterization is done by classification.
- Three beam formers are depicted in Fig. 7; nevertheless, any number of beam formers, in particular 1, 2, 3, 4, 5, 6 or more, may be foreseen. If more than one beam former is provided, the beam formers may work simultaneously, i.e., acoustic sound from different directions may be characterized at the same time.
- the beam forming (and classifying) may take place successively (at least in part).
- In Fig. 7, three input transducers M1,M2,M3 are shown, but two or four or more input transducers may be foreseen, which may be comprised in one hearing device, or which may be distributed among two hearing devices of the hearing system.
- Raw audio signals R1,R2,R3 from the input transducers M1,M2,M3, respectively (or audio signals derived therefrom), are fed to the localizer L1.
- the localizer L1 derives that (in this example) three main sources of acoustic sound Q1,Q2,Q3 exist, which are located at polar angles of about 110°, 190° and 330°, respectively.
- first, second and third audio signals S1, S2 and S3, respectively, are generated such that each of them preferably contains acoustic sound stemming from one of the main sources of acoustic sound Q1, Q2 and Q3, respectively.
- These audio signals S1, S2 and S3 are separately characterized, in this example by feature extraction and classifying.
- the classes according to which an acoustic scene is classified are speech, speech in noise, noise and music.
- Each classification result may comprise similarity values indicative of the likeness of the current acoustic scene and an acoustic scene represented by a certain class ("mixed-mode" classification), as shown in Fig. 7; or simply one class may be output, namely the one whose corresponding acoustic scene is most similar to the current acoustic scene.
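A mixed-mode classifier that outputs one similarity value per class could look as follows. The softmax-over-distances rule and the prototype feature vectors are illustrative assumptions, not the specific classifier of the patent:

```python
import numpy as np

def classify_mixed_mode(features, prototypes):
    """Return one similarity value per class ("mixed-mode" classification)
    instead of a single winning class.

    prototypes : dict mapping class names such as "speech",
    "speech in noise", "noise", "music" to prototype feature vectors.
    Similarities are a softmax over negative Euclidean distances, so they
    are positive and sum to one."""
    classes = list(prototypes)
    d = np.array([np.linalg.norm(np.asarray(features) - np.asarray(prototypes[c]))
                  for c in classes])
    w = np.exp(-d)          # closer prototype -> larger weight
    w = w / w.sum()         # normalize to similarity values
    return dict(zip(classes, w))
```

Outputting the full similarity vector (rather than only the winning class) is what allows the later evaluation to weigh several plausible scene types per direction.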
- The link between the knowledge obtained from the localizer, namely that sources of acoustic sound are present in the above-mentioned three main directions, and the findings obtained from the characterizing units (feature extractors and classifiers) about what kind of sound source is apparently located in the respective direction, can be made in the evaluation unit 50. This way, the acoustic environment can be captured rather precisely.
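The linking step in the evaluation unit can be sketched as pairing each localizer direction with its classification result; the dictionary format is a hypothetical representation of the captured acoustic environment, chosen only for this sketch:

```python
def build_acoustic_scene(directions_deg, classifications):
    """Combine the localizer output (main source directions, e.g.
    [110, 190, 330]) with the per-direction classification results into a
    simple description of the acoustic scene, as evaluation unit 50 would.

    classifications : one mixed-mode result (class -> similarity) per direction."""
    scene = []
    for angle, sim in zip(directions_deg, classifications):
        kind = max(sim, key=sim.get)  # most similar class for this direction
        scene.append({"direction": angle, "type": kind, "similarity": sim[kind]})
    return scene
```

The resulting list states, per main direction, what kind of sound source is apparently located there and how confident the classifier is.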
- Figs. 8 and 9 schematically show environmental situations (acoustic scenes) and beam former opening angles realized by adapting the transfer function G.
- Fig. 8 depicts a 4-person-at-a-table situation.
- the user U and three other persons (speakers) A1, A2, A3 talk to each other.
- A noise source N, e.g., a radio or TV, is present, too.
- the transfer function would be adjusted such that A1 would be highlighted (i.e., A1 would be provided with increased amplification), whereas A2 would be somewhat damped, and A3 would basically be muted.
- the noise source N would be only slightly damped.
- The corresponding beam former opening angle ψ' is indicated by dashed lines in Fig. 8. Accordingly, the user U would hardly or not at all hear when A3 gave comments, and the noise source would decrease the intelligibility of the speakers. That simple approach obviously does not give satisfying results.
- Fig. 9 depicts a 6-person-at-a-table situation.
- the simple classifier-beamformer approach described above in conjunction with the example of Fig. 7 would basically prevent the user U from hearing comments from his neighbors A1 and A5 (see dashed lines, ψ').
- An evaluation according to the invention can instead yield transfer function settings in the form of values for transfer function parameters, in particular beam former parameters.
- Comments from A1 and A5 could then be perceived by the user without turning his head.
- Fig. 10 shows an embodiment similar to the one of Fig. 3 , but the input unit 10 comprises a second beam former BF2 with a second beam former controller BFC2, and a second feature extractor FE2 and a second classifier CLF2.
- the beam former controllers BFC1, BFC2 may be realized in the form of localizers (cf., for example, Figs. 6 and 7). As depicted, these additional parts BFC2, BF2, FE2 and CLF2 may work simultaneously with their counterparts.
- In the evaluation unit 50, C1 and D1 as well as C2 and D2 will be considered. It is possible to provide further beam formers and characterizing units for parallel processing and time savings; it is even possible to adjust their number according to current needs, e.g., if a localizer is used, their number could match the number of sources of sound that are found.
- the output unit 80 may have one or two output transducers (e.g., loudspeakers or implanted electrical-to-electrical or electrical-to-mechanical converters). If two output transducers are present, these will typically be fed with two different (partial) output audio signals 7.
- Fig. 11 shows schematically a block diagram of a binaural hearing system with classification.
- each hearing device of the hearing system may have as few as one input transducer (M1 and M2, respectively).
- the transducers M1 and M2 may, by themselves, have the same directional characteristic. Due to the fact that the hearing devices (and therefore also the transducers M1 and M2) are worn on different sides of the user's head, the finally resulting directional characteristics P1 and P2 are different from each other.
- P1 and P2 are roughly sketched in Fig. 11 . They may be obtained experimentally or from calculations. In calculations, HRTFs will usually be involved for modelling the so-called head shadow.
- Typically, the directional characteristics P1 and P2 in an embodiment like the one shown in Fig. 11 have a maximum sensitivity somewhere between 30° and 60° off the straight-forward direction. In Fig. 11, these directions are indicated by arrows labelled ψ1 and ψ2, respectively.
- a "mixed-mode" classification (described above) is used.
- the directional information D1,D2 may comprise HRTF information and/or information on the directional characteristics of the microphones M1,M2, preferably both (which would approximately correspond to experimentally determined directional characteristics when the hearing system is worn, by the user or by a dummy).
- the evaluation may take place in one of the two hearing devices, in which case at least one of the sets C1,C2 of sound-characterizing data has to be transmitted from one hearing device to the other. Or the evaluation may take place in both hearing devices, in which case the sets C1,C2 of sound-characterizing data have to be interchanged between the two hearing devices. It would also be possible to do the feature extraction and classification in only one of the hearing devices, in which case the audio signals S1 or S2 have to be transmitted from one hearing device to the other.
- the transmission unit 20 and transfer function G may be realized in one or in both hearing devices, and they may process audio data for one or for both hearing devices.
- the hearing system might be a cross-link hearing system, which picks up acoustic sound on both sides of the head, but outputs sound only on one side.
- Fig. 11 may be interpreted that way.
- Fig. 12 schematically depicts the transmission unit 20 in more detail for a case, in which a "stereo" output of the hearing system is generated.
- Fig. 12 may, for such an embodiment, be understood as the lower part of Fig. 11 .
- the set of values T for transfer function parameters may have two subsets T L and T R for the left and the right side, respectively, and the transfer function may comprise two partial transfer functions G L and G R for the left and the right side, respectively.
- the partial output audio signals 7 L ,7 R are obtained via said partial transfer functions G L and G R and are fed to separate output transducers 80 L ,80 R to be located at different sides of the user's head.
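The split into partial transfer functions can be sketched as follows. Modelling G_L and G_R as simple per-input gain pairs (the parameter subsets T_L and T_R) is an assumption made purely for illustration; the actual transfer functions may be arbitrarily more complex:

```python
import numpy as np

def stereo_transfer(s1, s2, t_left, t_right):
    """Sketch of the split transfer function of Fig. 12: the parameter set T
    has subsets T_L and T_R, and the partial transfer functions G_L and G_R
    derive the partial output audio signals 7_L and 7_R fed to the left and
    right output transducers 80_L and 80_R.

    t_left, t_right : gain pairs (one gain per input audio signal)."""
    def g(signals, gains):
        # each partial transfer function: weighted sum of the input signals
        return sum(gain * sig for gain, sig in zip(gains, signals))
    out_left = g((s1, s2), t_left)    # partial output audio signal 7_L
    out_right = g((s1, s2), t_right)  # partial output audio signal 7_R
    return out_left, out_right
```

Choosing different subsets T_L and T_R is what lets a single evaluation result produce different processing for the two sides of the head.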
- In a binaural system, it can be decided whether the sound characterization and/or the evaluation and/or the transfer function processing shall take place in one or in both of the hearing devices. From this results the necessity to transmit input audio signals, sound-characterizing data, sets of values for transfer function parameters of (partial) transfer functions and/or (partial) output audio signals from one of the two hearing devices to the other.
- Fig. 13 is similar to Fig. 12 and schematically depicts the transmission unit 20 for a case, in which a "stereo" output of the hearing system is generated.
- Fig. 13 may, for such an embodiment, be understood as the lower part of Fig. 11, and it shall be illustrated that both hearing devices of the binaural hearing system may, in fact, have the same hardware and (in the case of a digital hearing system) also (virtually) the same software (in particular: the same algorithms for characterization and evaluation); yet, each hearing device should preferably "know" whether it is the "left" or the "right" hearing device.
- the left part of Fig. 13 depicts parts of the left hearing device, and the right part of Fig. 13 depicts parts of the right hearing device.
- the characterizing unit 40 has one part 40 L ,40 R on each side
- the evaluation unit 50 is distributed among the two hearing devices of the hearing system, having two separate (partial) evaluation units 50 L ,50 R .
- the transmission unit 20 is distributed among the two hearing devices of the hearing system, having two separate (partial) transmission units 20 L ,20 R . It is possible to process in the (partial) transmission unit 20 L only the audio signals S1 and in the (partial) transmission unit 20 R only the audio signals S2 (both depicted as solid arrows in Fig. 13 ). It is optionally possible to process in both (partial) transmission units 20 L ,20 R both audio signals S1 and S2 (depicted as dashed arrows in Fig. 13 ).
- While the invention may be realized with only one input transducer with fixed directional characteristics per side in a binaural hearing system, it can be advantageous to provide for the possibility of obtaining (on one, or on each, side) audio signals with different directional characteristics.
- This can be realized by using input transducers with variable directional characteristics or by the provision of at least two input transducers (e.g., so as to realize a beam former).
- the same physical beam former may be used for both tasks, or different ones may be used; beam formers may also be realized in the form of software, so that various beam former software modules may run in parallel or successively, for finding values for transfer function parameters and for the transfer function itself, i.e., for signal processing in the transmission unit.
- At least one pair of data comprising
Claims (16)
- Method for operating a hearing system (1) comprising an input unit (10), an output unit (80) and a transmission unit (20) operationally interconnecting the input unit (10) and the output unit (80), the transmission unit (20) implementing a transfer function (G), which describes how audio signals (S1; R1) generated by the input unit (10) are processed in order to derive audio signals (7) fed to the output unit (80), and which can be adjusted by means of one or more transfer function parameters, the method comprising the steps of: a1) obtaining, by means of the input unit (10) and with a first directional characteristic (P1) of the input unit (10), first audio signals (S1) from incoming acoustic sound (6); b1) deriving from the first audio signals (S1) a first set (C1) of sound-characterizing data; c) deriving, in dependence of first directional information (D1), being data comprising information on the first directional characteristic (P1), and of the first set (C1) of sound-characterizing data, a value (T) for each of said at least one transfer function parameter.
- Method according to claim 1, the input unit (10) comprising a first input transducer (M1), a second input transducer (M2) and at least a first beam former unit (BF1), the method further comprising the steps of: d1) feeding first raw audio signals (R1) derived from the first input transducer (M1) to said at least one beam former unit (BF1); d2) feeding second raw audio signals (R2) derived from the second input transducer (M2) to said at least one beam former unit (BF1); e1) processing the first (R1) and the second (R2) raw audio signals in said at least one beam former unit (BF1), so as to set the first directional characteristic (P1) and to derive the first audio signals (S1).
- Method according to claim 2, the input unit (10) further comprising at least a first localizer unit (L1), the method further comprising the steps of: f1) feeding the first raw audio signals (R1) to said at least one first localizer unit (L1); f2) feeding the second raw audio signals (R2) to said at least one first localizer unit (L1); g1) processing the first (R1) and the second (R2) raw audio signals in said at least one localizer unit (L1), so as to derive data, herein referred to as localizing data, which are comprised in the first directional information (D1); h1) controlling said at least one first beam former unit (BF1) in dependence of the localizing data.
- Method according to one of the preceding claims, step b1) comprising the steps of: i1) extracting a first set of features (f1a, f1b, f1c) from the first audio signals (S1); and j1) classifying the first set of features (f1a, f1b, f1c) according to a set of classes, the result of the classification being comprised in the first set (C1) of sound-characterizing data.
- Method according to claim 4, the first audio signals (S1) being derived from a current acoustic scene, and the result of the classification comprising, for at least one of the classes, in particular for at least two of the classes, data indicative of the similarity of the current acoustic scene and an acoustic scene of which the respective class is representative.
- Method according to one of the preceding claims, further comprising the steps of: a2) obtaining, by means of the input unit (10) and with a second directional characteristic (P2) of the input unit (10), which is different from the first directional characteristic (P1), second audio signals (S2) from the incoming acoustic sound (6); b2) deriving from the second audio signals (S2) a second set (C2) of sound-characterizing data; wherein in step c) the derivation of a value (T) for each of said at least one transfer function parameter is furthermore carried out in dependence of the second set (C2) of sound-characterizing data and of second directional information (D2), being data comprising information on the second directional characteristic (P2).
- Method according to claim 6, wherein steps a1) and a2) are carried out simultaneously or successively, and steps b1) and b2) are carried out simultaneously or successively.
- Method according to claim 6 or claim 7, the hearing system (1) comprising a first and a second hearing device, which are operationally connected to each other and which are to be worn in or near the left ear and the right ear, respectively, of a user (U) of the hearing system (1), each of the two hearing devices comprising at least one input transducer (M1; M2), and the first (D1) and/or the second (D2) directional information comprising information derived from a head-related transfer function (HRTF1; HRTF2).
- Method according to one of the preceding claims, the derived value or values (T) constituting a set of values (T) indicative of an acoustic scene.
- Method according to one of the preceding claims, signals (8) to be perceived by a user (U) of the hearing system (1) being generated by: v) obtaining output audio signals (7) by processing the audio signals (S1; R1) generated by the input unit (10) according to the transfer function (G) using the derived value or values (T); w) converting the output audio signals (7) into the signals (8) to be perceived by the user (U) of the hearing system (1).
- Hearing system (1) comprising: an input unit (10) for picking up, with a first directional characteristic (P1) of the input unit (10), incoming acoustic sound (6) and deriving therefrom first audio signals (S1); an output unit (80) for receiving output audio signals (7) and converting them into signals (8) to be perceived by a user (U) of the hearing system (1); a transmission unit (20), which operationally interconnects the input unit (10) and the output unit (80), and which implements a transfer function (G), which can be adjusted by means of one or more transfer function parameters and which describes how audio signals (S1; R1) generated by the input unit (10) are processed in order to derive the output audio signals (7); a characterizing unit (40) for deriving from the first audio signals (S1) a first set (C1) of sound-characterizing data; an evaluation unit (50) for deriving, in dependence of the first set (C1) of sound-characterizing data and of first directional information (D1), being data comprising information on the first directional characteristic (P1), a value (T) for each of said at least one transfer function parameter.
- Hearing system (1) according to claim 11, further comprising a storage unit (60) containing data derived from a head-related transfer function (HRTF1; HRTF2) and/or data related to a directional characteristic of at least a first input transducer (M1) of the input unit (10), the first directional information (D1) being at least in part derived from the storage unit (60).
- Hearing system (1) according to claim 11 or claim 12, the input unit (10) comprising at least a first input transducer (M1), at least a second input transducer (M2), at least one beam former unit (BF1), which is operationally connected to the first (M1) and the second (M2) input transducer, and a beam former controller (BFC1) for controlling said at least one beam former unit (BF1), the first directional information (D1) being at least in part derived from the beam former controller (BFC1).
- Hearing system (1) according to claim 13, the input unit (10) comprising at least one localizer (L1) operationally connected to the first (M1) and the second (M2) input transducer, for determining the location of sound sources (Q1,Q2,Q3) and for providing said at least one beam former controller (BFC1) with data related to the location of the sound sources (Q1,Q2,Q3).
- Hearing system (1) according to one of claims 11 to 14, the characterizing unit (40) comprising at least one feature extractor (FE1) for extracting a first set of features (f1a, f1b, f1c) from the first audio signals (S1) and at least one classifier (CLF1) for classifying the first set of features (f1a, f1b, f1c) according to a set of classes, the result of the classification being comprised in the first set (C1) of sound-characterizing data.
- Hearing system (1) according to one of claims 11 to 15, which is a hearing-aid system comprising at least one hearing-aid device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20060015269 EP1858291B1 (fr) | 2006-05-16 | 2006-07-21 | Système auditif et méthode de déterminer information sur une scène sonore |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP06114033 | 2006-05-16 | ||
EP20060015269 EP1858291B1 (fr) | 2006-05-16 | 2006-07-21 | Système auditif et méthode de déterminer information sur une scène sonore |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1858291A1 (fr) | 2007-11-21
EP1858291B1 (fr) | 2011-10-05
Family
ID=37903794
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20060015269 Active EP1858291B1 (fr) | 2006-05-16 | 2006-07-21 | Système auditif et méthode de déterminer information sur une scène sonore |
Country Status (1)
Country | Link |
---|---|
EP (1) | EP1858291B1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013170885A1 (fr) | 2012-05-15 | 2013-11-21 | Phonak Ag | Procédé de mise en fonctionnement d'un appareil auditif et appareil auditif |
RU2759715C2 (ru) * | 2017-01-03 | 2021-11-17 | Конинклейке Филипс Н.В. | Звукозапись с использованием формирования диаграммы направленности |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102007010601A1 (de) * | 2007-03-05 | 2008-09-25 | Siemens Audiologische Technik Gmbh | Hörsystem mit verteilter Signalverarbeitung und entsprechendes Verfahren |
US8331594B2 (en) * | 2010-01-08 | 2012-12-11 | Sonic Innovations, Inc. | Hearing aid device with interchangeable covers |
DE102010026381A1 (de) * | 2010-07-07 | 2012-01-12 | Siemens Medical Instruments Pte. Ltd. | Verfahren zum Lokalisieren einer Audioquelle und mehrkanaliges Hörsystem |
EP2611220A3 (fr) * | 2011-12-30 | 2015-01-28 | Starkey Laboratories, Inc. | Prothèses auditives avec formeur de faisceaux adaptatif en réponse à la parole hors-axe |
DE102017205652B3 (de) * | 2017-04-03 | 2018-06-14 | Sivantos Pte. Ltd. | Verfahren zum Betrieb einer Hörvorrichtung und Hörvorrichtung |
DE102022201706B3 (de) * | 2022-02-18 | 2023-03-30 | Sivantos Pte. Ltd. | Verfahren zum Betrieb eines binauralen Hörvorrichtungssystems und binaurales Hörvorrichtungssystem |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6449216B1 (en) * | 2000-08-11 | 2002-09-10 | Phonak Ag | Method for directional location and locating system |
EP1326478B1 (fr) * | 2003-03-07 | 2014-11-05 | Phonak Ag | Procédé de génération des signaux de commande et système d'appareil auditif binaural |
WO2005029914A1 (fr) * | 2003-09-19 | 2005-03-31 | Widex A/S | Procede de commande de la directionnalite de la caracteristique de reception sonore d'une protese auditive et appareil de traitement d'un signal pour prothese auditive presentant une caracteristique directionnelle pouvant etre commandee |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA HR MK YU |
|
17P | Request for examination filed |
Effective date: 20080520 |
|
17Q | First examination report despatched |
Effective date: 20080625 |
|
AKX | Designation fees paid |
Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: NV Representative=s name: TROESCH SCHEIDEGGER WERNER AG Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602006024796 Country of ref document: DE Effective date: 20111208 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: VDEP Effective date: 20111005 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005 |
|
LTIE | Lt: invalidation of european patent or patent extension |
Effective date: 20111005 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 527831 Country of ref document: AT Kind code of ref document: T Effective date: 20111005 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20120205 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005 Ref country code: BE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20120206
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20120106
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20120105 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005 |
|
26N | No opposition filed |
Effective date: 20120706 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602006024796 Country of ref document: DE Effective date: 20120706 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20120731 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: ST Effective date: 20130329 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20120116
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20120731 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20120721 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20111005 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20120721 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20060721 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R084 Ref document number: 602006024796 Country of ref document: DE |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: CH Payment date: 20230802 Year of fee payment: 18 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240729 Year of fee payment: 19 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20240729 Year of fee payment: 19 |