CA2439427A1 - Method for determining an acoustic environment situation, application of the method and hearing aid - Google Patents
Method for determining an acoustic environment situation, application of the method and hearing aid
- Publication number
- CA2439427A1
- Authority
- CA
- Canada
- Prior art keywords
- kin
- class information
- processing stage
- processing
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/41—Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/40—Arrangements for obtaining a desired directivity characteristic
- H04R25/407—Circuits for combining signals of a plurality of transducers
Abstract
The invention relates to a method and a device for determining an acoustic environment situation. The method processes an acoustic input signal (IN), preferably picked up by at least one microphone, in at least two processing stages (S1, ..., Sn). At least one of these processing stages comprises an extraction phase in which characteristic features are extracted from the input signal (IN), and each processing stage comprises an identification phase in which the extracted features are classified. Based on this classification, class information (KI1, ..., KIn; KI1', ..., KIn') characterizing or identifying the acoustic environment situation is generated in at least one processing stage (S1, ..., Sn). The invention also relates to applications of the method in hearing aids and to a hearing aid.
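The staged extraction-and-identification scheme described in the abstract can be sketched in Python. The concrete features (RMS level, zero-crossing rate), the thresholds, and the class names below are illustrative assumptions, not values specified by the patent:

```python
import numpy as np

def extract_features(signal, feature_set):
    """Extraction phase: compute a named set of characteristic features.
    RMS level and zero-crossing rate are illustrative choices."""
    feats = {}
    if "level" in feature_set:
        feats["level"] = float(np.sqrt(np.mean(signal ** 2)))
    if "zcr" in feature_set:
        # Fraction of sign changes between consecutive samples.
        feats["zcr"] = float(np.mean(np.abs(np.diff(np.sign(signal)))) / 2)
    return feats

def classify_stage1(feats):
    # Identification phase of stage S1: coarse classification (assumed threshold).
    return "quiet" if feats["level"] < 0.1 else "loud"

def classify_stage2(feats, coarse_class):
    # Identification phase of stage S2, conditioned on the class information
    # from stage S1 (class names are assumptions).
    if coarse_class == "loud":
        return "speech_in_noise" if feats["zcr"] > 0.1 else "noise"
    return "speech" if feats["zcr"] > 0.1 else "silence"

def identify_scene(signal):
    # Stage S1 produces class information KI1; stage S2 uses it to produce KI2.
    ki1 = classify_stage1(extract_features(signal, {"level"}))
    ki2 = classify_stage2(extract_features(signal, {"zcr"}), ki1)
    return ki1, ki2
```

A silent input is classified as ("quiet", "silence"), while a loud, rapidly alternating input reaches the "loud" branch of the second stage.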
Claims (21)
1. Method for identifying an acoustic scene, whereas the method comprises the steps that an acoustic input signal (IN) preferably recorded by at least one microphone is processed in at least two processing stages (S1, ..., Sn) in such a manner - that an extraction phase is provided in at least one of the at least two processing stages (S1, ..., Sn), in which extraction phase characteristic features are extracted from the input signal (IN), and - that an identification phase is provided in each processing stage (S1, ..., Sn), in which identification phase the extracted characteristic features are classified, whereby class information (KI1, ..., KIn; KI1', ..., KIn') is generated according to the classification of the features in at least one of the processing stages (S1, ..., Sn), which class information (KI1, ..., KIn; KI1', ..., KIn') characterizes or identifies the acoustic scene.
2. Method according to claim 1, characterized in that an extraction phase is provided in each processing stage (S1, ..., Sn), in which extraction phase characteristic features are extracted from the input signal (IN).
3. Method according to claim 1 or 2, characterized in that a manner of processing in a processing stage (S1, ..., Sn) is selected according to the class information (KI1, ..., KIn; KI1', ..., KIn') obtained in another processing stage (S1, ..., Sn).
4. Method according to claim 2 or 3, characterized in that the class information (KI1, ..., KIn; KI1', ..., KIn') obtained in the identification phase of a processing stage i (S1, ..., Sn) determines a processing manner in one of the following, inferior processing stages i+1 (S1, ..., Sn).
5. Method according to claim 4, characterized in that, according to class information (KI1, ..., KIn; KI1', ..., KIn') obtained in the processing stage i (S1, ..., Sn), specific features are selected in the extraction phase of the following, inferior processing stage i+1 (S1, ..., Sn) and/or specific classification methods are selected in the identification phase of the following, inferior processing stage i+1 (S1, ..., Sn).
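The selection described in claims 4 and 5 — class information from stage i determining both the feature set and the classification method of the subordinate stage i+1 — can be sketched as a lookup. The class names, feature names, and method names here are illustrative assumptions:

```python
# Hypothetical mapping from the class information KI_i of stage i to the
# extraction and identification setup of the following stage i+1.
STAGE_CONFIG = {
    "quiet": {"features": {"zcr"},          "classifier": "minimal_distance"},
    "loud":  {"features": {"zcr", "level"}, "classifier": "rule_based"},
}

def configure_next_stage(class_info):
    # Returns (feature set, classification method) for stage i+1,
    # given the class information produced in stage i.
    cfg = STAGE_CONFIG.get(class_info)
    if cfg is None:
        raise ValueError(f"unknown class information: {class_info!r}")
    return cfg["features"], cfg["classifier"]
```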
6. Method according to one of the preceding claims, characterized in that one of the following classification methods is used in the identification phase:
- Hidden Markov Models;
- Fuzzy Logic;
- Bayes Classifier;
- Rule-based Classifier;
- Neural Networks;
- Minimal Distance.
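The last method in the list, minimal distance, assigns a feature vector to the class whose prototype vector is nearest. A minimal sketch follows; the prototype values and class names are made-up illustrations, not values from the patent:

```python
import numpy as np

# Hypothetical class prototypes in a two-dimensional feature space.
PROTOTYPES = {
    "speech": np.array([0.3, 0.8]),
    "music":  np.array([0.7, 0.4]),
    "noise":  np.array([0.9, 0.9]),
}

def minimal_distance_classify(features):
    # Compute the Euclidean distance to each prototype and pick the nearest.
    distances = {name: float(np.linalg.norm(features - proto))
                 for name, proto in PROTOTYPES.items()}
    return min(distances, key=distances.get)
```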
7. Method according to one of the preceding claims, characterized in that technical- and/or auditory-based features are extracted in the extraction phase.
8. Method according to one of the preceding claims, characterized in that a post-processing phase is provided in at least one processing stage (S1, ..., Sn) subsequent to the identification phase, in which post-processing phase the class information (KI1, ..., KIn) is revised in order to generate revised class information (KI1', ..., KIn').
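One plausible realization of such a post-processing phase is temporal smoothing: revising the raw class information by majority vote over a sliding window so that short-lived misclassifications do not flip the result. The window length and voting strategy below are assumptions, not taken from the patent:

```python
from collections import Counter, deque

class PostProcessor:
    """Illustrative post-processing unit: turns raw class information KI
    into revised class information KI' by majority vote over recent frames."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def revise(self, class_info):
        self.history.append(class_info)
        # KI' = most frequent class in the recent history.
        return Counter(self.history).most_common(1)[0][0]
```

With a window of three, a single outlier frame is suppressed and the previous class is retained.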
9. Use of the method according to one of the claims 1 to 8 for the adjustment of at least one hearing device to a momentary acoustic scene.
10. Use according to claim 9, characterized in that a hearing program or a transfer function between at least one microphone and a speaker in a hearing device is selected according to a determined acoustic scene.
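The selection in claim 10 — a determined acoustic scene choosing a hearing program — amounts to a lookup from scene to program. The scene and program names below are assumptions for illustration:

```python
# Hypothetical scene-to-program mapping; real hearing devices would map to
# concrete transfer-function parameter sets.
PROGRAMS = {
    "speech":          "speech_in_quiet",
    "speech_in_noise": "directional_plus_noise_reduction",
    "music":           "wideband_music",
    "noise":           "comfort",
}

def select_program(scene, default="omnidirectional"):
    # Fall back to a default program for scenes the mapping does not cover.
    return PROGRAMS.get(scene, default)
```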
11. Use of the method according to one of the claims 1 to 8 for speech recognition.
12. Device for identifying an acoustic scene with a feature extraction unit (M) which is operatively connected to a classification unit (C) in order to process an input signal (IN), characterized in that at least two processing stages (S1, ..., Sn) are provided, a feature extraction unit (F1, ..., Fn) being contained in at least one of the at least two processing stages (S1, ..., Sn), and a classification unit (C1, ..., Cn) being contained in each processing stage (S1, ..., Sn), that the input signal (IN) is fed to the feature extraction units (F1, ..., Fn) and that class information (KI1, ..., KIn) is generated by the classification units (C1, ..., Cn).
13. Device according to claim 12, characterized in that a feature extraction unit (F1, ..., Fn) is provided in each processing stage (S1, ..., Sn).
14. Device according to claim 12 or 13, characterized in that the class information (KI1, ..., KIn; KI1', ..., KIn') is fed to other processing stages (S1, ..., Sn).
15. Device according to one of the claims 12 to 14, characterized in that the class information (KI1, ..., KIn; KI1', ..., KIn') of a processing stage i (S1, ..., Sn) is fed to a following, inferior processing stage i+1 (S1, ..., Sn).
16. Device according to claim 15, characterized in that the class information (KI1, ..., KIn; KI1', ..., KIn') of a processing stage i (S1, ..., Sn) is fed to a feature extraction unit (F1, ..., Fn) of a following, inferior processing stage i+1 (S1, ..., Sn), and/or that the class information (KI1, ..., KIn; KI1', ..., KIn') of a processing stage i (S1, ..., Sn) is fed to a classification unit (C1, ..., Cn) of a following, inferior processing stage i+1 (S1, ..., Sn).
17. Device according to one of the claims 12 to 16, characterized in that the class information (KI1, ..., KIn) obtained in at least one processing stage (S1, ..., Sn) is fed to a post-processing unit (P1, ..., Pn) in order to generate revised class information (KI1', ..., KIn').
18. Device according to claim 12 or 13, characterized in that the class information (KI1, ..., KIn) of all processing stages (S1, ..., Sn) is fed to a decision unit (ED).
19. Device according to claim 18, characterized in that the decision unit (ED) is operatively connected to at least one of the feature extraction units (F1, ..., Fn) and/or to at least one of the classification units (C1, ..., Cn).
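The decision unit (ED) of claims 18 and 19 receives the class information of all processing stages and derives a single final decision. Confidence-weighted voting is one assumed strategy; the patent does not prescribe a particular one:

```python
def decision_unit(stage_outputs):
    """stage_outputs: list of (class_info, confidence) pairs, one per stage.
    Returns the class with the highest accumulated confidence."""
    scores = {}
    for class_info, confidence in stage_outputs:
        scores[class_info] = scores.get(class_info, 0.0) + confidence
    return max(scores, key=scores.get)
```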
20. Hearing device with a transfer unit (200) which is, on its input side, operatively connected to at least one microphone and, on its output side, to a converter unit, in particular to a speaker, and with a device according to one of the claims 12 to 17 for generating class information (KI1, ..., KIn; KI1', ..., KIn'), whereas the class information (KI1, ..., KIn; KI1', ..., KIn') is fed to the transfer unit (200).
21. Hearing device according to claim 20, characterized in that an input unit (300) is provided which is operatively connected to the transfer unit (200) and/or to the device according to one of the claims 12 to 17.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CH2002/000049 WO2002032208A2 (en) | 2002-01-28 | 2002-01-28 | Method for determining an acoustic environment situation, application of the method and hearing aid |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2439427A1 (en) | 2002-04-25 |
CA2439427C CA2439427C (en) | 2011-03-29 |
Family
ID=4358282
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA2439427A Expired - Lifetime CA2439427C (en) | 2002-01-28 | 2002-01-28 | Method for determining an acoustic environment situation, application of the method and hearing aid |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1470735B1 (en) |
JP (1) | JP3987429B2 (en) |
AU (2) | AU2002224722B2 (en) |
CA (1) | CA2439427C (en) |
WO (1) | WO2002032208A2 (en) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AUPS247002A0 (en) * | 2002-05-21 | 2002-06-13 | Hearworks Pty Ltd | Programmable auditory prosthesis with trainable automatic adaptation to acoustic conditions |
US7889879B2 (en) | 2002-05-21 | 2011-02-15 | Cochlear Limited | Programmable auditory prosthesis with trainable automatic adaptation to acoustic conditions |
EP2254350A3 (en) * | 2003-03-03 | 2014-07-23 | Phonak AG | Method for manufacturing acoustical devices and for reducing wind disturbances |
US8027495B2 (en) | 2003-03-07 | 2011-09-27 | Phonak Ag | Binaural hearing device and method for controlling a hearing device system |
EP1320281B1 (en) | 2003-03-07 | 2013-08-07 | Phonak Ag | Binaural hearing device and method for controlling such a hearing device |
DK1326478T3 (en) | 2003-03-07 | 2014-12-08 | Phonak Ag | Method for producing control signals and binaural hearing device system |
US20040175008A1 (en) | 2003-03-07 | 2004-09-09 | Hans-Ueli Roeck | Method for producing control signals, method of controlling signal and a hearing device |
ATE527829T1 (en) * | 2003-06-24 | 2011-10-15 | Gn Resound As | BINAURAL HEARING AID SYSTEM WITH COORDINATED SOUND PROCESSING |
US6912289B2 (en) | 2003-10-09 | 2005-06-28 | Unitron Hearing Ltd. | Hearing aid and processes for adaptively processing signals therein |
DE10356093B3 (en) * | 2003-12-01 | 2005-06-02 | Siemens Audiologische Technik Gmbh | Hearing aid with adaptive signal processing of received sound waves dependent on identified signal source direction and signal classification |
US20060182295A1 (en) | 2005-02-11 | 2006-08-17 | Phonak Ag | Dynamic hearing assistance system and method therefore |
US7957548B2 (en) | 2006-05-16 | 2011-06-07 | Phonak Ag | Hearing device with transfer function adjusted according to predetermined acoustic environments |
WO2007131815A1 (en) * | 2006-05-16 | 2007-11-22 | Phonak Ag | Hearing device and method for operating a hearing device |
DK1858292T4 (en) | 2006-05-16 | 2022-04-11 | Phonak Ag | Hearing device and method of operating a hearing device |
US8249284B2 (en) | 2006-05-16 | 2012-08-21 | Phonak Ag | Hearing system and method for deriving information on an acoustic scene |
US8605923B2 (en) | 2007-06-20 | 2013-12-10 | Cochlear Limited | Optimizing operational control of a hearing prosthesis |
EP2192794B1 (en) | 2008-11-26 | 2017-10-04 | Oticon A/S | Improvements in hearing aid algorithms |
EP2569955B1 (en) | 2010-05-12 | 2014-12-03 | Phonak AG | Hearing system and method for operating the same |
EP2596647B1 (en) | 2010-07-23 | 2016-01-06 | Sonova AG | Hearing system and method for operating a hearing system |
WO2010133703A2 (en) | 2010-09-15 | 2010-11-25 | Phonak Ag | Method and system for providing hearing assistance to a user |
JP2012083746A (en) * | 2010-09-17 | 2012-04-26 | Kinki Univ | Sound processing device |
US20150139468A1 (en) * | 2012-05-15 | 2015-05-21 | Phonak Ag | Method for operating a hearing device as well as a hearing device |
CN112954569B (en) * | 2021-02-20 | 2022-10-25 | 深圳市智听科技有限公司 | Multi-core hearing aid chip, hearing aid method and hearing aid |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE60120949T2 (en) * | 2000-04-04 | 2007-07-12 | Gn Resound A/S | A HEARING PROSTHESIS WITH AUTOMATIC HEARING CLASSIFICATION |
WO2001020965A2 (en) * | 2001-01-05 | 2001-03-29 | Phonak Ag | Method for determining a current acoustic environment, use of said method and a hearing-aid |
2002
- 2002-01-28 AU AU2002224722A patent/AU2002224722B2/en not_active Ceased
- 2002-01-28 JP JP2002535462A patent/JP3987429B2/en not_active Expired - Fee Related
- 2002-01-28 WO PCT/CH2002/000049 patent/WO2002032208A2/en active Application Filing
- 2002-01-28 EP EP02706499.7A patent/EP1470735B1/en not_active Expired - Lifetime
- 2002-01-28 CA CA2439427A patent/CA2439427C/en not_active Expired - Lifetime
- 2002-01-28 AU AU2472202A patent/AU2472202A/en active Pending
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7738665B2 (en) | 2006-02-13 | 2010-06-15 | Phonak Communications Ag | Method and system for providing hearing assistance to a user |
EP1835785A2 (en) * | 2006-03-14 | 2007-09-19 | Starkey Laboratories, Inc. | Environment detection and adaptation in hearing assistance devices |
US7986790B2 (en) | 2006-03-14 | 2011-07-26 | Starkey Laboratories, Inc. | System for evaluating hearing assistance device settings using detected sound environment |
US8068627B2 (en) | 2006-03-14 | 2011-11-29 | Starkey Laboratories, Inc. | System for automatic reception enhancement of hearing assistance devices |
US8494193B2 (en) | 2006-03-14 | 2013-07-23 | Starkey Laboratories, Inc. | Environment detection and adaptation in hearing assistance devices |
US9264822B2 (en) | 2006-03-14 | 2016-02-16 | Starkey Laboratories, Inc. | System for automatic reception enhancement of hearing assistance devices |
US7738666B2 (en) | 2006-06-01 | 2010-06-15 | Phonak Ag | Method for adjusting a system for providing hearing assistance to a user |
US8958586B2 (en) | 2012-12-21 | 2015-02-17 | Starkey Laboratories, Inc. | Sound environment classification by coordinated sensing using hearing assistance devices |
US9584930B2 (en) | 2012-12-21 | 2017-02-28 | Starkey Laboratories, Inc. | Sound environment classification by coordinated sensing using hearing assistance devices |
Also Published As
Publication number | Publication date |
---|---|
WO2002032208A3 (en) | 2002-12-05 |
EP1470735A2 (en) | 2004-10-27 |
JP2005504325A (en) | 2005-02-10 |
AU2472202A (en) | 2002-04-29 |
JP3987429B2 (en) | 2007-10-10 |
WO2002032208A2 (en) | 2002-04-25 |
CA2439427C (en) | 2011-03-29 |
EP1470735B1 (en) | 2019-08-21 |
AU2002224722B2 (en) | 2008-04-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2439427A1 (en) | Method for determining an acoustic environment situation, application of the method and hearing aid | |
US7158931B2 (en) | Method for identifying a momentary acoustic scene, use of the method and hearing device | |
US10923137B2 (en) | Speech enhancement and audio event detection for an environment with non-stationary noise | |
US6895098B2 (en) | Method for operating a hearing device, and hearing device | |
JP4939935B2 (en) | Binaural hearing aid system with matched acoustic processing | |
JP5607627B2 (en) | Signal processing apparatus and signal processing method | |
US8755546B2 (en) | Sound processing apparatus, sound processing method and hearing aid | |
JP2007507119A5 (en) | ||
EP1670285A2 (en) | Method to adjust parameters of a transfer function of a hearing device as well as a hearing device | |
WO2020256257A3 (en) | Combined learning method and device using transformed loss function and feature enhancement based on deep neural network for speaker recognition that is robust to noisy environment | |
WO2001022790A3 (en) | Method for operating a hearing-aid and a hearing aid | |
EP1429314A1 (en) | Correction of energy as input feature for speech processing | |
CN105049802A (en) | Speech recognition law-enforcement recorder and recognition method thereof | |
EP1326478A3 (en) | Method for producing control signals, method of controlling signal transfer and a hearing device | |
CN214226506U (en) | Sound processing circuit, electroacoustic device, and sound processing system | |
US20020150264A1 (en) | Method for eliminating spurious signal components in an input signal of an auditory system, application of the method, and a hearing aid | |
JP2010506526A (en) | Hearing aid operating method and hearing aid | |
KR20030010432A (en) | Apparatus for speech recognition in noisy environment | |
JPH0916193A (en) | Speech-rate conversion device | |
JP2006095635A (en) | Control device of mobile robot | |
CN110738990A (en) | Method and device for recognizing voice | |
KR20190123120A (en) | Hologram speaker | |
JP4044916B2 (en) | Voice input device | |
JP6169526B2 (en) | Specific voice suppression device, specific voice suppression method and program | |
JPH04156600A (en) | Voice recognizing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
MKEX | Expiry | Effective date: 20220128 ||