AU2001246278B2 - Method for the elimination of noise signal components in an input signal for an auditory system, use of said method and a hearing aid - Google Patents

Method for the elimination of noise signal components in an input signal for an auditory system, use of said method and a hearing aid

Info

Publication number
AU2001246278B2
AU2001246278B2
Authority
AU
Australia
Prior art keywords
signal
components
features
signal components
desired signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2001246278A
Other versions
AU2001246278A1 (en)
Inventor
Silvia Allegro
Hans-Ueli Roeck
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonova Holding AG
Original Assignee
Phonak AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Phonak AG filed Critical Phonak AG
Priority claimed from PCT/CH2001/000236 external-priority patent/WO2001047335A2/en
Publication of AU2001246278A1 publication Critical patent/AU2001246278A1/en
Application granted granted Critical
Publication of AU2001246278B2 publication Critical patent/AU2001246278B2/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Description

METHOD FOR THE ELIMINATION OF NOISE SIGNAL COMPONENTS IN AN INPUT SIGNAL OF AN AUDITORY SYSTEM, USE OF SAID METHOD AND A HEARING DEVICE

The present invention is related to a method for eliminating noise signal components in an input signal of an auditory system, an application of the method for operating a hearing device, and a hearing device.
Hearing devices are generally used by hearing-impaired persons, their basic purpose being fullest possible compensation for the hearing disorder. The potential wearer of a hearing device will more readily accept the use of the hearing device if and when the hearing device performs satisfactorily even in an environment with strong noise interference, i.e. when the wearer can discriminate the spoken word with a high level of clarity even in the presence of significant noise signals.
Where in the following description the term "hearing device" is used, it is intended to apply to so-called hearing aids which serve to correct for the hearing impairment of a person as well as to all other audio communication systems such as radio equipment.
There are three techniques for improving speech intelligibility in the presence of noise signals, using hearing devices: First, reference is made to hearing devices which are equipped with so-called directional microphone technology.
This technology permits spatial filtering which makes it possible to minimize or even eliminate noise interference from a direction other than that of a desired signal, for instance from behind or from the side. This earlier method, also referred to as "beam forming", requires a minimum of two microphones in the hearing device. One of the main shortcomings of such hearing devices consists in the fact that noise impinging from the same direction as the desired signal cannot be reduced, let alone eliminated.
In another prior-art approach, the significant desired signal is preferably captured at its point of origin whereupon a transmitter sends it via a wireless link directly into a receiver in the hearing device. This prevents noise signals from entering the hearing device. This prior-art method, also known in the audio-equipment industry as frequency-modulation (FM) technology, requires auxiliary equipment such as a transmitter in the audio source unit and the receiver that must be coupled into the hearing device, making manipulation of the hearing device by the user correspondingly awkward.
Finally, a third genre of hearing devices employs signal processing algorithms for processing input signals for the purpose of suppressing or at least attenuating noise signal components in the input signal, or to amplify the corresponding desired signal components (the so-called noise canceling technique). The process involves the estimation of the noise signal components contained in the input signal in several frequency bands whereupon, for generating a clean desired signal, any noise signal components are subtracted from the input signal of the hearing device. This procedure is also known as spectral subtraction. The European patent EP-B1-0 534 837 describes one such method which yields acceptable results. However, spectral subtraction only works well in cases where the noise signal components are bandwidth-limited and stationary. Failing that, for instance in the case of nonstationary noise signal components, the desired signal (e.g. the nonstationary voice signal) cannot be discriminated from the noise signal components. In that type of situation, spectral subtraction will not work well and speech clarity will be severely reduced due to the absence of noise suppression. Moreover, the application of spectral subtraction can cause a deterioration of the desired signal as well.
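For illustration, the following is a minimal sketch of the spectral-subtraction idea described above, not the specific method of EP-B1-0 534 837: a stationary noise spectrum is estimated (here, simply from the first few frames, an assumption made for the example) and subtracted band by band. Frame length and overlap are likewise arbitrary example values.

```python
# Minimal spectral-subtraction sketch (illustrative only; frame size, overlap
# and the leading-frames noise estimate are assumptions, not the cited method).
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_seconds=0.5, nperseg=256):
    """Subtract a stationary noise estimate, band by band, from the input."""
    f, t, X = stft(x, fs=fs, nperseg=nperseg)            # analysis filter bank
    hop = nperseg // 2                                   # default 50 % overlap
    n_frames = max(1, int(noise_seconds * fs / hop))
    noise_mag = np.abs(X[:, :n_frames]).mean(axis=1, keepdims=True)  # noise estimate
    mag = np.maximum(np.abs(X) - noise_mag, 0.0)         # subtract per frequency band
    Y = mag * np.exp(1j * np.angle(X))                   # keep the noisy phase
    _, y = istft(Y, fs=fs, nperseg=nperseg)              # resynthesize the signal
    return y
```

As the passage notes, such a scheme behaves well only when the noise components are stationary and band-limited; with nonstationary noise the estimate is already outdated by the time it is subtracted.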
Reference is also made to a study by Baer et al. ("Spectral Contrast Enhancement of Speech in Noise for Listeners with Sensorineural Hearing Impairment: Effects on Intelligibility, Quality, and Response Times", Journal of Rehabilitation Research and Development 30, pages 49 to 72) which has shown that, while spectral enhancement leads to a subjectively better signal quality and reduced listening strain, it does not generally result in improved voice clarity. In this connection, reference is also made to the article by Frank et al., titled "Evaluation of Spectral Enhancement in Hearing Aids, Combined with Phonemic Compression" (Journal of the Acoustical Society of America 106, pages 1452 to 1464). For the sake of completeness, reference is also made to the following documents: T. Baer, B.C.J. Moore, "Evaluation of a Scheme to Compensate for Reduced Frequency Selectivity in Hearing-Impaired Subjects", published in "Modeling Sensorineural Hearing Loss" by W. Jesteadt, Lawrence Erlbaum Associates, Publishers, Mahwah, New Jersey, 1997;
V. Hohmann, "Binaural Noise Reduction and a Localization Model Based on the Statistics of Binaural Signal Parameters", International Hearing Aid Research Conference, Lake Tahoe, 2000; US-5 727 072; N. Virag, "Speech enhancement based on masking properties of the human auditory system", Ph.D. thesis, Ecole Polytechnique Federale de Lausanne, 1996; WO 91/03042.
According to an aspect of the present invention there is provided a method for the elimination of noise signal components in an input signal, said method consisting of the characterization, in a signal analysis phase, of noise signal components and of a desired signal contained in an input signal, and the determination or generation, in a signal processing phase, of the desired signal or an estimated desired signal on the basis of the characterization obtained in the signal analysis phase, wherein said characterization of the noise signal components is performed by at least evaluating auditory-based features determined in the signal analysis phase employing a primitive-grouping method.
This allows for a much improved noise suppression in adaptation to the auditory environment. Unlike the conventional noise canceling approach, the method according to the present invention has no negative effect on the desired signal. Furthermore, it also permits the elimination of nonstationary noise from the input signal. It should also be stated that it is not possible with conventional noise suppression algorithms to synthesize the desired signal.
The present invention will be further explained in the following by referring to drawings showing exemplified embodiments. Thereby, it is shown in:
Fig. 1 a schematic representation of an embodiment of the present invention with the aid of a block diagram;
Fig. 2 again a schematic representation of part of the block diagram according to fig. 1; and
Fig. 3 a further embodiment of the partial block diagram shown in fig. 2.
The block diagram in fig. 1 depicts the method according to an embodiment of the present invention, consisting of a signal analysis phase I and a signal processing phase II.
In the signal analysis phase I an input signal ES, impinging on an auditory system and likely to contain noise signal components SS as well as desired signal components NS, is analyzed along auditory-based principles which will be explained further below. Thereupon, noise elimination takes place in the signal processing phase II under utilization of the data acquired in the signal analysis phase I on the noise signal components SS and the desired signal components NS. Basically, two embodiments are proposed: The first method provides for the desired signal NS to be obtained by removing the noise signal components SS from the input signal ES, i.e. by suppressing or attenuating the noise signal components SS.
The second method provides for a synthesis of the desired signal NS or, respectively, NS'. Another embodiment of the method according to the present invention employs both of the aforementioned techniques, meaning a combination of the suppression of the detected noise signal components and the synthesis of the identified desired signals NS and/or NS'.
In contrast to conventional noise suppression techniques where, in a similar signal analysis phase, an input signal is examined purely on the basis of its stationary or nonstationary nature, the method according to the present invention is based on an auditory-based signal analysis. The process involves the extraction from the input signal ES of at least some auditory-based features such as loudness, spectral profile (timbre), harmonic structure (pitch), common build-up periods and decay times (onset/offset), coherent amplitude and frequency modulation, coherent phases, interaural runtime and level differences and others, such extraction covering specific individual features or all features. The definitions and other information regarding auditory features are provided in the publication by A.S. Bregman titled "Auditory Scene Analysis" (MIT Press, Cambridge, London, 1990). It should be noted that the method according to the present invention is not limited to the extraction of auditory features but that it is possible (and this constitutes an additional desirable aspect of the method according to the present invention) to extract, in addition to the auditory features, such purely technical features as, for instance, zero axis crossing rates, periodic level fluctuations, varying modulation frequencies, spectral emphasis, amplitude distribution, and others.
One particular embodiment provides for feature extraction either from the time signal or from different frequency bands. This can be accomplished by using a hearing-adapted filter bank (E. Zwicker, H. Fastl, "Psychoacoustics: Facts and Models", Springer Verlag, 1999) or a technical filter bank such as an FFT filter bank or a wavelet filter bank.
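As an illustration of per-band analysis with a technical filter bank, the sketch below derives a few of the features mentioned above (per-band level, onset strength as a common build-up indicator, and a crude envelope-modulation index) from an FFT filter bank. The choice of features and all parameter values are assumptions made for the example, not a specification of the method.

```python
# Illustrative per-band feature extraction with an FFT filter bank (STFT);
# the feature set and parameters are example assumptions.
import numpy as np
from scipy.signal import stft

def extract_band_features(x, fs, nperseg=512):
    """Per-band features: level, onset strength and a slow envelope-modulation index."""
    f, t, X = stft(x, fs=fs, nperseg=nperseg)
    env = np.abs(X)                                      # band envelopes
    level_db = 20 * np.log10(env + 1e-12)                # rough per-band level
    # positive envelope increments as a crude common build-up (onset) measure
    onset = np.maximum(np.diff(env, axis=1, prepend=env[:, :1]), 0.0)
    # crude modulation measure: normalized variance of each band envelope over time
    modulation = env.var(axis=1) / (env.mean(axis=1) ** 2 + 1e-12)
    return {"freqs": f, "times": t, "level_db": level_db,
            "onset": onset, "modulation_index": modulation}
```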
The evaluation of the determined features, whether auditory-based or technically-based, permits the identification and discrimination of different signal components SA1 to SAn, where some of these signal components SA1 to SAn represent useful desired signals NS and others are noise signals SS which are to be eliminated.
According to the invention, the signal components SA1 to SAn are separated by two different approaches which are explained below with the aid of figures 2 and 3.
Fig. 2 illustrates in a block diagram the progression of the process steps in the signal analysis phase I. Involved in the process are two series-connected units, i.e. a feature extraction unit 20 and a grouping unit 21.
The feature extraction unit 20 handles the above-mentioned extraction of auditory-based and possibly technically-based features M1 to Mj for the characterization of the input signal ES. These features M1 to Mj are subsequently sorted in the grouping unit 21 employing the method of primitive grouping as described in the publication by A.S. Bregman titled "Auditory Scene Analysis" (MIT Press, Cambridge, London, 1990). This essentially conventional method is context-independent and is based on the sequential execution of various procedural steps by means of which, as a function of the extracted features M1 to Mj, the input signal ES is broken down into the signal components SA1 to SAn, mapped to the different sound sources. This approach is also referred to as a "bottom-up" or "data-driven" process. In this connection, reference is made to the publication by G. Brown titled "Computational Auditory Scene Analysis: A Representational Approach" (Ph.D. thesis, University of Sheffield, 1992), and to the publication by M. Cooke titled "Modelling Auditory Processing and Organisation" (Ph.D. thesis, University of Sheffield, 1993).
A preferred embodiment is illustrated in fig. 3, again as a block diagram, employing the scheme-based grouping method which was explained in depth by A.S. Bregman (see above). The scheme-based grouping method is context-dependent and is also known as a "top-down" or "prediction-driven" process. In this connection, reference is made to the publication by D.P.W. Ellis titled "Prediction-Driven Computational Auditory Scene Analysis" (Ph.D. thesis, Massachusetts Institute of Technology, 1996).
In addition to the feature extraction unit 20 and the grouping unit 21, as can be seen in fig. 3, a hypothesis unit 22 is activated in the signal analysis phase I. It will be evident from the structure depicted in fig. 3 that there is no longer merely a sequential series of operating steps but that, based on predetermined data V fed to the hypothesis unit 22, a hypothesis H is established on the nature of the input signal ES in view of the extracted features M1 to Mj and of the signal components SA1 to SAn. Preferably, based on the hypothesis H, both the feature extraction in the feature extraction unit 20 and the grouping of the features M1 to Mj are adapted to the momentary situation. In other words, the hypothesis H is generated by means of a "bottom-up" analysis and on the basis of pre-established data V relative to the acoustic context. The hypothesis H on its part determines the context of the grouping and is derived from knowledge as well as assumptions regarding the acoustic environment and from the grouping itself. Hence, the procedural steps taking place in the signal analysis phase I are no longer strictly sequential; instead, a feedback loop is provided which permits an adaptation to the particular situation at hand.
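The feedback structure of fig. 3 can be illustrated with a deliberately reduced sketch in which a single feature (the harmonic structure) is extracted and grouped, and the hypothesis H is nothing more than an expected fundamental-frequency range that is narrowed after each pass. The prior range stands in for the pre-established data V, and the narrowing rule is an arbitrary assumption; units 20, 21 and 22 are of course far richer than this.

```python
# Reduced sketch of the fig. 3 feedback loop using only a pitch-range hypothesis;
# the update rule (tighten to +/- 20 % around the median) is an example assumption.
import numpy as np

def scheme_based_pitch_grouping(frame_spectra, freqs,
                                prior_f0_range=(80.0, 400.0), n_iterations=3):
    """frame_spectra: magnitude spectra, shape (n_frames, n_freqs); freqs in Hz.
    Per frame, the strongest peak inside the hypothesized range is taken as the
    pitch (feature extraction + grouping); the hypothesis then narrows the range."""
    lo, hi = prior_f0_range                              # pre-established data V
    for _ in range(n_iterations):
        band = np.where((freqs >= lo) & (freqs <= hi))[0]
        # one pitch candidate per frame within the current hypothesis
        pitch_track = freqs[band[np.argmax(frame_spectra[:, band], axis=1)]]
        # hypothesis update: tighten the expected pitch range around the median
        center = np.median(pitch_track)
        lo, hi = 0.8 * center, 1.2 * center
    return pitch_track, (lo, hi)
```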
The preferred embodiment just described makes it possible, for instance in the case of a known speaker for whom the pre-established data V may reflect the phonemics, the typical pitch frequencies, the rapidity of speech and the formant frequencies, to substantially improve the intelligibility as compared to a situation where no information on the speaker is included in the equation.
In both of the grouping approaches mentioned, taking into account the above grouping-related explications, the method according to the present invention permits the formation of the auditory objects, meaning the signal components SA1 to SAn, by applying the principles of the Gestalt theory (E.B. Goldstein, "Perception Psychology", Spektrum Akademischer Verlag, 1996) to the features M1 to Mj. These include in particular: continuity, proximity, similarity, common destiny, unity and good constancy.
For example, features which change continuously and not abruptly suggest their association with a particular signal source. Time-sequential features with a similar harmonic structure (pitch) point to spectral proximity and are mapped to the same signal source. Other similar features as well, for instance modulation, level or spectral profile, permit grouping along individual sound components. A common destiny such as joint build-up and decay and coherent modulation also indicates an association with the same signal component.
Assuming unity in terms of timing facilitates the interpretation of abrupt changes, with inter-signal gaps separating different events or sources, while overlapping components point to several sources.
To continue with the above explanations it can also be stated that the "good constancy" criterion is highly useful for drawing conclusions. For example, a signal will not normally change its character all of a sudden and gradual changes can therefore be attributed to the same signal component, whereas rapid changes are ascribed to new signal components.
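A minimal sketch of how the "good constancy" and continuity criteria might be applied to a single extracted feature: successive frames are kept in the same signal component while the feature (here a pitch track) changes gradually, and a new component is started when it jumps. The jump threshold is an assumed tuning value.

```python
# Illustrative "good constancy" segmentation of a per-frame feature track;
# the 30 Hz jump threshold is an example assumption.
import numpy as np

def segment_by_constancy(pitch_track, jump_threshold_hz=30.0):
    """Assign successive frames to the same component while the pitch changes
    gradually; a rapid change starts a new component."""
    labels = np.zeros(len(pitch_track), dtype=int)
    for i in range(1, len(pitch_track)):
        jump = abs(pitch_track[i] - pitch_track[i - 1])
        labels[i] = labels[i - 1] + (1 if jump > jump_threshold_hz else 0)
    return labels
```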
Additional grouping possibilities are offered by the extracted features M1 to Mj themselves. For example, analyzing the loudness level permits a determination of whether a particular signal component is even present or not.
Similarly, the spectral profile of different sound components (signal components) typically varies, thus permitting differentiation between dissimilar auditory objects. A detected harmonic structure (pitch) on its part suggests a tonal signal component which can be identified by pitch filtering. The transfer function of a pitch filter may be as follows: Hpitch(z) = 1 - z^(-k), where k represents the cycle length of the pitch frequency.
Pitch filtering then permits the separation of the tonal signal components from the other signal components.
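A minimal sketch of such a pitch filter, using the comb transfer function Hpitch(z) = 1 - z^(-k) given above to obtain the non-tonal residual; the complementary averaging comb used here to emphasize the tonal part is an assumption added for the example and is not prescribed by the text.

```python
# Illustrative pitch (comb) filtering: 1 - z^(-k) removes components periodic
# with period k; the averaging comb (1 + z^(-k))/2 emphasizes them (assumption).
import numpy as np
from scipy.signal import lfilter

def pitch_filter(x, period_samples):
    """Split x into a tonal estimate and a non-tonal residual for a known period k."""
    k = int(period_samples)
    b_notch = np.zeros(k + 1)
    b_notch[0], b_notch[k] = 1.0, -1.0
    non_tonal = lfilter(b_notch, [1.0], x)     # 1 - z^(-k): harmonics of fs/k suppressed
    b_peak = np.zeros(k + 1)
    b_peak[0], b_peak[k] = 0.5, 0.5
    tonal = lfilter(b_peak, [1.0], x)          # (1 + z^(-k))/2: harmonics of fs/k emphasized
    return tonal, non_tonal
```

The period k would in practice be delivered by the pitch analysis of the harmonic structure; here it is simply passed in.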
By analyzing coherent modulations it is possible to group spectral components modulated along the same time pattern, or to separate them if these patterns are dissimilar. This permits in particular the identification and subsequent separation of speech components in the signal.
By means of an evaluation of common build-up and decay processes it can be determined which signal components with varying frequency content belong together. Major asynchronous amplitude increases and decreases again point to dissimilar signal components.
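The grouping by coherent modulation and common build-up and decay just described can be sketched as a correlation test on the band envelopes: bands whose envelope time patterns are strongly correlated are assigned to the same signal component. The greedy merging and the correlation threshold are assumptions for illustration.

```python
# Illustrative grouping of frequency bands by coherent envelope modulation;
# the 0.7 correlation threshold and greedy merging are example assumptions.
import numpy as np

def group_bands_by_modulation(band_envelopes, corr_threshold=0.7):
    """band_envelopes: array of shape (n_bands, n_frames). Returns a component
    label per band; highly correlated envelopes share a label."""
    n_bands = band_envelopes.shape[0]
    corr = np.corrcoef(band_envelopes)            # pairwise envelope correlation
    labels = -np.ones(n_bands, dtype=int)
    next_label = 0
    for i in range(n_bands):
        if labels[i] < 0:
            labels[i] = next_label
            labels[(corr[i] > corr_threshold) & (labels < 0)] = next_label
            next_label += 1
    return labels
```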
Following the identification of the individual signal components SA1 to SAn in the signal analysis phase I, the actual noise signal elimination can take place in the signal processing phase II (fig. 1). One embodiment of the method according to the present invention provides for the reduction or suppression of the noise components in the frequency bands in which they occur. The same result is obtained by amplifying the identified desired signal components. The scope of the solution offered by the present invention also covers the combination of both approaches, i.e. the reduction or suppression of noise signal components and the amplification of desired signal components.
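A minimal sketch of this processing step: time-frequency units assigned to noise components SS are attenuated and/or units assigned to the desired signal NS are amplified by per-band gains. The gain values and the mask representation are assumptions made for the example.

```python
# Illustrative per-band gain stage; attenuation and gain values are assumptions.
import numpy as np

def apply_band_gains(X, noise_mask, desired_mask,
                     noise_attenuation_db=-12.0, desired_gain_db=3.0):
    """X: STFT of the input, shape (n_bands, n_frames). noise_mask / desired_mask:
    boolean arrays of the same shape marking units assigned to noise (SS) or to
    the desired signal (NS) in the analysis phase."""
    gains = np.ones_like(X, dtype=float)
    gains[noise_mask] *= 10 ** (noise_attenuation_db / 20)   # suppress noise units
    gains[desired_mask] *= 10 ** (desired_gain_db / 20)      # amplify desired units
    return X * gains
```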
In another embodiment of the procedural steps performed in the signal processing phase II, the signal components identified and grouped as desired signal components are recombined.
In yet another embodiment of the method according to the present invention, the desired signal NS, or the estimated desired signal NS', is resynthesized on the basis of the information acquired in the signal analysis phase I. A preferred embodiment thereof consists in the extraction, by means of an analysis of the harmonic structure (pitch analysis), of the different base frequencies of the desired signals, and the determination of the spectral levels of the harmonics, for instance by means of a loudness or LPC analysis (S. Launer, "Loudness Perception in Listeners with Sensorineural Hearing Loss", thesis, Oldenburg University, 1995; J.R. Deller, J.G. Proakis, J.H.L. Hansen, "Discrete-Time Processing of Speech Signals", Macmillan Publishing Company, 1993). With that information it is possible to generate a completely synthesized signal for tonal speech components. To expand on the above preferred embodiment it is proposed to employ a combination of desired signal amplification and desired signal synthesis.
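A minimal sketch of the resynthesis of a tonal desired-signal component from an estimated fundamental frequency and per-harmonic spectral levels, as such values could be delivered by the pitch and loudness/LPC analysis described above. Frame handling is omitted and the additive-sinusoid model is an assumption for the example.

```python
# Illustrative additive resynthesis of a tonal component; number of harmonics
# follows the length of the supplied level list (example assumption).
import numpy as np

def synthesize_tonal_component(f0_hz, harmonic_levels_db, duration_s, fs):
    """One sinusoid per harmonic of the estimated fundamental, scaled to the
    estimated spectral level of that harmonic."""
    t = np.arange(int(duration_s * fs)) / fs
    y = np.zeros_like(t)
    for h, level_db in enumerate(harmonic_levels_db, start=1):
        if h * f0_hz < fs / 2:                        # stay below Nyquist
            y += 10 ** (level_db / 20) * np.sin(2 * np.pi * h * f0_hz * t)
    return y
```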
It is thus possible with the method according to the present invention, employing a signal analysis phase I and a signal processing phase II, to extract from any input signal ES any desired signal NS, to eliminate noise components SS and to regenerate desired signal components NS. This permits substantially improved noise suppression in adaptation to the acoustic environment. Unlike the conventional noise canceling approach, the method according to the present invention has no negative effect on the desired signal. It also permits the removal of non-stationary noise from the input signal ES.
Finally, it should be pointed out that with conventional noise suppression algorithms it is not possible to synthesize the desired signal.
In another embodiment of the method according to the present invention, the method is combined with the techniques first above mentioned, such as beam forming, binaural approaches for noise localization and suppression, or classification of the acoustic environment and corresponding program selection.
Two examples of similar noise elimination approaches which, however, use primitive grouping only, are as follows: M. Unoki and M. Akagi, "A method of signal extraction from noisy signal based on auditory scene analysis", Speech Communication, 27, pages 261 to 279, 1999; and WO 00/01200.
Both approaches involve noise suppression by the extraction of a few auditory features and by context-independent grouping. However, the solution presented by the present invention is more complete and is more closely adapted to the auditory system. It should be noted that the method according to the present invention is not limited to speech for the desired signal. It also makes use of all known auditory mechanisms as well as technically-based features. Moreover, the feature extraction and grouping functions are performed as needed and/or as possible, whether dependent or independent of context or pre-established data.
In the claims which follow and in the preceding description, except where the context requires otherwise due to express language or necessary implication, the word "comprise" or variations such as "comprises" or "comprising" is used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention.
It is to be understood that, if any prior art publication is referred to herein, such reference does not constitute an admission that the publication forms a part of the common general knowledge in the art, in Australia or any other country.

Claims (11)

  2. A method according to claim 1, wherein one or several of the following auditory-based features are used for the characterization of the signal components: loudness, spectral profile, harmonic structure, common build-up and decay times, coherent amplitude and frequency modulation, coherent phases, interaural runtime and level differences.
  3. A method according to claim 1 or 2, wherein the auditory-based features are determined in different frequency bands.
  4. A method according to any one of the claims 1 to 3, wherein the characterization of the signal components is performed by evaluating the features determined in the signal analysis phase employing the scheme-based grouping technique.
  5. A method according to claim 4, wherein a hypothesis is established or specified on the nature of the signal component and is taken into account in the grouping of the identified features.
  6. A method according to claim 4 or 5, wherein, for the characterization of the signal components, the auditory-based features and, as applicable, other features are grouped along the principles of the Gestalt theory.
  7. A method according to any one of the claims 1 to 6, wherein the signal components identified as noise signal components are suppressed and/or the signal components identified as desired signals or estimated desired signals are amplified.
  8. A method according to any one of the claims 1 to 7, wherein the desired signal or an estimated desired signal is synthesized in the signal processing phase on the basis of the features detected in the signal analysis phase.
  9. A method according to any one of the claims 1 to 6, wherein different base frequencies of the signal component of the desired signal or of the estimated desired signal are extracted with the aid of an analysis of the harmonic structure in the signal analysis phase and, with the aid especially of a loudness or LPC analysis, spectral levels of harmonics of these signal components are defined, and on the basis of the spectral levels and the harmonics a desired signal for tonal speech components is synthesized.
  10. A method according to any one of the claims 1 to 6, wherein non-tonal signal components of the desired signal or of the estimated desired signal are extracted with the aid of an analysis of the harmonic structure in the signal analysis phase and, with the aid especially of a loudness or LPC analysis, spectral levels of these signal components are defined, and with the aid of a noise generator a desired signal for non-tonal speech components is synthesized.
  11. A method according to claim 9 or 10, wherein the desired signal and/or the estimated desired signal is amplified.
  12. Use of the method according to any one of claims 1 to 11 for operating a hearing device.
  13. A hearing device operating by the method according to any one of the claims 1 to 11.
  14. A method according to any one of claims 1 to 11, and substantially as herein described with reference to the accompanying drawings.
  15. A hearing device according to claim 13, and substantially as herein described with reference to the accompanying drawings.
AU2001246278A 2001-04-11 2001-04-11 Method for the elimination of noise signal components in an input signal for an auditory system, use of said method and a hearing aid Ceased AU2001246278B2 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CH2001/000236 WO2001047335A2 (en) 2001-04-11 2001-04-11 Method for the elimination of noise signal components in an input signal for an auditory system, use of said method and a hearing aid

Publications (2)

Publication Number Publication Date
AU2001246278A1 AU2001246278A1 (en) 2001-09-20
AU2001246278B2 true AU2001246278B2 (en) 2008-02-07

Family

ID=4358195

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2001246278A Ceased AU2001246278B2 (en) 2001-04-11 2001-04-11 Method for the elimination of noise signal components in an input signal for an auditory system, use of said method and a hearing aid

Country Status (4)

Country Link
EP (1) EP1380028A2 (en)
JP (1) JP2004512700A (en)
AU (1) AU2001246278B2 (en)
CA (1) CA2409835A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012007183A1 (en) 2010-07-15 2012-01-19 Widex A/S Method of signal processing in a hearing aid system and a hearing aid system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4051331A (en) * 1976-03-29 1977-09-27 Brigham Young University Speech coding hearing aid system utilizing formant frequency transformation
US5651071A (en) * 1993-09-17 1997-07-22 Audiologic, Inc. Noise reduction system for binaural hearing aid

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9813973D0 (en) * 1998-06-30 1998-08-26 Univ Stirling Interactive directional hearing aid

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4051331A (en) * 1976-03-29 1977-09-27 Brigham Young University Speech coding hearing aid system utilizing formant frequency transformation
US5651071A (en) * 1993-09-17 1997-07-22 Audiologic, Inc. Noise reduction system for binaural hearing aid

Also Published As

Publication number Publication date
EP1380028A2 (en) 2004-01-14
CA2409835A1 (en) 2002-11-20
JP2004512700A (en) 2004-04-22

Similar Documents

Publication Publication Date Title
US8638961B2 (en) Hearing aid algorithms
AU2010204470B2 (en) Automatic sound recognition based on binary time frequency units
US7243060B2 (en) Single channel sound separation
Levitt Noise reduction in hearing aids: a review.
US6895098B2 (en) Method for operating a hearing device, and hearing device
US10154353B2 (en) Monaural speech intelligibility predictor unit, a hearing aid and a binaural hearing system
CN107547983B (en) Method and hearing device for improving separability of target sound
EP1914721B1 (en) Data embedding device, data embedding method, data extraction device, and data extraction method
JP2008544660A (en) Hearing aid with enhanced high frequency reproduction and audio signal processing method
US20020150264A1 (en) Method for eliminating spurious signal components in an input signal of an auditory system, application of the method, and a hearing aid
US20140288938A1 (en) Systems and methods for enhancing place-of-articulation features in frequency-lowered speech
CA2400089A1 (en) Method for operating a hearing-aid and a hearing aid
Kim et al. Robust speech recognition using temporal masking and thresholding algorithm.
Jamieson et al. Evaluation of a speech enhancement strategy with normal-hearing and hearing-impaired listeners
WO2010051857A1 (en) N band fm demodulation to aid cochlear hearing impaired persons
EP1216527B1 (en) Apparatus and method for de-esser using adaptive filtering algorithms
AU2001246278B2 (en) Method for the elimination of noise signal components in an input signal for an auditory system, use of said method and a hearing aid
CN109788410A (en) A kind of method and apparatus inhibiting loudspeaker noise
JPH07146700A (en) Pitch emphasizing method and device and hearing acuity compensating device
JP2001249676A (en) Method for extracting fundamental period or fundamental frequency of periodical waveform with added noise
WO2001018794A1 (en) Spectral enhancement of acoustic signals to provide improved recognition of speech
CA2400104A1 (en) Method for determining a current acoustic environment, use of said method and a hearing-aid
AU2004242561B2 (en) Modulation Depth Enhancement for Tone Perception
Walliker A versatile digital speech processor for hearing aids and cochlear implants

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired