EP1729287A1 - Verfahren und Vorrichtung für adaptive Rauschunterdrückung - Google Patents

Verfahren und Vorrichtung für adaptive Rauschunterdrückung (Method and apparatus for adaptive noise suppression)

Info

Publication number
EP1729287A1
Authority
EP
European Patent Office
Prior art keywords
signal
nsr
power
input signal
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP06076642A
Other languages
English (en)
French (fr)
Inventor
Ravi Chandran
Bruce E. Dunne
Daniel J. Marchok
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Coriant Operations Inc
Original Assignee
Tellabs Operations Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tellabs Operations Inc filed Critical Tellabs Operations Inc
Priority claimed from EP00902355A external-priority patent/EP1141948B1/de
Publication of EP1729287A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L21/0232 Processing in the frequency domain
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/18 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band

Definitions

  • the present invention relates to suppressing noise in telecommunications systems.
  • the present invention relates to suppressing noise in single channel systems or single channels in multiple channel systems.
  • Speech quality enhancement is an important feature in speech communication systems.
  • Cellular telephones for example, are often operated in the presence of high levels of environmental background noise present in moving vehicles. Background noise causes significant degradation of the speech quality at the far end receiver, making the speech barely intelligible.
  • speech enhancement techniques may be employed to improve the quality of the received speech, thereby increasing customer satisfaction and encouraging longer talk times.
  • FIG 1 shows an example of a noise suppression system 100 that uses spectral subtraction.
  • a spectral decomposition of the input noisy speech-containing signal 102 is first performed using the filter bank 104.
  • the filter bank 104 may be a bank of bandpass filters such as, for example, the bandpass filters disclosed in R. J. McAulay and M. L. Malpass, "Speech Enhancement Using a Soft-Decision Noise Suppression Filter," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-28, no. 2, (Apr. 1980), pp.137-145 .
  • noise refers to any undesirable signal present in the speech signal including: 1) environmental background noise; 2) echo such as due to acoustic reflections or electrical reflections in hybrids; 3) mechanical and/or electrical noise added due to specific hardware such as tape hiss in a speech playback system; and 4) non-linearities due to, for example, signal clipping or quantization by speech compression.
  • the filter bank 104 decomposes the signal into separate frequency bands. For each band, power measurements are performed and continuously updated over time in the noisy signal power & noise power estimator 106. These power measures are used to determine the signal-to-noise ratio (SNR) in each band.
  • the voice activity detector 108 is used to distinguish periods of speech activity from periods of silence.
  • the noise power in each frequency band is updated only during silence while the noisy signal power is tracked at all times.
  • a gain (attenuation) factor is computed in the gain computer 110 based on the SNR of the band to attenuate the signal in the gain multiplier 112.
  • speech signal refers to an audio signal that may contain speech, music or other information bearing audio signals (e.g., DTMF tones, silent pauses, and noise).
  • a more sophisticated approach may also use an overall SNR level in addition to the individual SNR values to compute the gain factors for each band.
  • the overall SNR is estimated in the overall SNR estimator 114.
  • the gain factor computations for each band are performed in the gain computer 110.
  • the attenuation of the signals in different bands is accomplished by multiplying the signal in each band by the corresponding gain factor in the gain multiplier. Low SNR bands are attenuated more than the high SNR bands. The amount of attenuation is also greater if the overall SNR is low.
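As a rough illustration of the per-band weighting described above, the following sketch computes a gain for each band from its SNR and attenuates low-SNR bands more strongly when the overall SNR is low. The Wiener-style gain rule, the oversuppression factor and all names here are assumptions for illustration, not the formulas of the patent.

```python
def band_gains(signal_powers, noise_powers, overall_snr, floor=0.05):
    """Per-band gains: low-SNR bands are attenuated more, and a low overall
    SNR increases the attenuation of every band (oversuppression)."""
    oversuppress = 1.5 if overall_snr < 10.0 else 1.0    # assumed rule
    gains = []
    for p_sig, p_noise in zip(signal_powers, noise_powers):
        snr = p_sig / max(p_noise, 1e-12)
        g = snr / (snr + oversuppress)                   # Wiener-like attenuation
        gains.append(min(1.0, max(floor, g)))            # keep a small gain floor
    return gains

# Three bands with decreasing SNR receive decreasing gains.
print(band_gains([1.0, 0.2, 0.05], [0.01, 0.01, 0.01], overall_snr=20.0))
```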
  • the possible dynamic range of the SNR of the input signal is large. As such, the speech enhancement system must be capable of handling both very clean speech signals from wireline telephones as well as very noisy speech from cellular telephones.
  • the signals in the different bands are recombined into a single, clean output signal 116.
  • the resulting output signal 116 will have an improved overall perceived quality.
  • speech enhancement system refers to an apparatus or device that enhances the quality of a speech signal in terms of human perception or in terms of another criteria such as accuracy of recognition by a speech recognition device, by suppressing, masking, canceling or removing noise or otherwise reducing the adverse effects of noise.
  • Speech enhancement systems include apparatuses or devices that modify an input signal in ways such as, for example: 1) generating a wider bandwidth speech signal from a narrow bandwidth speech signal; 2) separating an input signal into several output signals based on certain criteria, e.g., separation of speech from different speakers where a signal contains a combination of the speakers' speech signals; and 3) processing (for example by scaling) different "portions" of an input signal separately and/or differently, where a "portion" may be a portion of the input signal in time (e.g., in speaker phone systems) or may include particular frequency bands (e.g., in audio systems that boost the bass), or both.
  • the decomposition of the input noisy speech-containing signal can also be performed using Fourier transform techniques or wavelet transform techniques.
  • Figure 2 shows the use of discrete Fourier transform techniques (shown as the Windowing & FFT block 202).
  • a block of input samples is transformed to the frequency domain.
  • the magnitude of the complex frequency domain elements are attenuated at the attenuation unit 208 based on the spectral subtraction principles described above.
  • the phase of the complex frequency domain elements are left unchanged.
  • the complex frequency domain elements are then transformed back to the time domain via an inverse discrete Fourier transform in the IFFT block 204, producing the output signal 206.
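A minimal numpy sketch of this window/FFT/attenuate/IFFT path follows; the Hann window, the frame length and the per-bin gain rule are placeholders, not the choices made in the patent.

```python
import numpy as np

def fft_suppress(frame, noise_mag, floor=0.05):
    """Attenuate spectral magnitudes, leave phases unchanged, transform back."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    mag, phase = np.abs(spectrum), np.angle(spectrum)
    gain = np.maximum(floor, 1.0 - noise_mag / np.maximum(mag, 1e-12))
    return np.fft.irfft(gain * mag * np.exp(1j * phase), n=len(frame))

frame = np.random.randn(256)
out = fft_suppress(frame, noise_mag=np.full(129, 0.1))   # 129 bins for a 256-point rfft
```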
  • wavelet transform techniques may be used to decompose the input signal.
  • a voice activity detector may be used with noise suppression systems.
  • Such a voice activity detector is presented in, for example, U.S. Patent No. 4,351,983 to Crouse et al.
  • the power of the input signal is compared to a variable threshold level. Whenever the threshold is exceeded, the system assumes speech is present. Otherwise, the signal is assumed to contain only background noise.
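A sketch of that idea is shown below, with the threshold loosely tracking the background level during assumed silence; the constants and the tracking rule are assumptions, not those of the cited detector.

```python
def simple_vad(powers, rise=1.5, decay=0.999):
    """Declare speech whenever the input power exceeds a variable threshold;
    otherwise assume background noise and let the threshold track it."""
    threshold, flags = None, []
    for p in powers:
        if threshold is None:
            threshold = p                                  # initialise from the first frame
        if p > rise * threshold:
            flags.append(1)                                # speech assumed present
        else:
            flags.append(0)                                # background noise only
            threshold = decay * threshold + (1 - decay) * p
    return flags
```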
  • Low computational complexity is also desirable as the network noise suppression system may process multiple independent voice channels simultaneously.
  • the use of subtraction and multiplication is preferred to facilitate a direct digital hardware implementation as well as to minimize processing in a fixed-point digital signal processor-based implementation.
  • Division is computationally intensive in digital signal processors and is also cumbersome for direct digital hardware implementation.
  • the memory storage requirements for each channel should be minimized due to the need to process multiple independent voice channels simultaneously.
  • Speech enhancement techniques must also address information tones such as DTMF (dual-tone multi-frequency) tones.
  • DTMF tones are typically generated by push-button/tone-dial telephones when any of the buttons are pressed.
  • the extended touch-tone telephone keypad has 16 keys: (1,2,3,4,5,6,7,8,9,0,*,#,A,B,C,D).
  • the keys are arranged in a four by four array. Pressing one of the keys causes an electronic circuit to generate two tones. As shown in Table 1, there is a low frequency tone for each row and a high frequency tone for each column. Thus, the row frequencies are referred to as the Low Group and the column frequencies, the High Group. In this way, sixteen unique combinations of tones can be generated using only eight unique tones.
  • Table 1 shows the keys and the corresponding nominal frequencies. (Although discussed with respect to DTMF tones, the principles discussed with respect to the present invention are applicable to all inband signals.
  • an inband signal refers to any kind of tonal signal within the bandwidth normally used for voice transmission such as, for example, facsimile tones, dial tones, busy signal tones, and DTMF tones).
  • Table 1: Touch-tone keypad row (Low Group) and column (High Group) frequencies

        Low \ High (Hz)   1209   1336   1477   1633
        697                  1      2      3      A
        770                  4      5      6      B
        852                  7      8      9      C
        941                  *      0      #      D
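Table 1 can be captured directly as a lookup from key to tone pair; the sketch below also synthesizes the pair for a pressed key. The sampling rate, amplitude and duration are illustrative assumptions.

```python
import math

LOW = [697, 770, 852, 941]            # row (Low Group) frequencies in Hz
HIGH = [1209, 1336, 1477, 1633]       # column (High Group) frequencies in Hz
KEYS = ["123A", "456B", "789C", "*0#D"]
DTMF = {key: (LOW[r], HIGH[c])
        for r, row in enumerate(KEYS) for c, key in enumerate(row)}

def dtmf_pair(key, fs=8000, duration=0.05, amp=0.5):
    """Return the dual-tone waveform for one key as a list of samples."""
    f_low, f_high = DTMF[key]
    n = int(fs * duration)
    return [amp * (math.sin(2 * math.pi * f_low * i / fs) +
                   math.sin(2 * math.pi * f_high * i / fs)) for i in range(n)]

print(DTMF["5"])   # (770, 1336)
```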
  • DTMF tones are typically less than 100 milliseconds (ms) in duration and can be as short as 45 ms. These tones may be transmitted during telephone calls to automated answering systems of various kinds. These tones are generated by a separate DTMF circuit whose output is added to the processed speech signal before transmission.
  • DTMF signals may be transmitted at a maximum rate of ten digits/second. At this maximum rate, for each 100 ms timeslot, the dual tone generator must generate touch-tone signals of duration at least 45 ms and not more than 55 ms, and then remain quiet during the remainder of the timeslot.
  • a tone pair may last any length of time, but each tone pair must be separated from the next pair by at least 40 ms.
  • FIG. 7 shows an input signal 702 containing a 697Hz tone 704 of duration 45 ms (360 samples).
  • the output signal 706 is heavily suppressed initially, until the voice activity detector detects the signal presence. Then, the gain factor 708 gradually increases to prevent attenuation.
  • the output is a shortened version of the input tone, which in this example, does not meet general minimum duration requirements for DTMF tones.
  • the receiver may not detect the DTMF tones correctly due to the tones failing to meet the minimum duration requirements.
  • the gain factor 708 never reaches its maximum value of unity because it is dependent on the SNR of the band. This causes the output signal 706 to be always attenuated slightly, which may be sufficient to prevent the signal power from meeting the threshold of the receiver's DTMF detector.
  • the gain factors for different frequency bands may be sufficiently different so as to increase the difference in the amplitudes of the dual tones. This further increases the likelihood that the receiver will not correctly detect the DTMF tones.
  • An apparatus may utilize a filter bank of bandpass filters to split the input noisy speech-containing signal into separate frequency bands.
  • a filter bank of bandpass filters may be used to determine whether the input signal contains speech, DTMF tones or silence.
  • JVADAD: joint voice activity & DTMF activity detector
  • the overall average noise-to-signal ratio (NSR) of the input signal is estimated in the overall NSR estimator, which estimates the average noisy signal power in the input signal during speech activity and the average noise power during silence. From these estimates, the overall NSR is estimated.
  • the long-term power is a scaled version of the noise power in the band.
  • the short-term power is a scaled version of the noisy signal power in the band.
  • the power estimation processes are adapted based on the signal activity indicated by the JVADAD. The number of computations required for power measurement is significantly reduced by undersampling the signals in each frequency band prior to power measurement.
  • the NSR adapter adapts the NSR for each frequency band based on the long-term and short-term power measures, the overall NSR and the signal activity indicated by the JVADAD.
  • the NSR adaptation is performed without division using a prediction error computed as a function of the long-term, short-term and overall NSR measures.
  • the gain computer utilizes these NSR values to determine the gain factors for each frequency band.
  • the gain multiplier may then perform the attenuation of each frequency band.
  • the processed signals in the separate frequency bands are summed up in the combiner to produce the clean output signal.
  • the aforementioned method of adapting the NSR values during speech is different from that used in the presence of DTMF tones.
  • the quick adjustment of the NSR values for the appropriate frequency bands containing the DTMF tones maximizes the amount of the DTMF tones that are passed through transparently.
  • the NSR values are preferably adapted more slowly to correspond to the nature of speech signals.
  • An alternative embodiment of the present invention includes a method and apparatus for extending DTMF tones. Yet another embodiment of the present invention includes regenerating DTMF tones.
  • FIG. 3 presents a block diagram of a noise suppression apparatus 300.
  • a filter bank 302, voice activity detector 304, a hangover counter 305, and an overall NSR (noise to signal ratio) estimator 306 are presented.
  • a power estimator 308, NSR adapter 310, gain computer 312, a gain multiplier 314 and a combiner 315 are also present.
  • the embodiment illustrated in Figure 3 also presents an input signal x(n) 316, output signals x_k(n) 318, and a joint voice activity detection and DTMF activity detection signal 320.
  • Figure 3 also presents a DTMF tone generator 321.
  • the output from the overall NSR estimator 306 is the overall NSR (" NSR overall ( n )") 322.
  • the power estimates 323 are output from the power estimator 308.
  • the adapted NSR values 324 are output from the NSR adapter 310.
  • the gain factors 326 are output from the gain computer 312.
  • the attenuated signals 328 are output from the gain multiplier 314.
  • the regenerated DTMF tones 329 are output from the DTMF tone generator 321.
  • Figure 3 also illustrates that the power estimator 308 may optionally include an undersampling circuit 330 and that the power estimator 308 may optionally output the power estimates 323 to the gain computer 312.
  • the filter bank 302 receives the input signal 316.
  • the sampling rate of the speech signal in, for example, telephony applications is normally 8 kHz with a Nyquist bandwidth of 4 kHz. Since the transmission channel typically has a 300-3400 Hz range, the filter bank 302 may be designed to only pass signals in this range. As an example, the filter bank 302 may utilize a bank of bandpass filters.
  • a multirate or single rate filter bank 302 may be used.
  • One implementation of the single rate filter bank 302 uses the frequency-sampling filter (FSF) structure.
  • the preferred embodiment uses a resonator bank which consists of a series of low order infinite impulse response (“IIR”) filters.
  • This resonator bank can be considered a modified version of the FSF structure and has several advantages over the FSF structure.
  • the resonator bank does not require the memory-intensive comb filter of the FSF structure and requires fewer computations as a result.
  • the use of alternating signs in the FSF structure is also eliminated resulting in reduced computational complexity.
  • the center frequency of each resonator is specified through ω_k.
  • the bandwidth of the resonator is specified through r_k.
  • the value of g_k is used to adjust the DC gain of each resonator.
  • the input to the resonator bank is denoted x(n) while the output of the kth resonator is denoted x_k(n), where n is the sample time.
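The patent's exact resonator structure is not reproduced here; as a sketch under that caveat, a generic two-pole IIR resonator with poles at r_k·e^(±jω_k) behaves as described: ω_k sets the center frequency, r_k the bandwidth, and g_k the gain. All class and variable names below are illustrative.

```python
import math

class Resonator:
    """Generic two-pole IIR resonator (a sketch, not the patented structure)."""
    def __init__(self, omega, r, g):
        self.a1 = 2.0 * r * math.cos(omega)   # feedback from x_k(n-1)
        self.a2 = -r * r                      # feedback from x_k(n-2)
        self.g = g                            # input gain
        self.y1 = self.y2 = 0.0

    def step(self, x):
        y = self.g * x + self.a1 * self.y1 + self.a2 * self.y2
        self.y2, self.y1 = self.y1, y
        return y

# A bank of such resonators splits x(n) into bandpass signals x_k(n).
fs = 8000
bank = [Resonator(2 * math.pi * f / fs, r=0.98, g=0.02) for f in range(300, 3400, 200)]
x_k = [res.step(1.0) for res in bank]         # one input sample, one output per band
```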
  • the gain factor 326 for each frequency band is computed once every T samples, so the gain is "undersampled" since it is not computed for every sample. (As indicated by dashed lines in Figures 1-4, several different items of data, for example gain factors 326, may be output from the pertinent device; the several outputs preferably correspond to the several subbands into which the input signal 316 is split.)
  • the gain factor will range between a small positive value, ε, and 1 because the NSR values are limited to lie in the range [0, 1 - ε]. Setting the lower limit of the gain to ε reduces the effects of "musical noise" and permits limited background signal transparency.
  • the attenuation of the signal x_k(n) from the kth frequency band is achieved by multiplying x_k(n) by its corresponding gain factor, G_k(n), every sample.
  • the sum of the resulting attenuated signals, y(n), is the clean output signal 328.
  • the attenuated signals 328 may also be scaled, for example boosted or amplified, for further transmission.
  • the power, P(n) at sample n , of a discrete-time signal u ( n ), is estimated approximately by lowpass filtering the full-wave rectified signal.
  • the coefficient, α, is referred to as a decay constant.
  • power estimates 323 using a relatively long effective averaging window are long-term power estimates, while power estimates using a relatively short effective averaging window are short-term power estimates.
  • a longer or shorter averaging may be appropriate for power estimation.
  • Speech power, which has a rapidly changing profile, would be suitably estimated using a smaller α.
  • Noise can be considered stationary for longer periods of time than speech. Noise power is therefore preferably estimated accurately by using a longer averaging window (large α).
  • the preferred embodiment for power estimation significantly reduces computational complexity by undersampling the input signal for power estimation purposes. This means that only one sample out of every T samples is used for updating the power P ( n ). Between these updates, the power estimate is held constant.
  • This first order lowpass IIR filter is preferably used for estimation of the overall average background noise power, and a long-term and short-term power measure for each frequency band. It is also preferably used for power measurements in the VAD 304. Undersampling may be accomplished through the use of, for example, an undersampling circuit 330 connected to the power estimator 308.
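A sketch of such an undersampled first-order IIR power estimate follows; the coefficient form P(n) = α·P(n-T) + (1-α)·|u(n)| and the update period are assumptions consistent with the description, not the patent's exact filter.

```python
def power_estimate(u, alpha, T=8):
    """Lowpass-filter the full-wave rectified signal, updating the estimate
    only once every T samples and holding it constant in between."""
    p, out = 0.0, []
    for n, sample in enumerate(u):
        if n % T == 0:                               # undersampled update
            p = alpha * p + (1.0 - alpha) * abs(sample)
        out.append(p)                                # held constant between updates
    return out

# A large alpha (long averaging window) suits slowly varying noise power;
# a smaller alpha suits the rapidly changing speech power.
```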
  • the overall SNR is used to influence the amount of oversuppression of the signal in each frequency band. Oversuppression improves the perceived speech quality, especially under low overall SNR conditions. Oversuppression of the signal is achieved by using the overall SNR value to influence the NSR adapter 310. Furthermore, undersuppression in the case of high overall SNR conditions may be used to prevent unnecessary attenuation of the signal. This prevents distortion of the speech under high SNR conditions where the low-level noise is effectively masked by the speech. The details of the oversuppression and undersuppression are discussed below.
  • the average background noise power level is preferably limited to P BN,max for two reasons.
  • P BN,max represents the typical worst-case cellular telephony noise scenario.
  • P SIG ( n ) and P BN ( n ) will be used in the NSR adapter 310 to influence the adjustment of the NSR for each frequency band.
  • Limiting P BN ( n ) provides a means to control the amount of influence the overall SNR has on the NSR value for each band.
  • the overall NSR 322 is computed instead of the overall SNR.
  • the overall NSR 322 is more suitable for the adaptation of the individual frequency band NSR values.
  • the preferred embodiment uses an approach that provides a suitable approximation of the overall NSR 322.
  • NSR_overall(n) is approximated piecewise: NSR_overall(n) ≈ c1·P_BN(n) when P_SIG(n) ≥ a1·P_BN(n); otherwise c2·P_BN(n) when P_SIG(n) ≥ a2·P_BN(n); otherwise c3·(P_BN(n) - P_SIG(n)) when a2·P_BN(n) > P_SIG(n) ≥ a3·P_BN(n), where the a_i and c_i are fixed constants.
  • the range of NSR_overall(n) 322 is: -0.128 ≤ NSR_overall(n) ≤ 0.064.
  • the upper limit on NSR overall ( n ) 322 in this embodiment is caused by limiting P BN ( n ) to be at most P BN, max ( n ).
  • the lower limit arises from the fact that P_BN(n) - P_SIG(n) ≥ -1. (Since it is assumed that the input signal range is normalized to ±1, both P_BN(n) and P_SIG(n) are always between 0 and 1.)
  • the long-term power measure, P_LT,k(n) at sample n, for the kth frequency band is proportional to the actual noise power level in that band. It is an amplified version of the actual noise power level.
  • the amount of amplification is predetermined so as to prevent or minimize underflow in a fixed-point implementation of the IIR filter used for the power estimation. Underflow can occur because the dynamic range of the input signal in a frequency band during silence is low.
  • the long-term power would not be updated during DTMF tone activity or speech activity.
  • DTMF tone activity affects only a few frequency bands.
  • the long-term power estimates corresponding to the frequency bands that do not contain the DTMF tones are updated during DTMF tone activity.
  • the long-term power measure is also preferably undersampled with a period T.
  • the short-term power estimate uses a shorter averaging window than the long-term power estimate. If the short-term power estimate were performed using an IIR filter with fixed coefficients as in equation (7), the power would likely vary rapidly to track the signal power variations during speech. During silence, the variations would be smaller but would still be greater than those of the long-term power measure. Thus, the required dynamic range of this power measure would be high if fixed coefficients were used. However, by making the numerator coefficient of the IIR filter proportional to the NSR of the frequency band, the power measure is made to track the noise power level in the band instead. The possibility of overflow is reduced or eliminated, resulting in a more accurate power measure.
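A sketch of that adaptive short-term measure is shown below, with the numerator coefficient made proportional to the band NSR; the exact coefficients, scaling and names are assumptions.

```python
def short_term_power(x_k, nsr_k, beta=0.96):
    """First-order IIR power measure whose input coefficient is proportional
    to the band NSR, so the measure tracks the noise level in the band."""
    p_st, out = 0.0, []
    for sample, nsr in zip(x_k, nsr_k):
        p_st = beta * p_st + nsr * abs(sample)       # numerator ~ NSR_k(n)
        out.append(p_st)
    return out
```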
  • NSR k ( n ) is the noise-to-signal ratio (NSR) of the k th frequency band at sample n.
  • This IIR filter is adaptive since the numerator coefficient in the transfer function of this filter is proportional to NSR k ( n ) which depends on time and is adapted in the NSR adapter 310. This power estimation is preferably performed at all times regardless of the signal activity indicated by the VAD 304.
  • the NSR of a frequency band is preferably adapted based on the long-term power, P LT ( n ), and the short-term power, P ST ( n ), corresponding to that band as well as the overall NSR, NSR overall ( n ) 322.
  • Figure 4 illustrates the process of NSR adaptation for a single frequency band.
  • Figure 4 presents the compensation factor adapter 402, long term power estimator 308a, short term power estimator 308b, and power compensator 404.
  • the compensation factor 406, long term power estimate 323a, and short term power estimate 323b are also shown.
  • the prediction error 408 is also shown.
  • the overall NSR estimator 306 is common to all frequency bands.
  • the compensation factor adapter 402 is also common to all frequency bands for computational efficiency.
  • the compensation factor adapter 402 may be designed to be different for different frequency bands.
  • the short-term power estimate 323b in a frequency band is a measure of the noise power level.
  • the short-term power 323b predicts the noise power level.
  • the long-term power 323a which is held constant during speech bursts, provides a good estimate of the true noise power preferably after compensation by a scalar.
  • the scalar compensation is beneficial because the long-term power 323a is an amplified version of the actual noise power level.
  • the difference between the short-term power 323b and the compensated long-term power provides a means to adjust the NSR.
  • This difference is termed the prediction error 408.
  • the sign of the prediction error 408 can be used to increase or decrease the NSR without performing a division.
  • the sign of the prediction error 408, P ST ( n ) - C ( n ) P LT ( n ), is used to determine the direction of adjustment of NSR k ( n ).
  • the amount of adjustment is determined based on the signal activity indicated by the VAD.
  • the preferred embodiment uses a large adaptation step size during speech and a small one during silence. Speech power varies rapidly and a larger step size is suitable for tracking the variations quickly. During silence, the background noise is usually slowly varying and thus a small step size is sufficient. Furthermore, the use of a small step size prevents sudden short-duration noise spikes from causing the NSR to increase too much, which would allow the noise spike to leak through the noise suppression system.
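The division-free update can be sketched as follows: only the sign of the prediction error P_ST(n) - C(n)·P_LT(n) sets the direction, the VAD state sets the step size, and the gain is taken as one minus the NSR, consistent with the stated ranges. The step sizes, the limit ε and the names are assumptions.

```python
def adapt_nsr(nsr, p_st, p_lt, c, speech_active,
              step_speech=0.02, step_silence=0.002, eps=0.05):
    """Adjust the band NSR by a fixed step in the direction of the prediction
    error, clamp it to [0, 1 - eps], and derive the band gain from it."""
    error = p_st - c * p_lt                       # prediction error
    step = step_speech if speech_active else step_silence
    nsr += step if error > 0 else -step
    nsr = min(max(nsr, 0.0), 1.0 - eps)
    gain = 1.0 - nsr                              # gain ranges between eps and 1
    return nsr, gain
```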
  • the NSR adapter adapts the NSR according to the VAD state and the difference between the noise and signal power.
  • the magnitude of this difference can also be used to vary the NSR.
  • the NSR adapter may vary the NSR according to one or more of the following: 1) the VAD state (e.g., a VAD flag indicating speech or noise); 2) the difference between the noise power and the signal power; 3) a ratio of the noise to signal power (instantaneous NSR); and 4) the difference between the instantaneous NSR and a previous NSR.
  • the adaptation step size may vary based on one or more of these four factors.
  • the step size may be varied according to a look-up table (Table 1.1) that maps the magnitude of the difference between a previous NSR and an instantaneous NSR during speech to the step size used to vary the adapted NSR during speech.
  • the overall NSR, NSR overall ( n ) 322 also may be a factor in the adaptation of the NSR through the compensation factor C(n) 406, given by equation (19).
  • a larger overall NSR level results in the overemphasis of the long-term power 323a for all frequency bands. This causes all the NSR values to be adapted toward higher levels. Accordingly, this would cause the gain factor 326 to be lower for higher overall NSR levels. The perceived quality of speech is improved by this oversuppression under higher background noise levels.
  • the NSR value for each frequency band in this embodiment is adapted toward zero.
  • the relationship between the overall NSR 322 and the adapted NSR 324 in the several frequency bands can be described as a proportional relationship because as the overall NSR 322 increases, the adapted NSR 324 for each band increases.
  • the long-term power is overemphasized by at most 1.5 times its actual value under low SNR conditions.
  • the long-term power is de-emphasized whenever C(n) < 0.128.
  • the NSR values for the frequency bands containing DTMF tones are preferably set to zero until the DTMF activity is no longer detected. After the end of DTMF activity, the NSR values may be allowed to adapt as described above.
  • the voice activity detector (“VAD”) 304 determines whether the input signal contains either speech or silence.
  • the VAD 304 is a joint voice activity and DTMF activity detector (“JVADAD").
  • the voice activity and DTMF activity detection may proceed independently and the decisions of the two detectors are then combined to form a final decision.
  • the JVADAD 304 may include a voice activity detector 304a, a DTMF activity detector 304b, and a determining circuit 304c.
  • the VAD 304a outputs a voice detection signal 902 to the determining circuit 304c and the DTMF activity detector outputs a DTMF detection signal 904 to the determining circuit 304c.
  • the determining circuit 304c determines, based upon the voice detection signal 902 and DTMF detection signal 904, whether voice, DTMF activity or silence is present in the input signal 316.
  • the determining circuit 304c may determine the content of the input signal 316, for example, based on the logic presented in Table 2 (below).
  • silence refers to the absence of speech or DTMF activity, and may include noise.
  • the voice activity detector may output a single flag, VAD 320, which is set, for example, to one if speech is considered active and zero otherwise.
  • Table 2 presents the logic that may be used to determine whether DTMF activity or speech activity is present:

        Table 2: Logic for use with JVADAD

        DTMF   VAD   Decision
        0      0     Silence
        0      1     Speech
        1      0     DTMF activity present
        1      1     DTMF activity present
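Table 2 amounts to a two-flag priority rule, sketched here:

```python
def jvadad_decision(dtmf_flag, vad_flag):
    """Combine the DTMF and VAD flags according to Table 2."""
    if dtmf_flag:
        return "DTMF activity present"            # DTMF wins regardless of the VAD
    return "Speech" if vad_flag else "Silence"
```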
  • when a key is pressed, a pair of tones is generated.
  • One of the tones will belong to the following set of frequencies: ⁇ 697, 770, 852, 941 ⁇ in Hz and one will be from the set ⁇ 1209, 1336, 1477, 1633 ⁇ in Hz, as indicated above in Table 1.
  • These sets of frequencies are termed the low group and the high group frequencies, respectively.
  • sixteen possible tone pairs are possible corresponding to 16 keys of an extended telephone keypad.
  • the tones are required to be received within ±2% of these nominal values. Note that these frequencies were carefully selected so as to minimize the amount of harmonic interaction.
  • the difference in amplitude between the tones (called 'twist') must be within 6dB.
  • a suitable DTMF detection algorithm for detection of DTMF tones in the JVADAD 304 is a modified version of the Goertzel algorithm.
  • the Goertzel algorithm is a recursive method of performing the discrete Fourier transform (DFT) and is more efficient than the DFT or FFT for small numbers of tones.
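For reference, the standard Goertzel recursion that estimates the power near one test frequency is sketched below; the patent's modified version, which operates on the bandpass-filtered signal, is not reproduced here.

```python
import math

def goertzel_power(block, freq, fs=8000):
    """Squared DFT magnitude of 'block' at 'freq' via the Goertzel recursion."""
    omega = 2.0 * math.pi * freq / fs
    coeff = 2.0 * math.cos(omega)
    s1 = s2 = 0.0
    for x in block:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

# Usage: evaluate each of the eight DTMF frequencies (and their second
# harmonics) over a block of N samples and compare the powers to thresholds.
```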
  • the detection of DTMF tones and the regeneration and extension of DTMF tones will be discussed in more detail below.
  • Voice activity detection is preferably performed using the power measures in the first formant region of the input signal x ( n ).
  • the first formant region is defined to be the range of approximately 300-850Hz.
  • the long-term power measure tracks the background noise level in the first formant of the signal.
  • the short-term power measure tracks the speech signal level in first formant of the signal.
  • the VAD 304 also may utilize a hangover counter, h VAD 305.
  • the hangover counter 305 is used to hold the state of the VAD output 320 steady during short periods when the power in the first formant drops to low levels.
  • the first formant power can drop to low levels during short stoppages and also during consonant sounds in speech.
  • the VAD output 320 is held steady to prevent speech from being inadvertently suppressed.
  • an inband signal is any kind of tonal signal within the bandwidth normally used for voice transmission.
  • Exemplary inband signals include facsimile tones, DTMF tones, dial tones, and busy signal tones.
  • the above procedure in equations (32)-(34) is preferably performed for each of the eight DTMF frequencies and their second harmonics for a given block of N samples.
  • the second harmonics are the frequencies that are twice the values of the DTMF frequencies. These frequencies are tested to ensure that voiced speech signals (which have a harmonic structure) are not mistaken for DTMF tones.
  • a further confirmation test may be performed to ensure that the detected DTMF tone pair is stable for a sufficient length of time.
  • the same DTMF tone pair must be detected for a sufficient duration following a block of silence, for example for three consecutive blocks (of approximately 12.75 ms each), to confirm that a valid DTMF tone pair is present according to the specifications used.
  • a modified Goertzel detection algorithm is preferably used. This is achieved by taking advantage of the filter bank 302 in the noise suppression apparatus 300 which already has the input signal split into separate frequency bands.
  • although the Goertzel algorithm is used to estimate the power near a test frequency, ω_0, it suffers from poor rejection of the power outside the vicinity of ω_0.
  • the apparatus 300 therefore uses the output of the bandpass filter whose passband contains ω_0.
  • the apparatus 300 preferably uses the validity tests as described above in, for example, the JVADAD 304.
  • the apparatus 300 may or may not use the confirmation test as described above.
  • a more sophisticated method (than the confirmation test) suitable for the purpose of DTMF tone extension or regeneration is used.
  • the validity tests are preferably conducted in the DTMF Activity Detection portion of the Joint Voice Activity & DTMF Activity Detector 304.
  • the input signal 802 tone starts at around sample 100 and ends at around sample 460, lasting about 45 ms.
  • This block is considered to contain a pause.
  • the next two blocks of samples were also found to contain tone activity at the same frequency.
  • three consecutive blocks of samples contain tone activity following a pause which confirms the presence of a tone of the frequency that is being tested for. (Note that, in the preferred embodiment, the presence of a low group tone and a high group tone must be simultaneously confirmed to confirm the DTMF activity).
  • the output signal 806 shows how the input tone is extended even after the input tone dies off at about sample 460. This extension of the tone is performed in real-time and the extended tone preferably has the same phase, frequency and amplitude as the original input tone.
  • the preferred method extends a tone in a phase-continuous manner as discussed below.
  • the extended tone will continue to maintain the amplitude of the input tone.
  • w(N-1) ≈ B_0 sin(Nω_0 + φ - π/2)
  • w(N) ≈ B_0 sin((N+1)ω_0 + φ - π/2)
  • w ( N -1) and w ( N ) contain two consecutive samples of a sinusoid with frequency ⁇ 0 .
  • the phase and amplitude of this sinusoid preferably possess a deterministic relationship to the phase and amplitude of the input sinusoid u ( n ).
  • the DTMF tone generator 321 can generate a sinusoid using a recursive oscillator that matches the phase and amplitude of the input sinusoid u ( n ) for sample times greater than N using the following procedure:
  • the procedure in equations (39)-(42) can be used to extend each of the two tones.
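Equations (39)-(42) are not reproduced here; the sketch below only illustrates the underlying idea, namely that a recursive oscillator seeded with two consecutive samples of a sinusoid continues it with the same frequency, phase and amplitude. The function and argument names are illustrative.

```python
import math

def extend_tone(w_prev, w_curr, freq, n_samples, fs=8000):
    """Continue a sinusoid from two consecutive samples using
    w(n) = 2*cos(omega0)*w(n-1) - w(n-2)."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / fs)
    out = []
    for _ in range(n_samples):
        w_next = coeff * w_curr - w_prev
        out.append(w_next)
        w_prev, w_curr = w_curr, w_next
    return out
```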
  • the extension of the tones will be performed by a weighted combination of the input signal with the generated tones.
  • a weighted combination is preferably used to prevent abrupt changes in the amplitude of the signal due to slight amplitude and/or frequency mismatch between the input tones and the generated tones which produces impulsive noise.
  • u ( n ) is the input signal
  • w' L ( n ) is the low group generated tone
  • w' H ( n ) is the high group generated tone
  • the gain parameter used to weight the combination increases linearly from 0 to 1 over a short period of time, preferably 5 ms or less.
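A sketch of that weighted substitution follows; the exact weighting used in the patent is not reproduced, the linear ramp length follows the 5 ms figure above, and the function and variable names are illustrative.

```python
def crossfade_to_generated(u, w_low, w_high, fs=8000, ramp_ms=5.0):
    """Gradually replace the input signal with the generated tone pair so that
    small amplitude or frequency mismatches do not produce impulsive noise."""
    ramp = max(1, int(fs * ramp_ms / 1000.0))
    out = []
    for n, (x, wl, wh) in enumerate(zip(u, w_low, w_high)):
        g = min(1.0, n / ramp)                     # rises linearly from 0 to 1
        out.append((1.0 - g) * x + g * (wl + wh))  # weighted combination
    return out
```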
  • x ( n ) is the input sample at time n to the resonator bank 302.
  • the resonator bank 302 splits this signal into a set of bandpass signals ⁇ x k ( n ) ⁇ .
  • y(n) = Σ_k G_k(n)·x_k(n), where
  • G k ( n ) and x k ( n ) are the gain factor and bandpass signal from the k th frequency band, respectively
  • y ( n ) is the output of the noise suppression apparatus 300.
  • the set of bandpass signals ⁇ x k ( n ) ⁇ collectively may be referred to as the input signal to the DTMF tone extension method.
  • Figure 5 presents an exemplary method 500 for extending DTMF tones.
  • the validity tests of the DTMF detection method are preferably applied to each block. If a valid DTMF tone pair is detected, the corresponding digit is decoded based on Table 1.
  • the decoded digits are output from the DTMF activity detector, for example the JVADAD.
  • the ith output of the DTMF activity detector is Di, with larger i corresponding to a more recent output.
  • the four output blocks will be referred to as Di (i.e., D1, D2, D3 and D4).
  • each output block can have seventeen possible values: the sixteen possible values from the extended keypad and a value indicating that no DTMF tone is present.
  • the output blocks Di may be transmitted to the DTMF tone generator 321 in the voice activity detection and DTMF activity detection signal 320.
  • the appropriate pair of tones corresponding to the digit are generated, for example by using equations (39)-(42), and are used to gradually substitute the input tones. This corresponds to steps 510 and 512 of figure 5.
  • the generated tones are maintained until a DTMF tone pair is no longer detected in a block.
  • the delay in detecting the DTMF tone signal (due to, e.g., the block length) is offset by the delay in detecting the end of a DTMF tone signal.
  • the DTMF tone is extended through the use of generated DTMF tones 329.
  • the generated tones continue after a DTMF tone is no longer detected for example for approximately one-half block after a DTMF tone pair is not detected in a block.
  • since the JVADAD may take approximately one block to detect a DTMF tone pair, the DTMF tone generator extends the DTMF tone approximately one block beyond the actual DTMF tone pair.
  • the DTMF tone output should be at least the length of the minimum input tone.
  • the length of time it takes for the DTMF tone pair to be detected can vary based on the JVADAD's detection method and the block length used. Accordingly, the proper extension period may vary as well.
  • when three or more consecutive blocks contain valid digits, the DTMF tone generator 321 generates DTMF tones 329 to replace the input DTMF tones. This corresponds to steps 513 and 514 of Figure 5.
  • the input signal is attenuated for a suitable time, for example for approximately three consecutive 12.75 ms blocks, to ensure that there is a sufficient pause following the output DTMF signal. This corresponds to steps 515 and 516 of Figure 5.
  • it is possible for the current block to contain DTMF activity although the current block is scheduled to be suppressed as in equation (48). This can happen, for instance, when DTMF tone pairs are spaced apart by the minimum allowed time period. If the input signal 316 contains legitimate DTMF tones, then the digits will normally be spaced apart by at least three consecutive blocks of silence. Thus, only the first block of samples in a valid DTMF tone pair will generally suffer suppression. This will, however, be compensated for by the DTMF tone extension.
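Pulling the block-level rules above together, a simplified per-block extension policy might look like the following; the structure is an assumption, while the counts (three blocks to confirm, roughly one extra generated block, roughly three attenuated blocks for the trailing pause) come from the description.

```python
def dtmf_extension_step(state, digit_detected):
    """One block of a simplified DTMF tone-extension policy (sketch only)."""
    run, extending, extend_left, pause_left = state
    if digit_detected:
        run += 1
        if run >= 3:                      # three consecutive valid blocks confirm the pair
            extending = True
        action = "generate" if extending else "pass"
    else:
        run = 0
        if extending:                     # input tone just ended
            extending, extend_left = False, 1
        if extend_left > 0:
            action, extend_left, pause_left = "generate", extend_left - 1, 3
        elif pause_left > 0:
            action, pause_left = "attenuate", pause_left - 1
        else:
            action = "pass"
    return (run, extending, extend_left, pause_left), action

state = (0, False, 0, 0)
for has_digit in [0, 1, 1, 1, 1, 0, 0, 0, 0, 0]:
    state, action = dtmf_extension_step(state, bool(has_digit))
    print(action)
```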
  • DTMF tone regeneration is an alternative to DTMF tone extension.
  • DTMF tone regeneration may be performed, for example, in the DTMF tone generator 321.
  • the extension method introduces very little delay (approximately one block in the illustrated embodiment) but is slightly more complicated because the phases of the tones are matched for proper detection of the DTMF tones.
  • the regeneration method introduces a larger delay (a few blocks in the illustrated embodiment) but is simpler since it does not require the generated tones to match the phase of the input tones.
  • the delay introduced in either case is temporary and happens only for DTMF tones. The delay causes a small amount of the signal following DTMF tones to be suppressed to ensure sufficient pauses following a DTMF tone pair.
  • DTMF regeneration may also cause a single block of speech signal following within a second of a DTMF tone pair to be suppressed. Since this is a highly improbable event and only the first N samples of speech suffer the suppression, however, no loss of useful information is likely.
  • the set of signals ⁇ x k ( n ) ⁇ may be referred to collectively as the input to the DTMF Regeneration method.
  • Σ_k G_k x_k(n) is the output of the gain multiplier
  • w' L ( n ) and w' H ( n ) are the generated low and high group tones (if any)
  • g_1(n) and g_2(n) are additional gain factors.
  • g_2(n) = 0.
  • Preferably two recursive oscillators 332 are used to regenerate the appropriate low and high group tones corresponding to the decoded digit.
  • regeneration of the DTMF tones uses the current and five previous output blocks from the DTMF tone activity detector (e.g., in the JVADAD), two flags, and two counters.
  • the previous five and the current output blocks can be referred to as D1, D2, D3, D4, D5, and D6, respectively.
  • each condition in Table 4 is checked in the order presented in Table 4 at the end of a block (with the exception of conditions 1-3, which are mutually exclusive). The corresponding action is then taken for the next block if the condition is true. Therefore, multiple actions may be taken at the beginning of a block.
  • N = 102 is used for DTMF tone detection for use with the DTMF tone regeneration apparatus and method.
  • the DTMF tone regeneration preferably continues until after the input DTMF pair is not detected in the current block.
  • the generated DTMF tones 329 may be continuously output for a sufficient time (after the DTMF pair is no longer detected in the current block), for example for a further three or four blocks (to ensure that a sufficient duration of the DTMF tones are sent).
  • the DTMF tone regeneration may take place for an extra period of time, for example one-half of a block or one block of N samples, to ensure that the DTMF tones meet minimum duration standards.
  • suppression of the input signal continues, for example by setting the SUPPRESS flag equal to 1 (as indicated if condition 1 of Table 4 is satisfied).
  • Exemplary waiting periods are from about half a second to a second (about 40 to 80 blocks).
  • the waiting period is used to prevent the leakage of short amounts of DTMF tones from the input signal.
  • the use of wait_count facilitates counting down the number of blocks to be suppressed from the point where a DTMF tone pair is first detected. This corresponds to steps 622 and 624 of Figure 6.
  • g_2(n) = 1.
  • DTMF tone extension and DTMF tone regeneration methods are described separately. However, it is possible to combine DTMF tone extension and DTMF tone regeneration into one method and/or apparatus.
  • although the DTMF tone extension and regeneration methods disclosed here are described with a noise suppression system, these methods may also be used with other speech enhancement systems such as adaptive gain control systems, echo cancellation, and echo suppression systems.
  • the DTMF tone extension and regeneration described are especially useful when delay cannot be tolerated. However, if delay is tolerable, e.g., if a 20 ms delay is tolerable in a speech enhancement system (which may be the case if the speech enhancement system operates in conjunction with a speech compression device), then the extension and/or regeneration of tones may not be necessary. However, a speech enhancement system that does not have a DTMF detector may scale the tones inappropriately. With a DTMF detector present, the noise suppression apparatus and method can detect the presence of the tones and set the scaling factors for the appropriate subbands to unity.
  • the filter bank 302, JVADAD 304, hangover counter 305, NSR estimator 306, power estimator 308, NSR adapter 310, gain computer 312, gain multiplier 314, compensation factor adapter 402, long term power estimator 308a, short term power estimator 308b, power compensator 404, DTMF tone generator 321, oscillators 332, undersampling circuit 330, and combiner 315 may be implemented using combinatorial and sequential logic, an ASIC, through software implemented by a CPU, a DSP chip, or the like.
  • the foregoing hardware elements may be part of hardware that is used to perform other operational functions.
  • the input signals, frequency bands, power measures and estimates, gain factors, NSRs and adapted NSRs, flags, prediction error, compensator factors, counters, and constants may be stored in registers, RAM, ROM, or the like, and may be generated through software, through a data structure located in a memory device such as RAM or ROM, and so forth.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)
  • Noise Elimination (AREA)
EP06076642A 1999-01-07 2000-01-07 Verfahren und Vorrichtung für adaptive Rauschunterdrückung Withdrawn EP1729287A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11524599P 1999-01-07 1999-01-07
EP00902355A EP1141948B1 (de) 1999-01-07 2000-01-07 Verfahren und vorrichtung zur adaptiven rauschunterdrückung

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
EP00902355A Division EP1141948B1 (de) 1999-01-07 2000-01-07 Verfahren und vorrichtung zur adaptiven rauschunterdrückung

Publications (1)

Publication Number Publication Date
EP1729287A1 true EP1729287A1 (de) 2006-12-06

Family

ID=37198612

Family Applications (2)

Application Number Title Priority Date Filing Date
EP06076642A Withdrawn EP1729287A1 (de) 1999-01-07 2000-01-07 Verfahren und Vorrichtung für adaptive Rauschunterdrückung
EP06020682A Withdrawn EP1748426A3 (de) 1999-01-07 2000-01-07 Verfahren und Vorrichtung zur adaptiven Rauschunterdrückung

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP06020682A Withdrawn EP1748426A3 (de) 1999-01-07 2000-01-07 Verfahren und Vorrichtung zur adaptiven Rauschunterdrückung

Country Status (1)

Country Link
EP (2) EP1729287A1 (de)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009043066A1 (en) * 2007-10-02 2009-04-09 Akg Acoustics Gmbh Method and device for low-latency auditory model-based single-channel speech enhancement
WO2010013939A2 (en) * 2008-07-29 2010-02-04 Lg Electronics Inc. An apparatus for processing an audio signal and method thereof
US8515087B2 (en) 2009-03-08 2013-08-20 Lg Electronics Inc. Apparatus for processing an audio signal and method thereof
CN110431625A (zh) * 2019-06-21 2019-11-08 深圳市汇顶科技股份有限公司 语音检测方法、语音检测装置、语音处理芯片以及电子设备

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201704489D0 (en) 2017-03-21 2017-05-03 Semafone Ltd Telephone signal processing


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4351983A (en) 1979-03-05 1982-09-28 International Business Machines Corp. Speech detector with variable threshold
US4454609A (en) 1981-10-05 1984-06-12 Signatron, Inc. Speech intelligibility enhancement
US4628529A (en) 1985-07-01 1986-12-09 Motorola, Inc. Noise suppression system
US4630305A (en) 1985-07-01 1986-12-16 Motorola, Inc. Automatic gain selector for a noise suppression system
US4630304A (en) 1985-07-01 1986-12-16 Motorola, Inc. Automatic background noise estimator for a noise suppression system
WO1989003141A1 (en) * 1987-10-01 1989-04-06 Motorola, Inc. Improved noise suppression system
US5012519A (en) 1987-12-25 1991-04-30 The Dsp Group, Inc. Noise reduction system
US5632003A (en) * 1993-07-16 1997-05-20 Dolby Laboratories Licensing Corporation Computationally efficient adaptive bit allocation for coding method and apparatus
WO1996024128A1 (en) * 1995-01-30 1996-08-08 Telefonaktiebolaget Lm Ericsson Spectral subtraction noise suppression method
US5706395A (en) * 1995-04-19 1998-01-06 Texas Instruments Incorporated Adaptive weiner filtering using a dynamic suppression factor
EP0856833A2 (de) * 1997-01-29 1998-08-05 Nec Corporation Verfahren und Vorrichtung zur Rauschdämpfung

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DTMF TONE GENERATION AND DETECTION: AN IMPLEMENTATION USING THE TMS320C54X, 1997, pages 5 - 12
GAGNON L ET AL: "SPEECH ENHANCEMENT USING RESONATOR FILTERBANKS", PROC. IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH & SIGNAL PROCESSING (ICASSP '91), 14 May 1991 (1991-05-14) - 17 May 1991 (1991-05-17), IEEE, NEW YORK, USA, pages 981 - 984, XP000222243, ISBN: 0-7803-0003-3 *
R. J. MCAULAY; M. L. MALPASS: "Speech Enhancement Using a Soft-Decision Noise Suppression Filter", IEEE TRANS. ACOUST., SPEECH, SIGNAL PROCESSING, vol. ASSP-28, no. 2, April 1980 (1980-04-01), pages 137 - 145

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2465910B (en) * 2007-10-02 2012-02-15 Akg Acoustics Gmbh Method and device for low-latency auditory model-based single-channel speech enhancement
GB2465910A (en) * 2007-10-02 2010-06-09 Akg Acoustics Gmbh Method and device for low-latency auditory model-based single-channel speech enhancement
WO2009043066A1 (en) * 2007-10-02 2009-04-09 Akg Acoustics Gmbh Method and device for low-latency auditory model-based single-channel speech enhancement
WO2010013939A2 (en) * 2008-07-29 2010-02-04 Lg Electronics Inc. An apparatus for processing an audio signal and method thereof
WO2010013941A2 (en) * 2008-07-29 2010-02-04 Lg Electronics Inc. An apparatus for processing an audio signal and method thereof
WO2010013941A3 (en) * 2008-07-29 2010-06-24 Lg Electronics Inc. An apparatus for processing an audio signal and method thereof
WO2010013939A3 (en) * 2008-07-29 2010-06-24 Lg Electronics Inc. An apparatus for processing an audio signal and method thereof
US8275154B2 (en) 2008-07-29 2012-09-25 Lg Electronics Inc. Apparatus for processing an audio signal and method thereof
US8275150B2 (en) 2008-07-29 2012-09-25 Lg Electronics Inc. Apparatus for processing an audio signal and method thereof
CN102016994B (zh) * 2008-07-29 2013-07-17 Lg电子株式会社 用于处理音频信号的设备及其方法
US8515087B2 (en) 2009-03-08 2013-08-20 Lg Electronics Inc. Apparatus for processing an audio signal and method thereof
US8538043B2 (en) 2009-03-08 2013-09-17 Lg Electronics Inc. Apparatus for processing an audio signal and method thereof
CN110431625A (zh) * 2019-06-21 2019-11-08 深圳市汇顶科技股份有限公司 语音检测方法、语音检测装置、语音处理芯片以及电子设备

Also Published As

Publication number Publication date
EP1748426A2 (de) 2007-01-31
EP1748426A3 (de) 2007-02-21

Similar Documents

Publication Publication Date Title
EP1141948B1 (de) Verfahren und vorrichtung zur adaptiven rauschunterdrückung
USRE43191E1 (en) Adaptive Weiner filtering using line spectral frequencies
US5706395A (en) Adaptive weiner filtering using a dynamic suppression factor
US7492889B2 (en) Noise suppression based on bark band wiener filtering and modified doblinger noise estimate
US7957965B2 (en) Communication system noise cancellation power signal calculation techniques
US5839101A (en) Noise suppressor and method for suppressing background noise in noisy speech, and a mobile station
US6839666B2 (en) Spectrally interdependent gain adjustment techniques
US6023674A (en) Non-parametric voice activity detection
US6766292B1 (en) Relative noise ratio weighting techniques for adaptive noise cancellation
US7649988B2 (en) Comfort noise generator using modified Doblinger noise estimate
US7058572B1 (en) Reducing acoustic noise in wireless and landline based telephony
EP1080465B1 (de) Rauschunterdrückung mittels spektraler subtraktion unter verwendung von linearem faltungsprodukt und kausaler filterung
US20050108004A1 (en) Voice activity detector based on spectral flatness of input signal
EP1806739B1 (de) Rauschunterdrücker
US6233549B1 (en) Low frequency spectral enhancement system and method
US20040078199A1 (en) Method for auditory based noise reduction and an apparatus for auditory based noise reduction
JPH09503590A (ja) 会話の品質向上のための背景雑音の低減
US6671667B1 (en) Speech presence measurement detection techniques
EP1093112A2 (de) Verfahren zur Erzeugung von Sprachmerkmalsignalen und Vorrichtung zu seiner Durchführung
EP1729287A1 (de) Verfahren und Vorrichtung für adaptive Rauschunterdrückung
EP1278185A2 (de) Verfahren zur Verbesserung von Geräuschunterdrückung bei der Sprachübertragung
Puder Kalman‐filters in subbands for noise reduction with enhanced pitch‐adaptive speech model estimation

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20060830

AC Divisional application: reference to earlier application

Ref document number: 1141948

Country of ref document: EP

Kind code of ref document: P

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20070512