EP1141949A1 - Elimination of noise from a speech signal - Google Patents

Elimination of noise from a speech signal

Info

Publication number
EP1141949A1
Authority
EP
European Patent Office
Prior art keywords
noise
input signal
spectral components
correlation
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP00979526A
Other languages
German (de)
English (en)
French (fr)
Inventor
Chao-Shih J. Huang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips Electronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics NV filed Critical Koninklijke Philips Electronics NV
Priority to EP00979526A priority Critical patent/EP1141949A1/en
Publication of EP1141949A1 publication Critical patent/EP1141949A1/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L21/0216Noise filtering characterised by the method used for estimating noise

Definitions

  • The invention relates to a method for reducing noise in a noisy time-varying input signal, such as a speech signal.
  • The invention further relates to an apparatus for reducing noise in a noisy time-varying input signal.
  • The presence of noise in a time-varying input signal hinders the accuracy and quality of processing the signal. This is particularly the case for a speech signal, for instance when the speech signal is encoded.
  • The presence of noise is even more harmful if the signal is ultimately not presented to a user, who can cope relatively well with noise, but is instead processed automatically, as is the case with a speech signal that is recognized automatically.
  • Automatic speech recognition and coding systems are increasingly used. Although the performance of such systems is continuously improving, it is desired that their accuracy be increased further, particularly in adverse environments with a low signal-to-noise ratio (SNR) or a low-bandwidth signal.
  • Speech recognition systems compare a representation of an input speech signal against a model λx of reference signals, such as hidden Markov models (HMMs) built from representations of a training speech signal.
  • The representations are usually observation vectors with LPC or cepstral components.
  • The reference signals are usually relatively clean (high SNR, high bandwidth), whereas the input signal during actual use is distorted (lower SNR and/or lower bandwidth). It is therefore desired to eliminate at least part of the noise present in the input signal in order to obtain a noise-suppressed signal.
  • The conventional spectral subtraction technique involves determining the spectral components of the noisy speech and estimating the spectral components of the noise.
  • The spectral components may, for instance, be calculated using a Fast Fourier Transform (FFT).
  • The noise spectral components may be estimated once, from a part of the signal with predominantly representative noise.
  • Preferably, the noise is estimated 'on-the-fly', for instance each time a 'silent' part with no significant amount of speech signal is detected in the input signal.
  • The noise-suppressed speech is estimated by subtracting an average noise spectrum from the noisy speech spectrum:

    S(w;m) = Y(w;m) − N̄(w;m)

    where S(w;m), Y(w;m), and N(w;m) are the magnitude spectra of the estimated speech s, the noisy speech y, and the noise n, and w and m are the frequency and time indices, respectively.
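As an illustration, the conventional magnitude spectral subtraction described above can be sketched in a few lines of Python (a minimal sketch, not the patent's implementation; the non-negativity clamp reflects the constraint that spectral components are forced to be positive):

```python
import numpy as np

def spectral_subtraction(y_mag, n_mag, floor=0.0):
    """Plain magnitude spectral subtraction: S = Y - N_bar, clamped at floor.

    y_mag: magnitude spectrum of the noisy speech for one frame
    n_mag: estimated average noise magnitude spectrum
    """
    # Subtract the noise estimate and force each component non-negative
    return np.maximum(y_mag - n_mag, floor)
```

In practice `n_mag` would be the running average of spectra taken from 'silent' parts of the input, as described above.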
  • The correlation equation is given by:
  • The correlation coefficient ρsn may be fixed, for instance based on analyzing representative input signals.
  • Preferably, the correlation coefficient ρsn is estimated based on the actual input signal.
  • The estimation is based on minimizing a negative spectrum ratio (NSR).
  • The expected negative spectrum ratio R is defined as:
  • The correlation coefficient is advantageously obtained by the following gradient operation:
  • In this way, the correlation coefficient can be learned along the direction of NSR decrement. Preferably, this is done in an iterative algorithm.
  • The equation representing the correlated spectral subtraction may be solved directly. Preferably, the equation is solved in an iterative manner, improving the estimate of the clean speech.
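The direct (one-step) solution can be illustrated under the assumption that the correlation equation has the form |Y(k)|^2 = |S(k)|^2 + 2·ρsn·|S(k)||N(k)| + |N(k)|^2 (an assumption consistent with the surrounding text; the patent's equations (7) and (8) are not reproduced in this extract). Solving this quadratic in |S(k)| and taking the positive root gives:

```python
import numpy as np

def css_one_step(y_mag, n_mag, rho):
    """One-step correlated spectral subtraction (sketch).

    Solves |S|^2 + 2*rho*|S||N| + |N|^2 = |Y|^2 for |S|,
    choosing the positive root of the quadratic.
    """
    disc = (rho * n_mag) ** 2 + y_mag ** 2 - n_mag ** 2
    disc = np.maximum(disc, 0.0)            # guard against a negative discriminant
    return np.maximum(-rho * n_mag + np.sqrt(disc), 0.0)
```

Note that with ρsn = 0 this reduces to ordinary power spectral subtraction, sqrt(|Y|^2 − |N|^2).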
  • The figure shows a block diagram of a conventional speech processing system wherein the invention can be used.
  • The noise reduction according to the invention is particularly useful for processing noisy speech signals, for instance when coding such a signal or automatically recognizing it.
  • A person skilled in the art can equally well apply the noise elimination technique in a speech coding system.
  • Speech recognition systems, such as large vocabulary continuous speech recognition systems, typically use a collection of recognition models to recognize an input pattern. For instance, an acoustic model and a vocabulary may be used to recognize words, and a language model may be used to improve the basic recognition result.
  • The figure illustrates a typical structure of a large vocabulary continuous speech recognition system 100. The following definitions are used for describing the system and recognition method: λx: a set of trained speech models.
  • The system 100 comprises a spectral analysis subsystem 110 and a unit matching subsystem 120.
  • In the spectral analysis subsystem 110, the speech input signal (SIS) is analyzed to calculate a representative vector of features (observation vector, OV).
  • The speech signal is digitized (e.g. sampled at a rate of 6.67 kHz) and pre-processed, for instance by applying pre-emphasis.
  • Consecutive samples are grouped (blocked) into frames corresponding to, for instance, 32 ms of speech signal.
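The pre-emphasis and blocking steps can be sketched as follows (the pre-emphasis factor 0.97 is a common choice, not taken from the patent; 213 samples corresponds to roughly 32 ms at a 6.67 kHz sampling rate):

```python
import numpy as np

def preprocess(signal, alpha=0.97, frame_len=213):
    """Pre-emphasize a digitized speech signal and block it into frames.

    alpha:     pre-emphasis coefficient, x'[t] = x[t] - alpha * x[t-1]
    frame_len: samples per frame (~32 ms at 6.67 kHz)
    """
    # Pre-emphasis boosts the high-frequency part of the spectrum
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Block consecutive samples into non-overlapping frames
    n_frames = len(emphasized) // frame_len
    return emphasized[:n_frames * frame_len].reshape(n_frames, frame_len)
```

Real front ends usually use overlapping, windowed frames; the non-overlapping blocking here keeps the sketch minimal.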
  • An acoustic model provides the first term of equation (a).
  • The acoustic model is used to estimate the probability P(Y|W) of a sequence of observation vectors Y for a given word sequence W.
  • A speech recognition unit is represented by a sequence of acoustic references. Various forms of speech recognition units may be used. As an example, a whole word or even a group of words may be represented by one speech recognition unit.
  • A word model (WM) provides, for each word of a given vocabulary, a transcription into a sequence of acoustic references.
  • A whole word may be represented by one speech recognition unit, in which case a direct relationship exists between the word model and the speech recognition unit.
  • Alternatively, a word model is given by a lexicon 134, describing the sequence of sub-word units relating to a word of the vocabulary, and the sub-word models 132, describing sequences of acoustic references of the involved speech recognition unit.
  • A word model composer 136 composes the word model based on the sub-word models 132 and the lexicon 134.
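The composition performed by the word model composer 136 amounts to concatenating the acoustic-reference sequences of a word's sub-word units. A minimal sketch (the dictionary layout and the example phone names are illustrative assumptions, not the patent's data structures):

```python
def compose_word_model(lexicon, subword_models, word):
    """Concatenate the acoustic-reference sequences of the sub-word units
    listed in the lexicon entry for `word`."""
    return [ref for unit in lexicon[word] for ref in subword_models[unit]]

# Hypothetical example: lexicon maps words to phone sequences,
# sub-word models map each phone to its acoustic references.
lexicon = {"cat": ["k", "ae", "t"]}
subword_models = {"k": ["k1"], "ae": ["ae1", "ae2"], "t": ["t1"]}
```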
  • The (sub-)word models are typically based on Hidden Markov Models (HMMs), which are widely used to stochastically model speech signals.
  • Each recognition unit (word model or sub-word model) is typically characterized by an HMM, whose parameters are estimated from a training set of data. An HMM state corresponds to an acoustic reference.
  • Various techniques are known for modeling a reference, including discrete or continuous probability densities.
  • Each sequence of acoustic references which relates to one specific utterance is also referred to as an acoustic transcription of the utterance. It will be appreciated that if recognition techniques other than HMMs are used, the details of the acoustic transcription will be different.
  • A word-level matching system 130 of the figure matches the observation vectors against all sequences of speech recognition units and provides the likelihoods of a match between the vector and a sequence. If sub-word units are used, constraints can be placed on the matching by using the lexicon 134 to limit the possible sequences of sub-word units to sequences in the lexicon 134. This reduces the outcome to possible sequences of words.
  • A sentence-level matching system 140 may be used which, based on a language model (LM), places further constraints on the matching so that the paths investigated are those corresponding to word sequences which are proper sequences as specified by the language model.
  • The language model provides the second term P(W) of equation (a).
  • Combining the results of the acoustic model with those of the language model results in an outcome of the unit matching subsystem 120, which is a recognized sentence (RS) 152.
  • The language model used in pattern recognition may include syntactical and/or semantical constraints 142 of the language and the recognition task.
  • A language model based on syntactical constraints is usually referred to as a grammar 144.
  • N-gram word models are widely used.
  • In an N-gram model, the term P(wj | w1w2w3...wj−1) is approximated by P(wj | wj−N+1...wj−1).
  • In practice, bigrams or trigrams are used; in a trigram, the term P(wj | w1w2w3...wj−1) is approximated by P(wj | wj−2wj−1).
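The N-gram approximation can be made concrete with a maximum-likelihood bigram estimator (a textbook sketch, not part of the patent):

```python
from collections import defaultdict

def bigram_probs(tokens):
    """Maximum-likelihood bigram estimates P(w_j | w_{j-1}) from a token list."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, word in zip(tokens, tokens[1:]):
        counts[prev][word] += 1
    # Normalize each conditional distribution by the total count of its context
    return {prev: {w: c / sum(ws.values()) for w, c in ws.items()}
            for prev, ws in counts.items()}
```

Production language models would additionally smooth these estimates to handle unseen word pairs.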
  • The speech processing system may be implemented using conventional hardware.
  • A speech recognition system may be implemented on a computer, such as a PC, where the speech input is received via a microphone and digitized by a conventional audio interface card. All additional processing takes place in the form of software procedures executed by the CPU.
  • The speech may also be received via a telephone connection, e.g. using a conventional modem in the computer.
  • The speech processing may also be performed using dedicated hardware, e.g. built around a DSP.
  • The noise elimination according to the invention may be performed in a preprocessing step before the spectral analysis subsystem 110.
  • Preferably, the noise elimination is integrated in the spectral analysis subsystem 110, for instance to avoid requiring several conversions from the time domain to the spectral domain and vice versa.
  • All hardware and processing capabilities for performing the invention are normally present in a speech recognition or speech coding system.
  • The noise elimination technique according to the invention is normally executed on a processor, such as a DSP or the microprocessor of a personal computer, under control of a suitable program. Programming the elementary functions of the noise elimination technique, such as performing a conversion from the time domain to the spectral domain, falls well within the range of a skilled person.
  • The speech signal y can be transformed into a set of spectral components Y(k). It will be appreciated that if a suitable conversion to the spectral domain has already taken place, it is sufficient to retrieve the spectral components resulting from that conversion.
  • Let |S(k)|, |N(k)|, and |Y(k)| be the corresponding magnitude spectra of the time-domain signals s, n, and y, respectively.
  • Individual spectral components are forced to be positive: the situation wherein an individual spectral component Y(k) of the noisy speech y is less than the corresponding spectral component N(k) of the noise signal n is not allowed.
  • Equation (8) has two possible solutions. The positive solution is chosen, since the direction of NSR decrement is preferred.
  • A preferred iterative algorithm for estimating |S(k)| with a specified correlation coefficient ρsn is as follows: LOOP k (0 : N−1)
  • The outer loop over k deals with all individual spectral components.
  • The inner loop is performed until the iteration has converged (no significant change occurs anymore in the estimated speech).
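One way to realize such an outer/inner loop is a per-component fixed-point iteration on |S|^2 = |Y|^2 − |N|^2 − 2·ρsn·|S||N| (the update rule is an assumption consistent with the correlated model sketched earlier, not the patent's exact algorithm; the loop over k is vectorized for brevity):

```python
import numpy as np

def css_iterative(y_mag, n_mag, rho, tol=1e-6, max_iter=50):
    """Iterative correlated spectral subtraction (sketch).

    Fixed-point iteration, per spectral component k, of
        |S| <- sqrt(max(|Y|^2 - |N|^2 - 2*rho*|S||N|, 0))
    until the estimate no longer changes significantly.
    """
    s = np.maximum(y_mag - n_mag, 0.0)      # initial estimate: plain subtraction
    for _ in range(max_iter):
        s_new = np.sqrt(np.maximum(
            y_mag ** 2 - n_mag ** 2 - 2.0 * rho * s * n_mag, 0.0))
        converged = np.max(np.abs(s_new - s)) < tol
        s = s_new
        if converged:
            break
    return s
```

For ρsn in a moderate range the update is a contraction around the positive root of the quadratic, so the iteration converges to the same solution as the one-step formula.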
  • Preferably, the correlation coefficient ρsn is estimated based on the actual input signal y.
  • The function of the negative spectrum ratio (NSR) for the correlated spectral subtraction algorithm according to the invention is defined as follows:
  • The fNSR function shown in equation (5) is a zero-one (indicator) function.
  • A smoothed zero-one function family, i.e. sigmoid functions, is preferably used instead.
  • The following sigmoid function is advantageously used for further derivation, due to its differentiability:
  • The correlation coefficient is preferably obtained by the following gradient operation:
  • In this way, the correlation coefficient can be learned along the direction of decrease in NSR. This implies reducing the residual noise in the estimated spectrum using the proposed correlated spectral subtraction (CSS) algorithm.
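The smoothed NSR and the gradient-based learning of ρsn can be sketched as follows (the residual used inside the NSR and the numerical gradient are illustrative assumptions; the patent's equation (5) and its exact gradient are not reproduced in this extract):

```python
import numpy as np

def sigmoid(x, beta=10.0):
    """Smoothed, differentiable substitute for the zero-one indicator
    'this spectral component would go negative' (x < 0)."""
    return 1.0 / (1.0 + np.exp(beta * x))

def nsr(y_mag, n_mag, rho):
    """Smoothed negative spectrum ratio: the (soft) fraction of components
    whose subtracted spectrum is negative. The residual below is a
    hypothetical choice for illustration."""
    resid = y_mag ** 2 - n_mag ** 2 - 2.0 * rho * y_mag * n_mag
    return np.mean(sigmoid(resid))

def learn_rho(y_mag, n_mag, rho=0.0, lr=0.01, steps=100, eps=1e-4):
    """Gradient descent on the NSR: move rho along the direction of
    NSR decrement (central-difference gradient for simplicity)."""
    for _ in range(steps):
        grad = (nsr(y_mag, n_mag, rho + eps)
                - nsr(y_mag, n_mag, rho - eps)) / (2.0 * eps)
        rho -= lr * grad
    return rho
```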
  • The block indicated as block 1 is the same as used for the iterative algorithm assuming a fixed correlation coefficient ρsn.
  • Instead of the iterative solution in block 1, the one-step solution of equations (7) or (8) may also be used.
  • The resulting estimated spectral components of the noise-eliminated signal may be converted back to the time domain.
  • Alternatively, the spectral components may be used directly for subsequent further processing, such as coding or automatically recognizing the signal.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
EP00979526A 1999-10-29 2000-10-27 Elimination of noise from a speech signal Withdrawn EP1141949A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP00979526A EP1141949A1 (en) 1999-10-29 2000-10-27 Elimination of noise from a speech signal

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP99203565 1999-10-29
EP99203565 1999-10-29
EP00979526A EP1141949A1 (en) 1999-10-29 2000-10-27 Elimination of noise from a speech signal
PCT/EP2000/010713 WO2001031640A1 (en) 1999-10-29 2000-10-27 Elimination of noise from a speech signal

Publications (1)

Publication Number Publication Date
EP1141949A1 true EP1141949A1 (en) 2001-10-10

Family

ID=8240796

Family Applications (1)

Application Number Title Priority Date Filing Date
EP00979526A Withdrawn EP1141949A1 (en) 1999-10-29 2000-10-27 Elimination of noise from a speech signal

Country Status (3)

Country Link
EP (1) EP1141949A1 (ja)
JP (1) JP2003513320A (ja)
WO (1) WO2001031640A1 (ja)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4434813B2 (ja) * 2004-03-30 2010-03-17 学校法人早稲田大学 雑音スペクトル推定方法、雑音抑圧方法および雑音抑圧装置

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP3452443B2 (ja) * 1996-03-25 2003-09-29 三菱電機株式会社 騒音下音声認識装置及び騒音下音声認識方法

Non-Patent Citations (1)

Title
See references of WO0131640A1 *

Also Published As

Publication number Publication date
JP2003513320A (ja) 2003-04-08
WO2001031640A1 (en) 2001-05-03

Similar Documents

Publication Publication Date Title
EP2216775B1 (en) Speaker recognition
US6950796B2 (en) Speech recognition by dynamical noise model adaptation
US6125345A (en) Method and apparatus for discriminative utterance verification using multiple confidence measures
KR100766761B1 (ko) 화자-독립형 보이스 인식 시스템용 보이스 템플릿을구성하는 방법 및 장치
US20060165202A1 (en) Signal processor for robust pattern recognition
JP4531166B2 (ja) 信頼性尺度の評価を用いる音声認識方法
US20060206321A1 (en) Noise reduction using correction vectors based on dynamic aspects of speech and noise normalization
US8615393B2 (en) Noise suppressor for speech recognition
EP1465154B1 (en) Method of speech recognition using variational inference with switching state space models
EP0470245A1 (en) SPECTRAL EVALUATION PROCEDURE FOR IMPROVING RESISTANCE TO NOISE IN VOICE RECOGNITION.
JP3451146B2 (ja) スペクトルサブトラクションを用いた雑音除去システムおよび方法
JPH0850499A (ja) 信号識別方法
Chowdhury et al. Bayesian on-line spectral change point detection: a soft computing approach for on-line ASR
JPH075892A (ja) 音声認識方法
EP1116219B1 (en) Robust speech processing from noisy speech models
US20040064315A1 (en) Acoustic confidence driven front-end preprocessing for speech recognition in adverse environments
AU776919B2 (en) Robust parameters for noisy speech recognition
JP4705414B2 (ja) 音声認識装置、音声認識方法、音声認識プログラムおよび記録媒体
GB2385697A (en) Speech recognition
Liao et al. Joint uncertainty decoding for robust large vocabulary speech recognition
Haton Automatic speech recognition: A Review
FI111572B (fi) Menetelmä puheen käsittelemiseksi akustisten häiriöiden läsnäollessa
Kotnik et al. Efficient noise robust feature extraction algorithms for distributed speech recognition (DSR) systems
WO2001031640A1 (en) Elimination of noise from a speech signal
JP2007508577A (ja) 音声認識システムの環境的不整合への適応方法

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

17P Request for examination filed

Effective date: 20011105

17Q First examination report despatched

Effective date: 20030918

RBV Designated contracting states (corrected)

Designated state(s): DE FR GB

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20050322