EP0782127A2 - A time-varying feature space preprocessing procedure for telephone based speech recognition - Google Patents

A time-varying feature space preprocessing procedure for telephone based speech recognition

Info

Publication number
EP0782127A2
EP0782127A2 (Application EP96309114A)
Authority
EP
European Patent Office
Prior art keywords
speech
carbon
speech recognition
microphones
different types
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP96309114A
Other languages
German (de)
French (fr)
Inventor
Alexandros Potamianos
Richard C. Rose
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Corp
Original Assignee
AT&T Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AT&T Corp
Publication of EP0782127A2 (en)

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/20: Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech


Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Complex Calculations (AREA)
  • Machine Translation (AREA)

Abstract

An improved speech recognition system, in which transformation process parameters are generated in response to selected characteristics derived from speech inputs obtained from both carbon and linear microphones. The transformation process parameters are utilized in conjunction with selected digitized speech models to improve the speech recognition process, based on the carbon microphone property of suppressing speech spectral energy for low energy unvoiced sounds, and also for low energy regions of the spectrum between formant peaks for voiced sounds.

Description

    Field of the Invention
  • The instant invention relates generally to speech recognition and, more particularly, to method and apparatus for improving recognition based on the generation of transformation process parameters which suppress speech spectral energy for low energy unvoiced sounds, and also for low energy regions of the spectrum between formant peaks for voiced signals.
  • Background of the Invention
  • Speech recognition is a process by which one or more unknown speech utterances are identified. Speech recognition is generally performed by comparing the features of an unknown utterance with the features of known words, phrases, lexical units and/or strings, which are often referred to as "known words." The unknown speech utterance is typically represented by one or more digital pulse code modulated ("PCM") signals. The features, or characteristics, of the known words are typically defined through a process known as training, in conjunction with apparatus known as speech recognizers.
  • Speech recognizers typically extract features from the unknown utterance in order to characterize the utterance. There are many types of speech recognizers, such as, for example, conventional template-based and Hidden Markov Model ("HMM") recognizers, as well as recognizers utilizing recognition models based on neural networks.
  • It is, of course, understood that audio input to any speech recognition system must first pass through a transducer such as a carbon or linear (electret) microphone. Such transducers introduce distortion into the audio signal, which may have an adverse (or in some cases a beneficial) effect on the recognition process. It is known, for example, that carbon transducers suppress certain speech information, which heretofore has been deemed a reason to minimize the use of carbon transducers in speech recognition systems. However, when utilizing speech recognition in the telephone network, the use of carbon transducers cannot be avoided, as it is estimated that fifty percent (50%) of existing telephones utilize carbon transducers.
  • Advantageously, the instant invention recognizes that certain characteristics inherent in carbon transducers are in fact beneficial to speech recognition, particularly in the telephone system, when properly identified and utilized. The instant invention makes use of these characteristics to improve the speech recognition process.
  • Summary of the Invention
  • In accordance with the present invention, a method and apparatus are described which provide improved speech recognition by essentially suppressing information in those regions of the speech signal where the signal variability is high or the modeling accuracy is poor.
  • More particularly, the invention takes advantage of the fact that one type of microphone, such as the carbon microphone, suppresses speech spectral energy for low energy unvoiced sounds, and also for low energy regions of the spectrum between formant peaks for voiced sounds. This observation is utilized in the invention described below to improve speech recognition for various types of microphones, including the carbon and linear (electret) microphone.
  • HMM digit models trained from carbon utterances are used by a Viterbi decoder, with the output of the Viterbi decoder utilized in a process parameter generator to generate a set of transformation process parameters. Also applied to the process parameter generator are speech utterances obtained from the outputs of both carbon and linear microphones. The output of the process parameter generator is a carbon-linear transformation process parameter, which parameter is indicative of certain significant differences in the properties of carbon and linear microphones.
  • The transformation process parameter is then combined with HMM digit models trained from combined linear-carbon speech via a Viterbi decoder and a speech utterance from a carbon microphone to generate a transformed speech observation vector.
  • This vector is in turn applied to a speech recognizer in combination with the HMM digit models trained from combined linear-carbon speech to produce a recognized word string.
  • Brief Description of the Drawings
  • In the drawings:
    • FIG. 1 illustrates examples of smoothed spectral envelopes taken from individual frames of speech for a single speaker utilizing a carbon and electret transducer,
    • FIG. 2 illustrates one portion of the inventive system that computes a carbon-linear transformation process parameter, and
    • FIGS. 3A and 3B illustrate the remaining portions of the inventive system, in which a transformed speech observation vector is utilized to improve the speech recognition process.
    Detailed Description
  • It has been found that the use of a carbon microphone suppresses speech spectral energy for low energy unvoiced sounds, and also for low energy regions of the spectrum between formant peaks for voiced sounds. This is illustrated by the plots in FIG. 1. More particularly, FIG. 1 shows examples of smoothed spectral envelopes taken from individual frames of speech for a single speaker, recorded simultaneously through an electret transducer (solid line) and a carbon transducer (dotted line). The three plots, shown in FIGS. 1A, 1B and 1C, depict filter bank envelopes for the sounds corresponding to the phonetic symbols /iy/, /ah/ and /s/, respectively. From the foregoing, it has been determined that certain characteristics of a carbon transducer can be useful in speech recognition. Support for the proposition that the transformation introduced by a carbon transducer is beneficial to speech recognition is given in Table I below.

    Table I: Error Rate (Per Digit)
    Training Condition    Testing: Carbon    Testing: Electret
    Carbon                1.3%               4.1%
    Electret              1.6%               2.4%
    Combined              1.3%               2.8%
  • The data shown in Table I was obtained from an AT&T Bell Laboratories speech database, in which voice samples recorded from subjects recruited in a mall were stored. The speech database contained connected digit utterances spoken over the telephone network, with the speech stored in per-speaker directories labeled as originating from either a carbon or an electret transducer. Utterances utilized to create the data in Table I consisted of 1-7 digits spoken in a continuous manner. Training data consisted of 5,368 utterances (16,321 digits) from 52 speakers. Testing data consisted of 2,239 utterances (6,793 digits) from 22 speakers. Five dialect regions were available: Long Island, Chicago, Boston, Columbus and Atlanta. The data in Table I utilized the Columbus dialect region.
  • Known hardware and software were used for the speech recognition process that generated the data in Table I; they consisted of a front end processing system that computed cepstrum coefficients from a smoothed spectral envelope. Also utilized was the Bell Laboratories Automatic Speech Recognition system (BLASR™), which is a Hidden Markov Model (HMM) based system, and a Viterbi decoder/recognizer. Such hardware and associated software are well known and are described, for example, in "Fundamentals of Speech Recognition" by L.R. Rabiner and B.H. Juang, Prentice Hall, 1993.
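  • For concreteness, a front end of the kind just described can be sketched in a few lines. The following is a minimal Python illustration (not the patent's implementation), assuming 8 kHz telephone speech, 30 ms frames with a 10 ms hop, a 24-filter mel filterbank and 12 cepstral coefficients; all of these parameter values are assumptions.

```python
import numpy as np

def mel_cepstra(signal, fs=8000, frame_len=240, hop=80, n_filt=24, n_cep=12):
    """Mel-frequency cepstral observation vectors, one per 10 ms frame."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    # Triangular mel filterbank between 0 Hz and the Nyquist frequency.
    n_fft = 512
    edges = imel(np.linspace(mel(0.0), mel(fs / 2.0), n_filt + 2))
    bins = np.floor((n_fft + 1) * edges / fs).astype(int)
    fbank = np.zeros((n_filt, n_fft // 2 + 1))
    for k in range(n_filt):
        l, c, r = bins[k], bins[k + 1], bins[k + 2]
        fbank[k, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[k, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    # DCT-II basis: the cepstrum is a linear transform of the log filterbank energies.
    dct = np.cos(np.pi * np.outer(np.arange(1, n_cep + 1), np.arange(n_filt) + 0.5) / n_filt)

    window = np.hamming(frame_len)
    cepstra = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        power = np.abs(np.fft.rfft(signal[start:start + frame_len] * window, n_fft)) ** 2
        logmel = np.log(fbank @ power + 1e-10)   # smoothed log spectral envelope
        cepstra.append(dct @ logmel)
    return np.array(cepstra)                     # shape (T, n_cep)
```

Applied to the carbon and electret channels, such a front end yields the observation vectors referred to below as $\hat{x}^c_t$ and $\hat{x}^l_t$.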
  • As is shown in Table I, the per-digit error rate for recognition of connected digits over the telephone network is substantially lower for speech received through a carbon transducer than for speech received through an electret transducer, regardless of the training condition. Thus, it is apparent from the foregoing experiment that speech recognition is improved because carbon transducers suppress speech information where signal variability is high. The invention advantageously utilizes this type of transformation of the feature space, as created by a carbon transducer, to improve speech recognition generally, whether a carbon or an electret transducer is utilized.
  • Referring now to FIG. 2, there is shown a system that computes a carbon transducer-linear transducer transformation process. More particularly, the system shown in FIG. 2 is designed to generate carbon-linear transformation parameters in response to speech from both carbon and linear transducers, using HMM models trained from carbon utterances.
  • Stored at block 10 are HMM models trained from carbon utterances. Such HMM models are applied to Viterbi Decoder 20; this type of decoder is well known in this technical area and is described in the Rabiner and Juang reference mentioned above.
  • Speech to be recognized is applied to carbon transducer 40 and linear transducer 50, and the respective transducer outputs are applied to the ASR front-ends. Such ASR front-ends are well known and are described, for example, in "Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences" by S.B. Davis and P. Mermelstein, IEEE Transactions on Acoustics, Speech, and Signal Processing, 1980. The outputs of the ASR front-ends are $\hat{x}^c_t$, the cepstrum observation vector for speech spoken through the carbon transducer, and $\hat{x}^l_t$, the cepstrum observation vector for speech spoken through the linear (electret) transducer.
  • The observation vector $\hat{x}^c_t$ is applied to the Viterbi Decoder 20. The output of Viterbi Decoder 20, $\theta^c_t$, is applied to block 30, which estimates the parameters of the transformation process. The function performed by block 30 is to estimate the Gaussian density $N(\hat{\mu}_k, \hat{\sigma}^2_k)$ associated with the "carbon-linear" observations decoded in state $k$, for $k = 1, \ldots, K$, where $K$ is the total number of states.
  • The observation vectors $\hat{x}^c_t$ and $\hat{x}^l_t$ are subtracted at block 60 to form $\hat{y}_t = \hat{x}^c_t - \hat{x}^l_t$, the carbon-linear distortion process, which is also applied to block 30. Block 30 generates $\hat{\mu}_k$ and $\hat{\sigma}^2_k$, the carbon-linear transformation process parameters, with
    $$\hat{\mu}_k = \frac{1}{N_k} \sum_{t:\,\theta_t = k} \hat{y}_t \qquad \text{and} \qquad \hat{\sigma}^2_k = \frac{1}{N_k} \sum_{t:\,\theta_t = k} \left( \hat{y}_t - \hat{\mu}_k \right)^2,$$
    where $N_k$ is the number of vectors $\hat{y}_t$ assigned to state $k$.
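  • A minimal sketch of the estimation performed by blocks 60 and 30, assuming time-aligned cepstra from the two transducers and a per-frame Viterbi state sequence (the function and argument names here are illustrative, not the patent's):

```python
import numpy as np

def estimate_transformation(x_carbon, x_linear, states, n_states):
    """Per-state mean and variance of the carbon-linear distortion y_t = x_t^c - x_t^l.

    x_carbon, x_linear: (T, D) cepstrum observation vectors from the two transducers.
    states: (T,) Viterbi state label theta_t for each frame.
    """
    y = x_carbon - x_linear              # block 60: carbon-linear distortion process
    mu = np.zeros((n_states, y.shape[1]))
    var = np.ones((n_states, y.shape[1]))
    for k in range(n_states):
        y_k = y[states == k]             # the N_k vectors assigned to state k
        if len(y_k):
            mu[k] = y_k.mean(axis=0)     # mu_k
            var[k] = y_k.var(axis=0)     # sigma_k^2
    return mu, var
```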
  • The problem of generating the carbon-linear transformation process parameters is treated as a signal recovery problem. The cepstrum vectors derived from the carbon transducer, $\hat{x}^c_t$, are taken as the "desired" signal, and the cepstrum vectors derived from the electret transducer, $\hat{x}^l_t$, are taken as the "corrupted" signal. It is assumed that these vectors are realizations of random processes that are related according to $\hat{x}^c_t = \hat{x}^l_t + \hat{y}_t$, where $\hat{y}_t$ represents a simple linear filtering operation. This can be modeled as an additive bias in the mel-frequency cepstrum domain. It is also assumed that $\hat{x}^c_t$ and $\hat{y}_t$ are both represented by Gaussian densities that are tied to the states of the hidden Markov digit models. The parameters of the HMM state dependent Gaussian densities associated with $\hat{y}_t$ are obtained from the simultaneous carbon/electret recordings of the database according to the process illustrated in FIG. 2. Viterbi alignment of each training utterance spoken through a carbon transducer is performed against the known word transcription for the utterance. All frames where $\hat{x}^c_t$ is assigned to state $\theta_t = k$ are used to estimate the mean, $\hat{\mu}_k$, and variance, $\hat{\sigma}^2_k$, of $\hat{y}_t = \hat{x}^c_t - \hat{x}^l_t$ for state $k$; this is done in Block 30.
  • In the particular embodiment shown in FIG. 2, parameters were estimated from a "Stereo Carbon-Electret" database where four speakers spoke triplets of digits simultaneously into two handsets.
  • In FIG. 2, a transformation vector is estimated for each state of the HMM. The underlying goal was to approximate the highly non-linear characteristics of the carbon transducer using a segmental linear model. It is assumed that over a single HMM state, the transformation is a simple linear filter which can be modeled as an additive bias in the log spectral domain, or in the mel-frequency cepstrum domain.
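  • As a brief aside on why this assumption is reasonable (a sketch, assuming the filter is stationary over the duration of a state): a linear filter multiplies the short-time spectrum, hence adds in the log spectral domain, and the mel-cepstrum is itself a linear (DCT) transform of the log spectrum:
    $$x^c(n) = h(n) * x^l(n) \;\Longrightarrow\; X^c(\omega) = H(\omega)\,X^l(\omega) \;\Longrightarrow\; \log\lvert X^c(\omega)\rvert = \log\lvert H(\omega)\rvert + \log\lvert X^l(\omega)\rvert,$$
    so the filter appears as the additive cepstral bias $\hat{y}_t$ in the model $\hat{x}^c_t = \hat{x}^l_t + \hat{y}_t$.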
  • It is important to note that the parameters of the transformation are estimated from a stereo database in which speakers uttered connected digit strings simultaneously through carbon and electret telephone handsets. Hence, the parameters of the transformation were trained completely independently of the utterances that were used to test speech recognition performance. Furthermore, the speakers and the telephone handsets used for training the transformations were separate from those used during testing.
  • The test utterances were transformed during recognition using the two pass procedure described in FIG. 3A (First Pass) and FIG. 3B (Second Pass). The two pass procedure is utilized for transforming the feature space prior to speech recognition. In the first pass, a state dependent transformation is applied to the input speech. Then, in the second pass, compensation and rescoring are performed on the transformed features.
  • In FIG. 3A, a list of the N most likely string candidates (the N-best list) is generated from the original test utterance. Then, a state dependent transformation is performed for each string candidate by replacing each observation $\hat{x}^l_t$ with $z_t = \hat{x}^l_t + \hat{\mu}_{\theta_t}$. Finally, the best string is chosen as the one associated with the transformed utterance with the highest likelihood, as shown in FIG. 3B. A sketch of this two pass procedure follows.
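  • A minimal Python sketch of the two pass procedure of FIGS. 3A and 3B. The decoder calls (nbest_decode, viterbi_align, log_likelihood) are hypothetical stand-ins for an HMM toolkit, and the additive form of the compensation follows summation block 110 described below; treat this as an illustration of the control flow, not the patent's literal implementation.

```python
import numpy as np

def two_pass_recognize(x, hmm, mu, n_best=10):
    """x: (T, D) test cepstra; mu: (K, D) state dependent transformation vectors.

    nbest_decode, viterbi_align and log_likelihood are assumed helpers
    (hypothetical names) wrapping a conventional HMM decoder.
    """
    # First pass (FIG. 3A): generate the N most likely string candidates.
    candidates = nbest_decode(hmm, x, n_best)

    best_string, best_score = None, -np.inf
    for string in candidates:
        # State dependent transformation: align the utterance against the
        # candidate, then compensate each frame with its state's bias vector.
        theta = viterbi_align(hmm, x, string)     # (T,) state sequence
        z = x + mu[theta]                         # transformed observations z_t

        # Second pass (FIG. 3B): rescore the compensated utterance and keep
        # the candidate whose transformed utterance scores highest.
        score = log_likelihood(hmm, z, string)
        if score > best_score:
            best_string, best_score = string, score
    return best_string
```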
  • More particularly, HMM models trained from combined carbon-linear speech are stored at block 70. Similarly, the carbon-linear transformation process parameters obtained from block 30 in FIG. 2 are stored in block 80.
  • The HMM models from block 70 are applied to Viterbi Decoder 90, along with the input test speech observation vector $\hat{x}_t$. The input test speech observation vector is also applied to summation block 110. The output of the Viterbi decoder, $\theta_t$, is applied to Select Transformation Vector Block 100, along with the carbon-linear transformation process parameters. Block 100 is a standard look-up table, where the input $\theta_t$ is used as a key to access the data stored in block 80.
  • The output of block 100 is also applied to summation block 110, whose output is $z_t$, the transformed speech observation vector, providing a plurality of recognition hypotheses.
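  • The look-up in block 100 and the summation in block 110 reduce to a one-line array operation. The fragment below is self-contained; the shapes and values are illustrative only, not from the patent.

```python
import numpy as np

K, T, D = 5, 4, 3               # states, frames, cepstral dimensions (illustrative)
mu = np.random.randn(K, D)      # block 80: stored transformation vectors, one per state
x = np.random.randn(T, D)       # input test speech observation vectors
theta = np.array([0, 2, 2, 4])  # Viterbi Decoder 90 output: decoded state per frame

z = x + mu[theta]               # block 100 look-up + summation block 110: z_t = x_t + mu_{theta_t}
```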
  • On the second pass, the transformed speech observation vectors $z_t$ are applied to Compensate/Rescore Block 130, along with the HMM digit models (Block 120) trained from the combined carbon-linear speech.
  • It is to be understood that FIG. 3A produces N recognition hypotheses. However, the best scoring hypothesis may not be the best in terms of speech recognition. Accordingly, in FIG. 3B, a decision is made on the "best" (i.e., most recognizable) recognition hypothesis by rescoring the compensated utterances. For example, suppose that $P(\hat{Z}^2_t \mid \lambda) > P(\hat{Z}^1_t \mid \lambda)$, i.e., the score of the compensated second candidate was higher than that of the first. Then the second candidate would be chosen, as additional information was used to reorder the recognition hypotheses.
  • The output of the speech recognizer 130 is the recognized word string. It must be emphasized that the feature vectors in the test utterances are not used to estimate any aspect of the transformation process. The parameters are obtained strictly from knowledge of the distortion process that existed prior to recognition. The results of the inventive speech recognition procedure are set forth in the digit recognizer error rates shown below in Table II. For reference, the baseline error rates (Table I, combined training) were 1.3% for carbon and 2.8% for electret, while with compensation carbon is at 1.0% and electret at 1.9%.

    Table II: Error Rate (Per Digit)
    Training Condition    Testing: Carbon    Testing: Electret
    Carbon                0.8%               2.1%
    Electret              1.7%               1.9%
    Combined              1.0%               1.9%
  • As indicated in Table II, the approach outlined above resulted in a substantial improvement in the speech recognition process when compared to the data shown in Table I. Recognition performance improved not only for the electret utterances but also for the carbon utterances; indeed, performance improved across the board, and even the mismatched case of electret training and carbon testing remains approximately the same (1.7% vs. 1.6%). It should be particularly noted that overall performance was significantly improved for both the matched and mismatched cases, while the error rate for carbon data is still (after compensation) almost half of the error rate for the electret data (matched case).

Claims (5)

  1. An improved speech recognition system comprising,
    means for generating transformation process parameters in response to selected characteristics derived from speech inputs obtained from a plurality of different types of microphones,
    means for utilizing said transformation process parameters, in conjunction with selected digitized speech models to generate a transformed speech observation vector, said digitized speech models being generated from combined speech inputs received from said different types of microphones, and
    means for applying said digitized speech models to a speech recognizer, along with said transformed speech observation vector to recognize individual words from said speech inputs.
  2. An improved speech recognition system in accordance with Claim 1, wherein said generating means includes means for accessing and utilizing stored digitized speech models trained from speech utterances stemming from one of said different types of microphones, and accessing and utilizing combined speech utterances stemming from two or more of said different types of microphones.
  3. An improved speech recognition system in accordance with Claim 2, wherein there is further included a first Viterbi decoder to which said digitized speech models trained from speech utterances stemming from one of said different types of microphones are applied prior to access by said generating means.
  4. An improved speech recognition system in accordance with Claim 3, wherein said utilizing means includes means for accessing and utilizing stored digitized speech models trained from speech utterances stemming from two or more of said different types of microphones, and for applying said digitized speech models to a second Viterbi decoder.
  5. An improved speech recognition system in accordance with Claim 1, wherein said different types of microphones include a carbon microphone and a linear microphone.
EP96309114A 1995-12-29 1996-12-13 A time-varying feature space preprocessing procedure for telephone based speech recognition Withdrawn EP0782127A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US581951 1984-02-21
US08/581,951 US5765124A (en) 1995-12-29 1995-12-29 Time-varying feature space preprocessing procedure for telephone based speech recognition

Publications (1)

Publication Number Publication Date
EP0782127A2 true EP0782127A2 (en) 1997-07-02

Family

ID=24327250

Family Applications (1)

Application Number Title Priority Date Filing Date
EP96309114A Withdrawn EP0782127A2 (en) 1995-12-29 1996-12-13 A time-varying feature space preprocessing procedure for telephone based speech recognition

Country Status (4)

Country Link
US (1) US5765124A (en)
EP (1) EP0782127A2 (en)
JP (1) JPH09198085A (en)
CA (1) CA2191377A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19811879C1 (en) * 1998-03-18 1999-05-12 Siemens Ag Speech recognition device
US6292776B1 (en) * 1999-03-12 2001-09-18 Lucent Technologies Inc. Hierarchial subband linear predictive cepstral features for HMM-based speech recognition
US7219058B1 (en) * 2000-10-13 2007-05-15 At&T Corp. System and method for processing speech recognition results
US6912497B2 (en) * 2001-03-28 2005-06-28 Texas Instruments Incorporated Calibration of speech data acquisition path
US6778957B2 (en) * 2001-08-21 2004-08-17 International Business Machines Corporation Method and apparatus for handset detection
WO2009078093A1 (en) 2007-12-18 2009-06-25 Fujitsu Limited Non-speech section detecting method and non-speech section detecting device
US10672387B2 (en) * 2017-01-11 2020-06-02 Google Llc Systems and methods for recognizing user speech

Also Published As

Publication number Publication date
CA2191377A1 (en) 1997-06-30
US5765124A (en) 1998-06-09
JPH09198085A (en) 1997-07-31

Similar Documents

Publication Publication Date Title
EP0789901B1 (en) Speech recognition
EP0866442B1 (en) Combining frequency warping and spectral shaping in HMM based speech recognition
Murthy et al. Robust text-independent speaker identification over telephone channels
EP1301922B1 (en) System and method for voice recognition with a plurality of voice recognition engines
US6076057A (en) Unsupervised HMM adaptation based on speech-silence discrimination
US6058363A (en) Method and system for speaker-independent recognition of user-defined phrases
EP1159737B1 (en) Speaker recognition
US5794192A (en) Self-learning speaker adaptation based on spectral bias source decomposition, using very short calibration speech
Vergin et al. Compensated mel frequency cepstrum coefficients
US6865531B1 (en) Speech processing system for processing a degraded speech signal
Malayath et al. Data-driven temporal filters and alternatives to GMM in speaker verification
US5765124A (en) Time-varying feature space preprocessing procedure for telephone based speech recognition
Wu et al. Performance improvements through combining phone-and syllable-scale information in automatic speech recognition.
Fischer et al. Database and online adaptation for improved speech recognition in car environments
Weber et al. Speaker recognition on single-and multispeaker data
Giuliani et al. Speaker normalization through constrained MLLR based transforms
Lawrence et al. Integrated bias removal techniques for robust speech recognition
Gemello et al. Linear input network based speaker adaptation in the dialogos system
Matassoni et al. Some results on the development of a hands-free speech recognizer for carenvironment
JP3589508B2 (en) Speaker adaptive speech recognition method and speaker adaptive speech recognizer
Rose et al. A user-configurable system for voice label recognition
Feng Speaker adaptation based on spectral normalization and dynamic HMM parameter adaptation
Toledo-Ronen Speech detection for text-dependent speaker verification
JPH0534679B2 (en)
Tolba et al. Comparative experiments to evaluate the use of auditory-based acoustic distinctive features and formant cues for robust automatic speech recognition in low-SNR car environments.

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE ES FR GB IT

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Withdrawal date: 19971103