WO2012003269A2 - Speech audio processing - Google Patents

Speech audio processing

Info

Publication number
WO2012003269A2
Authority
WO
WIPO (PCT)
Prior art keywords
speech
noise
audio
speaker
information
Prior art date
Application number
PCT/US2011/042515
Other languages
French (fr)
Other versions
WO2012003269A3 (en
Inventor
Willem M. Beltman
Matias Zanartu
Arijit Raychowdhury
Anand P. Rangarajan
Michael E. Deisher
Original Assignee
Intel Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corporation filed Critical Intel Corporation
Priority to KR1020127031843A priority Critical patent/KR101434083B1/en
Priority to JP2013513424A priority patent/JP5644013B2/en
Priority to CN201180027602.0A priority patent/CN102934159B/en
Priority to EP11801384.6A priority patent/EP2589047A4/en
Publication of WO2012003269A2 publication Critical patent/WO2012003269A2/en
Publication of WO2012003269A3 publication Critical patent/WO2012003269A3/en

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/28: Constructional details of speech recognition systems
    • G10L 15/20: Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • G10L 21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0264: Noise filtering characterised by the type of parameter measurement, e.g. correlation techniques, zero crossing techniques or predictive techniques
    • G10L 21/0224: Noise filtering characterised by the method used for estimating noise; processing in the time domain
    • G10L 21/0232: Noise filtering characterised by the method used for estimating noise; processing in the frequency domain
    • G10L 25/03: Speech or voice analysis techniques characterised by the type of extracted parameters
    • G10L 25/93: Discriminating between voiced and unvoiced parts of speech signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Circuits Of Receivers In General (AREA)

Abstract

A speech processing engine is provided that, in some embodiments, employs Kalman filtering with a particular speaker's glottal information to clean up an audio speech signal for more efficient automatic speech recognition.

Description

SPEECH AUDIO PROCESSING
TECHNICAL FIELD
The present invention relates generally to audio processing and in particular, to speech signal processing.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.
Figure 1 is a diagram for a speech processing engine in accordance with some embodiments.
Figure 2 is a diagram of a synthesizer in accordance with some embodiments.
Figure 3 is a diagram of a structure for implementing a speech processing engine in accordance with some embodiments.
Figure 4 is a diagram of an electronic device platform in accordance with some embodiments.
DETAILED DESCRIPTION
Voice command and continuous speech recognition are used for mobile Internet devices, for example, with in-car applications and phones that have limited keyboard functionality. It is desirable to be able to provide clean input to any speech recognition engine, but background noise in the environment impedes this objective. For example, experiments have shown that the open dictation word accuracy can degrade to
approximately 20% in car noise and cafeteria environments, which may be unacceptable to the user.
Today's speech engines have some noise reduction features to reduce the impact of background noise. However, these features may not be sufficient to allow open dictation in challenging environments. Accordingly, Kalman filtering techniques may be used to improve speech signal processing.
With some embodiments presented herein, speech recognition performance may be enhanced by bifurcating audio noise filtering processing into separate speech recognition and human reception paths. That is, the audio path may be cloned to generate a
"perception" (or auditory reception) channel and a separate channel that is used for preprocessing audio for the speech recognition engine. Figure 1 is a block diagram of a speech processing engine 102 in accordance with some embodiments. It comprises a Kalman based filtering engine 104, a speaker/voice model 106, an environmental noise model 107, an automatic speech recognition (ASR) engine 108, and a standard noise suppression block 110.
Audio (e.g., digitized audio from a microphone) comes into the SPE (speech processing engine) and is split into two paths: a speech recognition path, entering the Kalman filter block 104, and an audio perception path (cloned audio) that is processed using standard noise suppression techniques in block 110 for reception by a user. The Kalman filter utilizes components from the speaker/voice model 106, as well as from the environmental noise model 107, to filter out noise from the audio signal and provide a filtered signal to the automatic speech recognition (ASR) engine 108.
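By way of illustration, a minimal Python sketch of this two-path split follows. The enhancement and suppression callables are hypothetical stand-ins for the Kalman filter block 104 and the standard noise suppression block 110; they are not functions defined by the disclosure.

```python
import numpy as np

def speech_processing_engine(audio, enhance_for_asr, suppress_for_listener):
    """Clone the incoming audio into two independent paths (Figure 1):
    one pre-filtered for the recognizer, one cleaned for human listening."""
    asr_input = enhance_for_asr(audio.copy())              # path into block 104
    listener_audio = suppress_for_listener(audio.copy())   # path into block 110
    return asr_input, listener_audio

# Usage with trivial pass-through stand-ins:
audio = np.random.randn(16000).astype(np.float32)  # 1 s of audio at 16 kHz
asr_in, heard = speech_processing_engine(audio, lambda x: x, lambda x: x)
```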
The speaker/voice model 106 (at least an initial version) is generated before SPE execution since the SPE works off of it, although the initial version may be fairly bare, and the speaker/voice model may be updated while the SPE is executing. The speaker/voice model 106 provides particular characteristics associated with the current speaker. Such characteristics could include one or more glottal harmonics, including the user's particular fundamental glottal frequency, along with any other suitable information. For example, if previously acquired models (e.g., resulting from user training) are available, they may also be incorporated into the speaker/voice model 106. As indicated, previously generated "clean" audio information (x'(n)) for the particular user may also be used.
The environmental noise model 107, like the speaker/voice model, may be based on initial default data/assumptions for assumed noise environments or for specific or previously characterized environments (e.g., an office, a car, an airplane, etc.). It may be static data (e.g., assumed background noise elements) associated with an environment and/or it may comprise dynamic data obtained from real-time sensors and the like. For example, it could include sensor inputs such as car speed, background noise microphone data, and air conditioning information, to enhance the performance of the noise model estimator. In some embodiments, a noise estimation method may be employed, e.g., for a single channel, by detecting periods of speech absence using a voice activity detector algorithm (a sketch follows this passage). The noise model may be further enhanced using an iterative loop between the noise model and Kalman filtering.

The filter 104 may use either or both of the speaker model and the noise model to filter the received audio signal. Again, from the speaker model, it may use an extension to add periodic components in the form of pulses into the Kalman filtering to account for glottal harmonics generated by the speech source (e.g., a human or other entity speaker using, for example, a dictation, voice-controlled, or translation device). Kalman filtering has typically been used with a white noise input, but in the case of human speech, the addition of a periodic input may more closely resemble the physiology of speech generation. The speaker model information, including the predetermined model information and glottal harmonic parameters, may be used to load a set of predetermined or previously determined coefficients for the speaker model. Kalman filtering results in audio that does not necessarily improve human perception noticeably, but it typically does improve the performance of the speech recognition engine. Therefore, the audio path is cloned (two paths) to maximize both human perception and the speech recognition input using the Kalman pre-processing filtering.
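As one possible realization of the single-channel, VAD-based noise estimation mentioned above, the following sketch updates a noise spectrum estimate only during frames that a simple energy-based voice activity detector judges to be speech-absent. The energy threshold and smoothing factor are assumptions for the sketch, not values from the disclosure.

```python
import numpy as np

def estimate_noise_psd(frames, energy_threshold=1e-3, alpha=0.9):
    """Running noise PSD estimate, updated only in speech-absent frames
    as decided by an energy-based voice activity detector."""
    noise_psd = None
    for frame in frames:
        if np.mean(frame ** 2) < energy_threshold:     # VAD: no speech here
            spec = np.abs(np.fft.rfft(frame)) ** 2     # frame power spectrum
            noise_psd = (spec if noise_psd is None
                         else alpha * noise_psd + (1 - alpha) * spec)
    return noise_psd  # None if no speech-absent frame was observed
```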
An implemented filter 104 using Kalman techniques can be used to model the vocal tract response as an AR or ARMA system, using an independent input and a driving noise, along with a noisy observation that accounts for additive colored noise.

In conventional Kalman applications, the driving periodic input is typically neglected and only a driving white noise is used for simplicity. This assumption implies that the filter will (under ideal performance) produce a clean but unvoiced speech signal, which neither has physiological value nor sounds natural. However, the assumption may be adequate in cases where only filter parameters are needed.

On the other hand, we have determined that the linear Kalman filter may capture the fundamental interactive features observed in voice production, thus yielding better estimates of the clean input under noisy conditions. When combined with CP analysis and source modeling, for example, it may perform even better for speech processing applications. The error in a scheme of this nature will be associated with its parameter estimation errors and not be the product of a physiological/acoustical misrepresentation. Therefore, speech enhancement schemes disclosed herein are based on the linear Kalman filter, with the structure shown in the following table under the "Linear" heading.
[Table: Kalman filter model structures; the "Linear" column corresponds to the state equation driving the clean speech with the glottal source u_k and noise w_k, with the observation y_k corrupted by v_k, as described below.]
The state x_k corresponds to the clean speech input that is produced by the glottal source u_k and environmental noise w_k. (x_k is not an actual input to the SPE.) The measured signal y_k is corrupted by the observation noise v_k. As described before, previous Kalman approaches neglect the periodic input u_k for simplicity, yielding white-noise-excited speech. However, the inclusion of such a periodic input and CP representation of the state transition matrix provides better estimates of the clean input x_k and thus better speech recognition performance. In the following section, Kalman filtering, as applied herein, will be discussed in more detail.
In some embodiments, a Kalman filtering model-based approach is used for speech enhancement. It assumes that the clean speech follows a particular representation that is linearly corrupted with background noise. With standard Kalman filtering, clean speech is typically represented using an autoregressive (AR) model, which normally has white Gaussian noise as an input. This is represented in discrete time in equation (1).
x[n] = Σ_{k=1}^{p} a_k x[n − k] + w[n]    (1)
where x[n] is the clean speech, a_k are the AR or linear prediction coding (LPC) coefficients, w[n] is the white noise input, and p is the order of the AR model (normally assumed to follow the rule of thumb p = fs/1000 + 2, where fs is the sampling rate in Hz). This model can be rewritten to produce the desired structure needed for the Kalman filter, as described in equations (2) and (3). Thus,
x_{k+1} = Φ x_k + G w_k    (2)

y_k = H x_k + v_k    (3)
where x_{k+1} and x_k are vectors containing p samples of the future and current clean speech, Φ is the state transition matrix that contains the LPC coefficients in the last row of a controllable canonical form, and w_k represents the white noise input that is converted into a vector affecting the current sample via the gain vector G. The clean speech is projected via the projection vector H to obtain the current sample, which is linearly added to the background noise v_k to produce the corrupted observation or noisy speech y_k. Kalman filtering comprises two basic steps, a propagation step and an update step. In the propagation step the model is used to predict the current sample based on the previous estimate (hence the notation n|n−1). This is represented in equation (4). Note that only one buffer of one vector containing the previous p points is required. The update step is depicted in equations (5)-(7), where the predicted samples are first corrected considering the error between the prediction and the estimate. This error is controlled by the Kalman gain K_n, which is defined in equations (6) and (7). Note that all these parameters may be computed once within each frame, i.e., speech is considered a stationary process within each frame (normally of duration no longer than 25 ms).
x̂_{n|n−1} = Φ x̂_{n−1|n−1},   P_{n|n−1} = Φ P_{n−1|n−1} Φᵀ + G Q_n Gᵀ    (4)

x̂_{n|n} = x̂_{n|n−1} + K_n (y_n − H x̂_{n|n−1})    (5)

K_n = P_{n|n−1} Hᵀ (H P_{n|n−1} Hᵀ + R_n)⁻¹    (6)

P_{n|n} = (I − K_n H) P_{n|n−1}    (7)

where Q_n and R_n denote the covariances of the driving noise w and the observation noise v, respectively.
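For concreteness, a minimal NumPy sketch of this per-frame propagate/update loop follows, assuming frame-wise stationarity. The LPC routine and the variances q_w and r_v are placeholders that would in practice come from the speaker and noise models; this is a sketch, not the disclosed implementation.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(frame, p):
    """AR/LPC coefficients a_1..a_p via the autocorrelation (Yule-Walker) method."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][: p + 1]
    return solve_toeplitz((r[:p], r[:p]), r[1: p + 1])

def kalman_enhance_frame(y, a, q_w, r_v):
    """Equations (2)-(7): AR model in controllable canonical form with
    driving-noise variance q_w and observation-noise variance r_v."""
    p = len(a)
    Phi = np.zeros((p, p))
    Phi[:-1, 1:] = np.eye(p - 1)           # shift register for past p-1 samples
    Phi[-1, :] = a[::-1]                   # LPC coefficients in the last row
    G = np.zeros((p, 1)); G[-1, 0] = 1.0   # driving noise enters current sample
    H = G.T                                # projector picks the current sample
    x, P = np.zeros((p, 1)), np.eye(p)
    out = np.empty(len(y))
    for n, yn in enumerate(y):
        x = Phi @ x                                  # (4) state propagation
        P = Phi @ P @ Phi.T + q_w * (G @ G.T)        #     covariance propagation
        K = P @ H.T / float(H @ P @ H.T + r_v)       # (6) Kalman gain
        x = x + K * (yn - float(H @ x))              # (5) measurement update
        P = (np.eye(p) - K @ H) @ P                  # (7) covariance update
        out[n] = x[-1, 0]                            # enhanced current sample
    return out
```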
The "modified Kalman filter" that is proposed in this project extends the standard filter by generalizing the two basic noise assumptions in the system, i.e., assuming that glottal pulses also drive the AR model during voiced segments and that the background noise has resonances associated with it (non- white process). The glottal pulses are represented by u[n] and are present when there is vocal fold vibration. The background noise is assumed to follow an AR model of order q (which may be estimated, e.g., empirically obtained as q=fs/2000). Therefore, the two equations that represent the new structure of the system are
x[n] = Σ_{k=1}^{p} a_k x[n − k] + u[n] + w_s[n]    (8)

v[n] = Σ_{k=1}^{q} b_k v[n − k] + w_v[n]    (9)
Since the models for speech and noise have a similar structure, the state equation needed for the Kalman filter can be extended by creating two subsystems embedded in a larger block-diagonal matrix. The same system structure is used to track speech and noise, as shown in equations (10) to (13), where the subscript s indicates speech and v indicates background noise. The glottal pulses are introduced only in the current sample, for which the vector B has the same structure as G.

x_{k+1} = Φ x_k + B u_k + G w_k    (10)

y_k = H x_k + v_k    (11)
Φ = [ Φ_s  0 ; 0  Φ_v ]    (12)
H = [ H_s  H_v ]    (13)
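A sketch of this augmented construction follows, assuming Φ_s and Φ_v are the canonical-form transition matrices of the speech (order p) and noise (order q) AR models. The helper only assembles the matrices of equations (10)-(13); it is an illustration, not code from the disclosure.

```python
import numpy as np
from scipy.linalg import block_diag

def augmented_system(Phi_s, Phi_v):
    """Assemble Phi, B, G, H of equations (10)-(13) from the speech and
    noise subsystems; both driving noises and the glottal pulse enter
    only the current sample of their respective sub-state."""
    p, q = Phi_s.shape[0], Phi_v.shape[0]
    Phi = block_diag(Phi_s, Phi_v)               # (12) block-diagonal transition
    G = np.zeros((p + q, 2))
    G[p - 1, 0] = 1.0                            # w_s drives current speech sample
    G[p + q - 1, 1] = 1.0                        # w_v drives current noise sample
    B = np.zeros((p + q, 1)); B[p - 1, 0] = 1.0  # glottal pulse: same structure as G
    Hs = np.zeros((1, p)); Hs[0, -1] = 1.0
    Hv = np.zeros((1, q)); Hv[0, -1] = 1.0
    H = np.hstack([Hs, Hv])                      # (13) H = [Hs  Hv]
    return Phi, B, G, H
```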
The equations to compute the Kalman propagation and update differ from those of the standard Kalman filter, among other reasons, in that the glottal pulses are included and the noise covariance matrix R_n is not, since the noise is being tracked by the filter itself. These changes are represented by replacing equation (4) with (14) and equation (6) with (15). Thus,

x̂_{n|n−1} = Φ x̂_{n−1|n−1} + B u_n    (14)

K_n = P_{n|n−1} Hᵀ (H P_{n|n−1} Hᵀ)⁻¹    (15)
With these modifications, the filter better represents the speech signal and background noise conditions, thus yielding better noise removal and ASR performance.
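One propagate/update step with these modifications might look as follows. Here Q is assumed to be the 2x2 covariance of the stacked driving noises, a form consistent with equations (10)-(15) above but not spelled out in the text.

```python
import numpy as np

def modified_kalman_step(x, P, yn, un, Phi, B, G, H, Q):
    """Equations (14)-(15): the glottal input u_n enters the propagation and
    the gain omits R_n, since noise is tracked inside the state itself."""
    x = Phi @ x + B * un                       # (14) propagation with glottal pulse
    P = Phi @ P @ Phi.T + G @ Q @ G.T          # covariance propagation
    S = float(H @ P @ H.T)
    K = (P @ H.T) / S                          # (15) gain without R_n
    x = x + K * (yn - float(H @ x))            # measurement update, as in (5)
    P = (np.eye(len(x)) - K @ H) @ P           # covariance update, as in (7)
    return x, P
```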
The new Kalman filtering technique can be used not only for enhancement of speech recognition, but also to improve speech synthesis. Figure 2 shows a diagram of a time-domain based synthesizer. The proposed scheme has a design that combines three interconnected processes that are applied to the input signal. The first branch identifies the nature of the source component and creates a source signal. The second branch searches for the filter structure and applies either CP (closed-phase) analysis or full-frame analysis to define the Linear Prediction Coefficients (LPC) of the filter. The third branch detects the envelope and ensures stability of the synthetic sound. These branches can be computed in a sequential or parallel fashion and may use different frame and windowing structures (e.g., in some implementations, the first branch could use a rectangular window and non-overlapping frames, while the second one could use a Hamming window with, for example, 50% overlap) as long as the level of interaction is handled properly.
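The differing frame/window structures mentioned for the branches can be illustrated with a small framing helper; the frame size of 256 samples below is illustrative only, not a value from the text.

```python
import numpy as np

def frames(x, size, hop, window=None):
    """Slice a signal into (possibly overlapping) windowed frames."""
    w = np.ones(size) if window is None else window
    n = 1 + (len(x) - size) // hop
    return np.stack([x[i * hop: i * hop + size] * w for i in range(max(n, 0))])

sig = np.random.randn(4096)
branch1 = frames(sig, 256, 256)                   # rectangular, non-overlapping
branch2 = frames(sig, 256, 128, np.hamming(256))  # Hamming, 50% overlap
```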
Figure 3 shows a general structure for implementing a front end for an audio processing engine, e.g., in a mobile device, for reducing power consumption. It illustrates a power-efficient way to structure the different blocks, e.g., for the SPE 102 of Figure 1. It is divided into a compute-intensive block 301 and a back end 305, which is memory-access intensive. The compute-intensive front end 301 has a filter processing section 302 and a decision block 304 for determining if input audio has speech within it. The memory-intensive back end 305 has a speaker model block 306 for generating and updating the speaker model and a speech recognition block 308 for implementing ASR. Note that the speaker model block 306 may also have a noise model section for generating all or part of the noise model. Audio comes into the front end 301 and is processed by filter 302; if it has speech, as decided at decision block 304, the speaker model and speech recognition blocks 306, 308 are activated for processing the filtered speech signal from the filter 302.
By reducing memory requirements at the front end of the hardware, lower-power operation may be enabled to increase the number of operations per watt. The hardware implementation of the speech enhancement algorithms in the front end 301 provides an opportunity for achieving low power and also enables the use of a threshold detector 304 to provide a wake-up signal to the back end of the processor hardware. The back end 305 provides a hardware implementation of the speech recognition algorithms (e.g., HMM and/or neural network based), which is typically memory intensive and high performance. Thus, by dividing the hardware (e.g., SPE hardware) into a compute-intensive front end and a high-performance back end, "voice-wake" and "always-listening" features may also be implemented for speech enhancement and recognition.
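A sketch of this wake-up gating follows, with a simple per-frame energy test standing in for the threshold detector 304; the threshold value and the wake-up callback are assumptions for the sketch.

```python
import numpy as np

def front_end_gate(frame, wake_backend, threshold=1e-2):
    """Compute-intensive front end: cheap per-frame test; only frames judged
    to contain speech wake the memory-intensive back end (blocks 306/308)."""
    if float(np.mean(frame ** 2)) > threshold:   # threshold detector 304
        wake_backend(frame)                      # wake-up signal to the back end
        return True
    return False                                 # back end stays in low power
```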
Figure 4 shows an example of an electronic device platform 402 such as for a portable computing device, smart phone, and the like. The represented portion comprises one or more processing cores 404, graphics processor (GPX) 406, memory controller hub (MCH) 408, IO section 410, and power management section 416. The GPX 406 interfaces with a display 407 to provide video content. The MCH 408 interfaces with memory 409 for providing the platform with additional memory (e.g., volatile or non- volatile). The power management section 416 controls a power source (e.g., battery, adapter converters, VRs, etc.) to provide power to the different platform sections, and it also manages the different activity states for reducing power consumption when feasible.
The IO section 410 comprises an audio processing section 412 and peripheral interface(s) 414. The peripheral interface(s) 414 provide interfaces (e.g., PCI, USB) for communicating with and enabling various different peripheral devices 415 (keyboard, wireless interface, printer, etc.). The audio processing section 412 may receive various audio input/output (analog and/or digital) for providing/receiving audio content from a user. It may also communicate with internal modules, for example, to communicate audio between a user and a network (e.g., cell, Internet, etc.). The audio processing section 412 includes the various components (e.g., A/D/A converters, codecs, etc.) for processing audio as dictated by the functions of the platform 402. In particular, the audio Px 412 includes an SPE 413, as discussed herein, for implementing speech processing, and it may comprise a power-efficient structure as described with respect to Figure 3.
In the preceding description, numerous specific details have been set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques may not have been shown in detail in order not to obscure an understanding of the description. With this in mind, references to "one embodiment", "an embodiment", "example embodiment", "various embodiments", etc., indicate that the embodiment(s) of the invention so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. Further, some embodiments may have some, all, or none of the features described for other embodiments.
In the preceding description and following claims, the following terms should be construed as follows: The terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" is used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" is used to indicate that two or more elements co-operate or interact with each other, but they may or may not be in direct physical or electrical contact.
The term "PMOS transistor" refers to a P-type metal oxide semiconductor field effect transistor. Likewise, "NMOS transistor" refers to an N-type metal oxide semiconductor field effect transistor. It should be appreciated that whenever the terms: "MOS transistor", "NMOS transistor", or "PMOS transistor" are used, unless otherwise expressly indicated or dictated by the nature of their use, they are being used in an exemplary manner. They encompass the different varieties of MOS devices including devices with different VTs, material types, insulator thicknesses, gate(s) configurations, to mention just a few. Moreover, unless specifically referred to as MOS or the like, the term transistor can include other suitable transistor types, e.g., junction-field-effect transistors, bipolar-junction transistors, metal semiconductor FETs, and various types of three dimensional transistors, MOS or otherwise, known today or not yet developed.
The invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. For example, it should be appreciated that the present invention is applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include but are not limited to processors, controllers, chip set components, programmable logic arrays (PLA), memory chips, network chips, and the like.
It should also be appreciated that in some of the drawings, signal conductor lines are represented with lines. Some may be thicker, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.
It should be appreciated that example sizes/models/values/ranges may have been given, although the present invention is not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well-known power/ground connections to IC chips and other components may or may not be shown within the FIGS. for simplicity of illustration and discussion, and so as not to obscure the invention. Further, arrangements may be shown in block diagram form in order to avoid obscuring the invention, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the present invention is to be implemented, i.e., such specifics should be well within the purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the invention, it should be apparent to one skilled in the art that the invention can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.

Claims

CLAIMS
What is claimed is:
1. An apparatus, comprising:
a speech processing engine having first and second audio speech paths, the first path to be provided to an auditory receiver; and
a Kalman filter coupled to the second path to receive the audio speech signal and to remove noise therefrom, the Kalman filter to remove said noise based at least in part on a speaker model including speaker glottal information.
2. The apparatus of claim 1, in which the filter is to remove noise based also on a noise model incorporating environmental noise information.
3. The apparatus of claim 2, in which the environmental noise information includes realtime information.
4. The apparatus of claim 3, in which the real-time information includes information from one or more noise sensors.
5. The apparatus of claim 1, in which the speaker model incorporates previously generated noise-removed speech signal information for the speaker.
6. The apparatus of claim 1, in which the filter is implemented in a front end section and the speaker model is implemented in a back end section that is enabled if speech is detected in the audio speech signal.
7. The apparatus of claim 6, in which the speech processing engine comprises a speech recognition engine.
8. The apparatus of claim 7, in which the speech recognition engine is part of the back end section.
9. An electronic device, comprising:
an audio processing section including a speech processing engine having first and second audio speech paths, the first path to be provided to an auditory receiver; and
a Kalman filter coupled to the second path to receive the audio speech signal and to remove noise therefrom, the Kalman filter to remove said noise based at least in part on a speaker model including speaker glottal information.
10. The electronic device of claim 9, in which the filter is to remove noise based also on a noise model incorporating environmental noise information.
11. The electronic device of claim 10, in which the environmental noise information includes real-time information.
12. The electronic device of claim 11, in which the real-time information includes information from one or more noise sensors.
13. The electronic device of claim 9, in which the speaker model incorporates previously generated noise-removed speech signal information for the speaker.
14. The electronic device of claim 9, in which the filter is implemented in a front end section and the speaker model is implemented in a back end section that is enabled if speech is detected in the audio speech signal.
15. The electronic device of claim 14, in which the speech processing engine comprises a speech recognition engine.
16. The electronic device of claim 15, in which the speech recognition engine is part of the back end section.
PCT/US2011/042515 2010-06-30 2011-06-30 Speech audio processing WO2012003269A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020127031843A KR101434083B1 (en) 2010-06-30 2011-06-30 Speech audio processing
JP2013513424A JP5644013B2 (en) 2010-06-30 2011-06-30 Speech processing
CN201180027602.0A CN102934159B (en) 2010-06-30 2011-06-30 Speech audio process
EP11801384.6A EP2589047A4 (en) 2010-06-30 2011-06-30 Speech audio processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/828,195 2010-06-30
US12/828,195 US8725506B2 (en) 2010-06-30 2010-06-30 Speech audio processing

Publications (2)

Publication Number Publication Date
WO2012003269A2 true WO2012003269A2 (en) 2012-01-05
WO2012003269A3 WO2012003269A3 (en) 2012-03-29

Family

ID=45400342

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/042515 WO2012003269A2 (en) 2010-06-30 2011-06-30 Speech audio processing

Country Status (7)

Country Link
US (1) US8725506B2 (en)
EP (1) EP2589047A4 (en)
JP (1) JP5644013B2 (en)
KR (1) KR101434083B1 (en)
CN (1) CN102934159B (en)
TW (1) TWI455112B (en)
WO (1) WO2012003269A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8725506B2 (en) 2010-06-30 2014-05-13 Intel Corporation Speech audio processing

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8812014B2 (en) * 2010-08-30 2014-08-19 Qualcomm Incorporated Audio-based environment awareness
US9947333B1 (en) * 2012-02-10 2018-04-17 Amazon Technologies, Inc. Voice interaction architecture with intelligent background noise cancellation
US9384759B2 (en) 2012-03-05 2016-07-05 Malaspina Labs (Barbados) Inc. Voice activity detection and pitch estimation
US9437213B2 (en) 2012-03-05 2016-09-06 Malaspina Labs (Barbados) Inc. Voice signal enhancement
US9020818B2 (en) 2012-03-05 2015-04-28 Malaspina Labs (Barbados) Inc. Format based speech reconstruction from noisy signals
US20140358552A1 (en) * 2013-05-31 2014-12-04 Cirrus Logic, Inc. Low-power voice gate for device wake-up
US9361890B2 (en) * 2013-09-20 2016-06-07 Lenovo (Singapore) Pte. Ltd. Context-based audio filter selection
US9413434B2 (en) 2013-10-04 2016-08-09 Intel Corporation Cancellation of interfering audio on a mobile device
US10565984B2 (en) 2013-11-15 2020-02-18 Intel Corporation System and method for maintaining speech recognition dynamic dictionary
US9449602B2 (en) * 2013-12-03 2016-09-20 Google Inc. Dual uplink pre-processing paths for machine and human listening
KR102216048B1 (en) 2014-05-20 2021-02-15 삼성전자주식회사 Apparatus and method for recognizing voice commend
US9858922B2 (en) 2014-06-23 2018-01-02 Google Inc. Caching speech recognition scores
CN104463841A (en) * 2014-10-21 2015-03-25 深圳大学 Attenuation coefficient self-adaptation filtering method and filtering system
US9299347B1 (en) * 2014-10-22 2016-03-29 Google Inc. Speech recognition using associative mapping
US9786270B2 (en) 2015-07-09 2017-10-10 Google Inc. Generating acoustic models
US10229672B1 (en) 2015-12-31 2019-03-12 Google Llc Training acoustic models using connectionist temporal classification
DK3217399T3 (en) * 2016-03-11 2019-02-25 Gn Hearing As Kalman filtering based speech enhancement using a codebook based approach
DE102017209585A1 (en) * 2016-06-08 2017-12-14 Ford Global Technologies, Llc SYSTEM AND METHOD FOR SELECTIVELY GAINING AN ACOUSTIC SIGNAL
US20180018973A1 (en) 2016-07-15 2018-01-18 Google Inc. Speaker verification
US10706840B2 (en) 2017-08-18 2020-07-07 Google Llc Encoder-decoder models for sequence to sequence mapping
WO2019169616A1 (en) * 2018-03-09 2019-09-12 深圳市汇顶科技股份有限公司 Voice signal processing method and apparatus
CN110738990B (en) 2018-07-19 2022-03-25 南京地平线机器人技术有限公司 Method and device for recognizing voice
US12080317B2 (en) 2019-08-30 2024-09-03 Dolby Laboratories Licensing Corporation Pre-conditioning audio for echo cancellation in machine perception
GB202104280D0 (en) * 2021-03-26 2021-05-12 Samsung Electronics Co Ltd Method and apparatus for real-time sound enhancement
CN113053382B (en) * 2021-03-30 2024-06-18 联想(北京)有限公司 Processing method and device

Family Cites Families (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5148488A (en) * 1989-11-17 1992-09-15 Nynex Corporation Method and filter for enhancing a noisy speech signal
US5434947A (en) * 1993-02-23 1995-07-18 Motorola Method for generating a spectral noise weighting filter for use in a speech coder
US5774846A (en) * 1994-12-19 1998-06-30 Matsushita Electric Industrial Co., Ltd. Speech coding apparatus, linear prediction coefficient analyzing apparatus and noise reducing apparatus
US5864810A (en) * 1995-01-20 1999-01-26 Sri International Method and apparatus for speech recognition adapted to an individual speaker
JP3522012B2 (en) * 1995-08-23 2004-04-26 沖電気工業株式会社 Code Excited Linear Prediction Encoder
US6377919B1 (en) * 1996-02-06 2002-04-23 The Regents Of The University Of California System and method for characterizing voiced excitations of speech and acoustic signals, removing acoustic noise from speech, and synthesizing speech
KR20000022285A (en) * 1996-07-03 2000-04-25 내쉬 로저 윌리엄 Voice activity detector
TW309675B (en) 1996-12-26 1997-07-01 Yiing Lii Method and apparatus for complex fuzzy signal processing
US5970446A (en) * 1997-11-25 1999-10-19 At&T Corp Selective noise/channel/coding models and recognizers for automatic speech recognition
US6408269B1 (en) * 1999-03-03 2002-06-18 Industrial Technology Research Institute Frame-based subband Kalman filtering method and apparatus for speech enhancement
TW425542B (en) 1999-03-19 2001-03-11 Ind Tech Res Inst Kalman filter for speech enhancement
US7117157B1 (en) * 1999-03-26 2006-10-03 Canon Kabushiki Kaisha Processing apparatus for determining which person in a group is speaking
US6954745B2 (en) 2000-06-02 2005-10-11 Canon Kabushiki Kaisha Signal processing system
US7072833B2 (en) * 2000-06-02 2006-07-04 Canon Kabushiki Kaisha Speech processing system
US20020026253A1 (en) 2000-06-02 2002-02-28 Rajan Jebu Jacob Speech processing apparatus
JP2002006898A (en) 2000-06-22 2002-01-11 Asahi Kasei Corp Method and device for noise reduction
US7457750B2 (en) * 2000-10-13 2008-11-25 At&T Corp. Systems and methods for dynamic re-configurable speech recognition
US6850887B2 (en) * 2001-02-28 2005-02-01 International Business Machines Corporation Speech recognition in noisy environments
WO2002077972A1 (en) * 2001-03-27 2002-10-03 Rast Associates, Llc Head-worn, trimodal device to increase transcription accuracy in a voice recognition system and to process unvocalized speech
US6757651B2 (en) * 2001-08-28 2004-06-29 Intellisist, Llc Speech detection system and method
WO2003036614A2 (en) * 2001-09-12 2003-05-01 Bitwave Private Limited System and apparatus for speech communication and speech recognition
JP2003271191A (en) * 2002-03-15 2003-09-25 Toshiba Corp Device and method for suppressing noise for voice recognition, device and method for recognizing voice, and program
US20040064315A1 (en) * 2002-09-30 2004-04-01 Deisher Michael E. Acoustic confidence driven front-end preprocessing for speech recognition in adverse environments
KR100633985B1 (en) 2004-05-04 2006-10-16 주식회사 팬택앤큐리텔 Apparatus for eliminating echo and noise in handset
EP1878012A1 (en) * 2005-04-26 2008-01-16 Aalborg Universitet Efficient initialization of iterative parameter estimation
CA2612903C (en) * 2005-06-20 2015-04-21 Telecom Italia S.P.A. Method and apparatus for transmitting speech data to a remote device in a distributed speech recognition system
US8744844B2 (en) * 2007-07-06 2014-06-03 Audience, Inc. System and method for adaptive intelligent noise suppression
CN101281744B (en) 2007-04-04 2011-07-06 纽昂斯通讯公司 Method and apparatus for analyzing and synthesizing voice
KR100930584B1 (en) * 2007-09-19 2009-12-09 한국전자통신연구원 Speech discrimination method and apparatus using voiced sound features of human speech
WO2009116291A1 (en) 2008-03-21 2009-09-24 学校法人東京理科大学 Noise suppression device and noise suppression method
US8121837B2 (en) * 2008-04-24 2012-02-21 Nuance Communications, Inc. Adjusting a speech engine for a mobile computing device based on background noise
KR101056511B1 (en) * 2008-05-28 2011-08-11 (주)파워보이스 Speech Segment Detection and Continuous Speech Recognition System in Noisy Environment Using Real-Time Call Command Recognition
JP5153886B2 (en) * 2008-10-24 2013-02-27 三菱電機株式会社 Noise suppression device and speech decoding device
US9202455B2 (en) * 2008-11-24 2015-12-01 Qualcomm Incorporated Systems, methods, apparatus, and computer program products for enhanced active noise cancellation
US8660281B2 (en) * 2009-02-03 2014-02-25 University Of Ottawa Method and system for a multi-microphone noise reduction
KR101253102B1 (en) * 2009-09-30 2013-04-10 한국전자통신연구원 Apparatus for filtering noise of model based distortion compensational type for voice recognition and method thereof
US8725506B2 (en) 2010-06-30 2014-05-13 Intel Corporation Speech audio processing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A. YASMIN ET AL.: "ICASSP, IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING - PROCEEDINGS", 1999, IEEE, article "Speech enhancement using voice source models"
See also references of EP2589047A4
WEN JIN ET AL.: "SOUTHEASTCON, 2005. PROCEEDINGS. IEEE FT. LAUDERDALE, FLORIDA", 8 April 2005, IEEE, article "Speech Enhancement by Kalman Filtering with Residual Noise Clipping"

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8725506B2 (en) 2010-06-30 2014-05-13 Intel Corporation Speech audio processing

Also Published As

Publication number Publication date
TW201222527A (en) 2012-06-01
JP2013531275A (en) 2013-08-01
JP5644013B2 (en) 2014-12-24
WO2012003269A3 (en) 2012-03-29
US8725506B2 (en) 2014-05-13
KR101434083B1 (en) 2014-08-25
TWI455112B (en) 2014-10-01
CN102934159A (en) 2013-02-13
KR20130033372A (en) 2013-04-03
EP2589047A2 (en) 2013-05-08
CN102934159B (en) 2015-12-16
US20120004909A1 (en) 2012-01-05
EP2589047A4 (en) 2015-11-25

Similar Documents

Publication Publication Date Title
US8725506B2 (en) Speech audio processing
CN111081231B (en) Adaptive audio enhancement for multi-channel speech recognition
CN106663446B (en) User environment aware acoustic noise reduction
KR101224755B1 (en) Multi-sensory speech enhancement using a speech-state model
US9536538B2 (en) Method and device for reconstructing a target signal from a noisy input signal
JP4842583B2 (en) Method and apparatus for multisensory speech enhancement
US8249270B2 (en) Sound signal correcting method, sound signal correcting apparatus and computer program
CN106663445B (en) Sound processing device, sound processing method, and program
US9378755B2 (en) Detecting a user's voice activity using dynamic probabilistic models of speech features
KR20160125984A (en) Systems and methods for speaker dictionary based speech modeling
WO2012158156A1 (en) Noise supression method and apparatus using multiple feature modeling for speech/noise likelihood
KR20040088360A (en) Method of noise estimation using incremental bayes learning
US11308946B2 (en) Methods and apparatus for ASR with embedded noise reduction
US20040064315A1 (en) Acoustic confidence driven front-end preprocessing for speech recognition in adverse environments
Górriz et al. Improved likelihood ratio test based voice activity detector applied to speech recognition
Saleem et al. Time domain speech enhancement with CNN and time-attention transformer
KR20110024969A (en) Apparatus for filtering noise by using statistical model in voice signal and method thereof
EP2645738B1 (en) Signal processing device, signal processing method, and signal processing program
Li et al. Robust log-energy estimation and its dynamic change enhancement for in-car speech recognition
Sarafnia et al. Implementation of Bayesian recursive state-space Kalman filter for noise reduction of speech signal
Chen et al. Research on Speech Recognition of Sanitized Robot Based on Improved Speech Enhancement Algorithm
Setiawan Exploration and optimization of noise reduction algorithms for speech recognition in embedded devices
Singh et al. Speech Enhancement using Segmental Non-Negative Matrix Factorization (SNMF) and Hidden Marvok Model (HMM)
Kang et al. A Unified Approach of Compensation and Soft Masking Incorporating a Statistical Model into the Wiener Filter
Kleinschmidt et al. A likelihood-maximizing framework for enhanced in-car speech recognition based on speech dialog system interaction

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201180027602.0

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11801384

Country of ref document: EP

Kind code of ref document: A2

ENP Entry into the national phase

Ref document number: 2013513424

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20127031843

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2011801384

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE