WO2012158165A1 - Non-linear post-processing for super-wideband acoustic echo cancellation - Google Patents



Publication number
WO2012158165A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal stream
frequencies
range
audio streams
signal
Application number
PCT/US2011/036863
Other languages
French (fr)
Inventor
Jan Skoglund
Marco Paniconi
Andrew John Macdonald
Original Assignee
Google Inc.
Application filed by Google Inc. filed Critical Google Inc.
Priority to EP11721217.5A priority Critical patent/EP2710789A1/en
Priority to PCT/US2011/036863 priority patent/WO2012158165A1/en
Publication of WO2012158165A1 publication Critical patent/WO2012158165A1/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 9/00: Arrangements for interconnection not involving centralised switching
    • H04M 9/08: Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic
    • H04M 9/085: Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic using digital techniques

Definitions

  • Speech quality is an important factor for telephony system suppliers.
  • An echo, which is a delayed version of what was originally transmitted, is regarded as a severe distraction to the speaker if the delay is long. For short round-trip delays of less than approximately 20 ms, the speaker will not be able to distinguish the echo from the sidetone in the handset. However, for long-distance communications, such as satellite communications, a remotely generated echo signal often has a substantial delay. Moreover, the speech and channel coding compulsory in digital radio communications systems and for telephony over the Internet protocol (IP telephony, for short) also result in significant delays, which make echoes generated a relatively short distance away clearly audible to the speaker. Hence, canceling the echo is a significant factor in maintaining speech quality.
  • An echo canceller typically includes a linear filtering part which essentially is an adaptive filter that tries to adapt to the echo path. In this way, a replica of the echo can be produced from the far-end signal and subtracted from the near-end signal, thereby canceling the echo.
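The adaptive-filter principle described above can be sketched in a few lines. The following is a minimal time-domain NLMS canceller, not the frequency-domain implementation described later; the white-noise far-end signal, the 4-tap synthetic echo path, and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical signals: a white-noise far-end signal and a short synthetic
# echo path (values chosen for illustration only).
far_end = rng.standard_normal(4000)
echo_path = np.array([0.5, 0.3, -0.2, 0.1])
near_end = np.convolve(far_end, echo_path)[: len(far_end)]  # echo, no near-end speech

# Time-domain NLMS: adapt the filter toward the echo path, produce an echo
# replica from the far-end signal, and subtract it from the near-end signal.
num_taps = 8
w = np.zeros(num_taps)
mu, eps = 0.5, 1e-8
out = np.zeros_like(near_end)
for k in range(num_taps - 1, len(far_end)):
    xvec = far_end[k - num_taps + 1 : k + 1][::-1]  # newest sample first
    e = near_end[k] - w @ xvec                      # error after cancellation
    w += mu * e * xvec / (eps + xvec @ xvec)        # normalized LMS update
    out[k] = e

# Echo return loss enhancement over the final samples.
erle = 10 * np.log10(np.mean(near_end[-500:] ** 2) / np.mean(out[-500:] ** 2))
```

After convergence the filter taps match the echo path and the residual energy drops sharply.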
  • Super-wideband may refer to signals with a sampling rate above the wideband sampling rate, for example, 32 kHz (as compared to 8 kHz and 16 kHz for narrowband and wideband, respectively).
  • a method for removing echo from audio streams includes receiving input audio streams, splitting the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies, and applying a single upper-band suppression factor to the second signal stream of one of the input audio streams to reduce echo.
  • the method includes computing the single upper-band suppression factor by averaging suppression factors from a range of frequency bands included in the first signal stream.
  • the method includes computing a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal, computing a second coherence value by comparing correlations between the first signal stream of the near-end signal and the first signal stream of the error signal, and applying the first and second coherence values to compute the suppression factors.
  • a system for removing echo from audio streams includes a splitting filter that receives input audio streams and splits the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies.
  • the system also includes a non-linear processor that applies a single upper-band suppression factor to the second signal stream of one of the input audio streams to reduce echo.
  • the non-linear processor computes the single upper-band suppression factor by averaging suppression factors from a range of frequency bands included in the first signal stream.
  • the non-linear processor computes the single upper-band suppression factor by averaging suppression factors from the 4-8 kHz frequency band included in the first signal stream.
  • the non-linear processor is configured to: compute a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal; compute a second coherence value by comparing correlations between the first signal stream of the near-end signal and the first signal stream of the error signal; and apply the first and second coherence values to compute the suppression factors.
  • a computer-readable storage medium having stored thereon a computer-executable program for removing echo from audio streams.
  • the computer program when executed causes a processor to execute the steps of: receiving input audio streams, splitting the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies, and applying a single upper-band suppression factor to the second signal stream of one of the input audio streams to reduce echo.
  • the computer program when executed causes the processor to further execute the step of computing the single upper-band suppression factor by averaging suppression factors from a range of frequency bands included in the first signal stream.
  • the computer program when executed causes the processor to further execute the step of computing the single upper-band suppression factor by averaging suppression factors from the 4-8 kHz frequency band included in the first signal stream.
  • the computer program when executed causes the processor to further execute the steps of: computing a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal, computing a second coherence value by comparing correlations between the first signal stream of the near-end signal and the first signal stream of the error signal, and applying the first and second coherence values to compute the suppression factors.
  • a method for generating comfort noise for audio streams includes receiving input audio streams and splitting the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies, and applying a single upper-band noise estimate to generate comfort noise for the second signal stream of one of the input audio streams.
  • the method includes computing the single upper-band noise estimate by averaging noise estimates from a range of frequency bands included in the first signal stream.
  • the method includes computing the single upper-band noise estimate by averaging noise estimates from the 4-8 kHz frequency band included in the first signal stream.
  • the method includes computing the noise estimates by utilizing a minimum statistic method on the near-end signal stream.
  • the method includes generating comfort noise by utilizing the single upper-band noise estimate and a single upper-band suppression factor.
  • the method includes computing a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal, computing a second coherence value by comparing correlations between the first signal stream of the near-end signal and the first signal stream of the error signal, and applying the first and second coherence values to compute the upper-band suppression factor.
  • a system for generating comfort noise for audio streams is disclosed.
  • the system includes a splitting filter that receives input audio streams and splits the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies, and a non-linear processor that applies a single upper-band noise estimate to generate comfort noise for the second signal stream of one of the input audio streams.
  • the non-linear processor computes the single upper-band noise estimate by averaging noise estimates from a range of frequency bands included in the first signal stream.
  • the non-linear processor computes the single upper-band noise estimate by averaging noise estimates from the 4-8 kHz frequency band included in the first signal stream.
  • the non-linear processor computes the noise estimates by utilizing a minimum statistic method on the near-end signal stream.
  • the non-linear processor is configured to: compute a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal, compute a second coherence value by comparing correlations between the first signal stream of the near-end signal and the first signal stream of the error signal, and apply the first and second coherence values to compute the upper-band suppression factor.
  • a computer-readable storage medium having stored thereon a computer-executable program for generating comfort noise for audio streams.
  • the computer program when executed causes a processor to execute the steps of: receiving input audio streams and splitting the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies, and applying a single upper-band noise estimate to generate comfort noise for the second signal stream of one of the input audio streams.
  • the computer program when executed causes the processor to further execute the step of computing the single upper-band noise estimate by averaging noise estimates from a range of frequency bands included in the first signal stream.
  • the computer program when executed causes the processor to further execute the step of computing the single upper-band noise estimate by averaging noise estimates from the 4-8 kHz frequency band included in the first signal stream.
  • the computer program when executed causes the processor to further execute the step of computing the noise estimates by utilizing a minimum statistic method on the near-end signal stream.
  • the computer program when executed causes the processor to further execute the step of generating comfort noise by utilizing the single upper-band noise estimate and a single upper-band suppression factor.
  • the computer program when executed causes the processor to further execute the steps of: computing a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal, computing a second coherence value by comparing correlations between the first signal stream of the near-end signal and the first signal stream of the error signal, and applying the first and second coherence values to compute the upper-band suppression factor.
  • Fig. 1 is a block diagram of an acoustic echo canceller in accordance with an embodiment of the present invention.
  • Fig. 2 illustrates a more detailed block diagram describing the functions performed in the adaptive filter of Fig. 1 in accordance with an embodiment of the present invention.
  • FIG. 3 illustrates computational stages of the adaptive filter of Fig. 2 in accordance with an embodiment of the present invention.
  • FIG. 4 illustrates a more detailed block diagram describing block G_m in Fig. 3 in accordance with an embodiment of the present invention.
  • Fig. 6 is a block diagram of an acoustic echo canceller for processing lower-band and upper-band signal streams in accordance with an embodiment of the present invention.
  • Fig. 7 is a flow diagram illustrating operations performed by the acoustic echo canceller according to an embodiment of the present invention illustrated in Fig. 6.
  • Fig. 8 is a flow diagram illustrating operations performed by the acoustic echo canceller according to a further embodiment of the present invention illustrated in Fig. 6.
  • Fig. 1 illustrates an acoustic echo canceller (AEC) 100 in accordance with an exemplary embodiment of the present invention.
  • the AEC 100 is designed as a high quality echo canceller for voice and audio communication over packet-switched networks. More specifically, the AEC 100 is designed to cancel acoustic echo 130 that emerges due to the reflection of sound waves from a render device 10 off boundary surfaces and other objects back to a near-end capture device 20. The echo 130 may also exist due to the direct path from the render device 10 to the capture device 20.
  • Render device 10 may be any of a variety of audio output devices, including a loudspeaker or group of loudspeakers configured to output sound from one or more channels.
  • Capture device 20 may be any of a variety of audio input devices, such as one or more microphones configured to capture sound and generate input signals.
  • render device 10 and capture device 20 may be hardware devices internal to a computer system, or external peripheral devices connected to a computer system via wired and/or wireless connections.
  • render device 10 and capture device 20 may be components of a single device, such as a microphone, telephone handset, etc.
  • one or both of render device 10 and capture device 20 may include analog-to-digital and/or digital-to-analog transformation functionalities.
  • the echo canceller 100 includes a linear filter 102, a nonlinear processor (NLP) 104, a far-end buffer 106, and a blocking buffer 108.
  • a far-end signal 110 generated at the far-end and transmitted to the near-end is input to the filter 102 via the far-end buffer (FEBuf) 106 and the blocking buffer 108.
  • the far-end signal 110 is also input to a play-out buffer 112 located near the render device 10.
  • the output signal 116 of the far-end buffer 106 is input to the blocking buffer 108 and the output signal 118 of the blocking buffer is input to the linear filter 102.
  • the far-end buffer 106 is configured to compensate for and synchronize to buffering at sound devices (not shown).
  • the blocking buffer 108 is configured to block the signal samples for a frequency-domain transformation to be performed by the linear filter 102 and the NLP 104.
  • the linear filter 102 is an adaptive filter.
  • Linear filter 102 operates in the frequency domain through, e.g., the Discrete Fourier Transform (DFT).
  • the DFT may be implemented as a Fast Fourier Transform (FFT).
  • the other input to the filter 102 is the near-end signal (Sin) 122 from the capture device 20 via a recording buffer 114.
  • the near-end signal 122 includes near-end speech 120 and the echo 130.
  • the NLP 104 receives three signals as input. It receives (1) the far-end signal via the far-end buffer 106 and blocking buffer 108, (2) the near-end signal via the recording buffer 114, and (3) the output signal 124 of the filter 102.
  • the output signal 124 is also referred to as an error signal. In a case when the NLP 104 attenuates the output signal 124, a comfort noise signal is generated which will be explained later.
  • each frame is divided into 64-sample blocks. Since this choice of block size does not produce an integer number of blocks per frame, the signal needs to be buffered before the processing. This buffering is handled by the blocking buffer 108 as discussed above. Both the filter 102 and the NLP 104 operate in the frequency domain and utilize DFTs of 128 samples.
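The buffering described above can be sketched as follows; the 160-sample frame size (10 ms at 16 kHz) is an assumed example, while the 64-sample block size comes from the text.

```python
class BlockingBuffer:
    """Collects arbitrary-length frames and emits fixed-size blocks.

    Mirrors the role of blocking buffer 108: the frame size is not a
    multiple of the block size, so samples carry over between frames.
    """

    def __init__(self, block_size=64):
        self.block_size = block_size
        self._buf = []

    def push(self, frame):
        """Append one frame; return all complete blocks now available."""
        self._buf.extend(frame)
        blocks = []
        while len(self._buf) >= self.block_size:
            blocks.append(self._buf[: self.block_size])
            self._buf = self._buf[self.block_size :]
        return blocks


bb = BlockingBuffer(64)
emitted = []
for _ in range(4):                   # four 160-sample frames -> 640 samples
    emitted += bb.push(list(range(160)))
```

Four 160-sample frames yield exactly ten 64-sample blocks, with no samples lost across frame boundaries.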
  • the performance of the AEC 100 is influenced by the operation of the play-out buffer 112 and the recording buffer 114 at the sound device.
  • the AEC 100 may not start unless the combined size of the play-out buffer 112 and the recording buffer 114 is reasonably stable within a predetermined limit. For example, if the combined size is stable within +/- 8 ms of the first started size for four consecutive frames, the AEC 100 is started by filling up the internal far-end buffer 106.
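The start condition can be sketched as follows; the exact behavior when a frame falls outside the tolerance (here, the consecutive-frame counter resets) is an assumption.

```python
def can_start(buffer_sizes_ms, tolerance_ms=8, frames_needed=4):
    """Return True once the combined play-out + recording buffer size has
    stayed within +/- tolerance_ms of the first observed size for
    frames_needed consecutive frames."""
    if not buffer_sizes_ms:
        return False
    first = buffer_sizes_ms[0]
    run = 0
    for size in buffer_sizes_ms:
        if abs(size - first) <= tolerance_ms:
            run += 1
            if run >= frames_needed:
                return True
        else:
            run = 0  # assumption: a deviating frame restarts the count
    return False
```

For example, four sizes within 8 ms of the first one allow the AEC to start, while an outlier in the middle delays it.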
  • FIG. 2 illustrates a more detailed block diagram describing the functions performed in the filter 102 of Fig. 1.
  • Fig. 3 illustrates computational stages of the filter 102 in accordance with an embodiment of the present invention.
  • the adaptive filter 102 includes a first transform section 200, an inverse transform section 202, a second transform section 204, and an impulse response section (H) 206.
  • the far-end signal x(n) 210 to be rendered at the render device 10 is input to the first transform section 200.
  • the output signal X(n, k) of the first transform section 200 is input to the impulse response section 206.
  • the output signal Y(n, k) is input to the inverse transform section 202, which outputs the signal y(n).
  • This signal y(n) is then subtracted from the near-end signal d(n) 220 captured by the capture device 20 to output an error signal e(n) 230 as the output of the linear stage of the filter 102.
  • the error signal 230 is also input to the second transform section 204, the output signal of which, E(n, k), is also input to the impulse response section 206.
  • the above-mentioned adaptive filtering approach relates to an implementation of a standard blocked time-domain Least Mean Square (LMS) algorithm.
  • the complexity reduction is due to the filtering and the correlations being performed in the frequency domain, where time-domain convolution is replaced by multiplication.
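The equivalence underlying this complexity reduction is the convolution theorem: linear convolution of two length-N blocks equals pointwise multiplication of their zero-padded length-2N DFTs. A quick numerical check (N = 64 is illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 64                                    # block length (illustrative)
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# Linear convolution in the time domain...
full = np.convolve(x, h)

# ...equals pointwise multiplication of zero-padded DFTs of length 2N.
X = np.fft.fft(x, 2 * N)
H = np.fft.fft(h, 2 * N)
via_fft = np.real(np.fft.ifft(X * H))[: 2 * N - 1]
```

Replacing O(N^2) time-domain convolution per block with O(N log N) FFT work is the source of the saving.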
  • the error is formed in the time domain and is transformed to the frequency domain for updating the filter 102 as illustrated in Fig. 2.
  • Fig. 4 illustrates a more detailed block diagram describing block G_m in the frequency-domain LMS (FLMS) method of Fig. 3 in accordance with an embodiment of the present invention.
  • I_N is an N x N identity matrix, and 0_N is an N x N zero matrix. This means that the time-domain vector is appended with N zeros before the Fourier transform.
  • x_{k-m} = [x((k - m - 2)N), ..., x((k - m)N - 1)]^T
  • the estimated echo signal is then obtained as the N last coefficients of the inverse transformed sum of the filter products performed at step S320, from which the first block is discarded at step S322.
  • the estimated echo signal is represented as
  • N zeros are inserted at step S316 into the error vector, and the augmented vector is transformed at step S318 as
  • Fig. 4 illustrates a more detailed block diagram describing block G_m in Fig. 3 in accordance with an embodiment of the present invention, where the filter coefficient update can be expressed as
  • the diagonal matrix X(k-m) is conjugated by the conjugate unit 420 and is then multiplied with the vector B(k) prior to an inverse DFT performed by the Inverse Discrete Fourier Transform (IDFT) unit 430. The discard last block unit 440 then discards the last block. After discarding the last block, a zero block is appended by the append zero block unit 450 prior to a DFT performed by the DFT unit 460. Then, a block delay is introduced by the delay unit 480, which outputs W_m(k).
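The constraint chain (inverse transform, discard the last block, append a zero block, forward transform) can be sketched as follows; N = 4 and the random gradient are illustrative.

```python
import numpy as np

def constrain_gradient(grad_freq, N):
    """FLMS gradient constraint: inverse-transform, discard the last block
    (zero the last N time-domain coefficients, then append a zero block in
    its place), and transform back to the frequency domain."""
    g = np.fft.ifft(grad_freq)
    g[N:] = 0.0                       # discard last block, append zero block
    return np.fft.fft(g)

N = 4
rng = np.random.default_rng(1)
grad = np.fft.fft(rng.standard_normal(2 * N))
constrained = constrain_gradient(grad, N)
back = np.fft.ifft(constrained)       # time-domain view of the result
```

The constrained update keeps only the first N time-domain coefficients, which prevents circular-convolution wrap-around from corrupting the filter estimate.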
  • the NLP 104 of the AEC 100 accepts three signals as input: i) the far-end signal x(n) 110 to be rendered by the render device 10, ii) the near-end signal d(n) 122 captured by the capture device 20, and iii) the output error signal e(n) 124 of the linear stage performed at the filter 102.
  • the error signal e(n) 124 typically contains residual echo that should be removed for good performance.
  • the objective of the NLP 104 is to remove this residual echo.
  • the first step is to transform all three input signals to the frequency domain.
  • the far-end signal 110 is transformed to the frequency domain.
  • the near-end signal 122 is transformed to the frequency domain and, at step S501'', the error signal 124 is transformed to the frequency domain.
  • the NLP 104 is block-based and shares the block length N of the linear stage, but uses an overlap-add method rather than overlap-save: consecutive blocks are concatenated, windowed, and transformed. By defining ∘ as the element-wise product operator, the k-th transformed block is expressed as
  • the length 2N DFT vectors are retained.
  • the redundant N - 1 complex coefficients are discarded.
  • X_k, D_k, and E_k refer to the frequency-domain representations of the k-th far-end, near-end, and error blocks, respectively.
  • echo suppression is achieved by multiplying each frequency band of the error signal e(n) 124 with a suppression factor between 0 and 1.
  • each band corresponds to an individual DFT coefficient. In general, however, each band may correspond to an arbitrary range of frequencies. Comfort noise is added and, after undergoing an inverse FFT, the suppressed signal is windowed and overlap-added with the previous block to obtain the output.
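The overlap-add path can be sketched as follows. A sine analysis/synthesis window is an assumption (any window whose squared copies sum to one at 50% overlap works); with unity suppression factors the chain reconstructs the input with one block of delay.

```python
import numpy as np

N = 64                                        # block length shared with the linear stage
# Sine window: its squared, half-overlapped copies sum to exactly one.
win = np.sin(np.pi * (np.arange(2 * N) + 0.5) / (2 * N))

def nlp_stream(blocks, supp):
    """Concatenate consecutive blocks, window, transform, apply per-band
    suppression factors, inverse transform, window again, overlap-add."""
    prev = np.zeros(N)
    tail = np.zeros(N)
    out = []
    for cur in blocks:
        seg = win * np.concatenate([prev, cur])
        spec = np.fft.rfft(seg) * supp        # suppression factors in [0, 1]
        res = win * np.fft.irfft(spec)
        out.append(res[:N] + tail)            # overlap-add with previous tail
        tail = res[N:]
        prev = cur
    return np.concatenate(out)

rng = np.random.default_rng(2)
x = rng.standard_normal(8 * N)
y = nlp_stream(x.reshape(-1, N), np.ones(N + 1))  # unity suppression: pass-through
```

With all suppression factors set to one the output equals the input delayed by one block, confirming the analysis/synthesis chain is lossless.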
  • the power spectral density (PSD) of each signal is obtained.
  • the PSD of the far-end signal x(n) 110 is computed.
  • the PSD of the near-end signal d(n) 122 is computed and, at step S503'', the PSD of the error signal e(n) 124 is computed.
  • the PSDs of the far-end signal 110, near-end signal 122, and the error signal 124 are represented by S_x, S_d, and S_e, respectively.
  • This estimated delay index is used to select the best block at step S507 for use in the far-end PSDs. Additionally, the far-end auto-PSD is thresholded at step S509 in order to avoid numerical instability as follows:
  • the linear filter 102 may diverge from a good echo path estimate. This tends to result in a highly distorted error signal which, although still useful for analysis, should not be used for output.
  • divergence may be detected fairly easily, as it usually adds energy to, rather than removes energy from, the near-end signal d(n) 122.
  • the divergence state determined at step S511 is utilized to either select (S512) E_k or D_k as follows: if ||S_e||_1 > ||S_d||_1, the "diverge" state is entered, in which the effect of the linear stage is reversed by setting E_k = D_k. The diverge state is left if 1.05 · ||S_e||_1 < ||S_d||_1. Furthermore, if divergence is very high, the linear filter 102 may be reset.
  • Coherence is a frequency-domain analog to time-domain correlation. It is a measure of similarity with 0 ≤ c(n) ≤ 1, where a higher coherence corresponds to more similarity.
  • the echo 130 is suppressed while allowing simultaneous near-end speech 120 to pass through.
  • the NLP 104 is configured to achieve this because the coherence is calculated independently for each frequency band. Thus, bands containing echo are fully or partially suppressed, while bands free of echo are not affected.
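Per-band coherence can be computed from exponentially smoothed auto- and cross-PSDs; a sketch, where the smoothing constant of 0.9 and the block layout are assumptions:

```python
import numpy as np

def smoothed_psds(X, Y, gamma=0.9):
    """Exponentially smoothed auto- and cross-PSDs over a stream of
    frequency-domain blocks (one row per block)."""
    n = X.shape[1]
    Sxx, Syy = np.zeros(n), np.zeros(n)
    Sxy = np.zeros(n, dtype=complex)
    for k in range(X.shape[0]):
        Sxx = gamma * Sxx + (1 - gamma) * np.abs(X[k]) ** 2
        Syy = gamma * Syy + (1 - gamma) * np.abs(Y[k]) ** 2
        Sxy = gamma * Sxy + (1 - gamma) * X[k] * np.conj(Y[k])
    return Sxx, Syy, Sxy

def coherence(Sxx, Syy, Sxy, eps=1e-10):
    """Per-band magnitude-squared coherence in [0, 1]; higher values mean
    the two signals are more similar in that band."""
    return np.abs(Sxy) ** 2 / (Sxx * Syy + eps)

rng = np.random.default_rng(3)
X = np.fft.rfft(rng.standard_normal((200, 128)), axis=1)
Y = np.fft.rfft(rng.standard_normal((200, 128)), axis=1)

c_same = coherence(*smoothed_psds(X, X))   # identical signals: close to 1
c_diff = coherence(*smoothed_psds(X, Y))   # independent signals: close to 0
```

Identical streams give coherence near one in every band, while independent noise gives values near zero, which is what lets echo-bearing bands be suppressed individually.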
  • the average coherence across a set of preferred bands is computed at step S517 for c_de and at step S517' for c_xd.
  • f_s is the sampling frequency; f_s = 16000 Hz in super-wideband due to the splitting.
  • the preferred bands were chosen from frequency regions most likely to be accurate across a range of scenarios.
  • at step S519, the system either selects c_de or c_xd.
  • c_xd is tracked over time to determine the broad state of the system at step S521. The purpose of this is to avoid suppression when the echo path is close to zero (e.g., during a call with a headset).
  • a thresholded minimum of c_xd is computed at step S519 as follows:
  • depending on this minimum, the system is determined either to possibly contain echo or to not contain echo.
  • the echo state is provided through an interface for potential use by other audio processing components.
  • the suppression factor s is computed at step S520 by selecting the minimum of c_de and c'_xd in each band.
  • suppression is limited by selecting suppression factors as follows at steps S520, S524, and S518:
  • the minimum s_l level is computed at step S527 and tracked over time at step S529; the tracked minimum is updated when s_l falls below a threshold (e.g., 0.6).
  • the overdrive γ is set at step S531 such that applying it to the minimum will result in the target suppression level:
  • the target suppression level and the minimum overdrive are configurable to control the suppression aggressiveness; by default they are set to -11.5 dB and 2, respectively.
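One way to realize this overdrive rule is to choose the exponent so that raising the tracked minimum to that power reaches the target level, subject to the minimum overdrive; the power-law form and the dB-to-amplitude conversion are assumptions consistent with the quoted defaults.

```python
import math

target_supp_db = -11.5      # default target suppression level (dB)
min_overdrive = 2.0         # default minimum overdrive

def overdrive(s_min, target_db=target_supp_db, floor=min_overdrive):
    """Pick gamma so that s_min ** gamma equals the target suppression
    level (converted from dB to a linear amplitude factor), never going
    below the minimum overdrive."""
    target = 10.0 ** (target_db / 20.0)
    if 0.0 < s_min < 1.0:
        gamma = math.log(target) / math.log(s_min)
    else:
        gamma = floor
    return max(gamma, floor)

g = overdrive(0.6)                          # above the floor for this minimum
reached_db = 20.0 * math.log10(0.6 ** g)    # hits the target level exactly
```

For a tracked minimum of 0.5 the computed exponent falls below 2, so the floor applies; for 0.6 the exponent exceeds the floor and the target level is reached exactly.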
  • the s_h level is computed at step S533.
  • the final suppression factors s_T are produced according to the following algorithm.
  • s is first weighted towards s_h according to a weighting vector v with components 0 ≤ v(n) ≤ 1.
  • N is artificial noise, and at step S537, an inverse transform is performed to obtain the output signal y(n).
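The weighting step can be read as a per-band interpolation between s and the s_h level; the linear interpolation form and the example values below are assumptions.

```python
import numpy as np

n_bins = 65
s = np.full(n_bins, 0.8)           # per-band suppression factors (example values)
s_h = 0.3                          # scalar s_h level (example value)

# Hypothetical weighting: v(n) = 0 keeps the band's own factor,
# v(n) = 1 replaces it entirely with s_h.
v = np.linspace(0.0, 1.0, n_bins)
s_weighted = (1.0 - v) * s + v * s_h
```

Every weighted factor stays between the band's own value and s_h, so the weighting can only pull suppression towards the s_h level, never beyond it.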
  • the suppression removes near-end noise as well as echo, resulting in an audible change in the noise level. This issue is mitigated by adding generated "comfort noise" to replace the lost noise.
  • the generation of N will be discussed in a later section below.
  • the first splitting filter 600, the second splitting filter 602, and the linear filter 604, in combination comprise the linear stage.
  • lower band and upper band signal streams may include components in frequency ranges other than the exemplary frequency ranges used herein.
  • the frequency ranges of 0-8 kHz and 8-16 kHz are used for the lower band and upper band signal streams, respectively.
  • a frequency range of 0-12 kHz may be used for the lower band signal stream and a frequency range of 12-24 kHz used for the upper band signal stream.
  • frequency ranges of 0-7 kHz and 7-20 kHz may be used for the lower band and upper band signal streams, respectively.
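The split/join structure can be illustrated with a brickwall DFT split at 8 kHz; real implementations typically use a quadrature-mirror (QMF) filter bank, so this is only a sketch of the band partitioning.

```python
import numpy as np

fs = 32000                       # super-wideband sampling rate
n = 1024
t = np.arange(n) / fs
# Test signal with one lower-band tone (1 kHz) and one upper-band tone (12 kHz).
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 12000 * t)

# Brickwall split at fs/4 = 8 kHz (bin n // 4 of the length-n DFT).
X = np.fft.rfft(x)
cut = n // 4
low, high = X.copy(), X.copy()
low[cut:] = 0.0
high[:cut] = 0.0
x_low = np.fft.irfft(low)        # 0-8 kHz content
x_high = np.fft.irfft(high)      # 8-16 kHz content
x_joined = x_low + x_high        # joining filter: recombine the full band
```

The two bands carry disjoint spectral content, and summing them recovers the original full-band signal exactly in this idealized split.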
  • The terms "narrowband," "wideband," and "super-wideband" are sometimes used herein to refer to audio signals with sampling rates at or above certain threshold sampling rates, or with sampling rates within certain ranges. These terms may also be used relative to one another in describing audio signals with particular sampling rates.
  • "super-wideband" is sometimes used herein to refer to audio signals with a sampling rate above the wideband sampling rate of, e.g., 16 kHz.
  • super-wideband is used to refer to audio signals sampled at a higher rate of, e.g., 32 kHz or 48 kHz. It should be understood that such use of the terms "narrowband," "wideband," and/or "super-wideband" is not in any way intended to limit the scope of the disclosure.
  • the near-end signal 120 is input to the first splitting filter 600 and the far- end signal 110 is input to the second splitting filter 602.
  • the super-wideband input signals are split into two, e.g., 8 kHz frequency bands before arriving at the AEC 100.
  • the linear filter 604 processes the lower band.
  • the upper band is not used by the linear filter 604 at the linear stage.
  • the NLP 104 is relied upon to control echo in the upper-band.
  • the comfort noise generator 608 receives the output from the NLP 606 and the output of the noise generator 608 is input to the joining filter 610.
  • the 8-16 kHz frequency band of the near-end signal 120 is also input to the joining filter 610 after undergoing further processing by the NLP 606 and the comfort noise generator 608 according to the algorithms described below.
  • the joining filter 610 then outputs the full band of, e.g., 0-16 kHz.
  • the upper-band noise estimate and the upper-band suppression factor may be used by the noise generator 608 to compute upper-band comfort noise as follows:
  • d_h is the upper-band near-end signal.
  • the suppression is directly applied to d_h here because the linear stage is not used.
  • the single block delay from d to y is required to synchronize with the lower-band.
  • ỹ_k = s_h · d_{k-1} + √(1 - s_h²) ∘ Ñ_k, where Ñ_k is comfort noise generated at the level of the upper-band noise estimate N_h.
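A sketch of the upper-band processing with comfort noise: the single suppression factor is applied directly to the near-end upper band, and generated noise at the estimated level replaces the suppressed energy. The sqrt(1 - s^2) mixing rule and the white-noise generator are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def upper_band_comfort(d_high, s_high, noise_level):
    """Hypothetical upper-band sketch: scale the near-end upper band by the
    single suppression factor, and mix in generated noise at the estimated
    level so strongly suppressed frames do not fall audibly silent."""
    noise = noise_level * rng.standard_normal(len(d_high))
    return s_high * d_high + np.sqrt(1.0 - s_high ** 2) * noise

d = np.ones(2000)
passthrough = upper_band_comfort(d, 1.0, 0.1)   # s = 1: signal unchanged
noise_only = upper_band_comfort(d, 0.0, 0.1)    # s = 0: comfort noise only
```

At full pass-through the signal is untouched; at full suppression the output is pure comfort noise at the estimated noise level.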
  • Fig. 7 is a flow diagram illustrating operations performed by the AEC 100 according to an embodiment of the present invention illustrated in Fig. 6.
  • super-wideband audio streams (e.g., audio streams with a sampling rate of 32 kHz, 48 kHz, etc.) are received at the splitting filter 600.
  • the splitting filter 600 splits the received super-wideband audio streams into a first signal stream and a second signal stream, wherein the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies.
  • the first signal stream may include frequency ranges of, e.g., 0-8 kHz and the second frequency signal stream may include frequency ranges of, e.g., 8-16 kHz.
  • these exemplary frequency ranges are not intended to limit the scope of the disclosure in any way.
  • at step S705, an average of the suppression factors over the first signal stream computed by the NLP 606 is used to derive a single upper-band suppression factor.
  • at step S707, the single upper-band suppression factor is applied by the NLP 606 to the second signal stream to reduce echo from the near-end super-wideband audio streams.
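The averaging step can be sketched as follows; the 65-bin lower-band layout and the random per-band factors are illustrative, while the 4-8 kHz averaging region comes from the text.

```python
import numpy as np

fs_band = 16000                          # each split band runs at 16 kHz
n_bins = 65                              # e.g., a 128-point DFT of the lower band
freqs = np.linspace(0.0, fs_band / 2, n_bins)

rng = np.random.default_rng(4)
supp = rng.uniform(0.0, 1.0, n_bins)     # per-band lower-band suppression factors

# Average the 4-8 kHz factors into one upper-band factor...
mask = (freqs >= 4000.0) & (freqs <= 8000.0)
s_high = supp[mask].mean()

# ...and apply that single factor to an entire upper-band block.
upper_band_block = rng.standard_normal(64)
suppressed = s_high * upper_band_block
```

Using the top of the lower band as a proxy avoids running a full per-band analysis on the upper band, at the cost of a single flat suppression level above 8 kHz.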
  • Fig. 8 is a flow diagram illustrating operations performed by the AEC 100 according to a further embodiment of the present invention illustrated in Fig. 6.
  • audio streams are received at the splitting filter 600.
  • the splitting filter 600 splits the received audio streams into a first signal stream and a second signal stream, wherein the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies.
  • a single upper-band noise estimate is applied by the NLP 606 to generate comfort noise for the second signal stream of one of the input audio streams.
  • Fig. 9 is a block diagram illustrating an example computing device 900 that may be utilized to implement the AEC 100 including, but not limited to, the NLP 104, the filter 102, the far-end buffer 106, and the blocking buffer 108 as well as the first splitting filter 600, the second splitting filter 602, the linear filter 604, the NLP 606, the comfort noise generator 608 and the joining filter 610 in accordance with the present disclosure.
  • the computing device 900 may also be utilized to implement the processes illustrated in Figs. 3, 5, and 7 in accordance with the present disclosure.
  • computing device 900 typically includes one or more processors 910 and system memory 920.
  • a memory bus 930 can be used for communicating between the processor 910 and the system memory 920.
  • system memory 920 can be of any type, including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof.
  • System memory 920 typically includes an operating system 921, one or more applications 922, and program data 924.
  • Application 922 includes an echo cancellation processing algorithm 923 that is arranged to remove echo from super-wideband audio streams.
  • Program Data 924 includes echo cancellation routing data 925 that is useful for removing echo from super-wideband audio streams, as will be further described below.
  • application 922 can be arranged to operate with program data 924 on an operating system 921 such that echo from super-wideband audio streams is removed. This basic configuration is illustrated in Fig. 9 by those components within dashed line 901.
  • Computing device 900 can have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration 901 and any required devices and interfaces.
  • a bus/interface controller 940 can be used to facilitate communications between the basic configuration 901 and one or more data storage devices 950 via a storage interface bus 941.
  • the data storage devices 950 can be removable storage devices 951, non-removable storage devices 952, or a combination thereof.
  • removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few.
  • Example computer storage media can include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • System memory 920, removable storage 951 and non-removable storage 952 are all examples of computer storage media.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 900. Any such computer storage media can be part of device 900.
  • Computing device 900 can also include an interface bus 942 for facilitating communication from various interface devices (e.g., output interfaces, peripheral interfaces, and communication interfaces) to the basic configuration 901 via the bus/interface controller 940.
  • Example output devices 960 include a graphics processing unit 961 and an audio processing unit 962, which can be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 963.
  • a “modulated data signal” can be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media can include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared (IR) and other wireless media.
  • computer readable media can include both storage media and communication media.
  • Computing device 900 can be implemented as a portion of a small-form factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that includes any of the above functions.
  • Computing device 900 can also be implemented as a personal computer including both laptop computer and non-laptop computer configurations.
  • the implementer may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
  • Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)

Abstract

A method and system for removing echo from super-wideband audio streams is disclosed. A splitting filter (600) receives input audio streams and splits the received audio streams into a first signal stream of, e.g., 0-8 kHz and a second signal stream of, e.g., 8-16 kHz. A non-linear processor (606) applies a single upper-band suppression factor to the second signal stream of one of the input audio streams to reduce echo.

Description

NON-LINEAR POST-PROCESSING FOR SUPER-WIDEBAND ACOUSTIC ECHO
CANCELLATION
Technical Field of the Invention
[0001] The present invention relates generally to a method and system for cancellation of echoes in telecommunication systems. It particularly relates to a method and system for removing echo from super-wideband audio streams.
Background of the Invention
[0002] Speech quality is an important factor for telephony system suppliers.
Customer demand makes it vital to strive for continuous improvements. An echo, which is a delayed version of what was originally transmitted, is regarded as a severe distraction to the speaker if the delay is long. For short round trip delays of less than approximately 20 ms, the speaker will not be able to distinguish the echo from the side tone in the handset. However, for long-distance communications, such as satellite communications, a remotely generated echo signal often has a substantial delay. Moreover, the speech and channel coding compulsory in digital radio communications systems and for telephony over the Internet protocol (IP telephony, for short) also result in significant delays which make the echoes generated a relatively short distance away clearly audible to the speaker. Hence, canceling the echo is a significant factor in maintaining speech quality.
[0003] An echo canceller typically includes a linear filtering part which essentially is an adaptive filter that tries to adapt to the echo path. In this way, a replica of the echo can be produced from the far-end signal and subtracted from the near-end signal, thereby canceling the echo.
[0004] The filter generating the echo replica may have a finite or infinite impulse response. Most commonly it is an adaptive, linear finite impulse response (FIR) filter with a number of delay lines and a corresponding number of coefficients, or filter delay taps. The coefficients are values, which when multiplied with delayed versions of the filter input signal, generate an estimate of the echo. The filter is adapted, i.e. updated, so that the coefficients converge to optimum values. A traditional way to cancel out the echo is to update a finite impulse response (FIR) filter using the normalized least mean square (NLMS) algorithm.
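For illustration, the NLMS update described above can be sketched as follows. This is a minimal sketch: the tap count, step size, regularization constant, and toy echo path are assumptions for the example, not values from the disclosure.

```python
import numpy as np

def nlms_step(w, x_buf, d, mu=0.5, eps=1e-8):
    """One NLMS update: estimate the echo from the far-end history x_buf,
    form the error against the near-end sample d, and adapt the taps w."""
    y = np.dot(w, x_buf)               # echo estimate from the FIR taps
    e = d - y                          # residual (error) sample
    norm = np.dot(x_buf, x_buf) + eps  # input power for normalization
    w = w + mu * e * x_buf / norm      # normalized LMS coefficient update
    return w, e

# Hypothetical toy echo path: the "room" just scales and delays the far-end.
rng = np.random.default_rng(0)
true_path = np.zeros(16)
true_path[3] = 0.5
x = rng.standard_normal(4000)
w = np.zeros(16)
errors = []
for n in range(16, len(x)):
    x_buf = x[n - 16:n][::-1]          # most recent sample first
    d = np.dot(true_path, x_buf)       # simulated near-end echo
    w, e = nlms_step(w, x_buf, d)
    errors.append(e)
# After convergence the residual echo is far smaller than at the start.
```

Dividing by the input power makes the effective step size insensitive to the far-end signal level, which is why NLMS is generally preferred over plain LMS for speech.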
[0005] Conventionally, the acoustic echo canceller (AEC) employs the linear filter as a first stage to model the system impulse response. An estimated echo signal is obtained by filtering the far-end signal. This estimated echo signal is then subtracted from the near-end signal to cancel the echo. A problem, however, is that some audible echo will generally remain in the residual error signal after this first stage. A second-stage post-processor needs to be applied to remove the residual echo.
[0006] The above-mentioned problem is altered when processing super-wideband (e.g., audio bandwidth higher than 8 kHz) streams, for two reasons: i) the complexity of an algorithm is increased due to the higher sampling rates involved, and ii) the quality demands for the upper-band (e.g., the portion higher than 8 kHz) processing are lower, due to decreased perceptual relevance. Super-wideband may refer to signals with a sampling rate above the wideband sampling rate, for example, 32 kHz (as compared to 8 kHz and 16 kHz for narrowband and wideband, respectively).
[0007] These factors present an opportunity to use a simple, reduced complexity algorithm to process the upper-band.
Summary of the Invention
[0008] This Summary introduces a selection of concepts in a simplified form in order to provide a basic understanding of some aspects of the present disclosure. This Summary is not an extensive overview of the disclosure, and is not intended to identify key or critical elements of the disclosure or to delineate the scope of the disclosure. This Summary merely presents some of the concepts of the disclosure as a prelude to the Detailed Description provided below.
[0009] According to an aspect of the present invention, a method for removing echo from audio streams is disclosed. The method includes receiving input audio streams, splitting the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies, and applying a single upper-band suppression factor to the second signal stream of one of the input audio streams to reduce echo.
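A minimal sketch of this aspect is given below, with the band split approximated by FFT half-spectrum masking. The patent's actual splitting filter and the choice of suppression factor are not specified by this example; all names and values are illustrative.

```python
import numpy as np

def split_bands(frame):
    """Split a full-band frame (e.g., sampled at 32 kHz) into a lower band
    (0-8 kHz) and an upper band (8-16 kHz) by zeroing half of the spectrum.
    Only an illustration; the disclosure uses a splitting filter (600)."""
    spec = np.fft.rfft(frame)
    half = len(spec) // 2
    low = spec.copy()
    low[half:] = 0                     # keep only 0-8 kHz content
    high = spec.copy()
    high[:half] = 0                    # keep only 8-16 kHz content
    return np.fft.irfft(low, len(frame)), np.fft.irfft(high, len(frame))

def suppress_upper_band(frame, upper_factor):
    """Apply one scalar suppression factor to the entire upper band and
    recombine - the reduced-complexity upper-band processing."""
    low, high = split_bands(frame)
    return low + upper_factor * high

frame = np.random.default_rng(1).standard_normal(256)
out = suppress_upper_band(frame, upper_factor=0.1)
low, high = split_bands(frame)
```

Because the two bands partition the spectrum, the split is exactly invertible: adding the two streams back together reproduces the original frame.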
[0010] According to a further aspect of the present invention, the first range of frequencies includes frequencies between 0-8 kHz and the second range of frequencies includes frequencies between 8-16 kHz.
[0011] According to another aspect of the present invention, the method includes computing the single upper-band suppression factor by averaging suppression factors from a range of frequency bands included in the first signal stream.
[0012] According to a further aspect of the present invention, the method includes computing the single upper-band suppression factor by averaging suppression factors from the 4-8 kHz frequency band included in the first signal stream.
[0013] According to yet another aspect of the present invention, the input audio streams include a far-end signal stream, a near-end signal stream, and an error signal stream output from a linear adaptive filter.
[0014] According to a further aspect of the present invention, the method includes computing a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal, computing a second coherence value by comparing correlations between the first signal stream of the near-end signal and first signal stream of the error signal, and applying the first and second coherence values to compute the suppression factors.
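The coherence computation can be illustrated as follows. This sketch estimates magnitude-squared coherence from recursively smoothed power spectra; the block length, smoothing factor, and test signals are assumptions. When the near-end signal is dominated by echo, its coherence with the far-end signal approaches one; for independent near-end speech it stays near zero.

```python
import numpy as np

def smoothed_psds(a, b, n_fft=128, alpha=0.9):
    """Recursively smoothed auto- and cross-power spectra over blocks."""
    Saa = np.zeros(n_fft // 2 + 1)
    Sbb = np.zeros(n_fft // 2 + 1)
    Sab = np.zeros(n_fft // 2 + 1, dtype=complex)
    for start in range(0, len(a) - n_fft + 1, n_fft):
        A = np.fft.rfft(a[start:start + n_fft])
        B = np.fft.rfft(b[start:start + n_fft])
        Saa = alpha * Saa + (1 - alpha) * np.abs(A) ** 2
        Sbb = alpha * Sbb + (1 - alpha) * np.abs(B) ** 2
        Sab = alpha * Sab + (1 - alpha) * A * np.conj(B)
    return Saa, Sbb, Sab

def coherence(a, b, n_fft=128):
    """Magnitude-squared coherence per frequency bin, values in [0, 1]."""
    Saa, Sbb, Sab = smoothed_psds(a, b, n_fft)
    return np.abs(Sab) ** 2 / (Saa * Sbb + 1e-12)

rng = np.random.default_rng(2)
far = rng.standard_normal(8192)
near_echo = 0.8 * far                    # near-end dominated by echo
near_speech = rng.standard_normal(8192)  # independent near-end speech
c_echo = coherence(far, near_echo)       # close to 1 in every bin
c_speech = coherence(far, near_speech)   # close to 0 in every bin
```

Per-bin suppression factors can then be derived from such coherence values, suppressing where far-end/near-end coherence is high.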
[0015] According to another aspect of the present invention, a system for removing echo from audio streams is disclosed. The system includes a splitting filter that receives input audio streams and splits the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies. The system also includes a non-linear processor that applies a single upper-band suppression factor to the second signal stream of one of the input audio streams to reduce echo.

[0016] According to a further aspect of the present invention, the non-linear processor computes the single upper-band suppression factor by averaging suppression factors from a range of frequency bands included in the first signal stream.
[0017] According to a further aspect of the present invention, the non-linear processor computes the single upper-band suppression factor by averaging suppression factors from the 4-8 kHz frequency band included in the first signal stream.
[0018] According to another aspect of the present invention, the non-linear processor is configured to: compute a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal; compute a second coherence value by comparing correlations between the first signal stream of the near-end signal and first signal stream of the error signal; and apply the first and second coherence values to compute the suppression factors.
[0019] According to a further aspect of the present invention, a computer-readable storage medium having stored thereon a computer executable program for removing echo from audio streams is disclosed. The computer program, when executed, causes a processor to execute the steps of: receiving input audio streams, splitting the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies, and applying a single upper-band suppression factor to the second signal stream of one of the input audio streams to reduce echo.
[0020] According to another aspect of the present invention, the computer program when executed causes the processor to further execute the step of computing the single upper- band suppression factor by averaging suppression factors from a range of frequency bands included in the first signal stream.
[0021] According to a further aspect of the present invention, the computer program when executed causes the processor to further execute the step of computing the single upper- band suppression factor by averaging suppression factors from the 4-8 kHz frequency band included in the first signal stream.
[0022] According to yet another aspect of the present invention, the computer program when executed causes the processor to further execute the steps of: computing a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal, computing a second coherence value by comparing correlations between the first signal stream of the near-end signal and first signal stream of the error signal, and applying the first and second coherence values to compute the suppression factors.
[0023] According to a further aspect of the present invention, a method for generating comfort noise for audio streams is disclosed. The method includes receiving input audio streams and splitting the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies, and applying a single upper-band noise estimate to generate comfort noise for the second signal stream of one of the input audio streams.
[0024] According to a further aspect of the present invention, the method includes computing the single upper-band noise estimate by averaging noise estimates from a range of frequency bands included in the first signal stream.
[0025] According to yet another aspect of the present invention, the method includes computing the single upper-band noise estimate by averaging noise estimates from the 4-8 kHz frequency band included in the first signal stream.
[0026] According to a further aspect of the present invention, the method includes computing the noise estimates by utilizing a minimum statistic method on the near-end signal stream.
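A simplified illustration of a minimum-statistic noise estimate follows. The full minimum-statistics method also bias-compensates the windowed minimum; the smoothing factor and window length here are assumptions.

```python
import numpy as np
from collections import deque

def minimum_statistics_noise(power_frames, alpha=0.9, window=50):
    """Simplified minimum-statistics noise floor: smooth the per-bin power
    recursively, then take the minimum over a sliding window of frames.
    Speech raises the smoothed power only briefly, so the windowed minimum
    tracks the underlying noise floor."""
    smoothed = np.zeros_like(power_frames[0])
    history = deque(maxlen=window)
    for p in power_frames:
        smoothed = alpha * smoothed + (1 - alpha) * p
        history.append(smoothed.copy())
    return np.min(np.stack(history), axis=0)

rng = np.random.default_rng(3)
n_bins, n_frames = 65, 200
noise_power = 0.5
# Per-bin noise power samples (chi-squared, mean = noise_power).
frames = noise_power * rng.chisquare(2, (n_frames, n_bins)) / 2
frames[160:170] += 20.0            # a burst of "speech" energy
estimate = minimum_statistics_noise(list(frames))
# The estimate stays near the noise floor despite the speech burst.
```

The windowed minimum ignores short bursts of speech energy, which is what makes this estimate usable during double-talk.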
[0027] According to yet another aspect of the present invention, the method includes generating comfort noise by utilizing the single upper-band noise estimate and a single upper-band suppression factor.
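One way such a comfort noise generator might be sketched: noise shaped by the noise-floor estimate is added in proportion to how strongly each component was suppressed. The complementary sqrt(1 - s^2) gain rule used here is an assumption for illustration, not the patent's formula.

```python
import numpy as np

def comfort_noise(noise_psd, suppression, n_fft=128, rng=None):
    """Generate one frame of comfort noise whose spectrum follows the
    noise-floor estimate. Where suppression is strong (factor near 0) the
    comfort noise is at full level, filling the hole left by echo removal;
    where the signal passes (factor near 1), little noise is added."""
    rng = rng or np.random.default_rng()
    phase = np.exp(2j * np.pi * rng.random(len(noise_psd)))  # random phase
    gain = np.sqrt(np.maximum(0.0, 1.0 - suppression ** 2))  # assumed rule
    spectrum = gain * np.sqrt(noise_psd) * phase
    return np.fft.irfft(spectrum, n_fft)

noise_psd = np.full(65, 0.5)  # flat noise-floor estimate (illustrative)
frame_full = comfort_noise(noise_psd, suppression=np.zeros(65))
frame_none = comfort_noise(noise_psd, suppression=np.ones(65))
```

With full suppression the generated frame carries noise at the estimated floor level; with no suppression the generator stays silent.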
[0028] According to a further aspect of the present invention, the method includes computing a first coherence value by comparing correlations between the first signal stream of the far-end signal and first signal stream of the near-end signal, computing a second coherence value by comparing correlations between the first signal stream of the near-end signal and first signal stream of the error signal, and applying the first and second coherence values to compute the upper-band suppression factor.

[0029] According to yet another aspect of the present invention, a system for generating comfort noise for audio streams is disclosed. The system includes a splitting filter that receives input audio streams and splits the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies, and a non-linear processor that applies a single upper-band noise estimate to generate comfort noise for the second signal stream of one of the input audio streams.
[0030] According to a further aspect of the present invention, the non-linear processor computes the single upper-band noise estimate by averaging noise estimates from a range of frequency bands included in the first signal stream.
[0031] According to another aspect of the present invention, the non-linear processor computes the single upper-band noise estimate by averaging noise estimates from the 4-8 kHz frequency band included in the first signal stream.
[0032] According to a further aspect of the present invention, the non-linear processor computes the noise estimates by utilizing a minimum statistic method on the near-end signal stream.
[0033] According to another aspect of the present invention, the non-linear processor generates comfort noise by utilizing the single upper-band noise estimate and a single upper-band suppression factor.
[0034] According to yet another aspect of the present invention, the non-linear processor is configured to: compute a first coherence value by comparing correlations between the first signal stream of the far-end signal and first signal stream of the near-end signal, compute a second coherence value by comparing correlations between the first signal stream of the near-end signal and first signal stream of the error signal, and apply the first and second coherence values to compute the upper-band suppression factor.
[0035] According to a further aspect of the present invention, a computer-readable storage medium having stored thereon a computer executable program for generating comfort noise for audio streams is disclosed. The computer program, when executed, causes a processor to execute the steps of: receiving input audio streams and splitting the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies, and applying a single upper-band noise estimate to generate comfort noise for the second signal stream of one of the input audio streams.
[0036] According to another aspect of the present invention, the computer program when executed causes the processor to further execute the step of computing the single upper- band noise estimate by averaging noise estimates from a range of frequency bands included in the first signal stream.
[0037] According to a further aspect of the present invention, the computer program when executed causes the processor to further execute the step of computing the single upper- band noise estimate by averaging noise estimates from the 4-8 kHz frequency band included in the first signal stream.
[0038] According to yet another aspect of the present invention, the computer program when executed causes the processor to further execute the step of computing the noise estimates by utilizing a minimum statistic method on the near-end signal stream.
[0039] According to a further aspect of the present invention, the computer program when executed causes the processor to further execute the step of generating comfort noise by utilizing the single upper-band noise estimate and a single upper-band suppression factor.
[0040] According to yet another aspect of the present invention, the computer program when executed causes the processor to further execute the steps of: computing a first coherence value by comparing correlations between the first signal stream of the far-end signal and first signal stream of the near-end signal, computing a second coherence value by comparing correlations between the first signal stream of the near-end signal and first signal stream of the error signal, and applying the first and second coherence values to compute the upper-band suppression factor.
Brief Description of the Drawings
[0041] The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments of the invention and together with the description, serve to explain the principles of the invention.
[0042] Fig. 1 is a block diagram of an acoustic echo canceller in accordance with an embodiment of the present invention.
[0043] Fig. 2 illustrates a more detailed block diagram describing the functions performed in the adaptive filter of Fig. 1 in accordance with an embodiment of the present invention.
[0044] Fig. 3 illustrates computational stages of the adaptive filter of Fig. 2 in accordance with an embodiment of the present invention.
[0045] Fig. 4 illustrates a more detailed block diagram describing block Gm in Fig. 3 in accordance with an embodiment of the present invention.
[0046] Fig. 5 illustrates a flow diagram describing computational stages of the nonlinear processor of Fig. 1 in accordance with an embodiment of the present invention.
[0047] Fig. 6 is a block diagram of an acoustic echo canceller for processing lower-band and upper-band signal streams in accordance with an embodiment of the present invention.
[0048] Fig. 7 is a flow diagram illustrating operations performed by the acoustic echo canceller according to an embodiment of the present invention illustrated in Fig. 6.
[0049] Fig. 8 is a flow diagram illustrating operations performed by the acoustic echo canceller according to a further embodiment of the present invention illustrated in Fig. 6.
[0050] Fig. 9 is a block diagram illustrating an exemplary computing device that is arranged for acoustic echo cancellation in accordance with an embodiment of the present invention.
Detailed Description
[0051] The following detailed description of the embodiments of the invention refers to the accompanying drawings. The following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents thereof.

[0052] Fig. 1 illustrates an acoustic echo canceller (AEC) 100 in accordance with an exemplary embodiment of the present invention.
[0053] The AEC 100 is designed as a high quality echo canceller for voice and audio communication over packet switched networks. More specifically, the AEC 100 is designed to cancel acoustic echo 130 that emerges due to the reflection of sound waves of a render device 10 from boundary surfaces and other objects back to a near-end capture device 20. The echo 130 may also exist due to the direct path from render device 10 to the capture device 20.
[0054] Render device 10 may be any of a variety of audio output devices, including a loudspeaker or group of loudspeakers configured to output sound from one or more channels. Capture device 20 may be any of a variety of audio input devices, such as one or more microphones configured to capture sound and generate input signals. For example, render device 10 and capture device 20 may be hardware devices internal to a computer system, or external peripheral devices connected to a computer system via wired and/or wireless connections. In some arrangements, render device 10 and capture device 20 may be components of a single device, such as a microphone, telephone handset, etc. Additionally, one or both of render device 10 and capture device 20 may include analog-to-digital and/or digital-to-analog transformation functionalities.
[0055] With reference to Fig. 1, the echo canceller 100 includes a linear filter 102, a nonlinear processor (NLP) 104, a far-end buffer 106, and a blocking buffer 108. A far-end signal 110 generated at the far-end and transmitted to the near-end is input to the filter 102 via the far-end buffer (FEBuf) 106 and the blocking buffer 108. The far-end signal 110 is also input to a play-out buffer 112 located near the render device 10. The output signal 116 of the far-end buffer 106 is input to the blocking buffer 108 and the output signal 118 of the blocking buffer 108 is input to the linear filter 102.
[0056] The far-end buffer 106 is configured to compensate for and synchronize to buffering at sound devices (not shown). The blocking buffer 108 is configured to block the signal samples for a frequency-domain transformation to be performed by the linear filter 102 and the NLP 104.
[0057] The linear filter 102 is an adaptive filter. Linear filter 102 operates in the frequency domain through, e.g., the Discrete Fourier Transform (DFT). The DFT may be implemented as a Fast Fourier Transform (FFT).
[0058] The other input to the filter 102 is the near-end signal (Sin) 122 from the capture device 20 via a recording buffer 114. The near-end signal 122 includes near-end speech 120 and the echo 130. The NLP 104 receives three signals as input: (1) the far-end signal via the far-end buffer 106 and blocking buffer 108, (2) the near-end signal via the recording buffer 114, and (3) the output signal 124 of the filter 102. The output signal 124 is also referred to as an error signal. In a case when the NLP 104 attenuates the output signal 124, a comfort noise signal is generated, as will be explained later.
[0059] According to an exemplary embodiment, each frame is divided into 64-sample blocks. Since this choice of block size does not produce an integer number of blocks per frame, the signal needs to be buffered before the processing. This buffering is handled by the blocking buffer 108 as discussed above. Both the filter 102 and the NLP 104 operate in the frequency domain and utilize DFTs of 128 samples.
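The role of the blocking buffer can be illustrated with a small sketch. The frame and block sizes follow the 64-sample example above; the class name is hypothetical.

```python
import numpy as np

class BlockingBuffer:
    """Accumulate frames of arbitrary length and emit fixed 64-sample
    blocks, carrying any remainder over to the next frame - a sketch of
    the blocking buffer (108) that bridges frame and block sizes."""
    def __init__(self, block_size=64):
        self.block_size = block_size
        self.pending = np.zeros(0)

    def push(self, frame):
        self.pending = np.concatenate([self.pending, frame])
        blocks = []
        while len(self.pending) >= self.block_size:
            blocks.append(self.pending[:self.block_size])
            self.pending = self.pending[self.block_size:]
        return blocks

buf = BlockingBuffer()
# A 10 ms frame at 16 kHz is 160 samples, i.e. 2.5 blocks of 64: the first
# frame yields two blocks and leaves 32 samples pending; together with the
# next frame those pending samples complete a third block.
blocks = buf.push(np.arange(160.0))   # two blocks, 32 samples pending
blocks2 = buf.push(np.arange(160.0))  # 32 + 160 = 192 samples: three blocks
```

The remainder carried between frames is exactly why a non-integer blocks-per-frame ratio requires buffering before the transform.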
[0060] The performance of the AEC 100 is influenced by the operation of the play-out buffer 112 and the recording buffer 114 at the sound device. The AEC 100 may not start unless the combined size of the play-out buffer 112 and the recording buffer 114 is reasonably stable within a predetermined limit. For example, if the combined size is stable within +/- 8 ms of the first started size, for four consecutive frames, the AEC 100 is started by filling up the internal far-end buffer 106.
[0061] Fig. 2 illustrates a more detailed block diagram describing the functions performed in the filter 102 of Fig. 1. Fig. 3 illustrates computational stages of the filter 102 in accordance with an embodiment of the present invention.
[0062] With reference to Fig. 2, the adaptive filter 102 includes a first transform section 200, an inverse transform section 202, a second transform section 204, and an impulse response section (H) 206. The far-end signal x(n) 210 to be rendered at the render device 10 is input to the first transform section 200. The output signal X(n, k) of the first transform section 200 is input to the impulse response section 206. The output signal Y(n, k) is input to the inverse transform section 202, which outputs the signal y(n). This signal y(n) is then subtracted from the near-end signal d(n) 220 captured by the capture device 20 to output an error signal e(n) 230 as the output of the linear stage of the filter 102. The error signal 230 is also input to the second transform section 204, the output of which, E(n, k), is also input to the impulse response section 206.
[0063] The above-mentioned adaptive filtering approach relates to an implementation of a standard blocked time-domain Least Mean Square (LMS) algorithm. According to an embodiment of the invention, the complexity reduction is due to the filtering and the correlations being performed in the frequency domain, where time-domain convolution is replaced by multiplication. The error is formed in the time domain and is transformed to the frequency domain for updating the filter 102 as illustrated in Fig. 2.
[0064] There is a signal delay in the system due to the transform blocking. To reduce this delay, the filter 102 is partitioned into smaller segments, and by overlap-save processing the overall delay is kept to the segment length. This method is referred to as the partitioned block frequency domain method or the multi-delay partitioned block frequency adaptive filter. For simplicity, it is referred to as FLMS.
[0065] The operation of the FLMS method is illustrated in Fig. 3. Fig. 4 illustrates a more detailed block diagram describing block Gm in the FLMS method of Fig. 3 in accordance with an embodiment of the present invention.
[0066] With a total filter length L = M · N partitioned in blocks of N samples, and with $\mathbf{F}$ the 2N × 2N Discrete Fourier Transform (DFT) matrix, the time domain impulse response of the filter 102, w(n), n = 0, 1, ..., L - 1, can be expressed in the frequency domain as a collection of partitioned filters

$$\mathbf{W}_m(k) = \mathbf{F}\begin{bmatrix}\mathbf{I}_N \\ \mathbf{0}_N\end{bmatrix}\mathbf{w}_m(k), \qquad m = 0, 1, \ldots, M-1, \tag{1}$$

where $\mathbf{w}_m(k) = [w_{mN} \ \ldots \ w_{(m+1)N-1}]^T$, $\mathbf{I}_N$ is an N × N identity matrix, and $\mathbf{0}_N$ is an N × N zero matrix. This means that the time domain vector is appended with N zeros before the Fourier transform.

[0067] The time domain filter coefficients, w(n), are not utilized in the algorithm; equation (1) is presented to establish the relation between the time- and frequency-domain coefficients.
[0068] As illustrated in Fig. 3, the far-end samples, x(n) 310, are blocked into vectors of 2N samples, i.e. two blocks, at step S312,

$$\mathbf{x}(k-m) = [x((k-m-2)N) \ \ldots \ x((k-m)N-1)]^T,$$

and transformed into a sequence of DFT vectors at step S314,

$$\mathbf{X}(k-m) = \operatorname{diag}(\mathbf{F}\,\mathbf{x}(k-m)).$$

[0069] This is implemented as a table of delayed DFT vectors, since the diagonal matrix also can be expressed as $\mathbf{X}(k-m) = \mathbf{D}^m\mathbf{X}(k)$, where $\mathbf{D}$ is a delay operator. For each delayed block, filtering is performed as the multiplication of the diagonal matrix $\mathbf{X}(k-m)$ with a filter partition

$$\mathbf{Y}_m(k) = \mathbf{X}(k-m)\,\mathbf{W}_m(k), \qquad m = 0, 1, \ldots, M-1.$$
[0070] The estimated echo signal is then obtained as the N last coefficients of the inverse transformed sum of the filter products performed at step S320, from which the first block is discarded at step S322. The estimated echo signal is represented as

$$\mathbf{y}(k) = [y((k-1)N) \ \ldots \ y(kN-1)]^T = [\mathbf{0}_N \ \mathbf{I}_N]\,\mathbf{F}^{-1}\sum_{m=0}^{M-1}\mathbf{Y}_m(k).$$

[0071] The error is then formed in the time domain as

$$\mathbf{e}(k) = \mathbf{d}(k) - \mathbf{y}(k),$$

and this is also the output of the filter 102 of the AEC 100 as shown in Fig. 1. To adjust the filter coefficients, N zeros are inserted at step S316 into the error vector, and the augmented vector is transformed at step S318 as

$$\mathbf{E}(k) = \mathbf{F}\begin{bmatrix}\mathbf{0}_N \\ \mathbf{e}(k)\end{bmatrix}.$$
[0072] Fig. 4 illustrates a more detailed block diagram describing block Gm in Fig. 3 in accordance with an embodiment of the present invention where the filter coefficient update can be expressed as
W, ■l} = Wm(^) + F j I v ON ¥~l μϋ X%k - m}B(k). with a stepsize μ0 = 0.5 and where B(k), as shown in Fig. 4, is a modified error vector. The modification includes a power normalization followed by a magnitude limiter 410. The normalized error vector, as also shown in Fig. 4, is
A(k) = Q(k) E(k),
where

Q(k) = diag([1/p_0 ... 1/p_{2N−1}])

is a diagonal step size matrix controlling the adjustment of each frequency component using power estimates

p_j(k) = λ_p p_j(k − 1) + (1 − λ_p)|X_j|²,  j = 0, 1, ..., 2N − 1,

recursively calculated with a forgetting factor λ_p = 0.9 and individual DFT coefficients X_j = {X(k)}_{j,j}. A(k) is input to the magnitude limiter 410. The component magnitudes are then limited to a constant maximum magnitude, A_0 = 1.5 × 10^{-6}, into the vector B(k) with components
B_j(k) = A_j(k) if |A_j(k)| ≤ A_0, and B_j(k) = A_0 A_j(k)/|A_j(k)| otherwise.
[0073] As illustrated in Fig. 4, the diagonal matrix X(k-m) is conjugated by the conjugate unit 420 which is then multiplied with vector B(k) prior to performing an inverse DFT transform by the Inverse Discrete Fourier Transform (IDFT) unit 430. Then the discard last block unit 440 discards the last block. After discarding the last block, a zero block is appended by the append zero block unit 450 prior to performing a DFT by the DFT unit 460. Then, a block delay is introduced by the delay unit 480 which outputs Wm(k).
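The partition update of Fig. 4 can be sketched compactly. This is an interpretation of the steps named above (power normalization, magnitude limiting, and the gradient constraint of discarding the last time-domain block); variable names are illustrative, and driving the power estimate with the same delayed far-end vector is a simplification made for brevity.

```python
import numpy as np

def update_partition(w_m, x_m, e_k, p, mu0=0.5, lam_p=0.9, a0=1.5e-6):
    """One update of filter partition W_m(k) (length-2N complex vector).

    x_m: delayed far-end DFT vector X(k - m); e_k: error DFT vector E(k);
    p:   per-bin power estimates p_j(k - 1).
    Returns (W_m(k + 1), p_j(k)).
    """
    two_n = w_m.size
    n = two_n // 2
    p = lam_p * p + (1 - lam_p) * np.abs(x_m) ** 2    # power estimates p_j(k)
    a = e_k / p                                       # normalized error A(k)
    mag = np.abs(a)
    b = np.where(mag > a0, a0 * a / np.maximum(mag, 1e-30), a)  # limiter -> B(k)
    grad = np.fft.ifft(np.conj(x_m) * b)              # X*(k - m) B(k), to time domain
    grad[n:] = 0.0     # discard last block, append zero block (gradient constraint)
    return w_m + mu0 * np.fft.fft(grad), p
```

A quick invariant: with a zero error vector the coefficients must be unchanged, while the power estimates still decay toward the current far-end power.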
[0074] Fig. 5 illustrates a flow diagram describing computational processes of the NLP 104 of Fig. 1 in accordance with an embodiment of the present invention.
[0075] The NLP 104 of the AEC 100 accepts three signals as input: i) the far-end signal x(n) 110 to be rendered by the render device 10, ii) the near-end signal d(n) 122 captured by the capture device 20, and iii) the output error signal e(n) 124 of the linear stage performed at the filter 102. The error signal e(n) 124 typically contains residual echo that should be removed for good performance. The objective of the NLP 104 is to remove this residual echo.
[0076] The first step is to transform all three input signals to the frequency domain. At step S501, the far-end signal 110 is transformed to the frequency domain. At step S501', the near-end signal 122 is transformed to the frequency domain, and at step S501'', the error signal 124 is transformed to the frequency domain. The NLP 104 is block-based and shares the block length N of the linear stage, but uses an overlap-add method rather than overlap-save: consecutive blocks are concatenated, windowed and transformed. By defining ∘ as the element-wise product operator, the kth transformed block is expressed as
X_k = F (w ∘ [x_{k−1}^T x_k^T]^T),
where F is the 2N DFT matrix as before, x_k is a length N time-domain sample column vector, and w is a length 2N square-root Hanning window column vector with entries
w(n) = sin(πn/2N),  n = 0, 1, ..., 2N − 1.
[0077] The window is chosen such that the overlapping segments satisfy w²(n) + w²(n − N) = 1, n = N, N + 1, ..., 2N − 1, to provide perfect reconstruction. According to an embodiment of the invention, the length 2N DFT vectors are retained. Preferably, however, the redundant N − 1 complex coefficients are discarded.
[0078] X_k, D_k and E_k refer to the frequency-domain representations of the kth far-end, near-end and error blocks, respectively.
[0079] According to a further embodiment of the invention, echo suppression is achieved by multiplying each frequency band of the error signal e(n) 124 with a suppression factor between 0 and 1. According to a preferred embodiment, each band corresponds to an individual DFT coefficient. In general, however, each band may correspond to an arbitrary range of frequencies. Comfort noise is added and after undergoing an inverse FFT, the suppressed signal is windowed, and overlapped and added with the previous block to obtain the output.
[0080] For analysis, the power spectral density (PSD) of each signal is obtained. At step S503, the PSD of the far-end signal x(n) 110 is computed. At step S503', the PSD of the near-end signal d(n) 122 is computed, and at step S503'', the PSD of the error signal e(n) 124 is computed. The PSDs of the far-end signal 110, near-end signal 122, and the error signal 124 are represented by S_x, S_d, and S_e, respectively.
[0081] In addition, the complex-valued cross-PSDs between i) the far-end signal x(n) 110 and near-end signal d(n) 122, and ii) the near-end signal d(n) 122 and error signal e(n) 124 are also obtained. At step S504, the complex-valued cross-PSD between the far-end signal 110 and the near-end signal 122 is computed, and at step S504', the complex-valued cross-PSD between the near-end signal 122 and the error signal 124 is computed. The complex-valued cross-PSD of the far-end signal 110 and near-end signal 122 is represented as S_xd. The complex-valued cross-PSD of the near-end signal 122 and error signal 124 is represented as S_de. The PSDs are exponentially smoothed to avoid sudden erroneous shifts in echo suppression. The PSDs are given by
S_{XY,k} = λ_s S_{XY,k−1} + (1 − λ_s) X_k ∘ Y_k*,  k > 0,  S_{XY,0} = X_0 ∘ Y_0*,

where the "*" here represents the complex conjugate, and where the exponential smoothing factor is given by λ_s = 0.9 if f_s = 8000, and 0.93 otherwise.
[0082] Note that X = Y for the "auto" PSDs, which are therefore real-valued, while the cross-PSDs are complex-valued.
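The recursive smoothing above amounts to a one-pole filter applied per DFT bin. A minimal sketch (function name illustrative):

```python
import numpy as np

def smooth_psd(s_prev, x_k, y_k, fs=16000):
    """Exponentially smoothed (cross-)PSD per DFT bin.

    S_k = lam * S_{k-1} + (1 - lam) * X_k o conj(Y_k).
    Pass y_k = x_k for an auto-PSD, which is then real-valued
    up to rounding.
    """
    lam = 0.9 if fs == 8000 else 0.93
    return lam * s_prev + (1 - lam) * x_k * np.conj(y_k)
```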
[0083] Rather than using the current input far-end block, an old block is selected at step S505 to best synchronize it with the corresponding echo in the near-end. The index of the partition, m, with maximum energy in the linear filter is chosen as follows:

d = arg max_m ||W_m||².
[0084] This estimated delay index is used to select the best block at step S507 for use in the far-end PSDs. Additionally, the far-end auto-PSD is thresholded at step S509 in order to avoid numerical instability as follows:
S_{xx} = max(S_{xx}, δ_0),  δ_0 = 15.
[0085] It is sometimes the case that the linear filter 102 diverges from a good echo path estimate. This tends to result in a highly distorted error signal, which, although still useful for analysis, should not be used for output. According to an embodiment of the invention, divergence may be detected fairly easily, as it usually adds rather than removes energy from the near-end signal d(n) 122. The divergence state determined at step S511 is utilized to either select (S512) E_k or D_k as follows: If ||S_{E_k E_k}||_1 > ||S_{D_k D_k}||_1, then the "diverge" state is entered, in which the effect of the linear stage is reversed by setting E_k = D_k. The diverge state is left if σ_0 ||S_{E_k E_k}||_1 < ||S_{D_k D_k}||_1, σ_0 = 1.05. Furthermore, if divergence is very high, such as

||S_{E_k E_k}||_1 > σ_1 ||S_{D_k D_k}||_1,  σ_1 = 19.95,

the linear filter 102 resets to its initial state

W_m(k) = 0_N,  m = 0, 1, ..., M − 1.
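The divergence logic reduces to a small state update on the l1 norms of the error and near-end auto-PSDs. A sketch (the exact state handling is an assumption consistent with the description above):

```python
import numpy as np

def divergence_state(s_ee, s_dd, diverged, sigma0=1.05, sigma1=19.95):
    """Update the 'diverge' state; returns (diverged, reset_filter).

    diverged=True means the NLP should use D_k in place of E_k;
    reset_filter=True means the linear filter partitions are zeroed.
    """
    e = np.sum(np.abs(s_ee))          # ||S_EE||_1
    d = np.sum(np.abs(s_dd))          # ||S_DD||_1
    if e > d:                         # linear stage added energy
        diverged = True
    elif sigma0 * e < d:              # safely below again: leave the state
        diverged = False
    return diverged, e > sigma1 * d   # drastic divergence -> reset
```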
The PSDs are used to compute the coherence measures for each frequency band between i) the far-end signal 110 and near-end signal 122 at step S513 as follows:
c_xd = (S_xd ∘ S_xd*) / (S_xx ∘ S_dd),
and ii) the near-end signal 122 and error signal 124 at step S515 as follows:
c_de = (S_de ∘ S_de*) / (S_dd ∘ S_ee),
where the "*" here again represents the complex conjugate.
[0086] Denote a coherence vector entry in position n as c(n). Coherence is a frequency-domain analog to time-domain correlation. It is a measure of similarity with 0 ≤ c(n) ≤ 1, where a higher coherence corresponds to more similarity.
[0087] The primary effect of the NLP 104 is achieved through directly suppressing the error signal 124 with the coherence measures. Generally speaking, the output is given by
Y_k = E_k ∘ c_de.
Under the assumption that the linear stage is working properly, c_de(n) ≈ 1 when no echo has been removed, allowing the error to pass through unchanged. In the opposite case of the linear stage having removed echo, 1 ≫ c_de(n) ≥ 0, resulting in a suppression of the error, ideally removing any residual echo remaining after the linear filtering by the filter 102 at the linear stage.
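The per-band coherence and its use as a suppression gain can be sketched as follows (the small epsilon guard against division by zero is an addition for the sketch, not part of the description):

```python
import numpy as np

def coherence(s_xy, s_xx, s_yy, eps=1e-10):
    """Magnitude-squared coherence per frequency band, in [0, 1]."""
    return (np.abs(s_xy) ** 2) / (s_xx.real * s_yy.real + eps)

def apply_suppression(e_k, c_de):
    """Primary NLP effect: Y_k = E_k o c_de."""
    return e_k * c_de
```

For a fully correlated pair of signals the coherence is 1 in every band, so the error passes through unchanged; a coherence of 0 mutes the band entirely.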
[0088] According to an embodiment of the invention, c_xd is considered to increase robustness, as described below, though c_de tends to be more useful in practice. Contrary to c_de, c_xd is relatively high when there is echo 130, and low otherwise. To have the two measures in the same "domain", a modified coherence is defined as follows: c̃_xd = 1 − c_xd.
[0089] To achieve high AEC performance, it is preferred that the echo 130 is suppressed while simultaneous near-end speech 120 is allowed to pass through. The NLP 104 is configured to achieve this because the coherence is calculated independently for each frequency band. Thus, bands containing echo are fully or partially suppressed, while bands free of echo are not affected.
[0090] According to an embodiment of the invention, several data analysis methods are used to tweak the coherence before it is applied as a suppression factor, s. First, the average coherence across a set of preferred bands is computed at step S517 for c_de, and at step S517' for c̃_xd, as

c̄ = (1/(n_1 − n_0 + 1)) Σ_{n=n_0}^{n_1} c(n),  {n_0, n_1} = ⌊{500, 3500} · 2N/f_s⌋,
where fs is the sampling frequency, with fs = 16000 Hz in super-wideband due to the splitting. The preferred bands were chosen from frequency regions most likely to be accurate across a range of scenarios.
[0091] At step S518, the system either selects c̄_de or c̄_xd. According to an exemplary embodiment, c̄_xd is tracked over time to determine the broad state of the system at step S521. The purpose of this is to avoid suppression when the echo path is close to zero (e.g. during a call with a headset). First, a thresholded minimum of c̄_xd is computed at step S519 as follows:
c̄_xd,k^min = c̄_xd,k if c̄_xd,k < c̄_xd,k−1^min, and c̄_xd,k^min = min(c̄_xd,k−1^min + μ_c, 1) otherwise, for k > 0, with c̄_xd,0^min = 1,

with a step-size μ_c = 0.0006 m_B and factor m_B given by m_B = 1 if f_s = 8000, and 2 otherwise.
[0092] This is used to construct two decision variables: u_{c,k}, which is set based on a threshold test on c̄_xd,k and its tracked minimum (k > 0, u_{c,0} = 0), and

u_{e,k} = 0 if c̄_xd,k^min = 1 or u_{c,k} = 1, and 1 otherwise, k > 0, u_{e,0} = 0.
[0093] The system is considered in the "coherent state" when uc = 1 and in the "echo state" when ue = 1. In the echo state, the system may contain echo and otherwise does not contain echo. The echo state is provided through an interface for potential use by other audio processing components.
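The minimum tracking and the echo-state flag above can be sketched as follows; the sampling-rate factor m_B = 1 at 8 kHz and 2 otherwise is an assumption, as is the exact decision logic, which the source text leaves partially unspecified.

```python
def track_min(c_xd_avg, c_min_prev, fs=16000):
    """Thresholded minimum of the averaged modified coherence.

    Falls immediately to a new minimum, but can only creep back up
    toward 1 by mu_c per block.
    """
    m_b = 1 if fs == 8000 else 2          # assumed sampling-rate factor
    mu_c = 0.0006 * m_b
    if c_xd_avg < c_min_prev:
        return c_xd_avg
    return min(c_min_prev + mu_c, 1.0)

def echo_state(c_min, u_coherent):
    """u_e = 0 when the tracked minimum is 1 (echo path ~ zero) or the
    system is in the coherent state; 1 otherwise."""
    return 0 if c_min == 1.0 or u_coherent else 1
```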
[0094] While in the echo state, the suppression factor s is computed at step S520 by selecting the minimum of c_de and c̃_xd in each band as s(n) = min(c_de(n), c̃_xd(n)).
[0095] Two overall suppression factors are computed at steps S533 and S527 from order statistics across the preferred bands:

{s_h, s_l} = {s̃(n_h), s̃(n_l)},  {n_h, n_l} = ⌊n_0 + {0.5, 0.75}(n_1 − n_0 + 1)⌋,

where s̃ denotes the suppression factors sorted across the preferred bands.
[0096] This approach of selecting suppression factors is more robust to outliers than the average, and allows tuning through the exact selection of the order statistic position.
[0097] While in the "no echo state" (i.e. u_c = 0), suppression is limited by selecting suppression factors as follows at steps S520, S524 and S518:

s = c̃_xd, with the overall factors likewise taken from c̃_xd.

[0098] Across most scenarios, there is a typical suppression level required to reasonably remove all residual echo. This is considered to be the target suppression, s_t. A scalar "overdrive" is applied to s to weight the bands towards s_t. This improves performance in more difficult cases where the coherence measures are not accurate enough by themselves. The minimum s_l level is computed at step S527 and tracked at step S529 over time as

s_l,k^min = s_l,k if s_l,k < s_l,k−1^min and s_l,k < 0.6, and s_l,k^min = min(s_l,k−1^min + μ_s, 1) otherwise, for k > 0, with s_l,0^min = 1,

with a step-size μ_s = 0.0008 m_B.
[0099] When the minimum s_l,k^min is unchanged for two consecutive blocks, the overdrive γ is set at step S531 such that applying it to the minimum will result in the target suppression level:

γ_k = s_t / ln(s_l,k^min).

γ is smoothed and thresholded as
γ̃_k = λ_γ γ̃_{k−1} + (1 − λ_γ) max(γ_k, γ_0),  λ_γ = 0.99 if γ_k < γ̃_{k−1}, and 0.9 otherwise,

such that it will tend to move faster upwards than downwards. s_t and γ_0 are configurable to control the suppression aggressiveness; by default they are set to −11.5 and 2, respectively. Additionally, when
c̄_xd,k^min = 1,

the smoothed overdrive is reset to the minimum,

γ̃_k = γ_0.
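The overdrive computation can be sketched as below. The formula γ = s_t / ln(s_min) is a reconstruction from the garbled source, motivated by the overdrive later being applied as an exponent, so that s_min raised to γ reaches the target level; treat it as an interpretation rather than the definitive formula.

```python
import math

def smoothed_overdrive(s_min, gamma_prev, s_t=-11.5, gamma0=2.0):
    """Set gamma so that s_min raised to it hits the target suppression,
    floor it at gamma0, then smooth so it rises faster than it falls."""
    gamma = max(s_t / math.log(s_min + 1e-10), gamma0)
    lam = 0.99 if gamma < gamma_prev else 0.9
    return lam * gamma_prev + (1 - lam) * gamma
```

Note the asymmetric smoothing constant: a falling overdrive is tracked with λ = 0.99 (slowly), a rising one with λ = 0.9 (quickly), matching the "faster upwards than downwards" behavior described above.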
[00100] The s_h level is computed at step S533. Next, the final suppression factors s_γ are produced according to the following algorithm. At step S525, s is first weighted towards s_h according to a weighting vector v_sN with components 0 ≤ v_sN(n) ≤ 1:

s(n) = v_sN(n) s_h + [1 − v_sN(n)] s(n) if s(n) > s_h, and s(n) is left unchanged otherwise.

[00101] The weighting is selected to influence typically less accurate bands more heavily.
Applying the overdriving at step S535, the following is derived:

s_γ(n) = s(n)^{γ̃_k v_γN(n)},

where v_γN is another weighting vector fulfilling a similar purpose as v_sN. Overdriving through raising to a power serves to accentuate valleys in s_γ. Finally, at step S536 the frequency-domain output block is given by
Y_k = s_γ ∘ E_k + N_k,
where N_k is artificial noise, and at step S537, an inverse transform is performed to obtain the output signal y(n). The suppression removes near-end noise as well as echo, resulting in an audible change in the noise level. This issue is mitigated by adding generated "comfort noise" to replace the lost noise. The generation of N_k will be discussed in a later section below.
[00102] The overlap-add transformation is inverted to arrive at the length N time-domain output signal as

y'_k = w ∘ F^{-1} Y_k,  y_k = [I_N 0_N] y'_k + [0_N I_N] y'_{k−1},  k ≥ 0,  y'_{−1} = 0.
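The analysis/synthesis pair used in this section (square-root Hann window, 50% overlap, overlap-add) can be verified in a few lines; with no spectral modification the round trip reconstructs the input exactly, delayed by one block:

```python
import numpy as np

def sqrt_hann(N):
    """Length-2N square-root Hann window; satisfies w^2(n) + w^2(n-N) = 1."""
    n = np.arange(2 * N)
    return np.sin(np.pi * n / (2 * N))

def ola_roundtrip(blocks, N):
    """Analysis (window + FFT) followed by synthesis (IFFT + window +
    overlap-add); perfect reconstruction up to a one-block delay."""
    w = sqrt_hann(N)
    prev_tail = np.zeros(N)
    prev_block = np.zeros(N)
    out = []
    for x in blocks:
        Xk = np.fft.fft(w * np.concatenate([prev_block, x]))   # analysis
        y = w * np.fft.ifft(Xk).real                            # synthesis window
        out.append(y[:N] + prev_tail)                           # overlap-add
        prev_tail = y[N:]
        prev_block = x
    return np.concatenate(out)
```

Because each sample is windowed twice (analysis and synthesis), the perfect-reconstruction condition is w²(n) + w²(n − N) = 1, which the square-root Hann window satisfies identically.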
[00103] Fig. 6 is a block diagram of the AEC 100 for processing lower-band and upper-band signal streams in accordance with an embodiment of the present invention. According to an embodiment of the present invention, the AEC 100 includes a first splitting filter 600, a second splitting filter 602, a linear filter 604, a non-linear post-processor (NLP) 606, a comfort noise generator 608, and a joining filter 610.
[00104] Note that, according to an exemplary embodiment, the linear filter 604 performs the same functionalities as the filter 102 described above with reference to Figs. 2-4 in addition to the functionalities described herein with reference to Fig. 6. Similarly, the NLP 606 performs the same functionalities as the NLP 104 described above with reference to Fig. 5 in addition to the functionalities described herein with reference to Fig. 6.
[00105] According to an embodiment of the present invention, the first splitting filter 600, the second splitting filter 602, and the linear filter 604, in combination, comprise the linear stage.
[00106] In some embodiments of the disclosure, lower band and upper band signal streams may include components in frequency ranges other than the exemplary frequency ranges used herein. In various examples described below, the frequency ranges of 0-8 kHz and 8-16 kHz are used for the lower band and upper band signal streams, respectively. These are exemplary frequency ranges used for purposes of describing various features of the disclosure. These exemplary frequency ranges are not intended to limit the scope of the disclosure in any way. Instead, numerous other frequency ranges may be used for the lower band and/or upper band signal streams in addition to or instead of those used in the various examples described herein.
[00107] For example, in a scenario where audio is sampled at 48 kHz, a frequency range of 0-12 kHz may be used for the lower band signal stream and a frequency range of 12-24 kHz used for the upper band signal stream. In a different scenario, frequency ranges of 0-7 kHz and 7-20 kHz may be used for the lower band and upper band signal streams, respectively.
[00108] Additionally, the terms "narrowband," "wideband," and "super-wideband" are sometimes used herein to refer to audio signals with sampling rates at or above certain threshold sampling rates, or with sampling rates within certain ranges. These terms may also be used relative to one another in describing audio signals with particular sampling rates. For example, "super-wideband" is sometimes used herein to refer to audio signals with a sampling rate above a wideband sampling rate of, e.g., 16 kHz. As such, in describing various aspects of the disclosure, super-wideband is used to refer to audio signals sampled at a higher rate of, e.g., 32 kHz or 48 kHz. It should be understood that such use of the terms "narrowband," "wideband," and/or "super-wideband" is not in any way intended to limit the scope of the disclosure.
[00109] The near-end signal 120 is input to the first splitting filter 600 and the far-end signal 110 is input to the second splitting filter 602. As mentioned earlier, the super-wideband input signals are split into two, e.g., 8 kHz frequency bands before arriving at the AEC 100. The linear filter 604 processes the lower band. The upper band, however, is not used by the linear filter 604 at the linear stage. The NLP 104 is relied upon to control echo in the upper band. According to an embodiment of the present invention, the first splitting filter 600, the second splitting filter 602, and the linear filter 604 in combination comprise the linear stage.
[00110] The first splitting filter 600 splits the frequency bands of the near-end signal 120 into streams of a lower frequency band of, e.g., 0-8 kHz and an upper frequency band of, e.g., 8-16 kHz. The second splitting filter 602 splits the frequency bands of the far-end signal 110 in a manner such that only the lower frequency band of 0-8 kHz is input to the linear filter 604 and other frequency bands are discarded. The output from the first splitting filter 600 and the output from the second splitting filter 602 are input to the linear filter 604. As mentioned above, the linear filter 604 processes only the lower band since the upper band is not used by the linear filter 604. The NLP 606 receives the lower frequency band of 0-8 kHz output from the first splitting filter 600 and the second splitting filter 602 as well as the output of the linear filter 604.
[00111] The comfort noise generator 608 receives the output from the NLP 606, and the output of the comfort noise generator 608 is input to the joining filter 610. The 8-16 kHz frequency band of the near-end signal 120 is also input to the joining filter 610 after undergoing further processing by the NLP 606 and the comfort noise generator 608 according to the algorithms described below. The joining filter 610 then outputs the full band of, e.g., 0-16 kHz.
[00112] First, the following single upper-band suppression factor is computed by the NLP 606
s̄ = (1/(N − n_4)) Σ_{n=n_4}^{N−1} s_γ(n), where n_4 is the DFT index corresponding to 4 kHz,
using an average of the suppression factors over the 4 - 8 kHz band. This approach works reasonably well because there is relatively little speech energy above 8 kHz. Then, the following upper-band noise estimate is computed by the NLP 606
N̄ = (σ/(N − n_4)) Σ_{n=n_4}^{N−1} N_k(n),
using an average of the noise estimates over the 4 - 8 kHz band, but with a scaling factor of σ = 0.4 to account for the decrease in noise energy with increasing frequency. The upper-band noise estimate and the upper-band suppression factor may be used by the noise generator 608 to compute upper-band comfort noise as follows:
ñ_h,k = √N̄ u_{2N} √(1 − s̄²),
where u_{2N} is simply reused from the lower-band. The algorithm for generating the comfort noise will be discussed later.
[00113] Since a single upper-band suppression factor is applied, there is no need to transform the upper-band time-domain signal to the frequency domain, thereby outputting the following signal

y_h,k = s̄ d_h,k−1 + ñ_h,k,

where ñ_h,k is the upper-band comfort noise and d_h is the upper-band near-end signal. As previously discussed, the suppression is directly applied to d_h here because the linear stage is not used. The single block delay from d to y is required to synchronize with the lower-band.
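The upper-band handling therefore reduces to one scalar gain applied directly in the time domain. A sketch, where taking the upper half of the lower-band factor vector as the "4-8 kHz band" is an illustrative assumption about the bin layout:

```python
import numpy as np

def upper_band_block(s_gamma, d_hi_prev, n_hi):
    """Apply a single upper-band suppression factor in the time domain.

    s_gamma:   lower-band per-bin suppression factors (0-8 kHz).
    d_hi_prev: previous upper-band near-end block (one-block delay to
               stay synchronized with the lower band).
    n_hi:      upper-band comfort noise block.
    """
    s_hi = np.mean(s_gamma[s_gamma.size // 2:])   # average over 4-8 kHz
    return s_hi * d_hi_prev + n_hi
```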
[00114] To generate comfort noise, a reliable estimate of the true near-end background noise is required. According to an embodiment of the invention, a minimum statistics method is utilized to generate the comfort noise. More specifically, at every block a modified minimum of the near-end PSD is computed for each sub-band:

N_k(n) = λ_N (S_{D_k}(n) + μ[N_{k−1}(n) − S_{D_k}(n)]) if S_{D_k}(n) < N_{k−1}(n), and N_k(n) = λ_N N_{k−1}(n) otherwise, for k > 0,
[00115] with a step-size μ = 0.1 and ramp λ_N = 1.0002. N_0(n) is set such that it will be greater than any reasonable noise power. S_{D_k} is very similar to that discussed above, but is instead computed from the un-windowed DFT coefficients of the linear filter stage 102. White noise may be produced by generating a random complex vector, u_{2N}, on a unit circle. This is shaped to match N_k and weighted by the suppression levels to give the following comfort noise:
Ñ_k = √N_k ∘ u_{2N} ∘ √(1 − s_γ ∘ s_γ).
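The minimum-statistics floor and the noise shaping can be sketched as follows; where the source text is garbled, the placement of the ramp λ_N and the initial floor are assumptions made for the sketch.

```python
import numpy as np

def update_noise_floor(n_prev, s_d, mu=0.1, lam_n=1.0002):
    """Per-bin modified minimum of the near-end PSD: track downwards
    quickly (step mu), and let the floor ramp slowly upwards otherwise."""
    track = lam_n * (s_d + mu * (n_prev - s_d))
    ramp = lam_n * n_prev
    return np.where(s_d < n_prev, track, ramp)

def comfort_noise(n_floor, s_gamma, rng):
    """Unit-circle white noise shaped to the noise floor and weighted by
    sqrt(1 - s^2), so suppressed noise energy is replaced, not doubled."""
    u = np.exp(2j * np.pi * rng.random(n_floor.size))
    return np.sqrt(n_floor) * u * np.sqrt(np.maximum(1 - s_gamma ** 2, 0.0))
```

The sqrt(1 − s²) weighting makes the sum of passed-through noise power (scaled by s²) and injected comfort-noise power equal the estimated floor, which is why the noise level sounds steady across suppressed and unsuppressed bands.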
[00116] Fig. 7 is a flow diagram illustrating operations performed by the AEC 100 according to an embodiment of the present invention illustrated in Fig. 6. At step S701, super-wideband audio streams (e.g., audio streams with a sampling rate of 32 kHz, 48 kHz, etc.) are received at the splitting filter 600. At step S703, the splitting filter 600 splits the received super-wideband audio streams into a first signal stream and a second signal stream, wherein the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies.
[00117] According to an exemplary embodiment, the first signal stream may include frequency ranges of, e.g., 0-8 kHz and the second frequency signal stream may include frequency ranges of, e.g., 8-16 kHz. As mentioned earlier, these exemplary frequency ranges are not intended to limit the scope of the disclosure in any way.
[00118] At step S705, an average of the suppression factors over the first signal stream computed by the NLP 606 is used to derive a single upper-band suppression factor. Finally, at step S707, the single upper-band suppression factor is applied by the NLP 606 to the second signal stream to reduce echo from the near-end super-wideband audio streams.
[00119] Fig. 8 is a flow diagram illustrating operations performed by the AEC 100 according to a further embodiment of the present invention illustrated in Fig. 6. At step S801, audio streams are received at the splitting filter 600. At step S803, the splitting filter 600 splits the received audio streams into a first signal stream and a second signal stream, wherein the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies. Finally, at step S805, a single upper-band noise estimate is applied by the NLP 606 to generate comfort noise for the second signal stream of one of the input audio streams.
[00120] Fig. 9 is a block diagram illustrating an example computing device 900 that may be utilized to implement the AEC 100 including, but not limited to, the NLP 104, the filter 102, the far-end buffer 106, and the blocking buffer 108 as well as the first splitting filter 600, the second splitting filter 602, the linear filter 604, the NLP 606, the comfort noise generator 608 and the joining filter 610 in accordance with the present disclosure. The computing device 900 may also be utilized to implement the processes illustrated in Figs. 3, 5, and 7 in accordance with the present disclosure. In a very basic configuration 901, computing device 900 typically includes one or more processors 910 and system memory 920. A memory bus 930 can be used for communicating between the processor 910 and the system memory 920.
[00121] Depending on the desired configuration, processor 910 can be of any type including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. Processor 910 can include one or more levels of caching, such as a level one cache 911 and a level two cache 912, a processor core 913, and registers 914. The processor core 913 can include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof. A memory controller 915 can also be used with the processor 910, or in some implementations the memory controller 915 can be an internal part of the processor 910.
[00122] Depending on the desired configuration, the system memory 920 can be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.) or any combination thereof. System memory 920 typically includes an operating system 921, one or more applications 922, and program data 924. Application 922 includes an echo cancellation processing algorithm 923 that is arranged to remove echo from super-wideband audio streams. Program data 924 includes echo cancellation routing data 925 that is useful for removing echo from super-wideband audio streams, as will be further described below. In some embodiments, application 922 can be arranged to operate with program data 924 on an operating system 921 such that echo from super-wideband audio streams is removed. This described basic configuration is illustrated in Fig. 9 by those components within dashed line 901.
[00123] Computing device 900 can have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration 901 and any required devices and interfaces. For example, a bus/interface controller 940 can be used to facilitate communications between the basic configuration 901 and one or more data storage devices 950 via a storage interface bus 941. The data storage devices 950 can be removable storage devices 951, non-removable storage devices 952, or a combination thereof. Examples of removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few. Example computer storage media can include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
[00124] System memory 920, removable storage 951 and non-removable storage 952 are all examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 900. Any such computer storage media can be part of device 900.
[00125] Computing device 900 can also include an interface bus 942 for facilitating communication from various interface devices (e.g., output interfaces, peripheral interfaces, and communication interfaces) to the basic configuration 901 via the bus/interface controller 940. Example output devices 960 include a graphics processing unit 961 and an audio processing unit 962, which can be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 963. Example peripheral interfaces 970 include a serial interface controller 971 or a parallel interface controller 972, which can be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports 973. An example communication device 990 includes a network controller 991, which can be arranged to facilitate communications with one or more other computing devices 990 over a network communication via one or more communication ports 992. The communication connection is one example of a communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. A "modulated data signal" can be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared (IR) and other wireless media. The term computer readable media as used herein can include both storage media and communication media.
[00126] Computing device 900 can be implemented as a portion of a small-form factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that include any of the above functions. Computing device 900 can also be implemented as a personal computer including both laptop computer and non-laptop computer configurations.
[00127] There is little distinction left between hardware and software implementations of aspects of systems; the use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. There are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
[00128] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof.
[00129] In one embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of skill in the art in light of this disclosure.
[00130] In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
[00131] Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use engineering practices to integrate such described devices and/or processes into data processing systems. That is, at least a portion of the devices and/or processes described herein can be integrated into a data processing system via a reasonable amount of experimentation.
[00132] Those having skill in the art will recognize that a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities). A typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
[00133] With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
[00134] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.

Claims

What is claimed is:
1. A method for removing echo from audio streams, comprising:
receiving input audio streams;
splitting the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies; and
applying a single upper-band suppression factor to the second signal stream of one of the input audio streams to reduce echo.
2. The method of claim 1, wherein the first range of frequencies includes frequencies between 0 and 8 kHz.
3. The method of any of claims 1-2, wherein the second range of frequencies includes frequencies between 8 and 16 kHz.
4. The method of any of claims 1-3, further comprising computing the single upper-band suppression factor by averaging suppression factors from a range of frequency bands included in the first signal stream.
5. The method of any of claims 1-4, further comprising computing the single upper-band suppression factor by averaging suppression factors from the 4-8 kHz frequency band included in the first signal stream.
6. The method according to any of claims 1-5, wherein said input audio streams include a far-end signal stream, a near-end signal stream, and an error signal stream output from a linear adaptive filter.
7. The method according to claim 6, further comprising:
computing a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal;
computing a second coherence value by comparing correlations between the first signal stream of the near-end signal and the first signal stream of the error signal; and
applying the first and second coherence values to compute the suppression factors.
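A minimal sketch of the band split and single upper-band suppression factor of claims 1 and 4-5, assuming a 32 kHz super-wideband sampling rate, an FFT-mask splitter as a stand-in for the claimed splitting filter, and hypothetical function names (`split_bands`, `upper_band_factor`); the claims do not prescribe this implementation.

```python
import numpy as np

FS = 32000  # assumed super-wideband sampling rate (not fixed by the claims)

def split_bands(frame):
    """Split a time-domain frame into a 0-8 kHz stream and an 8-16 kHz
    stream via FFT masking -- a simplified stand-in for the claimed
    splitting filter."""
    spec = np.fft.rfft(frame)
    half = len(spec) // 2  # bin at 8 kHz when FS = 32 kHz
    low, high = spec.copy(), spec.copy()
    low[half:] = 0.0
    high[:half] = 0.0
    return np.fft.irfft(low, len(frame)), np.fft.irfft(high, len(frame))

def upper_band_factor(suppression, freqs, lo=4000.0, hi=8000.0):
    """Average the per-bin suppression factors over the 4-8 kHz part of
    the lower stream to obtain the single upper-band factor (claims 4-5)."""
    band = (freqs >= lo) & (freqs < hi)
    return float(np.mean(suppression[band]))
```

The last step of claim 1 then reduces to scaling the entire upper stream by this one factor, e.g. `high * upper_band_factor(suppression, freqs)`, instead of running a per-bin suppressor above 8 kHz.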
8. A system for removing echo from audio streams,
comprising:
a splitting filter that receives input audio streams and splits the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies; and
a non-linear processor that applies a single upper-band suppression factor to the second signal stream of one of the input audio streams to reduce echo.
9. The system of claim 8, wherein the first range of frequencies includes frequencies between 0 and 8 kHz.
10. The system of any of claims 8-9, wherein the second range of frequencies includes frequencies between 8 and 16 kHz.
11. The system of any of claims 8-10, said non-linear processor computing the single upper-band suppression factor by averaging suppression factors from a range of frequency bands included in the first signal stream.
12. The system of any of claims 8-11, said non-linear processor computing the single upper-band suppression factor by averaging suppression factors from the 4-8 kHz frequency band included in the first signal stream.
13. The system of any of claims 8-12, wherein said input audio streams include a far-end signal stream, a near-end signal stream, and an error signal stream output from a linear adaptive filter.
14. The system according to claim 13, wherein said non-linear processor is configured to:
compute a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal;
compute a second coherence value by comparing correlations between the first signal stream of the near-end signal and the first signal stream of the error signal; and
apply the first and second coherence values to compute the suppression factors.
15. A computer-readable storage medium having stored thereon a computer-executable program for removing echo from audio streams, wherein the computer program, when executed, causes a processor to execute the steps of:
receiving input audio streams;
splitting the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies; and
applying a single upper-band suppression factor to the second signal stream of one of the input audio streams to reduce echo.
16. The computer-readable storage medium of claim 15, wherein the first range of frequencies includes frequencies between 0 and 8 kHz.
17. The computer-readable storage medium of any of claims 15-16, wherein the second range of frequencies includes frequencies between 8 and 16 kHz.
18. The computer-readable storage medium of any of claims 15-17, wherein the computer program when executed causes the processor to further execute the step of computing the single upper-band suppression factor by averaging suppression factors from a range of frequency bands included in the first signal stream.
19. The computer-readable storage medium of any of claims 15-18, wherein the computer program when executed causes the processor to further execute the step of computing the single upper-band suppression factor by averaging suppression factors from the 4-8 kHz frequency band included in the first signal stream.
20. The computer-readable storage medium of any of claims 15-19, wherein said input audio streams include a far-end signal stream, a near-end signal stream, and an error signal stream output from a linear adaptive filter.
21. The computer-readable storage medium of claim 20, wherein the computer program when executed causes the processor to further execute the steps of:
computing a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal;
computing a second coherence value by comparing correlations between the first signal stream of the near-end signal and the first signal stream of the error signal; and
applying the first and second coherence values to compute the suppression factors.
22. A method for generating comfort noise for audio streams, comprising:
receiving input audio streams and splitting the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies; and
applying a single upper-band noise estimate to generate comfort noise for the second signal stream of one of the input audio streams.
23. The method of claim 22, wherein the first range of frequencies includes frequencies between 0 and 8 kHz.
24. The method of any of claims 22-23, wherein the second range of frequencies includes frequencies between 8 and 16 kHz.
25. The method of any of claims 22-24, further comprising computing the single upper-band noise estimate by averaging noise estimates from a range of frequency bands included in the first signal stream.
26. The method of any of claims 22-25, further comprising computing the single upper-band noise estimate by averaging noise estimates from the 4-8 kHz frequency band included in the first signal stream.
27. The method according to any of claims 22-26, wherein said input audio streams include a far-end signal stream, a near-end signal stream, and an error signal stream output from a linear adaptive filter.
28. The method according to claim 27, wherein the noise estimates are computed by utilizing a minimum statistics method on the near-end signal stream.
29. The method according to any of claims 22-28, further comprising generating comfort noise by utilizing the single upper-band noise estimate and a single upper-band suppression factor.
30. The method according to claim 29, further comprising:
computing a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal;
computing a second coherence value by comparing correlations between the first signal stream of the near-end signal and the first signal stream of the error signal; and
applying the first and second coherence values to compute the upper-band suppression factor.
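Claims 28-29 can be illustrated with a short sketch. The window-minimum tracker below is a bare-bones reading of the minimum-statistics method (a full implementation adds bias compensation and smoothing), and the `(1 - suppression)` mixing rule and function names are illustrative assumptions, not the patented procedure.

```python
import numpy as np

def min_statistics_noise(frame_powers, window=8):
    """Track the noise floor as the minimum frame power over a sliding
    window -- a simplified minimum-statistics estimate (claim 28)."""
    est = np.empty(len(frame_powers))
    for i in range(len(frame_powers)):
        est[i] = np.min(frame_powers[max(0, i - window + 1): i + 1])
    return est

def comfort_noise(noise_power, suppression, n, rng=None):
    """Generate upper-band comfort noise whose level combines the single
    upper-band noise estimate with the single upper-band suppression
    factor (claim 29); the (1 - suppression) scaling is an assumption."""
    if rng is None:
        rng = np.random.default_rng(0)
    return rng.standard_normal(n) * np.sqrt(noise_power * (1.0 - suppression))
```

Intuitively, the more the upper band is suppressed, the more synthetic noise is injected, so the far end hears a steady noise floor rather than gating artifacts.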
31. A system for generating comfort noise for audio streams, comprising:
a splitting filter that receives input audio streams and splits the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies; and
a non-linear processor that applies a single upper-band noise estimate to generate comfort noise for the second signal stream of one of the input audio streams.
32. The system of claim 31, wherein the first range of frequencies includes frequencies between 0 and 8 kHz.
33. The system of any of claims 31-32, wherein the second range of frequencies includes frequencies between 8 and 16 kHz.
34. The system of any of claims 31-33, said non-linear processor computing the single upper-band noise estimate by averaging noise estimates from a range of frequency bands included in the first signal stream.
35. The system of any of claims 31-34, said non-linear processor computing the single upper-band noise estimate by averaging noise estimates from the 4-8 kHz frequency band included in the first signal stream.
36. The system according to any of claims 31-35, wherein said input audio streams include a far-end signal stream, a near-end signal stream, and an error signal stream output from a linear adaptive filter.
37. The system according to claim 36, said non-linear processor computing the noise estimates by utilizing a minimum statistics method on the near-end signal stream.
38. The system according to any of claims 31-37, said non-linear processor generating comfort noise by utilizing the single upper-band noise estimate and a single upper-band suppression factor.
39. The system according to claim 38, wherein said non-linear processor is configured to:
compute a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal;
compute a second coherence value by comparing correlations between the first signal stream of the near-end signal and the first signal stream of the error signal; and
apply the first and second coherence values to compute the upper-band suppression factor.
40. A computer-readable storage medium having stored thereon a computer-executable program for generating comfort noise for audio streams, wherein the computer program, when executed, causes a processor to execute the steps of:
receiving input audio streams and splitting the received audio streams into a first signal stream and a second signal stream such that the first signal stream includes a first range of frequencies and the second signal stream includes a second range of frequencies higher than the first range of frequencies; and
applying a single upper-band noise estimate to generate comfort noise for the second signal stream of one of the input audio streams.
41. The computer-readable storage medium of claim 40, wherein the first range of frequencies includes frequencies between 0 and 8 kHz.
42. The computer-readable storage medium of any of claims 40-41, wherein the second range of frequencies includes frequencies between 8 and 16 kHz.
43. The computer-readable storage medium of any of claims 40-42, wherein the computer program when executed causes the processor to further execute the step of computing the single upper-band noise estimate by averaging noise estimates from a range of frequency bands included in the first signal stream.
44. The computer-readable storage medium of any of claims 40-43, wherein the computer program when executed causes the processor to further execute the step of computing the single upper-band noise estimate by averaging noise estimates from the 4-8 kHz frequency band included in the first signal stream.
45. The computer-readable storage medium of any of claims 40-44, wherein said input audio streams include a far-end signal stream, a near-end signal stream, and an error signal stream output from a linear adaptive filter.
46. The computer-readable storage medium of claim 45, wherein the computer program when executed causes the processor to further execute the step of computing the noise estimates by utilizing a minimum statistics method on the near-end signal stream.
47. The computer-readable storage medium of any of claims 40-46, wherein the computer program when executed causes the processor to further execute the step of generating comfort noise by utilizing the single upper-band noise estimate and a single upper-band suppression factor.
48. The computer-readable storage medium of claim 47, wherein the computer program when executed causes the processor to further execute the steps of:
computing a first coherence value by comparing correlations between the first signal stream of the far-end signal and the first signal stream of the near-end signal;
computing a second coherence value by comparing correlations between the first signal stream of the near-end signal and the first signal stream of the error signal; and
applying the first and second coherence values to compute the upper-band suppression factor.
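The "comparing correlations" step of claims 7, 14, 21, 30, 39 and 48 admits a magnitude-squared coherence reading. The sketch below uses a Welch-style averaged-periodogram estimator as an assumed concrete choice; the claims do not fix an estimator, and the function name `msc` is hypothetical.

```python
import numpy as np

def msc(x, y, nfft=128, hop=64):
    """Magnitude-squared coherence per frequency bin, estimated from
    Hann-windowed, averaged periodograms. Values near 1 indicate a
    strong linear relation (e.g. far-end vs. near-end during echo)."""
    win = np.hanning(nfft)
    sxx = syy = sxy = 0.0
    for start in range(0, len(x) - nfft + 1, hop):
        fx = np.fft.rfft(win * x[start:start + nfft])
        fy = np.fft.rfft(win * y[start:start + nfft])
        sxx = sxx + np.abs(fx) ** 2     # accumulate power spectrum of x
        syy = syy + np.abs(fy) ** 2     # accumulate power spectrum of y
        sxy = sxy + fx * np.conj(fy)    # accumulate cross spectrum
    return np.abs(sxy) ** 2 / (sxx * syy + 1e-12)
```

Per claim 7, one such coherence is taken between the lower-band far-end and near-end streams and another between the lower-band near-end and error streams; the per-bin suppression factors are then derived from those two values.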
PCT/US2011/036863 2011-05-17 2011-05-17 Non-linear post-processing for super-wideband acoustic echo cancellation WO2012158165A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP11721217.5A EP2710789A1 (en) 2011-05-17 2011-05-17 Non-linear post-processing for super-wideband acoustic echo cancellation
PCT/US2011/036863 WO2012158165A1 (en) 2011-05-17 2011-05-17 Non-linear post-processing for super-wideband acoustic echo cancellation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2011/036863 WO2012158165A1 (en) 2011-05-17 2011-05-17 Non-linear post-processing for super-wideband acoustic echo cancellation

Publications (1)

Publication Number Publication Date
WO2012158165A1 true WO2012158165A1 (en) 2012-11-22

Family

ID=44201814

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/036863 WO2012158165A1 (en) 2011-05-17 2011-05-17 Non-linear post-processing for super-wideband acoustic echo cancellation

Country Status (2)

Country Link
EP (1) EP2710789A1 (en)
WO (1) WO2012158165A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5305307A (en) * 1991-01-04 1994-04-19 Picturetel Corporation Adaptive acoustic echo canceller having means for reducing or eliminating echo in a plurality of signal bandwidths
US6865270B1 (en) * 2000-09-21 2005-03-08 Rane Corporation Echo cancellation method and apparatus
US20080281584A1 (en) * 2007-05-07 2008-11-13 Qnx Software Systems (Wavemakers), Inc. Fast acoustic cancellation


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020124363A1 (en) * 2018-12-18 2020-06-25 Intel Corporation Display-based audio splitting in media environments
US11474776B2 (en) 2018-12-18 2022-10-18 Intel Corporation Display-based audio splitting in media environments
CN111341336A (en) * 2020-03-16 2020-06-26 北京字节跳动网络技术有限公司 Echo cancellation method, device, terminal equipment and medium
CN111341336B (en) * 2020-03-16 2023-08-08 北京字节跳动网络技术有限公司 Echo cancellation method, device, terminal equipment and medium

Also Published As

Publication number Publication date
EP2710789A1 (en) 2014-03-26

Similar Documents

Publication Publication Date Title
EP2710787A1 (en) Non-linear post-processing for acoustic echo cancellation
WO2012158164A1 (en) Using echo cancellation information to limit gain control adaptation
JP5671147B2 (en) Echo suppression including modeling of late reverberation components
JP5450567B2 (en) Method and system for clear signal acquisition
KR100716377B1 (en) Digital adaptive filter and acoustic echo canceller using the same
US8023641B2 (en) Spectral domain, non-linear echo cancellation method in a hands-free device
US8488776B2 (en) Echo suppressing method and apparatus
JP2936101B2 (en) Digital echo canceller
US20140064476A1 (en) Systems and methods of echo &amp; noise cancellation in voice communication
US8073132B2 (en) Echo canceler and echo canceling program
CN109273019B (en) Method for double-talk detection for echo suppression and echo suppression
JP5223576B2 (en) Echo canceller, echo cancellation method and program
JP2012501152A (en) Method for determining updated filter coefficients of an adaptive filter adapted by an LMS algorithm with pre-whitening
WO2012158168A1 (en) Clock drift compensation method and apparatus
EP2716023B1 (en) Control of adaptation step size and suppression gain in acoustic echo control
EP2710789A1 (en) Non-linear post-processing for super-wideband acoustic echo cancellation
JP3611493B2 (en) Echo canceller device
JP5057109B2 (en) Echo canceller
JP2023519249A (en) Echo residual suppression
KR19990080327A (en) Adaptive echo canceller with hierarchical structure
KR102649227B1 (en) Double-microphone array echo eliminating method, device and electronic equipment
KR100431965B1 (en) Apparatus and method for removing echo-audio signal using time-varying algorithm with time-varying step size
JP2008066782A (en) Signal processing apparatus and signal processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 11721217; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2011721217; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)