WO2009035615A1 - Speech enhancement - Google Patents

Speech enhancement

Info

Publication number
WO2009035615A1
WO2009035615A1 (PCT/US2008/010591)
Authority
WO
WIPO (PCT)
Prior art keywords
speech
audio signal
channel
center channel
center
Prior art date
Application number
PCT/US2008/010591
Other languages
French (fr)
Inventor
Phillip C. BROWN
Original Assignee
Dolby Laboratories Licensing Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dolby Laboratories Licensing Corporation filed Critical Dolby Laboratories Licensing Corporation
Priority to JP2010524855A priority Critical patent/JP2010539792A/en
Priority to CN200880106533.0A priority patent/CN101960516B/en
Priority to AT08831097T priority patent/ATE514163T1/en
Priority to EP08831097A priority patent/EP2191467B1/en
Priority to US12/676,410 priority patent/US8891778B2/en
Publication of WO2009035615A1 publication Critical patent/WO2009035615A1/en

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Stereophonic System (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A method for enhancing speech includes extracting a center channel of an audio signal, flattening the spectrum of the center channel, and mixing the flattened speech channel with the audio signal, thereby enhancing any speech in the audio signal. Also disclosed are a method for extracting a center channel of sound from an audio signal with multiple channels, a method for flattening the spectrum of an audio signal, and a method for detecting speech in an audio signal. Also disclosed is a speech enhancer that includes a center-channel extractor, a spectral flattener, a speech-confidence generator, and a mixer for mixing the flattened speech channel with the original audio signal proportionate to the confidence of having detected speech, thereby enhancing any speech in the audio signal.

Description

SPEECH ENHANCEMENT
Disclosure of the Invention
Herein are described methods and apparatus for extracting a center channel of sound from an audio signal with multiple channels, for flattening the spectrum of an audio signal, for detecting speech in an audio signal and for enhancing speech. A method for extracting a center channel of sound from an audio signal with multiple channels may include multiplying (1) a first channel of the audio signal, less a proportion α of a candidate center channel and (2) a conjugate of a second channel of the audio signal, less the proportion α of the candidate center channel, approximately minimizing α and creating the extracted center channel by multiplying the candidate center channel by the approximately minimized α.
A method for flattening the spectrum of an audio signal may include separating a presumed speech channel into perceptual bands, determining which of the perceptual bands has the most energy and increasing the gain of perceptual bands with less energy, thereby flattening the spectrum of any speech in the audio signal. The increasing may include increasing the gain of perceptual bands with less energy, up to a maximum.
A method for detecting speech in an audio signal may include measuring spectral fluctuation in a candidate center channel of the audio signal, measuring spectral fluctuation of the audio signal less the candidate center channel and comparing the spectral fluctuations, thereby detecting speech in the audio signal.
A method for enhancing speech may include extracting a center channel of an audio signal, flattening the spectrum of the center channel and mixing the flattened speech channel with the audio signal, thereby enhancing any speech in the audio signal. The method may further include generating a confidence in detecting speech in the center channel, and the mixing may include mixing the flattened speech channel with the audio signal proportionate to the confidence of having detected speech. The confidence may vary from a lowest possible probability to a highest possible probability, and the generating may include further limiting the generated confidence to a value higher than the lowest possible probability and lower than the highest possible probability. The extracting may include extracting a center channel of an audio signal, using the method described above. The flattening may include flattening the spectrum of the center channel using the method described above. The generating may include generating a confidence in detecting speech in the center channel, using the method described above.
The extracting may include extracting a center channel of an audio signal, using the method described above; the flattening may include flattening the spectrum of the center channel using the method described above; and the generating may include generating a confidence in detecting speech in the center channel, using the method described above.
Herein is taught a computer-readable storage medium wherein is located a computer program for executing any of the methods described above, as well as a computer system including a CPU, the storage medium and a bus coupling the CPU and the storage medium.
Description of the Drawings
Figure 1 is a functional block diagram of a speech enhancer according to one embodiment of the invention.
Figure 2 depicts a suitable set of filters with a spacing of 1 ERB, resulting in a total of 40 bands.
Figure 3 describes the mixing process according to one embodiment of the invention.
Figure 4 illustrates a computer system according to one embodiment of the invention.
Best Mode for Carrying Out the Invention
Figure 1 is a functional block diagram of a speech enhancer 1 according to one embodiment of the invention. The speech enhancer 1 includes an input signal 17, Discrete Fourier Transformers 10a, 10b, a center-channel extractor 11, a spectral flattener 12, a voice activity detector 13, variable-gain amplifiers 14a, 14b, 14c, mixers 15a, 15b, inverse Discrete Fourier Transformers 18a, 18b and the output signal 18. The input signal 17 consists of left and right channels 17a, 17b, respectively, and the output signal 18 similarly consists of left and right channels 18a, 18b, respectively.
The respective Discrete Fourier Transformers 10a, 10b receive the left and right channels 17a, 17b of the input signal 17 as input and produce as output the transforms 19a, 19b. The center-channel extractor 11 receives the transforms 19 and produces as output the phantom center channel C 20. The spectral flattener 12 receives as input the phantom center channel C 20 and produces as output the shaped center channel 24, while the voice activity detector 13 receives the same input C 20 and produces as output the control signal 22 for variable-gain amplifiers 14a and 14c on the one hand and, on the other, the control signal 21 for variable-gain amplifier 14b.
The amplifier 14a receives as input and control signal the left-channel transform 19a and the output control signal 22 of the voice activity detector 13, respectively. Likewise, the amplifier 14c receives as input and control signal the right-channel transform 19b and the voice-activity-detector output control signal 22, respectively. The amplifier 14b receives as input the spectrally shaped center channel 24 of the spectral flattener 12 and, as control signal, the voice-activity-detector output control signal 21. The mixer 15a receives the gain-adjusted left transform 23a output from the amplifier 14a and the gain-adjusted spectrally shaped center channel 25 and produces as output the signal 26a. Similarly, the mixer 15b receives the gain-adjusted right transform 23b from the amplifier 14c and the gain-adjusted spectrally shaped center channel 25 and produces as output the signal 26b.
Inverse transformers 18a, 18b receive respective signals 26a, 26b and produce respective derived left- and right-channel signals L′ 18a, R′ 18b.
The operation of the speech enhancer 1 is described in more detail below. The processes of center-channel extraction, spectral flattening, voice activity detection and mixing, according to one embodiment, are described in turn — first in rough summary, then in more detail.
CENTER-CHANNEL EXTRACTION
The assumptions are as follows:
(1) The signal of interest 17 contains speech.
(2) In the case of a multi-channel signal (i.e., left and right, or stereo), the speech is center panned.
(3) The true panned center consists of a proportion alpha (α) of the source left and right signals.
(4) The result of subtracting that proportion is a pair of orthogonal signals. Operating on these assumptions, the center-channel extractor 11 extracts the center-panned content C 20 from the stereo signal 17. For center-panned content, identical regions of both left and right channels contain that center-panned content. The center-panned content is extracted by removing the identical portions from both the left and right channels.
One may calculate LR* (where * indicates the conjugate) for the remaining left and right signals (over a frame of blocks or using a method that continually updates as a new block enters) and adjust the proportion α until that quantity is sufficiently near zero.
SPECTRAL FLATTENING
Auditory filters separate the speech in the presumed speech channel into perceptual bands. The band with the most energy is determined for each block of data. The spectral shape of the speech channel for that block is then altered to compensate for the lower energy in the remaining bands. The spectrum is flattened: Bands with lower energies have their gains increased, up to some maximum. In one embodiment, all bands may share a maximum gain. In an alternate embodiment, each band may have its own maximum gain. (In the degenerate case where all of the bands have the same energy, then the spectrum is already flat. One may consider the spectral shaping as not occurring, or one may consider the spectral shaping as achieved with identity functions.) The spectral flattening occurs regardless of the channel content. Non-speech may be processed but is not used later in the system. Non-speech has a very different spectrum than speech, and so the flattening for non-speech is generally not the same as for speech.
VOICE ACTIVITY DETECTOR
Once the assumed speech is isolated to a single channel, it is analyzed for speech content. Does it contain speech? Content is analyzed independent of spectral flattening. Speech content is determined by measuring spectral fluctuations in adjacent frames of data. (Each frame may consist of many blocks of data, but a frame is typically two, four or eight blocks at a 48 kHz sample rate.)
Where the speech channel is extracted from stereo, the residual stereo signal may assist with the speech analysis. This concept applies more generally to adjacent channels in any multi-channel source.
MIXING
When speech is deemed present, the flattened speech channel is mixed with the original signal in some proportion relative to the confidence that the speech channel indeed contains speech. In general, when the confidence is high, more of the flattened speech channel is used. When confidence is low, less of the flattened speech channel is used.
The processes of center-channel extraction, spectral flattening, voice activity detection and mixing, according to one embodiment, are described in turn in more detail.
EXTRACTION OF PHANTOM CENTER AND SURROUND CHANNELS FROM 2-CHANNEL SOURCES
With speech enhancement, one desires to extract, process and re-insert only the center-panned audio. In a stereo mix, speech is most often center panned. The extraction of center-panned audio (the phantom center channel) from a 2-channel mix is now described. A mathematical proof makes up the first part; the second part applies the proof to a real-world stereo signal to derive the phantom center.
When the phantom center is subtracted from the original stereo, a stereo signal with orthogonal channels remains. A similar method derives a phantom surround channel from the surround-panned audio.
CENTER CHANNEL EXTRACTION - MATHEMATICAL PROOF
Given some two-channel signal, one may separate the channels into left (L) and right (R). The left and right channels each contain unique information, as well as common information. One may represent the common information as C (center panned), and the unique information as L̃ and R̃ — left only and right only, respectively:
L = L̃ + C
R = R̃ + C   (1)
"Unique" implies that L̃ and R̃ are orthogonal to each other:
L̃R̃* = 0   (2)
If one separates L̃ and R̃ into real and imaginary parts,
L̃rR̃r + L̃iR̃i = 0   (3)
where L̃r is the real part of L̃, L̃i is the imaginary part of L̃, and similarly for R̃.
Now assume that the orthogonal pair (L̃ and R̃) is created from the non-orthogonal pair (L and R) by subtracting the center-panned C from L and R:
L̃ = L − C   (4)
R̃ = R − C   (5)
Now let C = αĈ, where Ĉ is an assumed center channel and α is a scaling factor:
L̃ = L − αĈ   (6)
R̃ = R − αĈ   (7)
Substituting Equations (6) and (7) into Equation (3):
L̃rR̃r + L̃iR̃i = (Lr − αĈr)(Rr − αĈr) + (Li − αĈi)(Ri − αĈi)
= LrRr − αĈr(Lr + Rr) + α²Ĉr² + LiRi − αĈi(Li + Ri) + α²Ĉi²
= α²[Ĉr² + Ĉi²] + α[−Ĉr(Lr + Rr) − Ĉi(Li + Ri)] + [LrRr + LiRi]
= 0   (8)
Equation (8) is in the form of the quadratic equation:
α²X + αY + Z = 0   (9)
where the roots are found by:
α = (−Y ± √(Y² − 4XZ)) / (2X)   (10)
Now let the assumed Ĉ in Equations (6) and (7) be as follows:
Ĉ = L + R   (11)
Separating into real and imaginary parts:
Ĉr = Lr + Rr   (12)
Ĉi = Li + Ri   (13)
Then in the quadratic Equation (9):
X = Ĉr² + Ĉi² = (Lr + Rr)² + (Li + Ri)²   (14)
Y = −Ĉr(Lr + Rr) − Ĉi(Li + Ri) = −(Lr + Rr)² − (Li + Ri)² = −X   (15)
Z = LrRr + LiRi   (16)
Substituting Equations (14), (15) and (16) into Equation (10) and solving for α:
α = (X ± √(X² − 4XZ)) / (2X) = ½ × (1 ± √(1 − 4Z / X))   (17)
Choosing the negative root for the solution to α and limiting α to the range of {0, 0.5} avoids confusion with surround-panned information (although the values are not critical to the invention). The phantom center channel equation then becomes:
C = αĈ = α(L + R)   (18)
where
α = min{ max{ 0, ½ × (1 − √(1 − 4(LrRr + LiRi) / ((Lr + Rr)² + (Li + Ri)²))) }, 0.5 }   (19)
(The min{} and max{} functions limit α to the range of {0, 0.5}, although the values are not critical to the invention.)
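A quick numerical check, not part of the patent text: for a frequency bin that is entirely center panned, L = R, so LrRr + LiRi = |L|² while (Lr + Rr)² + (Li + Ri)² = 4|L|². The term under the square root in Equation (19) then vanishes and α = 0.5, so Equation (18) gives C = 0.5·(L + R) = L, attributing the whole bin to the phantom center. For uncorrelated left and right content, LrRr + LiRi averages toward zero, the square root approaches one, and α approaches zero.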
A phantom surround channel can similarly be derived as:
S = βŜ = β(L − R)   (20)
where
β = min{ max{ 0, ½ × (1 − √(1 + 4(LrRr + LiRi) / ((Lr − Rr)² + (Li − Ri)²))) }, 0.5 }   (21)
and where S is the surround-panned audio in the original stereo pair (L, R) and Ŝ is assumed to be (L − R). Again, choosing the negative root for the solution to β and limiting β to the range of {0, 0.5} avoids confusion with center-panned information (although the values are not critical to the invention).
Now that C and S have been derived, they can be removed from the original stereo pair (L and R) to make four channels of audio from the original two:
L′ = L − C − S   (22)
R′ = R − C + S   (23)
where L′ is the derived left, C the derived center, R′ the derived right and S the derived surround channel.
CENTER CHANNEL EXTRACTION - APPLICATION
As stated above, for the speech enhancement method, the primary concern is the extraction of the center channel. In this part, the technique described above is applied to a complex frequency domain representation of an audio signal.
The first step in extraction of the phantom center channel is to perform a DFT on a block of audio samples and obtain the resulting transform coefficients. The block size of the DFT depends on the sampling rate. For example, at a sampling rate fs of 48 kHz, a block size of N = 512 samples would be acceptable. A windowing function w[n] such as a Hamming window weights the block of samples prior to application of the transform:
w[n] = 0.54 − 0.46·cos(2πn / N),   0 ≤ n < N   (24)
where n is an integer, and N is the number of samples in a block.
Equation (25) calculates the DFT coefficients as:
Xm[k,c] = Σn=0..N−1 x[mN + n, c]·w[n]·e^(−j2πkn/N),   1 ≤ c ≤ 3   (25)
where x[n,c] is sample number n in channel c of block m, j is the imaginary unit (j² = −1), and Xm[k,c] is transform coefficient k in channel c for samples in block m. Note that the number of channels is three: left, right and phantom center (in the case of x[n,c], only left and right). In the equations below, the left channel is designated as c=1, the phantom center as c=2 (not yet derived) and the right channel as c=3. Also, the Fast Fourier Transform (FFT) can efficiently implement the DFT.
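For illustration only, a minimal NumPy sketch of the block transform of Equations (24) and (25) might look as follows. The function name is hypothetical, a single channel is shown, and the 0.54/0.46 Hamming coefficients are the textbook values rather than anything the patent specifies:

```python
import numpy as np

N = 512  # block size suggested in the text for fs = 48 kHz

def block_dft(x, m, N=N):
    """Illustrative sketch (not from the patent): windowed DFT of block m of signal x."""
    n = np.arange(N)
    w = 0.54 - 0.46 * np.cos(2 * np.pi * n / N)  # Hamming window w[n], Equation (24)
    frame = x[m * N:(m + 1) * N] * w             # weight the block of samples
    return np.fft.fft(frame)                     # the FFT efficiently implements Equation (25)
```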
The sum and difference of left and right are found on a per-frequency-bin basis. The real and imaginary parts are grouped and squared. Each bin is then smoothed in between blocks prior to calculating α. The smoothing reduces audible artifacts that occur when the power in a bin changes too rapidly between blocks of data. Smoothing may be done by, for example, a leaky integrator, a non-linear smoother, a linear but multi-pole low-pass smoother or an even more elaborate smoother.
Am[k] = (Re{Xm[k,1] + Xm[k,3]})² + (Im{Xm[k,1] + Xm[k,3]})²
Dm[k] = (Re{Xm[k,1] − Xm[k,3]})² + (Im{Xm[k,1] − Xm[k,3]})²
Ām[k] = λ1·Ām−1[k] + (1 − λ1)·Am[k]
D̄m[k] = λ1·D̄m−1[k] + (1 − λ1)·Dm[k]
where Re{} is the real part, Im{} is the imaginary part, and λ1 is a leaky-integrator coefficient. The leaky integrator has a low-pass-filtering effect, and a typical value for λ1 is 0.9. The extraction coefficient α for block m is then derived using Equation (19); the ratio of smoothed difference energy to smoothed sum energy, D̄m[k]/Ām[k], equals the 1 − 4Z/X term under the square root in that equation:
αm[k] = min{ max{ 0, ½ × (1 − √( D̄m[k] / Ām[k] )) }, 0.5 }
The phantom center channel for block m is then derived using Equation (18):
Xm[k,2] = αm[k]·(Xm[k,1] + Xm[k,3])
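A minimal per-block sketch of the extraction, assuming the smoothed sum and difference energies described above; the helper name, argument layout and state handling are illustrative rather than taken from the patent:

```python
import numpy as np

def phantom_center(XL, XR, A_prev, D_prev, lam1=0.9):
    """Illustrative sketch: per-bin phantom-center extraction for one block.

    XL, XR are complex DFT coefficients of the left and right channels; A_prev and
    D_prev hold the smoothed sum/difference energies from the previous block."""
    A = np.abs(XL + XR) ** 2                        # squared real and imaginary parts of the sum
    D = np.abs(XL - XR) ** 2                        # squared real and imaginary parts of the difference
    A_s = lam1 * A_prev + (1 - lam1) * A            # leaky-integrator smoothing between blocks
    D_s = lam1 * D_prev + (1 - lam1) * D
    ratio = np.sqrt(D_s / np.maximum(A_s, 1e-12))
    alpha = np.clip(0.5 * (1.0 - ratio), 0.0, 0.5)  # Equation (19), limited to {0, 0.5}
    XC = alpha * (XL + XR)                          # Equation (18), evaluated per bin
    return XC, A_s, D_s
```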
SPECTRAL FLATTENING
A description of an embodiment of the spectral flattening of the invention follows. Assuming a single channel that is predominantly speech, the speech signal is transformed into the frequency domain by the Discrete Fourier Transform (DFT) or a related transform. The magnitude spectrum is then transformed into a power spectrum by squaring the transform frequency bins.
The frequency bins are then grouped into bands, possibly on a critical or auditory-filter scale. Dividing the speech signal into critical bands mimics the human auditory system — specifically the cochlea. These filters exhibit an approximately rounded exponential shape and are spaced uniformly on the Equivalent Rectangular Bandwidth (ERB) scale. The ERB scale is simply a measure used in psychoacoustics that approximates the bandwidth and spacing of auditory filters. Figure 2 depicts a suitable set of filters with a spacing of 1 ERB, resulting in a total of 40 bands. Banding the audio data also helps eliminate audible artifacts that can occur when working on a per-bin basis. The critically banded power is then smoothed with respect to time, that is to say, smoothed across adjacent blocks.
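As an aside, ERB-spaced band edges of the kind implied by Figure 2 can be generated from the standard Glasberg and Moore ERB-rate approximation. This helper is an assumption of mine, not something the patent defines:

```python
import numpy as np

def hz_to_erb_rate(f):
    """ERB number for a frequency f in Hz (Glasberg & Moore approximation; sketch)."""
    return 21.4 * np.log10(4.37e-3 * f + 1.0)

def erb_rate_to_hz(e):
    """Inverse of hz_to_erb_rate."""
    return (10.0 ** (e / 21.4) - 1.0) / 4.37e-3

def erb_band_edges(fmin=50.0, fmax=20000.0, step=1.0):
    """Band edges spaced `step` ERB apart; step = 1 yields on the order of 40 bands."""
    e = np.arange(hz_to_erb_rate(fmin), hz_to_erb_rate(fmax), step)
    return erb_rate_to_hz(e)
```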
The maximum power among the smoothed critical bands is found, and corresponding gains are calculated for the remaining (non-maximum) bands to bring their power closer to the maximum power. The gain compensation is similar to the compressive (non-linear) nature of the basilar membrane. These gains are limited to a maximum to avoid saturation. In order to apply these gains to the original signal, they must be transformed back to a DFT format. Therefore, the per-band power gains are first transformed back into frequency-bin power gains, and the per-bin power gains are then converted to magnitude gains by taking the square root of each bin. The original signal transform bins can then be multiplied by the calculated per-bin magnitude gains. The spectrally flattened signal is then transformed from the frequency domain back into the time domain. In the case of the phantom center, it is first mixed with the original signal prior to being returned to the time domain. Figure 3 describes this process.
The spectral flattening system described above does not take into account the nature of the input signal. If a non-speech signal were flattened, the perceived change in timbre could be severe. In order to avoid the processing of non-speech signals, the method described above can be coupled with a voice activity detector 13. When the voice activity detector 13 indicates the presence of speech, the flattened speech is used. It is assumed that the signal to be flattened has been converted to the frequency domain as previously described. For simplicity, the channel notation used above has been omitted. The DFT coefficients are converted to power, and then from the DFT domain to critical bands:
Bm[p] = Σk |Xm[k]|²·H[k,p]
where H[k,p] are P critical band filters.
The power in each band is then smoothed in between blocks, similar to the temporal integration that occurs at the cortical level of the brain. Smoothing may be done by, for example, a leaky integrator, a non-linear smoother, a linear but multi-pole low-pass smoother or an even more elaborate smoother. This smoothing also helps eliminate transient behavior that can cause the gains to fluctuate too rapidly between blocks, causing audible pumping. The peak power is then found.
Em[p] = λ2·Em−1[p] + (1 − λ2)·Bm[p]
Emax = maxp{ Em[p] }
where Em[p] is the smoothed, critically banded power, λ2 is the leaky-integrator coefficient, and Emax is the peak power. The leaky integrator has a low-pass-filtering effect, and again, a typical value for λ2 is 0.9.
The per-band power gains are next found, with the maximum gain constrained to avoid overcompensating:
Gm[p] = min{ Gmax, (Emax / Em[p])^γ }   (31a)
0 < γ < 1   (31b)
where Gm[p] is the power gain to be applied to each band, Gmax is the maximum power gain allowable, and γ determines the degree of leveling of the spectrum. In practice, γ is close to unity. Gmax depends on the dynamic range (or headroom) of the system performing the processing, as well as any other global limits on the amount of gain specified. A typical value for Gmax is 20 dB.
The per-band power gains are next converted to per-bin power, and the square root is taken to get per-bin magnitude gains:
Ym[k] = √( Σp Gm[p]·H[k,p] )
where Ym[k] is the per-bin magnitude gain.
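A rough sketch of the flattening gains under stated assumptions: rectangular banding in place of the rounded-exponential filters, and an (Emax/Em[p])^γ gain rule capped at Gmax, which is one plausible reading of the equations reproduced above:

```python
import numpy as np

def flattening_gains(power_bins, band_edges, E_prev, lam2=0.9, gamma=0.95, g_max_db=20.0):
    """Illustrative sketch: per-bin magnitude gains that flatten the banded spectrum.

    power_bins: |X[k]|^2 for one block; band_edges: bin indices delimiting the bands;
    E_prev: smoothed band powers carried over from the previous block."""
    g_max = 10.0 ** (g_max_db / 10.0)                  # maximum power gain, e.g. 20 dB
    nbands = len(band_edges) - 1
    B = np.array([power_bins[band_edges[p]:band_edges[p + 1]].sum()
                  for p in range(nbands)])             # banded power for this block
    E = lam2 * E_prev + (1 - lam2) * B                 # leaky-integrator smoothing
    G = np.minimum((E.max() / np.maximum(E, 1e-12)) ** gamma, g_max)  # per-band power gains
    Y = np.ones(len(power_bins))
    for p in range(nbands):                            # back to per-bin magnitude gains
        Y[band_edges[p]:band_edges[p + 1]] = np.sqrt(G[p])
    return Y, E
```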
The magnitude gain is next modified based on the voice-activity-detector output 21, 22. The method for voice activity detection, according to one embodiment of the invention, is described next.
VOICE ACTIVITY DETECTION
Spectral flux measures the speed with which the power spectrum of a signal changes, comparing the power spectrum between adjacent frames of audio. (A frame is multiple blocks of audio data.) Spectral flux indicates voice activity detection or speech-versus-other determination in audio classification. Often, additional indicators are used, and the results pooled to make a decision as to whether or not the audio is indeed speech. In general, the spectral flux of speech is somewhat higher than that of music, that is to say, the music spectrum tends to be more stable between frames than the speech spectrum.
In the case of stereo, where a phantom center channel is extracted, the DFT coefficients are first split into the center and the side audio (the original stereo minus the phantom center). This differs from traditional mid/side stereo processing in that mid/side processing is typically (L+R)/2, (L−R)/2, whereas center/side processing is C, L+R−2C. With the signal converted to the frequency domain as previously described, the DFT coefficients are converted to power and then from the DFT domain to the critical-band domain. The critical-band power is then used to calculate the spectral flux of both the center and the side:
Xm[p] = Σk |Xm[k,2]|²·H[k,p]
Sm[p] = Σk |Xm[k,1] + Xm[k,3] − 2·Xm[k,2]|²·H[k,p]
where Xm[p] is the critical band version of the phantom center, Sm[p] is the critical band version of the residual signal (the sum of left and right minus the center) and H[k,p] are P critical band filters as previously described.
Two frame buffers are created (for the center and side magnitudes) from the previous 2J blocks of data:
[equation images not reproduced]
The next step calculates a weight W for the center channel from the average power of the current and previous frames. This is done over a limited range of bands:
[equation image not reproduced]
The range of bands is limited to the primary bandwidth of speech — approximately 100-8000 Hz. The unweighted spectral flux for both the center and the side is then calculated:
[equation images not reproduced]
where FX(m) is the unweighted spectral flux of the center and FS(m) is the unweighted spectral flux of the side.
A biased estimate of the spectral flux is then calculated as follows:
[equation images not reproduced]
otherwise,
FTot(m) = 0   (37c)
where FTot(m) is the total flux estimate, and Wmin is the minimum weight allowed. Wmin depends on the dynamic range, but a typical value would be Wmin = −60 dB.
A final, smoothed value for the spectral flux is calculated by low-pass filtering the values of FTot(m) with a simple 1st-order IIR low-pass filter. This filter depends on the signal's sample rate and block size but, in one embodiment, can be defined by a first-order low-pass filter with a normalized cutoff of 0.025·fs for fs = 48 kHz, where fs is the sample rate of a digital system.
FTot(m) is then clipped to the range 0 ≤ FTot(m) ≤ 1:
FTot(m) = min{ max{ FTot(m), 0 }, 1 }
(The min{} and max{} functions limit FTot (m) to the range of {0, 1 } according to this embodiment.)
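A deliberately simplified sketch of the center-versus-side flux comparison; the frame averaging, the weighting by W and Wmin, and the normalization used here are stand-ins of my own, since the corresponding equations appear only as images in the original:

```python
import numpy as np

def speech_confidence(center_bands, side_bands, f_prev, smooth=0.9):
    """Illustrative sketch: crude speech confidence from center vs. side spectral flux.

    center_bands, side_bands: arrays of shape (blocks, bands) of critically banded power,
    with at least two rows (adjacent frames); f_prev: previous smoothed confidence."""
    flux_center = np.sum((center_bands[-1] - center_bands[-2]) ** 2)
    flux_side = np.sum((side_bands[-1] - side_bands[-2]) ** 2)
    raw = flux_center / (flux_center + flux_side + 1e-12)  # more center flux -> more speech-like
    f_tot = smooth * f_prev + (1 - smooth) * raw            # simple first-order IIR smoothing
    return float(np.clip(f_tot, 0.0, 1.0))                  # clip to the range [0, 1]
```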
MIXING
The flattened center channel is mixed with the original audio signal based on the output of the voice activity detector.
The per-bin magnitude gains Ym [k] for spectral flattening (as shown above) are applied to the phantom center channel Xm[k,2] (as derived above):
X̃m[k,2] = Ym[k]·Xm[k,2]
When the voice activity detector 13 detects speech, let FTot(m) = 1; when it detects non-speech, let FTot(m) = 0. Values between 0 and 1 are possible, in which case the voice activity detector 13 makes a soft decision on the presence of speech. For the left channel,
[equation image not reproduced]
Similarly, for the right channel,
[equation image not reproduced]
In practice, FTot may be limited to a narrower range of values. For example, 0.1 ≤ FTot(m) ≤ 0.9 preserves a small amount of both the flattened signal and the original in the final mix.
The per-bin magnitude gains are then applied to the original input signal, which is then converted back to the time domain via the inverse DFT:
[equation image not reproduced]
where the result is the enhanced version of x, the original stereo input signal.
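As a sketch only: one plausible reading of Figure 3 and the surrounding text (the actual mixing equations are given only as images) is a confidence-controlled crossfade in which the phantom center inside each channel is replaced by its flattened version. Windowing and overlap-add are omitted here:

```python
import numpy as np

def mix_block(XL, XR, XC, Y, f_tot):
    """Illustrative sketch: mix the flattened phantom center back into left/right.

    XL, XR: left/right DFT coefficients; XC: phantom center; Y: per-bin magnitude
    gains from the spectral flattener; f_tot: speech confidence in [0, 1]."""
    XC_flat = Y * XC                          # flattened center channel
    XL_out = XL + f_tot * (XC_flat - XC)      # swap in the flattened center in proportion
    XR_out = XR + f_tot * (XC_flat - XC)      #   to the speech confidence
    xl = np.fft.ifft(XL_out).real             # back to the time domain for this block
    xr = np.fft.ifft(XR_out).real
    return xl, xr
```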
Figure 4 illustrates a computer 4 according to one embodiment of the invention.
The computer 4 includes a memory 41, a CPU 42 and a bus 43. The bus 43 communicatively couples the memory 41 and CPU 42. The memory 41 stores a computer program for executing any of the methods described above.
A number of embodiments of the invention have been described. Nevertheless, one of ordinary skill in the art understands how to variously modify the described embodiments without departing from the spirit and scope of the invention. For example, while the description includes Discrete Fourier Transforms, one of ordinary skill in the art understands the various alternative methods of transforming from the time domain to the frequency domain and vice versa.
Prior Art
Schaub, A. and Straub, P., "Spectral sharpening for speech enhancement/noise reduction", Proc. ICASSP 1991, Toronto, Canada, May 1991, pp. 993-996.
Sondhi, M., "New methods of pitch extraction", Audio and Electroacoustics, IEEE Transactions, June 1968, Volume 16, Issue 2, pp 262-266.
Villchur, E., "Signal Processing to Improve Speech Intelligibility for the Hearing Impaired", 99th Audio Engineering Society Convention, September 1995.
Thomas, I. and Niederjohn, R., "Preprocessing of Speech for Added Intelligibility in High Ambient Noise", 34th Audio Engineering Society Convention, March 1968.
Moore, B. et al., "A Model for the Prediction of Thresholds, Loudness, and Partial Loudness", J. Audio Eng. Soc., Vol. 45, No. 4, April 1997.
Moore, B. and Oxenham, A., "Psychoacoustic consequences of compression in the peripheral auditory system", The Journal of the Acoustical Society of America - December 2002 - Volume 112, Issue 6, pp. 2962-2966
PRIOR ART - SPECTRAL FLATTENING
US PATENTS
US 6732073 B1 Spectral enhancement of acoustic signals to provide improved recognition of speech
US 6993480 B1 Voice intelligibility enhancement system
US 2006/0206320 Al Apparatus and method for noise reduction and speech enhancement with microphones and loudspeakers
US 7191122 Speech compression system and method
US 2007/0094017 Frequency domain format enhancement
INTERNATIONAL PATENTS
WO 2004/013840 Al Digital Signal Processing Techniques For Improving Audio Clarity And Intelligibility
WO 2003/015082 Sound Intelligibility Enhancement Using A Psychoacoustic Model And An Oversampled Filterbank
PAPERS
Sallberg, B. et al.; "Analog Circuit Implementation for Speech Enhancement Purposes"; Signals, Systems and Computers, 2004. Conference Record of the Thirty-Eighth Asilomar Conference.
Magotra, N. and Sirivara, S.; "Real-time digital speech processing strategies for the hearing impaired"; Acoustics, Speech, and Signal Processing, 1997 (ICASSP-97), pp. 1211-1214, vol. 2.
Walker, G., Byrne, D., and Dillon, H.; "The effects of multichannel compression/expansion amplification on the intelligibility of nonsense syllables in noise"; The Journal of the Acoustical Society of America — September 1984 — Volume 76, Issue 3, pp. 746-757
PRIOR ART - CENTER EXTRACTION
Adobe Audition has a vocal/instrument extraction function: http://www.adobeforums.com/cgi-bin/webx/.3bc3a3e5
"center cut" for winamp: http://www.hydrogenaudio.org/forums/lofiversion/index.php/tl7450.html
PRIOR ART - SPECTRAL FLUX
Vinton, M. and Robinson, C.; "Automated Speech/Other Discrimination for Loudness Monitoring," AES 118th Convention, 2005.
Scheirer, E. and Slaney, M., "Construction and evaluation of a robust multifeature speech/music discriminator", Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'97), 1997, pp. 1331-1334.

Claims

1. A method for extracting a center channel of sound from an audio signal with multiple channels, the method comprising: multiplying
(1) a first channel of the audio signal, less a proportion α of a candidate center channel; and
(2) a conjugate of a second channel of the audio signal, less the proportion α of the candidate center channel; approximately minimizing α; and creating the extracted center channel by multiplying the candidate center channel by the approximately minimized α.
2. A method for flattening the spectrum of an audio signal, the method comprising: separating a presumed speech channel into perceptual bands; determining which of the perceptual bands has the most energy; and increasing the gain of perceptual bands with less energy, thereby flattening the spectrum of any speech in the audio signal.
3. The method of claim 2 wherein the increasing comprises increasing the gain of perceptual bands with less energy, up to a maximum.
4. A method for detecting speech in an audio signal, the method comprising: measuring spectral fluctuation in a candidate center channel of the audio signal; measuring spectral fluctuation of the audio signal less the candidate center channel; and comparing the spectral fluctuations, thereby detecting speech in the audio signal.
5. A method for enhancing speech, the method comprising: extracting a center channel of an audio signal; flattening the spectrum of the center channel; and mixing the flattened speech channel with the audio signal, thereby enhancing any speech in the audio signal.
6. The method of claim 5 further comprising: generating a confidence in detecting speech in the center channel; and wherein the mixing comprises mixing the flattened speech channel with the audio signal proportionate to the confidence of having detected speech.
7. The method of claim 6 wherein the confidence varies from a lowest possible probability to a highest possible probability, and the generating comprises further limiting the generated confidence to a value higher than the lowest possible probability and lower than the highest possible probability.
8. The method of claim 5, wherein the extracting comprises: extracting a center channel of an audio signal, using the method of claim 1.
9. The method of claim 5, wherein the flattening comprises: flattening the spectrum of the center channel, using the method of claim 2.
10. The method of claim 5, wherein the generating comprises: generating a confidence in detecting speech in the center channel, using the method of claim 3.
11. The method of claim 5, wherein the extracting comprises: extracting a center channel of an audio signal, using the method of claim 1; wherein the flattening comprises: flattening the spectrum of the center channel, using the method of claim 2; and wherein the generating comprises: generating a confidence in detecting speech in the center channel, using the method of claim 3.
12. A computer-readable storage medium wherein is located a computer program for executing the method of any of claims 1-11.
13. A computer system comprising a CPU; the storage medium of claim 12; and a bus coupling the CPU and the storage medium.
14. A speech enhancer comprising: a center-channel extractor for extracting a center channel of an audio signal; a spectral flattener for flattening the spectrum of the center channel; a speech-confidence generator for generating a confidence in detecting speech in the center channel; and a mixer for mixing the flattened speech channel with the original audio signal proportionate to the confidence of having detected speech, thereby enhancing any speech in the audio signal.
PCT/US2008/010591 2007-09-12 2008-09-10 Speech enhancement WO2009035615A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2010524855A JP2010539792A (en) 2007-09-12 2008-09-10 Speech enhancement
CN200880106533.0A CN101960516B (en) 2007-09-12 2008-09-10 Speech enhancement
AT08831097T ATE514163T1 (en) 2007-09-12 2008-09-10 LANGUAGE EXPANSION
EP08831097A EP2191467B1 (en) 2007-09-12 2008-09-10 Speech enhancement
US12/676,410 US8891778B2 (en) 2007-09-12 2008-09-10 Speech enhancement

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US99360107P 2007-09-12 2007-09-12
US60/993,601 2007-09-12

Publications (1)

Publication Number Publication Date
WO2009035615A1 true WO2009035615A1 (en) 2009-03-19

Family

ID=40016128

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/010591 WO2009035615A1 (en) 2007-09-12 2008-09-10 Speech enhancement

Country Status (6)

Country Link
US (1) US8891778B2 (en)
EP (1) EP2191467B1 (en)
JP (2) JP2010539792A (en)
CN (1) CN101960516B (en)
AT (1) ATE514163T1 (en)
WO (1) WO2009035615A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110072923A (en) * 2009-12-23 2011-06-29 삼성전자주식회사 Signal processing method and apparatus
JP2012027101A (en) * 2010-07-20 2012-02-09 Sharp Corp Sound playback apparatus, sound playback method, program, and recording medium
US9237400B2 (en) 2010-08-24 2016-01-12 Dolby International Ab Concealment of intermittent mono reception of FM stereo radio receivers
WO2016091332A1 (en) * 2014-12-12 2016-06-16 Huawei Technologies Co., Ltd. A signal processing apparatus for enhancing a voice component within a multi-channel audio signal

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009086174A1 (en) 2007-12-21 2009-07-09 Srs Labs, Inc. System for adjusting perceived loudness of audio signals
EP2151822B8 (en) * 2008-08-05 2018-10-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for processing an audio signal for speech enhancement using a feature extraction
US8406462B2 (en) * 2008-08-17 2013-03-26 Dolby Laboratories Licensing Corporation Signature derivation for images
DE112009005215T8 (en) * 2009-08-04 2013-01-03 Nokia Corp. Method and apparatus for audio signal classification
US8538042B2 (en) 2009-08-11 2013-09-17 Dts Llc System for increasing perceived loudness of speakers
US9324337B2 (en) * 2009-11-17 2016-04-26 Dolby Laboratories Licensing Corporation Method and system for dialog enhancement
JP6010539B2 (en) * 2011-09-09 2016-10-19 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Encoding device, decoding device, encoding method, and decoding method
US9496839B2 (en) * 2011-09-16 2016-11-15 Pioneer Dj Corporation Audio processing apparatus, reproduction apparatus, audio processing method and program
US20130253923A1 (en) * 2012-03-21 2013-09-26 Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of Industry Multichannel enhancement system for preserving spatial cues
US9312829B2 (en) 2012-04-12 2016-04-12 Dts Llc System for adjusting loudness of audio signals in real time
CN104078050A (en) 2013-03-26 2014-10-01 杜比实验室特许公司 Device and method for audio classification and audio processing
CN105247614B (en) 2013-04-05 2019-04-05 杜比国际公司 Audio coder and decoder
CN105493182B (en) * 2013-08-28 2020-01-21 杜比实验室特许公司 Hybrid waveform coding and parametric coding speech enhancement
US9269370B2 (en) * 2013-12-12 2016-02-23 Magix Ag Adaptive speech filter for attenuation of ambient noise
CN108462936A (en) * 2013-12-13 2018-08-28 无比的优声音科技公司 Device and method for sound field enhancing
US9344825B2 (en) 2014-01-29 2016-05-17 Tls Corp. At least one of intelligibility or loudness of an audio program
TWI569263B (en) * 2015-04-30 2017-02-01 智原科技股份有限公司 Method and apparatus for signal extraction of audio signal
WO2016183379A2 (en) 2015-05-14 2016-11-17 Dolby Laboratories Licensing Corporation Generation and playback of near-field audio content
JP6687453B2 (en) * 2016-04-12 2020-04-22 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Stereo playback device
CN115881146A (en) * 2021-08-05 2023-03-31 哈曼国际工业有限公司 Method and system for dynamic speech enhancement

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003022003A2 (en) * 2001-09-06 2003-03-13 Koninklijke Philips Electronics N.V. Audio reproducing device
US20030161479A1 (en) * 2001-05-30 2003-08-28 Sony Corporation Audio post processing in DVD, DTV and other audio visual products
WO2004049759A1 (en) * 2002-11-22 2004-06-10 Nokia Corporation Equalisation of the output in a stereo widening network
US20070041592A1 (en) * 2002-06-04 2007-02-22 Creative Labs, Inc. Stream segregation for stereo signals

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04149598A (en) * 1990-10-12 1992-05-22 Pioneer Electron Corp Sound field correction device
DE69423922T2 (en) * 1993-01-27 2000-10-05 Koninklijke Philips Electronics N.V. Sound signal processing arrangement for deriving a central channel signal and audio-visual reproduction system with such a processing arrangement
JP3284747B2 (en) 1994-05-12 2002-05-20 Matsushita Electric Industrial Co., Ltd. Sound field control device
US6993480B1 (en) * 1998-11-03 2006-01-31 Srs Labs, Inc. Voice intelligibility enhancement system
US6732073B1 (en) * 1999-09-10 2004-05-04 Wisconsin Alumni Research Foundation Spectral enhancement of acoustic signals to provide improved recognition of speech
US6959274B1 (en) * 1999-09-22 2005-10-25 Mindspeed Technologies, Inc. Fixed rate speech compression system and method
US20030023429A1 (en) 2000-12-20 2003-01-30 Octiv, Inc. Digital signal processing techniques for improving audio clarity and intelligibility
US20030028386A1 (en) * 2001-04-02 2003-02-06 Zinser Richard L. Compressed domain universal transcoder
CA2354755A1 (en) 2001-08-07 2003-02-07 Dspfactory Ltd. Sound intelligibility enhancement using a psychoacoustic model and an oversampled filterbank
JP2003084790A (en) * 2001-09-17 2003-03-19 Matsushita Electric Ind Co Ltd Speech component emphasizing device
CA2454296A1 (en) 2003-12-29 2005-06-29 Nokia Corporation Method and device for speech enhancement in the presence of background noise
JP2005258158A (en) * 2004-03-12 2005-09-22 Advanced Telecommunication Research Institute International Noise removing device
US20060206320A1 (en) * 2005-03-14 2006-09-14 Li Qi P Apparatus and method for noise reduction and speech enhancement with microphones and loudspeakers

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030161479A1 (en) * 2001-05-30 2003-08-28 Sony Corporation Audio post processing in DVD, DTV and other audio visual products
WO2003022003A2 (en) * 2001-09-06 2003-03-13 Koninklijke Philips Electronics N.V. Audio reproducing device
US20070041592A1 (en) * 2002-06-04 2007-02-22 Creative Labs, Inc. Stream segregation for stereo signals
WO2004049759A1 (en) * 2002-11-22 2004-06-10 Nokia Corporation Equalisation of the output in a stereo widening network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JOT, J. M., ET AL.: "Spatial Enhancement of Audio Recordings", Proceedings of the International AES Conference, 23 May 2003 (2003-05-23), pages 1-11, XP002401944 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110072923A (en) * 2009-12-23 2011-06-29 Samsung Electronics Co., Ltd. Signal processing method and apparatus
KR101690252B1 (en) * 2009-12-23 2016-12-27 Samsung Electronics Co., Ltd. Signal processing method and apparatus
JP2012027101A (en) * 2010-07-20 2012-02-09 Sharp Corp Sound playback apparatus, sound playback method, program, and recording medium
US9237400B2 (en) 2010-08-24 2016-01-12 Dolby International Ab Concealment of intermittent mono reception of FM stereo radio receivers
WO2016091332A1 (en) * 2014-12-12 2016-06-16 Huawei Technologies Co., Ltd. A signal processing apparatus for enhancing a voice component within a multi-channel audio signal
AU2014413559B2 (en) * 2014-12-12 2018-10-18 Huawei Technologies Co., Ltd. A signal processing apparatus for enhancing a voice component within a multi-channel audio signal
RU2673390C1 (en) * 2014-12-12 2018-11-26 Хуавэй Текнолоджиз Ко., Лтд. Signal processing device for amplifying speech component in multi-channel audio signal
KR101935183B1 (en) * 2014-12-12 2019-01-03 후아웨이 테크놀러지 컴퍼니 리미티드 A signal processing apparatus for enhancing a voice component within a multi-channal audio signal
US10210883B2 (en) 2014-12-12 2019-02-19 Huawei Technologies Co., Ltd. Signal processing apparatus for enhancing a voice component within a multi-channel audio signal

Also Published As

Publication number Publication date
US20100179808A1 (en) 2010-07-15
JP5507596B2 (en) 2014-05-28
EP2191467B1 (en) 2011-06-22
ATE514163T1 (en) 2011-07-15
EP2191467A1 (en) 2010-06-02
JP2012110049A (en) 2012-06-07
US8891778B2 (en) 2014-11-18
JP2010539792A (en) 2010-12-16
CN101960516B (en) 2014-07-02
CN101960516A (en) 2011-01-26

Similar Documents

Publication Publication Date Title
EP2191467B1 (en) Speech enhancement
KR101935183B1 (en) A signal processing apparatus for enhancing a voice component within a multi-channal audio signal
RU2520420C2 (en) Method and system for scaling suppression of weak signal with stronger signal in speech-related channels of multichannel audio signal
US6405163B1 (en) Process for removing voice from stereo recordings
JP5149968B2 (en) Apparatus and method for generating a multi-channel signal including speech signal processing
JP5341128B2 (en) Improved stability in hearing aids
US9324337B2 (en) Method and system for dialog enhancement
US20160351179A1 (en) Single-channel, binaural and multi-channel dereverberation
NO20180266A1 (en) Audio gain control using specific volume-based hearing event detection
EP2579252B1 (en) Stability and speech audibility improvements in hearing devices
JP5375400B2 (en) Audio processing apparatus, audio processing method and program
CN101533641B (en) Method for correcting channel delay parameters of multichannel signals and device
Kates Modeling the effects of single-microphone noise-suppression
EP2720477B1 (en) Virtual bass synthesis using harmonic transposition
JP4790319B2 (en) Unified processing method for resolved and unresolved harmonics
JP2005157363A (en) Method of and apparatus for enhancing dialog utilizing formant region
JP5774191B2 (en) Method and apparatus for attenuating dominant frequencies in an audio signal
EP3335218B1 (en) An audio signal processing apparatus and method for processing an input audio signal
JP2008072600A (en) Acoustic signal processing apparatus, acoustic signal processing program, and acoustic signal processing method
JP6231762B2 (en) Receiving apparatus and program
CN112640301A (en) Multi-band compressor with distortion reduction based on dynamic threshold of scene cut analyzer-directed distortion audibility model
JP2011141540A (en) Voice signal processing device, television receiver, voice signal processing method, program and recording medium
KR102721794B1 (en) Signal processing processor and controlling method thereof
JP6531418B2 (en) Signal processor
WO2023172852A1 (en) Target mid-side signals for audio applications

Legal Events

Date Code Title Description
WWE WIPO information: entry into national phase

Ref document number: 200880106533.0

Country of ref document: CN

121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 08831097

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (PCT application filed from 20040101)
WWE WIPO information: entry into national phase

Ref document number: 12676410

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2010524855

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE WIPO information: entry into national phase

Ref document number: 2008831097

Country of ref document: EP