US10325583B2 - Multichannel sub-band audio-signal processing using beamforming and echo cancellation - Google Patents
- Publication number
- US10325583B2 (application US15/725,217)
- Authority
- US
- United States
- Prior art keywords
- sub
- band
- signal
- modules
- analysis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/16—Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/18—Methods or devices for transmitting, conducting or directing sound
- G10K11/26—Sound-focusing or directing, e.g. scanning
- G10K11/34—Sound-focusing or directing, e.g. scanning using electrical steering of transducer arrays, e.g. beam steering
- G10K11/341—Circuits therefor
- G10K11/346—Circuits therefor using phase variation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/18—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/40—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/04—Circuits for transducers, loudspeakers or microphones for correcting frequency response
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L2021/02082—Noise filtering the noise being echo, reverberation of the speech
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L2021/02161—Number of inputs available containing the signal or the noise to be suppressed
- G10L2021/02166—Microphone arrays; Beamforming
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/03—Synergistic effects of band splitting and sub-band processing
Definitions
- The present invention pertains, among other things, to systems, methods and techniques for audio-signal processing and is relevant, e.g., to systems and techniques that process multiple different frequency bands within each of multiple different audio-signal channels, and particularly to systems and techniques that attempt to isolate one sound from multiple different sounds that might be present, using such processing.
- One such purpose is to remove "echo" and ambient interference signals, or "noise", from one or multiple input audio channels, in order to isolate the sound that would be present in the absence of such signals.
- With the growing use of smart-speaker devices, such as the Amazon Echo™ device, far-field voice-signal isolation and processing have become more important.
- Such devices typically include one or more microphones, for receiving spoken input from a user. They also include one or more speakers (1) for responding to, and/or providing information requested by, the user, using text-to-speech (TTS) processing, and/or (2) for playing other audio content, such as music.
- The audio signal received at the device's microphones typically contains some version of such other played audio content, in addition to the user's voice.
- Echo cancellation (EC), i.e., removal, or at least reduction, of the portion of the received audio signal resulting from the played content, can significantly improve subsequent keyword-activation (KA) and automatic-speech-recognition (ASR) processing.
- Beamforming also can significantly improve KA and ASR performance, particularly in the presence of room reverberation and environmental noise.
- An exemplary conventional system 10 is illustrated in FIG. 1.
- Sound is received at multiple microphones 12 (e.g., microphones 12A-C). Each resulting audio signal (typically after analog-to-digital conversion, not shown) is then decomposed into separate frequency bands using a corresponding analysis/decomposition module 14 (e.g., one of modules 14A-C).
- A reference signal 15 (typically a digital signal corresponding to what is being played through the device's speaker(s)) similarly is decomposed into separate frequency bands using an analysis/decomposition module 14 (module 14D in FIG. 1).
- Each such decomposed input audio signal (from a given microphone) is then processed together with the decomposed reference signal in a separate corresponding echo-cancellation module 18 (e.g., one of modules 18 A-C).
- For each sub-band, a separate beamformer module 20 (e.g., one of modules 20A-C) processes the output for that sub-band from all of the echo-cancellation modules 18.
- The individual frequency bands output by the corresponding individual beamformer modules 20 are then resynthesized by sub-band resynthesis module 24 to provide a final output signal 25.
- The signal from microphone i is denoted herein as x_i(t), and the echo reference signal as r(t). Both x_i(t) and r(t) are processed by the sub-band analysis/decomposition modules 14, which processing typically includes D-times down-sampling.
- Each microphone's echo cancellation is done independently in a separate echo-cancellation module 18 (e.g., one of modules 18A-C).
- Each such echo-cancellation module 18 typically includes M sub-band EC submodules (not shown).
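For concreteness, the adaptive filtering inside one such sub-band EC submodule can be sketched as follows. The patent text does not mandate a particular adaptation rule, so this sketch uses a normalized-LMS (NLMS) update, and all names, rates and parameter values are illustrative assumptions:

```python
import math

def nlms_echo_canceller(mic, ref, taps=8, mu=0.5, eps=1e-8):
    """One sub-band echo-cancellation submodule, sketched as an NLMS
    adaptive filter: it models the echo path from the reference signal
    to the microphone and subtracts the modeled echo.  Illustrative
    only; the patent does not prescribe this particular update rule."""
    w = [0.0] * taps          # echo-path estimate
    buf = [0.0] * taps        # most recent reference samples
    out = []
    for n in range(len(mic)):
        buf = [ref[n]] + buf[:-1]
        est = sum(wk * xk for wk, xk in zip(w, buf))   # modeled echo
        e = mic[n] - est                               # echo-cancelled output
        norm = eps + sum(xk * xk for xk in buf)
        w = [wk + mu * e * xk / norm for wk, xk in zip(w, buf)]
        out.append(e)
    return out

# Pure echo, no near-end talker: the residual should decay toward zero.
ref = [math.sin(0.3 * n) + 0.5 * math.sin(0.11 * n) for n in range(4000)]
mic = [0.6 * ref[n - 2] if n >= 2 else 0.0 for n in range(4000)]
residual = nlms_echo_canceller(mic, ref)
```

Because the true echo path (a delayed, scaled copy of the reference) lies within the filter's 8-tap span, the residual converges toward zero; with a near-end talker present, the talker's signal would remain in the residual.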
- The beamforming 20 is done in each sub-band independently; that is, each beamformer module 20 processes a different sub-band across all the EC-processed microphone signals.
- Each sub-band's beamforming can be done as if in the time domain, i.e., filter-and-sum.
- Another option is to first conduct a Fast Fourier Transform (FFT) analysis in each sub-band and then do beamforming in each bin, followed by inverse Fast Fourier Transform (iFFT) processing, so that a sub-band signal stream is again obtained.
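The filter-and-sum idea, in its simplest single-tap (phase-only) form, can be sketched on complex sub-band samples as follows; the function name, delays and the choice of a pure delay-and-sum beamformer are illustrative assumptions, not the patent's prescribed method:

```python
import cmath

def delay_and_sum(channels, delays, omega):
    """Single-tap filter-and-sum beamformer for one sub-band: each
    channel is multiplied by exp(+j*omega*delay) to undo its propagation
    delay at the sub-band's (angular) center frequency, and the
    phase-aligned channels are then averaged.  Illustrative sketch."""
    out = []
    for frame in zip(*channels):
        acc = 0j
        for sample, tau in zip(frame, delays):
            acc += sample * cmath.exp(1j * omega * tau)
        out.append(acc / len(channels))
    return out

# A plane wave from the steering direction arrives delayed by tau_i
# on channel i; after alignment the channels add coherently.
omega = 0.7
delays = [0.0, 1.3, 2.6]
t = range(50)
channels = [[cmath.exp(1j * omega * (k - tau)) for k in t] for tau in delays]
aligned = delay_and_sum(channels, delays, omega)
```

For the steered direction the output magnitude equals the per-channel magnitude (coherent gain), while signals from other directions combine with mismatched phases and are attenuated.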
- The present inventors have discovered that the down-sampling within the sub-band analysis/decomposition modules 14 often will introduce frequency aliasing in some or all of the sub-bands. Such aliasing can cause significant performance degradation in the beamformer 20 because, in the overlapped frequencies, both phase and magnitude information are disturbed.
- The present invention addresses this problem by, among other things, providing a new sub-band analysis/decomposition structure that can reduce frequency aliasing, often with moderate to no increase in computational complexity.
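The aliasing mechanism itself is easy to exhibit numerically: after D-times down-sampling without adequate band-limiting, two tones whose frequencies differ by 2π/D become sample-for-sample indistinguishable, so their phase and magnitude contributions overlap. A minimal check (illustrative, not from the patent):

```python
import cmath

D = 4   # down-sampling factor
N = 64  # full-rate length

# Two complex tones whose angular frequencies differ by exactly 2*pi/D.
w = 0.3
tone_lo = [cmath.exp(1j * w * n) for n in range(N)]
tone_hi = [cmath.exp(1j * (w + 2 * cmath.pi / D) * n) for n in range(N)]

# Keeping every D-th sample makes them identical, since
# exp(j*(w + 2*pi/D)*k*D) = exp(j*w*k*D) * exp(j*2*pi*k).
alias_lo = tone_lo[::D]
alias_hi = tone_hi[::D]
max_diff = max(abs(a - b) for a, b in zip(alias_lo, alias_hi))
```

After decimation, no processing can tell the two tones apart, which is why leakage from a neighboring band corrupts the beamformer's phase information.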
- One embodiment of the invention is directed to an audio-signal-processing system which includes Hilbert-Transformation (HT) sub-band analysis/decomposition modules, each including (a) a Hilbert Transformation module having an input and an output that provides a Hilbert Transformed version of a signal at the input of the Hilbert Transformation module; and (b) an analysis/decomposition filter bank having (i) an input coupled to the output of the Hilbert Transformation module and (ii) a number of outputs, each providing a different frequency sub-band for a signal provided at the input of the analysis/decomposition filter bank.
- the system also includes echo-cancellation modules, each having (i) a first set of sub-band inputs coupled to corresponding sub-band outputs of a different one of the HT sub-band analysis/decomposition modules, (ii) a second set of sub-band inputs coupled to corresponding sub-band outputs of a common one of the HT sub-band analysis/decomposition modules, and (iii) outputs that provide such sub-bands after echo-cancellation processing.
- The system also includes beamforming modules; each of the inputs of such a beamforming module is coupled to the same sub-band output from a different one of the echo-cancellation modules, and the output of such beamforming module provides that sub-band after beamforming.
- A resynthesis stage has inputs coupled to the different sub-band outputs of the different beamforming modules and resynthesizes such different sub-band outputs in order to provide a system output signal.
- Another embodiment is directed to an audio-signal-processing system which includes two HT sub-band analysis/decomposition modules, each including (a) a Hilbert Transformation module having an input and an output that provides a Hilbert Transformed version of a signal at the input of the Hilbert Transformation module; and (b) an analysis/decomposition filter bank having (i) an input coupled to the output of the Hilbert Transformation module and (ii) a number of outputs, each providing a different frequency sub-band for a signal provided at the input of the analysis/decomposition filter bank.
- The first one of the HT sub-band analysis/decomposition modules inputs an audio signal (e.g., from a microphone) and the second one inputs an echo reference signal.
- An echo-cancellation module includes (i) a first set of sub-band inputs coupled to the sub-band outputs of the first HT sub-band analysis/decomposition module, (ii) a second set of sub-band inputs coupled to corresponding sub-band outputs of the second HT sub-band analysis/decomposition module, and (iii) outputs that provide such sub-bands after echo-cancellation processing.
- A resynthesis stage has inputs coupled to the different sub-band outputs of the echo-cancellation module and resynthesizes such different sub-band outputs in order to provide a system output signal.
- FIG. 1 is a block diagram of a conventional multichannel subband-based audio signal processing system.
- FIG. 2 is a block diagram of an HT sub-band analysis/decomposition module according to a representative embodiment of the present invention.
- FIG. 3 shows the frequency response of a Hilbert Transformation module.
- FIG. 4 shows a simplified version of the frequency spectra of the sub-band signals produced by a filter bank.
- FIG. 5 shows a simplified version of the frequency spectra of the sub-band signals after frequency shifting.
- FIG. 6 shows a simplified version of the frequency spectra of the sub-band signals after down-sampling.
- FIG. 7 is a block diagram of a system according to the present invention that includes Hilbert-Transformation sub-band analysis/decomposition modules.
- FIG. 8 is a block diagram of the resynthesis stage of the system shown in FIG. 7 .
- FIG. 9 shows a simplified version of the frequency spectrum of a sub-band signal after shifting to a center frequency of 0.
- FIG. 10 is a block diagram illustrating an alternate structure for a Hilbert Transformation sub-band analysis/decomposition module according to the present invention.
- FIG. 11 is a block diagram of a system that includes the alternate Hilbert-Transformation sub-band analysis/decomposition modules.
- Herein, references to time can encompass either continuous or sampled time; e.g., the notation ƒ(t) should be construed to mean that the indicated function ƒ is in the time domain, which could be continuous or sampled time.
- Unless otherwise noted, the current preference for a particular step, component, operation or function in the described embodiment is indicated by the context or by other portions of the description, and no loss of generality is intended. That is, for example, even when a particular description indicates that a signal includes, or processing operates on, discrete time samples, in alternate embodiments the signal or processing, as applicable, is in continuous time, and vice versa.
- FIG. 2 illustrates the structure of a HT sub-band analysis/decomposition module 100 according to an initial representative embodiment of the present invention.
- Sub-band analysis/decomposition modules 100 can replace the analysis/decomposition modules 14 shown in FIG. 1 , allowing changes to other components of the system 10 , e.g., as discussed in greater detail below.
- An input signal x(t) is provided on the input line 102 of the Hilbert Transformation module 105, which performs the Hilbert Transformation on input signal x(t) and thereby removes the negative frequency components from it.
- The output x̃(t) of the Hilbert Transformation module 105 is a complex signal (having real and imaginary, or in-phase and quadrature, components).
- FIG. 3 shows the frequency response of the Hilbert Transformation module 105 .
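The negative-frequency removal can be illustrated with a discrete frequency-domain Hilbert transformer, a standard construction shown here as a sketch (the patent does not specify this particular implementation, and the O(N²) DFT is used only to keep the example self-contained):

```python
import math
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def analytic_signal(x):
    """Zero the negative-frequency bins (doubling the positive ones,
    and keeping bins 0 and N/2 unscaled) to obtain the complex analytic
    signal, whose real part equals the original real signal."""
    N = len(x)
    X = dft(x)
    gain = [1.0] + [2.0] * (N // 2 - 1) + [1.0] + [0.0] * (N // 2 - 1)
    return idft([g * Xk for g, Xk in zip(gain, X)])

# A real cosine becomes the corresponding complex exponential.
N = 64
x = [math.cos(2 * math.pi * 5 * n / N) for n in range(N)]
a = analytic_signal(x)
```

The output has only positive-frequency content, matching the one-sided frequency response of FIG. 3, and its real part reproduces the input exactly.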
- The output of the Hilbert Transformation module 105 is coupled to the input of analysis/decomposition filter bank 110, which preferably includes a set of M individual bandpass filters (e.g., filters 110A-C).
- These bandpass filters can be implemented, e.g., as conventional Quadrature Mirror Filters (QMFs) with contiguous frequency passband responses, as described in P. P. Vaidyanathan (1993), "Multirate Systems And Filter Banks", Dorling Kindersley, ISBN-13: 978-013605718, i.e., using a filter bank of the kind conventionally used for the present purposes.
- The module 105 output signal x̃(t) (with or without any additional intermediate processing) is then processed by the analysis/decomposition filter bank 110.
- The frequency spectra of the resulting sub-band signals x̃_m(t) are shown conceptually in FIG. 4 (e.g., with simplified roll-offs).
- Although all the M sub-bands (i.e., the bands of the individual bandpass filters) nominally are contiguous, each sub-band has leakage into its two neighboring bands, which is the root cause of the frequency aliasing mentioned in the Summary of the Invention section, above, and which causes problems, e.g., in beamforming.
- Each of the outputs of the analysis/decomposition filter bank 110 (i.e., each x̃_m(t)) is coupled to a frequency-shifting module 112 (e.g., one of modules 112A-C); each such module 112 applies a frequency shift, implemented as multiplication by a complex exponential, producing output x_m(t).
- The frequency spectra of the x_m(t) now appear as shown (again, in simplified form) in FIG. 5.
- The output of each frequency-shifting module 112 is coupled to the input of a down-sampling module 114, which preferably performs M/2-times down-sampling (e.g., using decimation, averaging or any other conventional technique), thereby providing output signals x_m^(M/2)(t).
- The frequency spectra of such output signals x_m^(M/2)(t) are shown (again, in simplified form) in FIG. 6.
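The shift-then-decimate chain for one sub-band can be sketched as below; because the analytic sub-band signal occupies only positive frequencies within one sub-band's width, shifting it down first keeps its spectrum inside the reduced Nyquist range. The specific shift frequency and factor here are illustrative assumptions:

```python
import cmath

def shift_and_downsample(x, omega_shift, D):
    """Frequency-shifting module followed by D-times down-sampling
    (cf. modules 112 and 114): multiply by exp(-j*omega_shift*t),
    then keep every D-th sample.  Illustrative sketch."""
    shifted = [s * cmath.exp(-1j * omega_shift * t) for t, s in enumerate(x)]
    return shifted[::D]

# A tone slightly above the shift frequency lands, un-aliased, at the
# expected low frequency (scaled by D) after decimation.
omega_shift = 1.2
delta = 0.05
D = 8
x = [cmath.exp(1j * (omega_shift + delta) * t) for t in range(160)]
y = shift_and_downsample(x, omega_shift, D)
```

The output is a clean tone at angular frequency delta*D, i.e., the in-band content survives the rate reduction undistorted.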
- A system 200 that includes such Hilbert-Transformation sub-band analysis/decomposition modules 100 (e.g., modules 100A-D) is illustrated in FIG. 7.
- The audio signal from each of a plurality of microphones 12 (e.g., microphones 12A-C) is provided on the input line 102 (e.g., the corresponding one of input lines 102A-C) of a different Hilbert-Transformation sub-band analysis/decomposition module 100 (e.g., one of modules 100A-C).
- The input line 102D of one of the Hilbert-Transformation sub-band analysis/decomposition modules 100 is coupled to echo reference signal 15, which preferably represents, or at least corresponds to, an audio signal that is being output by the speaker(s) of a device of which system 200 also is a part.
- Each echo-cancellation module 218 (e.g., one of modules 218A-C) is coupled to the outputs of a microphone-signal-processing Hilbert-Transformation sub-band analysis/decomposition module 100 (e.g., one of modules 100A-C). That is, each such echo-cancellation module 218 preferably inputs the sub-band signals from a different one of the microphones 12 (following such Hilbert-Transformation sub-band analysis/decomposition and, optionally, any other desired processing).
- In addition, each such echo-cancellation module 218 is coupled to the outputs of a common Hilbert-Transformation sub-band analysis/decomposition module, e.g., module 100D that processes the echo reference signal 15.
- The signals u_m(t) output by modules 100A-D do not contain negative frequency components. Therefore, when such signals are EC-processed in modules 218, the negative-frequency response can be ignored; as a result, the EC transfer function of each such module 218 preferably is implemented using only real numbers. Otherwise, echo cancellation, as performed by modules 218, can be implemented, e.g., as discussed in commonly assigned U.S. patent application Ser. No. 15/704,235, which application is incorporated by reference herein as though set forth herein in full, or using a conventional EC approach.
- The sub-band outputs of the EC modules 218 are coupled to the inputs of beamformer modules 220 (e.g., modules 220A-C), with the same sub-band across all the EC modules 218 being input to the same beamformer module 220; i.e., each beamformer module 220 processes a particular sub-band received from all the EC modules 218, and all the beamformer modules 220 collectively process all of the corresponding sub-bands.
- For example, beamformer module 220A might process the sub-band-1 outputs from all the EC modules 218, while beamformer module 220B processes the sub-band-2 outputs and beamformer module 220C processes the sub-band-3 outputs.
- Beamforming preferably is performed only in the positive frequency range; otherwise, any conventional beamforming technique may be used.
- The currently preferred technique is the Minimum Variance Distortionless Response (MVDR) beamformer, as described in Van Trees, H. L. (2002), "Optimum Array Processing", Wiley, N.Y.
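As a reminder of the MVDR beamformer's standard form, the per-sub-band weight vector is w = R⁻¹d / (dᴴR⁻¹d), where R is the (noise or data) covariance matrix and d the steering vector. The sketch below writes out the two-channel case explicitly; the values and the toy covariance are illustrative, and practical arrays use more channels with estimated, regularized covariances:

```python
import cmath

def mvdr_weights_2ch(R, d):
    """MVDR weights w = R^{-1} d / (d^H R^{-1} d) for two channels,
    with the 2x2 matrix inverse written out explicitly.  Sketch only."""
    (a, b), (c, e) = R
    det = a * e - b * c
    Rinv_d = ((e * d[0] - b * d[1]) / det,     # R^{-1} d, first entry
              (-c * d[0] + a * d[1]) / det)    # R^{-1} d, second entry
    denom = d[0].conjugate() * Rinv_d[0] + d[1].conjugate() * Rinv_d[1]
    return (Rinv_d[0] / denom, Rinv_d[1] / denom)

# Steering vector for a plane wave with inter-microphone phase theta.
theta = 0.9
d = (1.0 + 0j, cmath.exp(-1j * theta))
R = ((1.0 + 0j, 0.2 + 0j), (0.2 + 0j, 1.0 + 0j))  # toy Hermitian covariance
w = mvdr_weights_2ch(R, d)
distortionless = w[0].conjugate() * d[0] + w[1].conjugate() * d[1]
```

The defining property is the distortionless constraint wᴴd = 1: the steered direction passes with unit gain while the output variance is minimized.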
- The outputs of the beamformer modules 220 are coupled to the resynthesis stage 222, which includes individual sub-band resynthesis modules (e.g., modules 224A-C) and adder 225.
- An exemplary embodiment of the resynthesis stage 222 is shown in greater detail in FIG. 8 .
- The present discussion primarily refers to just one of the resynthesis modules, module 224A. However, the discussion also is generalized (e.g., by referring to sub-band m) in order to apply to any of the M resynthesis modules (e.g., modules 224A-C), processing any of the corresponding M sub-bands.
- Within each resynthesis module, v_m(t) denotes the output of the frequency shifter 231.
- The output of frequency shifter 231 is coupled to the input of up-sampler 232, in which v_m(t) preferably is up-sampled by the same factor as the previously performed down-sampling (i.e., M/2 times in the current embodiment), e.g., by inserting zeros.
- The output of up-sampler 232 is coupled to the input of lowpass filter (LPF) 233, which has a cutoff frequency above the spectrum of the original signal but below the spectra of the M/2 images, thereby filtering out such images.
- The coefficients of LPF 233 preferably are entirely real-valued, and its transition band preferably is within the range (π/M, 3π/M). Hence, if LPF 233 is implemented as a finite impulse response (FIR) filter, it can be much shorter than the prototype filter for the filter bank.
- In module 235, the imaginary (or quadrature) part of ṽ_m(t) is discarded, and only the real (or in-phase) part of the signal is retained; that is, the output of module 235 preferably is Re{ṽ_m(t)}.
- The output of module 235 is coupled to resynthesis filter 236, which can be implemented as a conventional resynthesis filter, e.g., a QMF.
- The outputs of the resynthesis filters 236 are coupled to the inputs of adder 225, which sums or combines its input signals to produce a final output signal 250 (y(t)).
- Use of the Hilbert Transformation module 105 often can provide significant processing advantages over conventional systems.
- The Hilbert Transformation can be implemented as a FIR or as an infinite impulse response (IIR) filter. If it is implemented as a FIR filter, then the real part of its impulse response is just a delta function (i.e., a single tap).
- Although the Hilbert Transformation converts a real signal to a complex signal, in terms of the present implementation it can be as computationally complex as a real-to-real FIR filter with the same, or even half, the filter length.
- An alternate embodiment of the present invention includes a modification to the frequency-shifting module 112, described above, so as to instead perform the complex-exponential multiplication only once every M/2 samples.
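The saving in this alternate arrangement rests on a simple identity: samples that the down-sampler will discard need never be multiplied, so shifting at the full rate and then decimating yields exactly the same samples as multiplying only the surviving (every M/2-th) samples by the exponential evaluated at their original time indices. A numerical check of that identity (illustrative names and values):

```python
import cmath

def shift_full_rate(x, omega, D):
    """Multiply every sample by the shifting exponential, then decimate."""
    shifted = [s * cmath.exp(-1j * omega * t) for t, s in enumerate(x)]
    return shifted[::D]

def shift_reduced_rate(x, omega, D):
    """Decimate first, then multiply only the surviving samples by the
    exponential evaluated at their original time indices k*D."""
    return [s * cmath.exp(-1j * omega * k * D)
            for k, s in enumerate(x[::D])]

# Arbitrary complex test data; both paths give identical outputs,
# but the second performs 1/D as many multiplications.
omega, D = 0.37, 6
x = [complex(((7 * t) % 13) - 6, ((5 * t) % 11) - 5) for t in range(120)]
a = shift_full_rate(x, omega, D)
b = shift_reduced_rate(x, omega, D)
```

With D = M/2 this is the multiplication-every-M/2-samples scheme described above: identical output, a fraction of the arithmetic.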
- Modules 100′ typically will be much faster than modules 100. Therefore, in a more-preferred embodiment, the modules 100 shown in FIG. 7 and referenced in the discussion pertaining to it are replaced with modules 100′ (e.g., modules 100A′-D′), as shown in FIG. 11. Otherwise, system 200′ is identical to system 200.
- Like module 100, module 100′ also includes a Hilbert Transformation module 105 (described above) with an input coupled to the input signal x(t).
- The real (or in-phase) and imaginary (or quadrature) outputs of module 105 are coupled to separate analysis-and-M/2-down-sampling filter banks 310, each of which preferably is implemented, e.g., as a conventional analysis/decomposition/down-sampling filter bank in which down-sampling is performed simultaneously with filtering, e.g., using a QMF.
- The outputs of filter banks 310 are then coupled to the inputs of frequency-shifting module 312, which multiplies each sub-sampled complex-valued input by the appropriate complex-exponential factor.
- The systems shown in FIGS. 7 and 11 input audio signals from multiple microphones 12.
- In alternate embodiments, only a single microphone 12 is utilized, in which case only a single microphone HT sub-band analysis/decomposition module 100 or 100′ (along with another HT sub-band analysis/decomposition module 100 or 100′ for the echo reference signal 15) is provided.
- In such embodiments, only a single echo-cancellation module 218 is provided, and its output is coupled to the resynthesis stage 222 without any intervening beamforming module(s) 220.
- Such devices typically will include, for example, at least some of the following components coupled to each other, e.g., via a common bus: (1) one or more central processing units (CPUs); (2) read-only memory (ROM); (3) random access memory (RAM); (4) other integrated or attached storage devices; (5) input/output software and circuitry for interfacing with other devices (e.g., using a hardwired connection, such as a serial port, a parallel port, a USB connection or a FireWire connection, or using a wireless protocol, such as radio-frequency identification (RFID), any other near-field communication (NFC) protocol, Bluetooth or an 802.11 protocol); (6) software and circuitry for connecting to one or more networks, e.g., using a hardwired connection such as an Ethernet
- The process steps to implement the above methods and functionality typically initially are stored in mass storage (e.g., a hard disk or solid-state drive), are downloaded into RAM, and then are executed by the CPU out of RAM.
- In other embodiments, the process steps initially are stored in RAM or ROM and/or are directly executed out of mass storage.
- Suitable general-purpose programmable devices for use in implementing the present invention may be obtained from various vendors.
- Different types of devices are used depending upon the size and complexity of the tasks.
- Such devices can include, e.g., mainframe computers, multiprocessor computers, one or more server boxes, workstations, personal (e.g., desktop, laptop, tablet or slate) computers and/or even smaller computers, such as personal digital assistants (PDAs), wireless telephones (e.g., smartphones) or any other programmable appliance or device, whether stand-alone, hard-wired into a network or wirelessly connected to a network.
- Any of the functionality described above can be implemented by a general-purpose processor executing software and/or firmware, by dedicated (e.g., logic-based) hardware, or any combination of these approaches, with the particular implementation being selected based on known engineering tradeoffs.
- Where any process and/or functionality described above is implemented in a fixed, predetermined and/or logical manner, it can be accomplished by a processor executing programming (e.g., software or firmware), an appropriate arrangement of logic components (hardware), or any combination of the two, as will be readily appreciated by those skilled in the art; compilers typically are available for both kinds of conversions.
- The present invention also relates to machine-readable tangible (or non-transitory) media on which are stored software or firmware program instructions (i.e., computer-executable process instructions) for performing the methods and functionality and/or for implementing the modules and components of this invention.
- Such media include, by way of example, magnetic disks, magnetic tape, optically readable media such as CDs and DVDs, or semiconductor memory such as various types of memory cards, USB flash memory devices, solid-state drives, etc.
- The medium may take the form of a portable item such as a miniature disk drive or a small disk, diskette, cassette, cartridge, card, stick, etc., or it may take the form of a relatively larger or less-mobile item such as a hard disk drive, ROM or RAM provided in a computer or other device.
- References herein to computer-executable process steps stored on a computer-readable or machine-readable medium are intended to encompass situations in which such process steps are stored on a single medium, as well as situations in which such process steps are stored across multiple media.
- a server generally can (and often will) be implemented using a single device or a cluster of server devices (either local or geographically dispersed), e.g., with appropriate load balancing.
- a server device and a client device often will cooperate in executing the process steps of a complete method, e.g., with each such device having its own storage device(s) storing a portion of such process steps and its own processor(s) executing those process steps.
- the term “coupled”, or any other form of the word is intended to mean either directly connected or connected through one or more other elements or processing blocks, e.g., for the purpose of preprocessing.
- the drawings and/or the discussions of them where individual steps, modules or processing blocks are shown and/or discussed as being directly connected to each other, such connections should be understood as couplings, which may include additional steps, modules, elements and/or processing blocks.
- references to a signal herein mean any processed or unprocessed version of the signal. That is, specific processing steps discussed and/or claimed herein are not intended to be exclusive; rather, intermediate processing may be performed between any two processing steps expressly discussed or claimed herein.
- As used herein, the term “attached”, or any other form of the word, without further modification, is intended to mean directly attached, attached through one or more other intermediate elements or components, or integrally formed together.
- Where two components are described or shown as being attached to each other, such attachments should be understood as being merely exemplary; in alternate embodiments, the attachment instead may include additional components or elements between the two.
- Method steps discussed and/or claimed herein are not intended to be exclusive; rather, intermediate steps may be performed between any two steps expressly discussed or claimed herein.
- Any criterion or condition can include any combination (e.g., Boolean combination) of actions, events and/or occurrences (i.e., a multi-part criterion or condition).
- Functionality sometimes is ascribed to a particular module or component. However, functionality generally may be redistributed as desired among any different modules or components, in some cases completely obviating the need for a particular component or module and/or requiring the addition of new components or modules.
- The precise distribution of functionality preferably is made according to known engineering tradeoffs, with reference to the specific embodiment of the invention, as will be understood by those skilled in the art.
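The multi-part criterion mentioned above can be illustrated with a minimal sketch. All predicate names and thresholds below are invented for illustration; they are not from the patent.

```python
# Illustrative only: a multi-part criterion built as a Boolean combination of
# hypothetical event predicates (names and thresholds are invented).

def voice_detected(frame):
    # Hypothetical event: peak amplitude in the frame exceeds a threshold.
    return max(abs(s) for s in frame) > 0.1

def echo_suppressed(level_db):
    # Hypothetical occurrence: residual echo level is below -30 dB.
    return level_db < -30.0

def multi_part_criterion(frame, level_db, user_enabled):
    # Boolean combination (AND/OR) of actions, events and/or occurrences.
    return user_enabled and (voice_detected(frame) or echo_suppressed(level_db))

print(multi_part_criterion([0.0, 0.2, -0.05], -40.0, True))  # True
```

Any of the sub-conditions could itself be another Boolean combination, which is all the quoted language requires.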
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Otolaryngology (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Quality & Reliability (AREA)
- General Health & Medical Sciences (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
Description
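The Description refers to sub-band analysis that yields sub-sampled, frequency-shifted sub-band signals. As a generic textbook illustration (a single branch of a modulated analysis filter bank, not the patented structure; all names and parameters are invented), the decomposition, filtering and sub-sampling steps can be sketched as:

```python
import cmath

def subband_analyze(x, m, M, fir, decim):
    """Generic sketch of one analysis branch: frequency-shift sub-band m of M
    down to baseband, lowpass-filter, then sub-sample by `decim`.
    Illustrative only; not the patented filter-bank structure."""
    # Frequency shift: multiply by exp(-j*2*pi*m*n/M) to center sub-band m at DC.
    shifted = [x[n] * cmath.exp(-2j * cmath.pi * m * n / M) for n in range(len(x))]
    # Lowpass filtering (direct-form FIR convolution).
    filtered = []
    for n in range(len(shifted)):
        acc = 0j
        for k, h in enumerate(fir):
            if n - k >= 0:
                acc += h * shifted[n - k]
        filtered.append(acc)
    # Sub-sampling: keep every decim-th output sample.
    return filtered[::decim]
```

In an efficient implementation the M branches would share work via a polyphase structure (see the cited Vaidyanathan reference); the loop above only makes the three conceptual steps explicit.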
Claims (16)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/725,217 US10325583B2 (en) | 2017-10-04 | 2017-10-04 | Multichannel sub-band audio-signal processing using beamforming and echo cancellation |
| CN201811166437.7A CN109616134B (en) | 2017-10-04 | 2018-10-08 | Multi-channel subband processing |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/725,217 US10325583B2 (en) | 2017-10-04 | 2017-10-04 | Multichannel sub-band audio-signal processing using beamforming and echo cancellation |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20190103088A1 US20190103088A1 (en) | 2019-04-04 |
| US10325583B2 true US10325583B2 (en) | 2019-06-18 |
Family
ID=65896181
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/725,217 Active 2037-11-07 US10325583B2 (en) | 2017-10-04 | 2017-10-04 | Multichannel sub-band audio-signal processing using beamforming and echo cancellation |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US10325583B2 (en) |
| CN (1) | CN109616134B (en) |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11665482B2 (en) | 2011-12-23 | 2023-05-30 | Shenzhen Shokz Co., Ltd. | Bone conduction speaker and compound vibration device thereof |
| EP3834200A4 (en) * | 2018-09-12 | 2021-08-25 | Shenzhen Voxtech Co., Ltd. | SIGNAL PROCESSING DEVICE INCLUDING MULTIPLE ELECTROACOUSTIC TRANSDUCERS |
| CN110907933B (en) * | 2019-11-26 | 2022-12-27 | 西安空间无线电技术研究所 | Distributed-based synthetic aperture correlation processing system and method |
| CN111615035B (en) * | 2020-05-22 | 2021-05-14 | 歌尔科技有限公司 | Beam forming method, device, equipment and storage medium |
| CN111726464B (en) * | 2020-06-29 | 2021-04-20 | 珠海全志科技股份有限公司 | Multichannel echo filtering method, filtering device and readable storage medium |
| WO2022173706A1 (en) | 2021-02-09 | 2022-08-18 | Dolby Laboratories Licensing Corporation | Echo reference prioritization and selection |
| CN115620736A (en) * | 2021-07-16 | 2023-01-17 | 腾讯科技(深圳)有限公司 | Audio sharing method and device, computer readable storage medium and electronic equipment |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20110302230A1 (en) * | 2009-02-18 | 2011-12-08 | Dolby International Ab | Low delay modulated filter bank |
| US20120243698A1 * | 2011-03-22 | 2012-09-27 | MH Acoustics, LLC | Dynamic Beamformer Processing for Acoustic Echo Cancellation in Systems with High Acoustic Coupling |
| US8660274B2 (en) * | 2008-07-16 | 2014-02-25 | Nuance Communications, Inc. | Beamforming pre-processing for speaker localization |
| US20170127181A1 (en) * | 2015-10-30 | 2017-05-04 | Guoguang Electric Company Limited | Addition of Virtual Bass in the Frequency Domain |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2026597B1 (en) * | 2007-08-13 | 2009-11-11 | Harman Becker Automotive Systems GmbH | Noise reduction by combined beamforming and post-filtering |
| CN102347028A (en) * | 2011-07-14 | 2012-02-08 | 瑞声声学科技(深圳)有限公司 | Double-microphone speech enhancer and speech enhancement method thereof |
- 2017
  - 2017-10-04 US US15/725,217 patent/US10325583B2/en active Active
- 2018
  - 2018-10-08 CN CN201811166437.7A patent/CN109616134B/en active Active
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8660274B2 (en) * | 2008-07-16 | 2014-02-25 | Nuance Communications, Inc. | Beamforming pre-processing for speaker localization |
| US20110302230A1 (en) * | 2009-02-18 | 2011-12-08 | Dolby International Ab | Low delay modulated filter bank |
| US20120243698A1 * | 2011-03-22 | 2012-09-27 | MH Acoustics, LLC | Dynamic Beamformer Processing for Acoustic Echo Cancellation in Systems with High Acoustic Coupling |
| US20170127181A1 (en) * | 2015-10-30 | 2017-05-04 | Guoguang Electric Company Limited | Addition of Virtual Bass in the Frequency Domain |
Non-Patent Citations (2)
| Title |
|---|
| P.P. Vaidyanathan (1993) "Multirate Systems and Filter Banks", Dorling Kindersley, ISBN-13: 978-013605718, pp. 353-391. |
| Van Trees, H. L. (2002) "Optimum Array Processing", Wiley, NY, pp. 428-452. |
Also Published As
| Publication number | Publication date |
|---|---|
| CN109616134B (en) | 2020-11-03 |
| US20190103088A1 (en) | 2019-04-04 |
| CN109616134A (en) | 2019-04-12 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10325583B2 (en) | Multichannel sub-band audio-signal processing using beamforming and echo cancellation | |
| CN102576538B (en) | A method and an apparatus for processing an audio signal | |
| US9794688B2 (en) | Addition of virtual bass in the frequency domain | |
| EP1879293A2 (en) | Partitioned fast convolution in the time and frequency domain | |
| US11956608B2 (en) | System and method for adjusting audio parameters for a user | |
| US10405094B2 (en) | Addition of virtual bass | |
| EP3591993B1 (en) | Addition of virtual bass | |
| CN102576537B (en) | Method and apparatus for processing audio signals | |
| US20200152220A1 (en) | Echo cancellation for keyword spotting | |
| CN109215675B (en) | Howling suppression method, device and equipment | |
| WO2023079456A1 (en) | Audio processing device and method for suppressing noise | |
| CN101106384A (en) | Partitioned fast convolution in the time and frequency domain | |
| CN109509481B (en) | Audio signal echo reduction | |
| US10893362B2 (en) | Addition of virtual bass | |
| Lüke et al. | In-car communication | |
| CN115862650A (en) | Noise reduction method and training method, device, equipment and chip realized by neural network | |
| US12108227B2 (en) | System and method for adjusting audio parameters for a user |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: GUOGUANG ELECTRIC COMPANY LIMITED, CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YOU, YULI;ZHENG, JIMENG;REEL/FRAME:044166/0979 Effective date: 20171004 |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |