EP2437517B1 - Sound scene manipulation - Google Patents

Sound scene manipulation

Info

Publication number
EP2437517B1
EP2437517B1 (application EP10275102.1A)
Authority
EP
European Patent Office
Prior art keywords
signal
audio
factors
auxiliary signal
signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP10275102.1A
Other languages
German (de)
French (fr)
Other versions
EP2437517A1 (en)
Inventor
Toon Van Waterschoot
Wouter Tirry
Marc Moonen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NXP BV
Original Assignee
NXP BV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NXP BV
Priority to EP10275102.1A
Priority to US13/248,805 (published as US20120082322A1)
Priority to CN2011103036497A (published as CN102447993A)
Publication of EP2437517A1
Application granted
Publication of EP2437517B1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/005Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0272Voice signal separating
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L21/0216Noise filtering characterised by the method used for estimating noise
    • G10L2021/02161Number of inputs available containing the signal or the noise to be suppressed
    • G10L2021/02166Microphone arrays; Beamforming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/20Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic

Definitions

  • each of the N audio signals $U_n(\omega)$ detected at the microphones can be expressed as a function of the localized sound sources and the diffuse sound field in the frequency domain, where $U_n^{(0)}(\omega)$ denotes the diffuse noise component.
  • the aim of the envisaged sound scene manipulation is to produce N manipulated signals, or audio output signals, $\zeta_n(t)$, in which each of the levels of the individual sound source components is changed in a user-specified way as compared to the respective levels in the nth microphone signal.
  • these will be referred to as the "desired gain factors”.
  • the reweighting factors can be calculated or estimated depending on the embodiment of the invention, as described in greater detail below.
  • equation (7) above is a model for the auxiliary signals which will usually be satisfied only approximately, in practice.
  • the auxiliary signals will be derived from the various microphone signals. Therefore, they will be composed of filtered versions of the sound source components, instead of the unfiltered ("dry") sound source components themselves suggested by equation (7).
  • the number of localized interfering sound sources is taken to be one, for the purposes of this explanation. Furthermore, in this example, it is assumed that the capture device is equipped with two or more microphones. Those skilled in the art will appreciate that none of these assumptions should be taken to limit the scope of the invention.
  • Both embodiments have the general structure shown in the block diagram of Fig. 1 .
  • An array of microphones 4 produces a corresponding plurality of audio signals 6. These are fed as input to an auxiliary signal generator 10.
  • the auxiliary signal generator generates auxiliary signals, each comprising a mixture of the same sound source components detected by the microphones 4, but with the components present in the mixture with different relative strengths (as compared with their levels in the original audio signals 6). In the embodiments described below, these auxiliary signals are derived by processing combinations of the audio signals 6 in various ways.
  • the auxiliary signals and the input audio signals 6 are fed as inputs to an audio synthesis unit 20. This unit 20 applies scaling coefficients to the signals and sums them, to produce output signals 40. In the output signals 40, the sound source components are present with desired strengths.
  • the scaling coefficient calculator 30 converts the desired gains {g(t)} into a set of scaling coefficients {a(t)}.
  • Each of the desired gains is associated with a sound source detectable at the microphones 4; whereas each of the scaling coefficients is associated with one of the auxiliary signals.
  • the scaling coefficient calculator 30 exploits knowledge about the parameters of the auxiliary signals to transform from desired gains {g(t)} to suitable scaling coefficients {a(t)}.
  • Fig. 2 shows a block structure for the calculation of the auxiliary signals x n (t), y n (t), and z n (t) required in the algorithm.
  • the auxiliary signal generator 10 consists of three functional blocks 210, 212, 214:
  • the audio synthesis unit 20 is indicated by the dashed box 220. This produces the output signal $\zeta_0(t)$ as a weighted summation of the auxiliary signals $x_0$, $y_0$, and $z_0$, as well as the reference audio signal $u_0$.
  • the weights are the scaling coefficients, a, derived by the scaling coefficient calculator 30 (not shown in Fig. 2 ).
  • auxiliary signals are not explicitly used to calculate the output signal.
  • these signals are used internally in the adaptive beamformer and adaptive spectral attenuation algorithms.
  • the signals x n (t), n > 0 at the output of the fixed beamformer will be constructed to be "noise reference signals"; that is, signals in which the desired (front and optionally back) sound sources have been suppressed and which are used subsequently in the adaptive beamformer to estimate the localized interfering sound source component in the primary output signal x 0 (t) of the fixed beamformer.
  • the signal y 1 (t) is then constructed to be a "diffuse noise reference” that is used by the adaptive spectral attenuation algorithm to estimate the diffuse noise component in the primary output signal y 0 (t) of the fixed beamformer.
  • a stereo output signal should preferably not be created by calculating $\zeta_0(t)$ and $\zeta_1(t)$ using these auxiliary signals.
  • the block structure shown in Fig. 3 is used for the stereo case.
  • N > 2 (that is, when the array consists of more than two microphones)
  • u 0 (t) and u 1 (t) to be those two microphone signals that are best suited to deliver a stereo image.
  • this will typically depend on the placement of the microphones.
  • the scaling coefficient calculator 30 uses knowledge of the reweighting factors ⁇ n ( p,m ) to derive the scaling coefficients, a(t), from the desired gains, g(t).
  • the reweighting factors are found by using knowledge of the characteristics of the various blocks 210, 212, 214 in the auxiliary signal generator.
  • the reweighting factors are determined offline.
  • the input-output relation of the three functional blocks in the block structure can be described in the frequency domain as follows.
  • the secondary adaptive beamformer output signal should ideally be an estimate of the diffuse noise component in the primary adaptive beamformer output signal.
  • a more efficient approach involves setting the values of the reweighting factors off-line (in advance), making use of the fixed beamformer response (known a priori) and of heuristics about the behaviour of the adaptive beamformer and spectral attenuation response.
  • the values chosen can be approximations of the theoretical values predicted by the equations above. For example, the values may be set heuristically in 5dB steps. In many applications, the method will be largely insensitive to 5dB or 10dB deviations from the precise theoretical values.
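  • As an illustration of such an off-line choice, the following sketch (with hypothetical dB values and variable names, not values taken from the patent) converts heuristically chosen power ratios in 5 dB steps into the linear reweighting factors that make up the matrix used by the scaling coefficient calculator:

      import numpy as np

      def db_to_amplitude(db):
          """Convert a power ratio in dB to the corresponding amplitude factor
          (the reweighting factor is the square root of the power ratio)."""
          return 10.0 ** (db / 20.0)

      # Illustrative off-line reweighting table (dB): one row per source
      # component (front, back, interferer, diffuse) and one column per signal
      # (the reference microphone signal u0 and the auxiliary signals x0, y0, z0).
      reweighting_db = np.array([
          [0.0,   0.0,  -5.0,  -5.0],   # front source
          [0.0, -10.0, -10.0, -10.0],   # back source
          [0.0,  -5.0, -20.0, -20.0],   # localized interferer
          [0.0,  -5.0, -10.0, -25.0],   # diffuse noise
      ])
      gamma = db_to_amplitude(reweighting_db)   # linear reweighting matrix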
  • the fixed beamformer creates a primary output signal X 0 ( ⁇ ) that spatially enhances the front sound source signal, as well as a number of other output signals X n ( ⁇ ), n > 0 that serve as "noise references" for the adaptive beamformer.
  • X 0
  • BM blocking matrix
  • a superdirective (SD) design method which is recommendable when the aim is to maximize the directivity factor of the microphone array - that is, to maximize the array gain in the presence of a diffuse noise field.
  • SD superdirective
  • I N represents the N ⁇ N identity matrix
  • is a regularization parameter
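  • As a point of reference, a standard regularized superdirective design (a textbook form stated here as an assumption, not necessarily the exact formulation used in this patent) computes, per frequency, $$\mathbf{w}_{\mathrm{SD}}(\omega) = \frac{\left(\boldsymbol{\Gamma}_{\mathrm{diff}}(\omega) + \mu\,\mathbf{I}_N\right)^{-1}\mathbf{d}(\omega)}{\mathbf{d}^{H}(\omega)\left(\boldsymbol{\Gamma}_{\mathrm{diff}}(\omega) + \mu\,\mathbf{I}_N\right)^{-1}\mathbf{d}(\omega)}$$ where $\mathbf{d}(\omega)$ is the steering vector towards the front direction, $\boldsymbol{\Gamma}_{\mathrm{diff}}(\omega)$ is the diffuse noise coherence matrix, $\mathbf{I}_N$ is the $N \times N$ identity matrix and $\mu$ is the regularization parameter.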
  • The directivity factor (in dB) is evaluated as $$\mathrm{DF}_{\mathrm{dB}} = 10\log_{10}\,\frac{1}{2\pi}\int_{0}^{2\pi}\frac{\left|\mathbf{W}_{1,:}^{H}(\omega)\,\mathbf{G}(\omega,\theta_F)\right|^{2}}{\mathbf{W}_{1,:}^{H}(\omega)\,\boldsymbol{\Phi}_{U_N}(\omega)\,\mathbf{W}_{1,:}(\omega)}\,d\omega$$
  • and the front-to-back rejection ratio (in dB) as $$\mathrm{FBRR}_{\mathrm{dB}} = 10\log_{10}\,\frac{\int_{0}^{2\pi}\left|\mathbf{W}_{1,:}^{H}(\omega)\,\mathbf{G}(\omega,\theta_F)\right|^{2}d\omega}{\int_{0}^{2\pi}\left|\mathbf{W}_{1,:}^{H}(\omega)\,\mathbf{G}(\omega,\theta_B)\right|^{2}d\omega}.$$
  • the FBRR increases for higher filter lengths and approximately saturates for a length greater than or equal to 128.
  • the frequency-domain SD design is executed at L FSB /2 frequencies that are uniformly distributed in the Nyquist interval, after which the frequency-domain FSB coefficients are transformed to length-L FSB time-domain filters.
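  • A minimal sketch of that final step, assuming the design delivers one complex coefficient per microphone at each of the L_FSB/2 design frequencies (the exact transformation and any windowing used in practice are assumptions here):

      import numpy as np

      def freq_to_time_filters(w_freq, filter_length):
          """Convert frequency-domain beamformer coefficients to time-domain
          FIR filters (illustrative sketch only).

          w_freq        : (num_freqs, N) complex coefficients, one row per
                          design frequency uniformly spaced over the Nyquist
                          interval (num_freqs = filter_length // 2)
          filter_length : desired length L_FSB of each time-domain filter
          """
          num_freqs, n_mics = w_freq.shape
          filters = np.zeros((n_mics, filter_length))
          for n in range(n_mics):
              # irfft treats each column as a non-negative-frequency spectrum
              h = np.fft.irfft(w_freq[:, n], n=filter_length)
              # rotate so the impulse response becomes causal
              filters[n] = np.roll(h, filter_length // 2)
          return filters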
  • Experiments have also shown a significant performance gap between the 2-mic configuration and other configurations, with greater than 2 microphones, both in terms of directivity and FBRR.
  • the BM in the fixed beamformer consists of a number of filter-and-sum beamformers that each operate on one particular subset of microphone signals. In this way, a number of noise reference signals is created, in which the power of the desired signal components is maximally reduced relative to the power of these components in the microphone signals.
  • N-1 noise references are created by designing N-1 different filter-and-sum beamformers.
  • it might be preferable to create fewer than N-1 noise references which then leads to a reduction of the number of input signals x n (t) for the adaptive beamformer.
  • BM design In the context of the BM design, we consider the back sound source (if any) to be an undesired signal (which should be cancelled by the adaptive beamformer); hence the BM design reduces to a front-cancelling beamformer (FCB) design.
  • FCB front-cancelling beamformer
  • one of several different fixed beamformer design methods can be employed.
  • FCB design we should specify a zero response in the front direction and a non-zero response in any other direction.
  • the latter direction should preferably be the back direction, to avoid a design that would actually correspond to a front-back-cancelling beamformer design.
  • M the number of equations in the linear system of equations above
  • the back response is indeed close to a unity response for most microphone configurations and filter length values.
  • the front source response varies heavily according to the microphone configuration and filter length used.
  • At least one microphone pair in an endfire configuration should preferably be included in the array to obtain a satisfactory power reduction of the front sound source component.
  • Concerning the choice of the BM filter length experiments show that there is no clear threshold effect - that is, the response in the front direction decreases with a nearly constant slope (provided an endfire microphone pair is included).
  • the BM filter length should preferably be chosen according to the desired front sound source power reduction.
  • the adaptive beamformer in the block scheme may be implemented using a generalized sidelobe canceller (GSC) algorithm; a multi-channel Wiener filtering (MWF) algorithm; or any other adaptive algorithm.
  • GSC generalized sidelobe canceller
  • MWF multi-channel Wiener filtering
  • SDW-MWF speech-distortion-weighted multi-channel Wiener filtering
  • the parameter ⁇ can be tuned to trade off energy reduction of the undesired components versus distortion of the desired component.
  • GSVD generalized singular value decomposition
  • QR decomposition
  • time-domain stochastic gradient method
  • frequency-domain stochastic gradient method
  • A common feature of these implementations is that the correlation matrices $\Phi_x^{(F)}(\omega)$ and $\Phi_x^{(B,I,N)}(\omega)$ are explicitly estimated before the SDW-MWF filter estimate is computed.
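  • For illustration, a textbook SDW-MWF filter computed per frequency bin from such estimated correlation matrices could look as follows (a sketch with hypothetical names; it is not claimed to reproduce the patent's exact implementation):

      import numpy as np

      def sdw_mwf_filter(phi_desired, phi_noise, mu=1.0, ref_channel=0):
          """Speech-distortion-weighted multichannel Wiener filter for one
          frequency bin (textbook form, given as an assumption).

          phi_desired : (N, N) correlation matrix of the desired component
          phi_noise   : (N, N) correlation matrix of the undesired components
          mu          : trade-off between energy reduction of the undesired
                        components (large mu) and distortion of the desired
                        component (small mu)
          """
          selector = np.zeros(phi_desired.shape[0])
          selector[ref_channel] = 1.0
          # w = (Phi_desired + mu * Phi_noise)^(-1) Phi_desired e_ref
          return np.linalg.solve(phi_desired + mu * phi_noise,
                                 phi_desired @ selector)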
  • the mean SNR at the microphones is equal to 10 dB.
  • the adaptation of the SDW-MWF algorithm is based on a stochastic gradient frequency-domain implementation, and is controlled by a perfect (manual) voice activity detection (VAD). Two features of the SDW-MWF have been evaluated, namely:
  • the algorithm without a feedforward filter corresponds to the GSC algorithm, while the algorithm with a feedforward filter is not relevant due to an intolerable speech distortion.
  • the adaptive spectral attenuation block is included in the structure with the aim of reducing the diffuse noise energy in the primary adaptive beamformer output signal.
  • are estimated by means of a Discrete Fourier transform (DFT), with k and l denoting the DFT frequency bin and time frame indices.
  • DFT Discrete Fourier transform
  • is subsequently transformed back to the time domain by applying an inverse DFT (IDFT), and by using the phase spectrum of the primary adaptive beamformer output signal Y 0 ( ⁇ k , l ).
  • IDFT inverse DFT
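  • A minimal single-frame sketch of this kind of processing, assuming the primary adaptive beamformer output and a diffuse noise reference are available and using a simple spectral-subtraction gain (the actual gain rule used is an assumption):

      import numpy as np

      def spectral_attenuation_frame(primary, noise_ref, floor=0.1):
          """One frame of adaptive spectral attenuation (illustrative sketch).

          The magnitude spectrum of the primary signal is attenuated per DFT
          bin using a noise estimate taken from the diffuse noise reference,
          and the result is transformed back to the time domain using the
          phase spectrum of the primary signal.
          """
          spec_primary = np.fft.rfft(primary)
          spec_noise = np.fft.rfft(noise_ref)
          mag = np.abs(spec_primary)
          noise_mag = np.abs(spec_noise)
          # spectral-subtraction style gain, limited by a spectral floor
          gain = np.maximum(1.0 - noise_mag / np.maximum(mag, 1e-12), floor)
          enhanced = gain * mag * np.exp(1j * np.angle(spec_primary))
          return np.fft.irfft(enhanced, n=len(primary))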
  • a first possibility is to regard the back sound source as an undesired sound source, in which case its level should remain constant. However, since the back sound source is typically very close to the camera, its level should often be reduced to obtain an acceptable balance between the back sound source and the other sound sources.
  • a second possibility is to have the back sound source gain factor follow the inverse trajectory of the front sound source gain factor, possibly combined with a fixed back sound source level reduction. While such an inverse level trajectory would obviously make sense from a physical point of view, it may be perceived as somewhat too artificial, since the front sound source level change is then supported by visual cues, while the back sound source level change is not.
  • the front sound source is a male speech signal corresponding to a camera recording that consists of a far shot phase (5 s), a zoom-in phase (10 s), and a close-up phase (11 s).
  • $\theta_I = 90$ degrees.
  • a 3-microphone array was used, employing microphones 1,3, and 4 as indicated in Fig. 1 .
  • the fixed beamformer consists of a superdirective FSB and a single-noise-reference front-cancelling BM, both with a filter length of 64.
  • the adaptive beamformer is calculated using a GSC algorithm and has a filter length of 128.
  • the desired AZ effect consists in keeping the level of the undesired sound sources (including the back sound source in the second simulation) unaltered, while increasing the level of the front sound source during the zoom-in phase, according to the perceptually optimal trajectory defined above.
  • the values of the re-weighting factors were determined empirically in advance, rather than at run-time (as described previously above).
  • the performance of the method depends in part upon the accuracy to which the reweighting factors can be estimated. The greater the accuracy, the better the performance of the manipulation will be.
  • Fig. 4 is a flowchart summarising a method according to an embodiment.
  • audio signals 6 are received from the microphones 4.
  • the desired gain factors 8 are input.
  • the auxiliary signal generator generates the auxiliary signals.
  • the scaling coefficient calculator 30 calculates the scaling coefficients, a(t).
  • the audio synthesis unit 20 applies the scaling coefficients to the generated auxiliary signals and reference audio signals, to synthesise output audio signals 40.
  • auxiliary signal calculation should be such that it exploits the diversity of the individual sound sources in the sound scene.
  • multiple microphones are used, then exploiting spatial diversity is often the most straightforward option - and this is exploited by the beamformers in the embodiments described above.
  • auxiliary signal generator will vary according to the application and the characteristics of the audio environment.
  • a computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Circuit For Audible Band Transducer (AREA)

Description

  • This invention relates to manipulation of a sound scene comprising multiple sound sources. It is particularly relevant in the case of simultaneous recording of audio by multiple microphones.
  • Most existing sound scene manipulation methods operate in a two-stage fashion: in a first stage, the individual sound sources are extracted from one or more microphone recordings; and in a second stage, the separated sound sources are recombined according to the desired sound scene manipulation. When the manipulation consists of a change in the desired level of the individual sound sources (which is commonly the case), the second stage is trivial, once the first stage has been executed. Indeed, the recombination in the second stage then reduces to a simple linear combination of the separated sound sources obtained from the first stage. Unfortunately, the extraction of the individual sound sources from the recorded microphone signal(s) is a difficult problem, on which a lot of research effort has been spent. Broadly speaking, the state of the art in sound source extraction can be classified into three approaches:
    1. Blind source separation (BSS): this approach allows for estimating a number of individual sound source components from a number of observed mixtures, by exploiting the statistical independence of the individual sources. Traditional BSS methods rely on the assumption that the number of sources is less than or equal to the number of observed mixtures, which implies that a potentially large number of microphones is required. Underdetermined BSS methods are capable of bypassing this condition, but they rely on a significant amount of prior knowledge regarding the individual sound sources. Since BSS methods tend to be computationally intensive, they are often not suited for real-time applications.
      A BSS approach is shown in the paper "An Object-based 3D Audio Broadcasting System for Interactive Services", AES Convention 118, May 2005, Barcelona.
    2. Computational auditory scene analysis (CASA): the aim of CASA is to analyze a sound scene in a way that mimics the human auditory system, by identifying and grouping perceptual attributes from the observed mixtures. Since CASA operates on two (binaural) microphone recordings, it is essentially an underdetermined BSS method as soon as the sound scene comprises more than two sources. While CASA has attracted the interest of many researchers, it is still considered not sufficiently mature to be used in real-life applications. Moreover, its computational requirements are typically very high.
    3. Beamforming: this approach relies on the application of spatially selective filtering operations to two or more observed mixtures. There is no hard constraint on the number of observations required to separate a given number of sound sources, and moreover most beamforming implementations are computationally less demanding than the BSS or CASA approaches. However, beamforming either relies on prior knowledge about the sound source positions (in which case fixed beamformers can be applied) or requires a significant amount of additional processing for "supervision" (in the case of adaptive beamformers).
  • According to an aspect of the present invention there is provided an audio-processing device according to claim 1.
  • A device according to an embodiment of the invention addresses the problem of sound scene manipulation from a fundamentally different perspective, in that it allows any specified level change to be performed for each of the individual sound source components in the observed mixture(s), without relying on an explicit sound source separation. The disadvantages overcome by the device as compared to the state of the art can be explained by considering each of the three approaches highlighted already above:
    1. Advantages with respect to the BSS approach: similarly to the traditional BSS approach, the processing method implemented by the device requires as many different mixtures as the number of sound source levels that it is desired to alter independently. However, these mixtures can be generated from a smaller number of microphone recordings. For example, auxiliary mixtures can be generated by combining one microphone recording with one or more other microphone recordings. As a consequence, the method can also be used in scenarios with fewer microphones than sound sources, without a significant increase in computational burden. The proposed method has moderate computational complexity, which increases only linearly with the number of observed microphone signal samples. It is thus particularly suited for real-time applications. Finally, the method does not rely on any prior knowledge about the statistics of the individual sound sources.
    2. Advantages with respect to the CASA approach: whereas the CASA approach operates on a collection of auditory features of the sound sources, the present processing method operates directly on the observed microphone signals and on a number of auxiliary signals derived from these microphone signals. Consequently, the present method does not require the estimation and detection of auditory features, which is advantageous both in terms of robustness and in terms of computational complexity.
    3. Advantages with respect to the beamforming approach: whereas the beamforming approach operates only on the observed microphone signals, the present method operates on a number of auxiliary signals in addition to the microphone signals. These auxiliary signals may be generated by combining the observed microphone signals. However, there is no restriction on the mapping from the observed microphone signals to the auxiliary signals, and hence the proposed method is much more flexible than the beamforming approach. As indicated below, one embodiment of the invention may include fixed as well as adaptive beamformers for generating the auxiliary signals from the microphone signals.
  • One application of a method or device according to an embodiment is the enhancement of acoustic signals like speech or music. In this case, the sound scene consists of desired as well as undesired sound sources, and the aim of the sound scene manipulation comprises reducing the level of the undesired sound sources relative to the level of the desired sound sources.
  • According to another aspect of the invention, there is provided a handheld personal electronic device comprising a plurality of microphones; and the audio processing device referred to above.
  • The invention is particularly suited to mobile, handheld applications, since it has relatively light computational demands. It may therefore be usable with a mobile device having limited processing resources or may enable power consumption to be reduced.
  • The mobile or handheld device preferably incorporates a video recording apparatus with a visual zoom capability, and the audio processing device is preferably adapted to modify the desired gain factors in accordance with a configuration of the visual zoom. This enables the device to implement an acoustic zoom function.
  • The microphones are preferably omni-directional microphones.
  • The present device may be particularly beneficial in these circumstances, because the source separation problem is inherently more difficult when using omni-directional microphones. If the microphones are unidirectional, there will often be significant selectivity (in terms of signal power) between the sources among the diverse audio signals. This can make the manipulation task easier. The present device is able to work also with omnidirectional microphones, where there will be less selectivity in the raw audio signals. The present device is therefore more flexible. For example, it can exploit spatial selectivity by means of beamforming techniques, but it is not limited to spatial selectivity through the use of unidirectional microphones.
  • According to a further aspect of the invention, there is provided a method of processing audio signals according to claim 4.
  • The parameters of the different mixture may be reweighting factors, which relate the levels of the components in the at least one auxiliary signal to their respective levels in the reference audio signal.
  • The method is particularly relevant to configurations with more than one microphone. Sound from all of the sound sources is detected at each microphone. Therefore, each sound source gives rise to a corresponding component in each audio signal. The number of sources may be less than, equal to, or greater than the number of audio signals (which is equal to the number of microphones). The sum of the number of audio signals and the number of auxiliary signals should be at least equal to the number of sources which it is desired to independently control.
  • Each auxiliary signal contains a different mixture of the components. That is, the components occur with different amplitude in each of the auxiliary signals (according to the reweighting factors). In other words, the auxiliary signals and the audio signals should be linearly independent; and the sets of reweighting factors which relate the signal components to each auxiliary signal should also be linearly independent of one another.
  • Explicit source separation is not necessary. Preferably the levels of the source signal components in the auxiliary signals are varied by a power ratio in the range -40dB to +6dB, more preferably -30dB to 0dB, still more preferably - 25dB to 0dB, compared to their levels in the reference audio signal(s).
  • In the step of synthesizing the output signal, a scaling coefficient is preferably applied to the reference original audio signal and the result is combined with the scaled auxiliary signals.
  • The scaled auxiliary signals and/or scaled audio signals may be combined by summing them.
  • In general, in practice, the scaling coefficients and the desired gain factors will have different values (and may be different in number). They would only be identical if the auxiliary signals were to achieve perfect separation of the sources, which is usually impossible in practice. Each desired gain factor corresponds to the desired volume (amplitude) of a respective one of the sound-sources. On the other hand, the scaling coefficients correspond to the auxiliary signals and/or input audio signals. The number of reweighting factors is equal to the product of the number of signal components and the number of auxiliary signals, since in general each auxiliary signal will comprise a mixture of all of the signal components.
  • Preferably, the desired gain factors; the reweighting factors and the scaling coefficients are related by a linear system of equations; and the step of calculating the set of scaling coefficients comprises solving the system of equations.
  • For example, the step of calculating the set of scaling coefficients may comprise: calculating the inverse of a matrix of the reweighting factors; and multiplying the desired gain factors by the result of this inversion calculation.
  • The reweighting factors may be formed into a matrix and the inverse of this matrix may be calculated explicitly. Alternatively, the inverse may be calculated implicitly by equivalent linear algebraic calculations. The result of the inversion may be expressed as a matrix, though this is not essential.
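  • For illustration, this calculation can be sketched as a small linear solve (the function and variable names below are hypothetical, and the numeric values are arbitrary examples, not values from the patent):

      import numpy as np

      def scaling_coefficients(reweighting, desired_gains):
          """Solve the linear system relating desired gains, reweighting
          factors and scaling coefficients (a = Gamma^-1 g).

          reweighting   : (M+1, M+1) matrix; column 0 (all ones) corresponds
                          to the reference audio signal, columns 1..M hold the
                          reweighting factors of the M auxiliary signals for
                          each source component.
          desired_gains : length-(M+1) vector of desired gain factors, one per
                          sound source component (including diffuse noise).
          """
          gamma = np.asarray(reweighting, dtype=float)
          g = np.asarray(desired_gains, dtype=float)
          # solving the system directly avoids forming the explicit inverse
          return np.linalg.solve(gamma, g)

      # Example: one reference signal, three auxiliary signals, four components.
      gamma = np.array([[1.0, 0.9, 0.5, 0.6],
                        [1.0, 0.3, 0.4, 0.5],
                        [1.0, 0.2, 0.8, 0.4],
                        [1.0, 0.1, 0.3, 0.9]])
      g = np.array([2.0, 1.0, 1.0, 0.5])   # boost one component, attenuate another
      a = scaling_coefficients(gamma, g)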
  • The at least one auxiliary signal may be a linear combination of any of: one or more of the audio signals; one or more shifted versions of the audio signals; and one or more filtered versions of the audio signals.
  • The at least one auxiliary signal may be generated by at least one of: fixed beamforming; adaptive beamforming; and adaptive spectral modification.
  • Here, fixed beamforming means a spatially selective signal processing operation with a time-invariant spatial response. Adaptive beamforming means a spatially selective signal processing operation with a time-varying spatial response. Adaptive spectral modification means a frequency-selective signal processing operation with a time-varying frequency response, such as the class of methods known in the art as adaptive spectral attenuation or adaptive spectral subtraction. An adaptive spectral modification process typically does not exploit spatial diversity, but only frequency diversity among the signal components.
  • These are advantageous examples of ways to create the auxiliary signals. Fixed beamforming may be beneficial when there is some prior expectation that one or more of the sound sources is localised and located in a predetermined direction relative to a set of microphones. The fixed beamformer will then modify the power of the corresponding signal component, relative to other components.
  • Adaptive beamforming may be beneficial when a localised sound source is expected, but its orientation relative to the microphone(s) is unknown.
  • Adaptive spectral modification (for example, by attenuation) may be useful when sound sources can be discriminated to some extent by their spectral characteristics. This may be the case for a diffuse noise source, for example.
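  • As a simple illustration of the first of these operations, the sketch below implements a basic delay-and-sum fixed beamformer. This is only an assumed example of the class of time-invariant spatially selective operations described here, not the specific superdirective fixed beamformer design used later in the embodiments:

      import numpy as np

      def delay_and_sum(mic_signals, mic_positions, direction, fs, c=343.0):
          """Minimal fixed (time-invariant) beamformer: delay-and-sum steering
          towards a given unit direction vector.

          mic_signals   : (N, T) array of microphone samples
          mic_positions : (N, 3) array of microphone coordinates in metres
          direction     : length-3 unit vector pointing towards the source
          fs            : sampling rate in Hz, c: speed of sound in m/s
          """
          n_mics, n_samples = mic_signals.shape
          delays = mic_positions @ np.asarray(direction) / c      # seconds
          delays -= delays.min()                                  # keep causal
          freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
          out = np.zeros(n_samples)
          for n in range(n_mics):
              spectrum = np.fft.rfft(mic_signals[n])
              # delay each channel so a wavefront arriving from `direction`
              # is time-aligned across the channels before summation
              spectrum *= np.exp(-2j * np.pi * freqs * delays[n])
              out += np.fft.irfft(spectrum, n=n_samples)
          return out / n_mics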
  • The methods of generating the auxiliary signal or signals are preferably chosen according to the expected sound environment in a given application. For example, if several sources in known directions are expected, it may be appropriate to use multiple fixed beamformers. If multiple moving sources are expected, multiple adaptive beamformers may be beneficial. In this way - as will be apparent to those skilled in the art - one or more instances of different means of generating the auxiliary signal may be combined, in embodiments.
  • Optionally, a first auxiliary signal is generated by a first method; a second auxiliary signal is generated by a second, different method; and the second auxiliary signal is generated based on an output of the first method.
  • For example, the fixed beamforming may be adapted to emphasize sounds originating directly in front of the microphone or microphone array. For example, this may be useful when the microphone is used in conjunction with a camera, because the camera (and therefore the microphone) is likely to be aimed at a subject who is one of the sound sources.
  • An output of the fixed beamformer may be input to the adaptive beamformer. This may be a noise reference output of the fixed beamformer, wherein the power ratio of a component originating from the fixed direction is reduced relative to other components. It is advantageous to use this signal in the adaptive beamformer, in order to find a (remaining) localised source in an unknown direction, because the burden on the adaptive beamformer to suppress the fixed signals may be reduced.
  • An output of the adaptive beamformer may be input to the adaptive spectral modification.
  • Typically, neither of the beamformers nor an adaptive spectral attenuator will be sufficiently selective to separate individual sources from the mixture. In this context, the method of the invention may be seen as a flexible framework for combining weak separators, to allow an arbitrary desired weighting on sound sources. The individual operations of beamforming or spectral modification preferably cause a change in the signal power of individual sound source components in the range -25dB to 0dB. This refers to the input/output power ratio of each operation, ignoring cascade effects due to the output of one unit being connected to the input of another.
  • The method may optionally comprise: synthesizing a first output audio signal by applying scaling coefficients to a first reference audio signal and at least one first auxiliary signal and combining the results; and synthesizing a second output audio signal by applying scaling coefficients to a second, different reference audio signal and at least one second auxiliary signal and combining the results.
  • This may be particularly useful for generating binaural (for example, stereo) outputs. The at least one first auxiliary signal and at least one second auxiliary signal may be the same or different signals. The two different reference audio signals should be selected from appropriately arranged microphones, for a desired stereo effect.
  • In a similar way, the method can be extended to synthesize an arbitrarily greater number of outputs, as desired for any particular application.
  • The sound sources may comprise one or more localised sound sources and a diffuse noise field.
  • The desired gain factors may be time-varying.
  • The method is particularly well suited to real-time implementation, which means that the desired gain can be adjusted dynamically. This may be useful for example for dynamically balancing changing sound sources, or for acoustic zooming.
  • In a sound scene consisting of multiple desired sound sources, one often encounters the problem that the levels of the different sources are not sufficiently balanced in the microphone recordings - for example, if one of the sources is positioned closer to the microphone array than the others. In a static scenario, the sound scene can be balanced using time-invariant gain factors, while in a dynamic scenario (that is, with moving or temporally modulated sound sources) the use of time-varying gain factors is more relevant.
  • The desired gain factors can be chosen in dependence upon the state of a visual zoom function.
  • In applications where joint audio and video recordings are made (for example, camcorder or video-phone applications), it may be beneficial to match the auditory and visual cues in the recordings to obtain an easier and/or faster multisensory integration. A key example is the process of manipulating the sound scene such that it properly matches the video zooming operations. For example, when zooming in on a particular subject, the sound level of this subject should increase accordingly while keeping the level of the other sound sources constant. In this case, the desired gain factor corresponding to the sound source in front of the camera will be increased over time, while the other gain factors are time-invariant.
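  • A minimal sketch of such a zoom-driven gain trajectory follows (the mapping from zoom factor to gain is a hypothetical choice, not one specified by the patent):

      def desired_gains_for_zoom(zoom_factor, exponent=0.5):
          """Map a visual zoom factor to desired gain factors (hypothetical
          mapping). Only the front-source gain grows with the zoom factor;
          the other gains stay constant, mirroring the acoustic-zoom
          behaviour described above."""
          return {
              "front": zoom_factor ** exponent,  # e.g. 2x zoom -> roughly +3 dB
              "back": 1.0,
              "interferer": 1.0,
              "diffuse": 1.0,
          }

      # During a zoom-in from 1x to 4x the front gain rises smoothly to 2.0.
      gains_over_time = [desired_gains_for_zoom(z) for z in (1.0, 2.0, 3.0, 4.0)]

    Any monotonic mapping could be substituted; the essential point is that only the gain of the sound source in front of the camera varies with the zoom state while the other gain factors remain time-invariant.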
  • Also provided is a computer program comprising computer program code means adapted to perform all the steps of a method as described above, when said program is run on a computer; and such a computer program embodied on a computer readable medium.
  • The invention will now be described by way of example with reference to the accompanying drawings, in which:
    • Fig. 1 shows a block diagram of an audio processing device according to an embodiment;
    • Fig. 2 shows in greater detail an auxiliary signal generator and audio synthesis unit suitable for a monaural implementation of the embodiment of Fig. 1;
    • Fig. 3 shows in greater detail an auxiliary signal generator and audio synthesis unit suitable for a binaural (stereo) implementation of the embodiment of Fig. 1; and
    • Fig. 4 is a flowchart of a method according to an embodiment.
  • In the following, a theoretical explanation of a method according to an embodiment will first be given, along with an indication of the conditions under which this theory can be used for sound scene manipulation.
  • Consider a sound scene consisting of M localized sound sources $s_m(t)$, $m = 1,\dots,M$ positioned in different directions in three-dimensional space (as characterized by the azimuth-elevation angle pairs $(\theta_m, \phi_m)$, $m = 1,\dots,M$), in addition to a diffuse sound field that cannot be attributed to a single sound source or direction. Further to this, consider a microphone array consisting of N microphones (N≥2) and having an arbitrary three-dimensional geometry. Each of the microphones may have a different frequency- and angle-dependent response, as defined by $$A_n(\omega,\theta,\phi) = a_n(\omega,\theta,\phi)\,e^{-j\psi_n(\omega,\theta,\phi)}, \quad n = 0,\dots,N-1. \tag{1}$$
  • The acoustic response (including the effect of the direct path time delay as well as reverberation) of a sound source at angle (θ,φ) to each of the microphones is given by $$F_n(\omega,\theta,\phi) = f_n(\omega,\theta,\phi)\,e^{-j\xi_n(\omega,\theta,\phi)}, \quad n = 0,\dots,N-1. \tag{2}$$
  • For ease of notation, we introduce the joint acoustic and microphone response, defined as $$G_n(\omega,\theta,\phi) = A_n(\omega,\theta,\phi)\,F_n(\omega,\theta,\phi), \quad n = 0,\dots,N-1. \tag{3}$$
  • Using the above definitions, we can express each of the N audio signals $U_n(\omega)$ detected at the microphones as a function of the localized sound sources and the diffuse sound field in the frequency domain as follows: $$U_n(\omega) = U_n^{(0)}(\omega) + \sum_{m=1}^{M} \underbrace{G_n(\omega,\theta_m,\phi_m)\,S_m(\omega)}_{U_n^{(m)}(\omega)}, \quad n = 0,\dots,N-1 \tag{4}$$ where $U_n^{(0)}(\omega)$ denotes the diffuse noise component. The above relation can equivalently be written in the time domain as follows, $$u_n(t) = u_n^{(0)}(t) + \sum_{m=1}^{M} u_n^{(m)}(t). \tag{5}$$
  • The aim of the envisaged sound scene manipulation is to produce N manipulated signals, or audio output signals, $\zeta_n(t)$, in which each of the levels of the individual sound source components is changed in a user-specified way as compared to the respective levels in the nth microphone signal. Mathematically, the aim is to produce the signals $$\zeta_n(t) = g_n^{(0)}(t)\,u_n^{(0)}(t) + \sum_{m=1}^{M} g_n^{(m)}(t)\,u_n^{(m)}(t), \quad n = 0,\dots,N-1 \tag{6}$$ where $g_n^{(m)}(t)$, $m = 0,\dots,M$ denote the user-specified time-varying gain factors for the different sound source components. Hereinafter, these will be referred to as the "desired gain factors".
  • Suppose that one could generate M auxiliary signals $x_n^{(p)}(t)$, $p = 1,\dots,M$, in which the different sound source components have been arbitrarily reweighted with respect to the corresponding components in the microphone signal $u_n(t)$, that is, $$x_n^{(p)}(t) = \sum_{m=0}^{M} \gamma_n^{(p,m)}\,u_n^{(m)}(t). \tag{7}$$
  • Here, each of the reweighting factors is by definition equal to the square root of the power ratio of the corresponding sound source components, that is, $$\gamma_n^{(p,m)} = \frac{\sigma_{x_n^{(p,m)}}}{\sigma_{u_n^{(m)}}} = \sqrt{\frac{E\left\{\left(x_n^{(p,m)}(t)\right)^2\right\}}{E\left\{\left(u_n^{(m)}(t)\right)^2\right\}}} \tag{8}$$ where $x_n^{(p,m)}(t) = \gamma_n^{(p,m)}\,u_n^{(m)}(t)$ denotes the mth sound source component in the pth auxiliary signal.
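  • As a sketch, and only for illustration (in a real system the individual source components are not separately observable, which is why the embodiments below determine these factors from known beamformer responses and heuristics rather than from the components themselves), equation (8) corresponds to:

      import numpy as np

      def reweighting_factor(component_in_aux, component_in_mic):
          """Estimate gamma_n^(p,m) as the square root of the power ratio of
          the mth source component in the pth auxiliary signal and in the
          microphone signal; the component signals are hypothetical inputs
          available here only for illustration."""
          p_aux = np.mean(np.square(component_in_aux))
          p_mic = np.mean(np.square(component_in_mic))
          return np.sqrt(p_aux / p_mic)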
  • The nth manipulated signal (output audio signal) can now be calculated as a weighted sum of the nth microphone signal and the auxiliary signals $x_n^{(p)}(t)$, $p = 1,\dots,M$ defined above, that is, $$\zeta_n(t) = a_n^{(0)}(t)\,u_n(t) + \sum_{p=1}^{M} a_n^{(p)}(t)\,x_n^{(p)}(t). \tag{9}$$
  • By using the relations in equations (5) and (7), the expression for the calculated nth manipulated signal in equation (9) can be shown to be equivalent to the expression for the desired nth manipulated signal in equation (6) if the weights $a_n^{(p)}(t)$, $p = 0,\dots,M$ satisfy the following relationship, $$\begin{bmatrix} a_n^{(0)}(t) \\ a_n^{(1)}(t) \\ \vdots \\ a_n^{(M)}(t) \end{bmatrix} = \underbrace{\begin{bmatrix} 1 & \gamma_n^{(1,0)} & \cdots & \gamma_n^{(M,0)} \\ 1 & \gamma_n^{(1,1)} & \cdots & \gamma_n^{(M,1)} \\ \vdots & \vdots & & \vdots \\ 1 & \gamma_n^{(1,M)} & \cdots & \gamma_n^{(M,M)} \end{bmatrix}^{-1}}_{\Gamma^{-1}} \begin{bmatrix} g_n^{(0)}(t) \\ g_n^{(1)}(t) \\ \vdots \\ g_n^{(M)}(t) \end{bmatrix} \tag{10}$$
  • This implies that a unique set of weight trajectories $a_n^{(p)}(t)$, $p = 0,\dots,M$, for all t, can be calculated that exactly produces the desired sound scene manipulation. Herein, the weight trajectories $a_n^{(p)}(t)$, $p = 0,\dots,M$ are also referred to as "scaling coefficients".
  • There are two conditions for exact reproduction of the effect of an arbitrary set of desired gain factors $g_n^{(m)}(t)$, according to equation (10):
    1. The reweighting matrix $\Gamma$ should be of full rank; and
    2. The reweighting factors $\gamma_n^{(p,m)}$ should be known.
  • Loosely speaking, the first condition requires that the microphone signal $u_n(t)$ and the auxiliary signals $x_n^{(p)}(t)$, $p = 1,\dots,M$ should be linearly independent (which leads to linearly independent columns in $\Gamma$) and requires that the reweighting of the different sound source components in each of the auxiliary signals $x_n^{(p)}(t)$, $p = 1,\dots,M$ should be linearly independent (which leads to linearly independent rows in $\Gamma$). The reweighting factors can be calculated or estimated depending on the embodiment of the invention, as described in greater detail below.
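  • A small sketch of how the first condition might be checked numerically before solving for the scaling coefficients (the conditioning threshold is an arbitrary illustrative choice, not a value prescribed by the patent):

      import numpy as np

      def check_reweighting_matrix(gamma, max_condition=1e6):
          """Verify that the reweighting matrix is of full rank (and
          preferably well conditioned), so the scaling coefficients in
          equation (10) can be computed reliably."""
          gamma = np.asarray(gamma, dtype=float)
          rank = np.linalg.matrix_rank(gamma)
          cond = np.linalg.cond(gamma)
          return rank == gamma.shape[0] and cond < max_condition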
  • Note that equation (7) above is a model for the auxiliary signals which will usually be satisfied only approximately, in practice. In the embodiments described below, the auxiliary signals will be derived from the various microphone signals. Therefore, they will be composed of filtered versions of the sound source components, instead of the unfiltered ("dry") sound source components themselves suggested by equation (7).
  • If the model of equation (7) could be satisfied precisely, exact recovery of a single sound source component would be possible (by choosing the desired gain factors appropriately). In the embodiment to be described below, this would demand the design of ideal beamformers that have a flat frequency response within the bandwidth of the source component of interest, and demand that the diffuse noise has no spectral overlap with the source component of interest. In practice, these restrictions are usually not met, and as a consequence the auxiliary signals will be linear combinations of filtered versions of the original sound source components (with non-uniform frequency response), rather than linear combinations of the original sound source components. This makes the exact recovery of a single sound source component impossible; however, this is a shortcoming of the practical embodiment rather than the theoretical method.
  • In the following, without loss of generality, an exemplary scenario will be considered in which the sound field in the acoustic environment is assumed to consist of four contributions coming from different azimuthal directions:
    1) a front sound source sF(t), which is considered to be the desired sound source and is located in front of the camera at an angle θF = 0 (by definition);
    2) a back sound source sB(t), which may or may not be a desired sound source, corresponding to the sound produced by the camera operator (if any) at an angle θB = 180 degrees;
    3) a number of localized interfering sound sources $s_I^{(i)}(t)$, $i = 1,\dots,N_I$, which are considered to be undesired and originate from (unknown) directions $\theta_I^{(i)}$ different from the front and back directions; and
    4) a diffuse noise field, which cannot be attributed to a single sound source or direction, and which is also considered to be undesired.
  • The number of localized interfering sound sources is taken to be one, for the purposes of this explanation. Furthermore, in this example, it is assumed that the capture device is equipped with two or more microphones. Those skilled in the art will appreciate that none of these assumptions should be taken to limit the scope of the invention.
  • If the nth microphone signal un(t) is decomposed in the time domain as:
    $$u_n(t) = u_n^F(t) + u_n^B(t) + u_n^I(t) + u_n^N(t)$$
    then the corresponding desired output of the algorithm can be written as follows:
    $$\zeta_n(t) = g_F(t)\,u_n^F(t) + g_B(t)\,u_n^B(t) + g_I(t)\,u_n^I(t) + g_N(t)\,u_n^N(t)$$
    where gF(t), gB(t), gI(t), and gN(t) denote the desired gain factors for the different sound source components. Note that one is not necessarily interested in calculating N output signals of the algorithm. Typically, the focus is on obtaining a mono or stereo output, which implies that the relation above only needs to be considered for one or two particular values of n, say n1 (and n2).
  • Nevertheless, all N microphone signals will typically be used to obtain an estimate of the two output signals, ζn1(t) and ζn2(t). Also note that we have not included the output signal index n in the notation of the gain factors in the equation above, since typically the same gain factors will be used for the different output signals of the algorithm. (Of course, this is not essential).
  • Conventionally, it would be expected that the algorithm needs to perform some kind of source separation to isolate the different sound source components. However, since we are not interested in the separated sound source components, but rather in a mixture in which the levels of these components have been adjusted as compared to the microphone signals, an explicit source separation is not required. Let us denote three auxiliary signals as xn(t), yn(t), and zn(t), in which the different sound source components have been arbitrarily reweighted (by reweighting factors γ) with respect to the corresponding components in the microphone signal un(t), that is:
    $$x_n(t) = \gamma_{x_n,u_n}^F\,u_n^F(t) + \gamma_{x_n,u_n}^B\,u_n^B(t) + \gamma_{x_n,u_n}^I\,u_n^I(t) + \gamma_{x_n,u_n}^N\,u_n^N(t)$$
    $$y_n(t) = \gamma_{y_n,u_n}^F\,u_n^F(t) + \gamma_{y_n,u_n}^B\,u_n^B(t) + \gamma_{y_n,u_n}^I\,u_n^I(t) + \gamma_{y_n,u_n}^N\,u_n^N(t)$$
    $$z_n(t) = \gamma_{z_n,u_n}^F\,u_n^F(t) + \gamma_{z_n,u_n}^B\,u_n^B(t) + \gamma_{z_n,u_n}^I\,u_n^I(t) + \gamma_{z_n,u_n}^N\,u_n^N(t)$$
  • The output signal of the algorithm can now be calculated as a linear combination of the nth microphone signal and the auxiliary signals xn(t), yn(t), and zn(t) defined above, that is:
    $$\zeta_n(t) = a_n^{(0)}(t)\,u_n(t) + a_n^{(1)}(t)\,x_n(t) + a_n^{(2)}(t)\,y_n(t) + a_n^{(3)}(t)\,z_n(t)$$
  • This corresponds to equation (9) above. The corresponding form of equation (10) is:
    $$\begin{bmatrix} a_n^{(0)}(t) \\ a_n^{(1)}(t) \\ a_n^{(2)}(t) \\ a_n^{(3)}(t) \end{bmatrix} = \underbrace{\begin{bmatrix} 1 & \gamma_{x_n,u_n}^F & \gamma_{y_n,u_n}^F & \gamma_{z_n,u_n}^F \\ 1 & \gamma_{x_n,u_n}^B & \gamma_{y_n,u_n}^B & \gamma_{z_n,u_n}^B \\ 1 & \gamma_{x_n,u_n}^I & \gamma_{y_n,u_n}^I & \gamma_{z_n,u_n}^I \\ 1 & \gamma_{x_n,u_n}^N & \gamma_{y_n,u_n}^N & \gamma_{z_n,u_n}^N \end{bmatrix}^{-1}}_{\Gamma^{-1}} \begin{bmatrix} g_F(t) \\ g_B(t) \\ g_I(t) \\ g_N(t) \end{bmatrix}$$
  • This enables the scaling factors, a, to be calculated, provided the re-weighting factors are known. The estimation of the re-weighting factors will be described in greater detail below. Before that, two embodiments of the invention will be described.
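  • As a minimal numerical sketch of the above relation (the reweighting matrix Γ and the requested 6 dB front-source boost below are illustrative, hypothetical values, not taken from the description), the scaling coefficients for one output channel follow from the desired gains by solving the 4x4 linear system:

```python
import numpy as np

# Hypothetical reweighting factors for one output channel: rows correspond to
# the components (F, B, I, N), columns to the signals (u_n, x_n, y_n, z_n).
# The first column is all ones, because each component appears in the
# microphone signal u_n(t) at its original level.
Gamma = np.array([
    [1.0, 2.5, 3.0, 3.0],    # front source F
    [1.0, 0.2, 0.1, 0.1],    # back source B
    [1.0, 0.3, 0.05, 0.05],  # localized interferer I
    [1.0, 1.0, 0.8, 0.1],    # diffuse noise N
])

# Desired gain factors g_F(t), g_B(t), g_I(t), g_N(t) at one time instant:
# amplify the front source by 6 dB, leave the other components unchanged.
g = np.array([10 ** (6 / 20), 1.0, 1.0, 1.0])

# Scaling coefficients a_n^(0)..a_n^(3): solve Gamma a = g.
a = np.linalg.solve(Gamma, g)

# Each component c then appears in the output with gain sum_p a_p * Gamma[c, p].
assert np.allclose(Gamma @ a, g)
print("scaling coefficients:", a)
```

  • Any full-rank Γ works in this sketch; if Γ is (close to) singular, the first condition given earlier is violated and the requested gains cannot all be realized exactly.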
  • Both embodiments have the general structure shown in the block diagram of Fig. 1. An array of microphones 4 produces a corresponding plurality of audio signals 6. These are fed as input to an auxiliary signal generator 10. The auxiliary signal generator generates auxiliary signals, each comprising a mixture of the same sound source components detected by the microphones 4, but with the components present in the mixture with different relative strengths (as compared with their levels in the original audio signals 6). In the embodiments described below, these auxiliary signals are derived by processing combinations of the audio signals 6 in various ways. The auxiliary signals and the input audio signals 6 are fed as inputs to an audio synthesis unit 20. This unit 20 applies scaling coefficients to the signals and sums them, to produce output signals 40. In the output signals 40, the sound source components are present with desired strengths. These desired strengths are expressed by gain factors 8, which are input to a scaling coefficient calculator 30. The scaling coefficient calculator 30 converts the desired gains {g(t)} into a set of scaling coefficients {a(t)}. Each of the desired gains is associated with a sound source detectable at the microphones 4; whereas each of the scaling coefficients is associated with one of the auxiliary signals. The scaling coefficient calculator 30 exploits knowledge about the parameters of the auxiliary signals to transform from desired gains {g(t)} to suitable scaling coefficients {a(t)}.
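  • The signal path of Fig. 1 can be sketched in a few lines (the function names and the per-frame organisation below are illustrative assumptions, not part of the description); the auxiliary signal generator is passed in as a callable because its internals depend on the embodiment:

```python
import numpy as np

def synthesize_output(u_ref, aux_signals, scaling_coeffs):
    """Audio synthesis unit 20: weighted sum of the reference microphone
    signal and the auxiliary signals (all 1-D arrays of equal length)."""
    signals = [u_ref] + list(aux_signals)
    return sum(a * s for a, s in zip(scaling_coeffs, signals))

def process_frame(mic_signals, desired_gains, Gamma, generate_aux, ref_index=0):
    """One pass through the Fig. 1 structure for a block of samples."""
    aux_signals = generate_aux(mic_signals)       # auxiliary signal generator 10
    a = np.linalg.solve(Gamma, desired_gains)     # scaling coefficient calculator 30
    return synthesize_output(mic_signals[ref_index], aux_signals, a)
```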
  • In the first embodiment the goal is to obtain a monaural (mono) output signal. Fig. 2 shows a block structure for the calculation of the auxiliary signals xn(t), yn(t), and zn(t) required in the algorithm.
  • In Fig. 2, the auxiliary signal generator 10 consists of three functional blocks 210, 212, 214:
    1) Fixed beamformer 210: the purpose of this block is to perform reweighting of the sound source components of which the source direction is known a priori - that is, the front and back sound sources. The power ratios of these components are altered by the fixed beamformer, both relative to each other and relative to the other sound source components.
    2) Adaptive beamformer 212: this block serves to perform reweighting of the localized interfering sound source(s). This necessarily requires an adaptive beamforming algorithm, since the interfering sound source direction is unknown.
    3) Adaptive spectral attenuation 214: this block reweights the diffuse noise field, by exploiting its assumed spectral diversity with reference to the localized sound source components.
  • The audio synthesis unit 20 is indicated by the dashed box 220. This produces the output signal ζ0(t) as a weighted summation of the auxiliary signals x0, y0, and z0, as well as the reference audio signal u0. The weights are the scaling coefficients, a, derived by the scaling coefficient calculator 30 (not shown in Fig. 2).
  • Note that in the mono output case of Fig. 2, some of the auxiliary signals (more particularly xn(t) and yn(t) for n > 0) are not explicitly used to calculate the output signal. However, these signals are used internally in the adaptive beamformer and adaptive spectral attenuation algorithms. More particularly, the signals xn(t), n > 0 at the output of the fixed beamformer will be constructed to be "noise reference signals"; that is, signals in which the desired (front and optionally back) sound sources have been suppressed and which are used subsequently in the adaptive beamformer to estimate the localized interfering sound source component in the primary output signal x0(t) of the fixed beamformer. The signal y1(t) is then constructed to be a "diffuse noise reference" that is used by the adaptive spectral attenuation algorithm to estimate the diffuse noise component in the primary output signal y0(t) of the fixed beamformer.
  • Because of the above discrimination between the primary beamformer output signals x0(t) and y0(t), on the one hand, and the other beamformer output signals xn(t) and yn(t) with n > 0, on the other hand, a stereo output signal should preferably not be created by calculating ζ0(t) and ζ1(t) using these auxiliary signals.
  • Instead, in the second embodiment, the block structure shown in Fig. 3 is used for the stereo case. Here, the stereo output signals are calculated as follows:
    $$\zeta_0(t) = a_0^{(0)}(t)\,u_0(t) + a_0^{(1)}(t)\,x_0(t) + a_0^{(2)}(t)\,y_0(t) + a_0^{(3)}(t)\,z_0(t)$$
    $$\zeta_1(t) = a_1^{(0)}(t)\,u_1(t) + a_1^{(1)}(t)\,x_0(t) + a_1^{(2)}(t)\,y_0(t) + a_1^{(3)}(t)\,z_0(t)$$
  • That is, the same set of auxiliary signals is used for generating both stereo outputs, but a different reference audio signal, un(t), is used in each case. This computation is performed by the audio synthesis unit 320 indicated by the dashed box.
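  • A corresponding sketch of this stereo synthesis (with hypothetical variable names; the coefficient vectors a_left and a_right come from the weight calculations discussed next) could look as follows:

```python
def synthesize_stereo(u0, u1, x0, y0, z0, a_left, a_right):
    """Stereo synthesis as in Fig. 3: both outputs reuse the same auxiliary
    signals x0, y0, z0, but each output uses its own reference microphone
    signal and its own set of four scaling coefficients."""
    zeta0 = a_left[0] * u0 + a_left[1] * x0 + a_left[2] * y0 + a_left[3] * z0
    zeta1 = a_right[0] * u1 + a_right[1] * x0 + a_right[2] * y0 + a_right[3] * z0
    return zeta0, zeta1
```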
  • In the case that N > 2 (that is, when the array consists of more than two microphones), one should select u0(t) and u1(t) to be those two microphone signals that are best suited to deliver a stereo image. As will be apparent to those skilled in the art, this will typically depend on the placement of the microphones.
  • Note that, due to the particular structure shown in Fig. 5, the weight calculation for the second output signal ζ1(t) should be slightly altered, to:
    $$\begin{bmatrix} a_1^{(0)}(t) \\ a_1^{(1)}(t) \\ a_1^{(2)}(t) \\ a_1^{(3)}(t) \end{bmatrix} = \underbrace{\begin{bmatrix} 1 & \gamma_{x_0,u_1}^F & \gamma_{y_0,u_1}^F & \gamma_{z_0,u_1}^F \\ 1 & \gamma_{x_0,u_1}^B & \gamma_{y_0,u_1}^B & \gamma_{z_0,u_1}^B \\ 1 & \gamma_{x_0,u_1}^I & \gamma_{y_0,u_1}^I & \gamma_{z_0,u_1}^I \\ 1 & \gamma_{x_0,u_1}^N & \gamma_{y_0,u_1}^N & \gamma_{z_0,u_1}^N \end{bmatrix}^{-1}}_{\Gamma^{-1}} \begin{bmatrix} g_F(t) \\ g_B(t) \\ g_I(t) \\ g_N(t) \end{bmatrix}$$
  • Meanwhile, the weights for the primary output signal ζ0(t) can be calculated as before, with n = 0.
  • As the equations above show, the scaling coefficient calculator 30 uses knowledge of the reweighting factors γn (p,m) to derive the scaling coefficients, a(t), from the desired gains, g(t). In the presently described embodiments, the reweighting factors are found by using knowledge of the characteristics of the various blocks 210, 212, 214 in the auxiliary signal generator. Preferably, the reweighting factors are determined offline.
  • Examples of the calculation of the reweighting factors will be described below. These examples rely on a frequency-domain characterisation of the auxiliary signal generator blocks 210, 212, 214.
  • The input-output relation of the three functional blocks in the block structure can be described in the frequency domain as follows. The fixed beamformer can be specified by an N x N transfer function matrix W1(ω), that is,
    $$X(\omega) = W_1^H(\omega)\,U(\omega)$$
    where
    $$X(\omega) = \begin{bmatrix} X_0(\omega) & \cdots & X_{N-1}(\omega) \end{bmatrix}^T, \qquad W_1(\omega) = \begin{bmatrix} W_{1,(1,1)}(\omega) & \cdots & W_{1,(1,N)}(\omega) \\ \vdots & & \vdots \\ W_{1,(N,1)}(\omega) & \cdots & W_{1,(N,N)}(\omega) \end{bmatrix}$$
    and U(ω) is defined as
    $$U(\omega) = \begin{bmatrix} U_0(\omega) & \cdots & U_{N-1}(\omega) \end{bmatrix}^T$$
  • The adaptive beamformer can be specified by an N x 1 transfer function vector W2(ω) that defines the relation between the adaptive beamformer input and its primary output signal:
    $$Y_0(\omega) = W_2^H(\omega)\,X(\omega)$$
    where
    $$W_2(\omega) = \begin{bmatrix} W_{2,(1)}(\omega) & \cdots & W_{2,(N)}(\omega) \end{bmatrix}^T$$
  • As explained earlier, the secondary adaptive beamformer output signal should ideally be an estimate of the diffuse noise component in the primary adaptive beamformer output signal. The most straightforward approach is to choose the secondary output signal to be equal to one of the noise references at the output of the fixed beamformer - for example, Y1(ω) = X1(ω). Alternatively, one could attempt to remove the localized interfering sound source component from the secondary adaptive beamformer output signal; however, this approach is not used in the present embodiments. The adaptive spectral attenuation can finally be specified using a scalar transfer function W3(ω), that is,
    $$Z_0(\omega) = W_3(\omega)\,Y_0(\omega)$$
  • Using the above input-output relations, we can derive expressions for the different localized sound source components in the primary auxiliary signals X0(ω), Y0(ω), and Z0(ω) as a function of the corresponding dry sound source signals SF(ω), SB(ω), and SI(ω),
    $$X_0^c(\omega) = W_{1,(:,1)}^H(\omega)\,G(\omega,\theta_c)\,S_c(\omega)$$
    $$Y_0^c(\omega) = W_2^H(\omega)\,W_1^H(\omega)\,G(\omega,\theta_c)\,S_c(\omega)$$
    $$Z_0^c(\omega) = W_3(\omega)\,W_2^H(\omega)\,W_1^H(\omega)\,G(\omega,\theta_c)\,S_c(\omega)$$
    where c represents the component F, B, or I, and W1,(:,1)(ω) denotes the first column of W1(ω). Similarly, the diffuse noise component in the primary auxiliary signals can be expressed as a function of the diffuse noise components in the microphone signals,
    $$X_0^N(\omega) = W_{1,(:,1)}^H(\omega)\,U^N(\omega)$$
    $$Y_0^N(\omega) = W_2^H(\omega)\,W_1^H(\omega)\,U^N(\omega)$$
    $$Z_0^N(\omega) = W_3(\omega)\,W_2^H(\omega)\,W_1^H(\omega)\,U^N(\omega)$$
  • We will now make the following assumptions, to simplify the calculation of the reweighting factors:
    1) the joint acoustic and microphone responses have a flat magnitude response within the bandwidth and in the direction of the different sound source components, i.e.,
       $$\forall\omega: S_c(\omega)\neq 0 \ \vee\ U_n^N(\omega)\neq 0:\quad G_n(\omega,\theta_c)\approx G_n(\theta_c), \qquad n = 0,\ldots,N-1,\ c = F, B, I$$
    2) the fixed and adaptive beamformers have a flat magnitude response within the bandwidth and in the direction of the different sound source components, that is,
       $$\forall\omega: S_c(\omega)\neq 0 \ \vee\ U_n^N(\omega)\neq 0:\quad W_{1,(m,n)}(\omega)\approx W_{1,(m,n)},\ \ W_{2,(n)}(\omega)\approx W_{2,(n)}, \qquad m = 1,\ldots,N,\ n = 1,\ldots,N,\ c = F, B, I$$
    3) the diffuse noise spectrum does not overlap with the spectra of the different localized sound sources,
       $$\nexists\,\omega: S_c(\omega)\neq 0 \ \&\ U_n^N(\omega)\neq 0, \qquad n = 0,\ldots,N-1,\ c = F, B, I$$
    4) the adaptive spectral attenuation magnitude response is flat within the bandwidth of the localized sound sources and within the bandwidth of the diffuse noise,
       $$\forall\omega: S_c(\omega)\neq 0:\ W_3(\omega)\approx W_3^c,\ c = F, B, I \qquad \forall\omega: U_n^N(\omega)\neq 0:\ W_3(\omega)\approx W_3^N,\ n = 0,\ldots,N-1$$
    5) the diffuse noise power in each of the microphone signals is equal,
       $$\sigma^2_{u_0^N} = \cdots = \sigma^2_{u_{N-1}^N}$$
  • Under these assumptions, the signal powers of the different sound source components in the microphone and auxiliary signals can be estimated as follows:
    $$\sigma^2_{u_n^c} = \left|G_n(\theta_c)\right|^2\sigma^2_{s_c}, \qquad n = 0,\ldots,N-1,\ c = F, B, I$$
    $$\sigma^2_{x_0^c} = \left|W_{1,(:,1)}^H G(\theta_c)\right|^2\sigma^2_{s_c}, \qquad c = F, B, I$$
    $$\sigma^2_{y_0^c} = \left|W_2^H W_1^H G(\theta_c)\right|^2\sigma^2_{s_c}, \qquad c = F, B, I$$
    $$\sigma^2_{z_0^c} = \left|W_3^c\right|^2\left|W_2^H W_1^H G(\theta_c)\right|^2\sigma^2_{s_c}, \qquad c = F, B, I$$
    $$\sigma^2_{x_0^N} = \left\|W_{1,(:,1)}\right\|_2^2\,\sigma^2_{u_0^N}, \qquad \sigma^2_{y_0^N} = \left\|W_1 W_2\right\|_2^2\,\sigma^2_{u_0^N}, \qquad \sigma^2_{z_0^N} = \left|W_3^N\right|^2\left\|W_1 W_2\right\|_2^2\,\sigma^2_{u_0^N}$$
    and consequently, the reweighting factors can be calculated as
    $$\gamma_{x_0,u_n}^c = \frac{\left|W_{1,(:,1)}^H G(\theta_c)\right|}{\left|G_n(\theta_c)\right|}, \qquad \gamma_{y_0,u_n}^c = \frac{\left|W_2^H W_1^H G(\theta_c)\right|}{\left|G_n(\theta_c)\right|}, \qquad \gamma_{z_0,u_n}^c = \frac{\left|W_3^c\right|\left|W_2^H W_1^H G(\theta_c)\right|}{\left|G_n(\theta_c)\right|}, \qquad n = 0,\ldots,N-1,\ c = F, B, I$$
    $$\gamma_{x_0,u_n}^N = \left\|W_{1,(:,1)}\right\|_2, \qquad \gamma_{y_0,u_n}^N = \left\|W_1 W_2\right\|_2, \qquad \gamma_{z_0,u_n}^N = \left|W_3^N\right|\left\|W_1 W_2\right\|_2, \qquad n = 0,\ldots,N-1$$
  • Finally, note that from a computational point of view, in some applications, it may be undesirable to calculate the reweighting factors online (in real-time) using the preceding formulae. A more efficient approach involves setting the values of the reweighting factors off-line (in advance), making use of the fixed beamformer response (known a priori) and of heuristics about the behaviour of the adaptive beamformer and spectral attenuation response. The values chosen can be approximations of the theoretical values predicted by the equations above. For example, the values may be set heuristically in 5dB steps. In many applications, the method will be largely insensitive to 5dB or 10dB deviations from the precise theoretical values.
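  • A sketch of this calculation is given below, assuming broadband (flat-response) approximations of the three blocks and the steering vector are already available; the function and argument names are illustrative, not taken from the description:

```python
import numpy as np

def reweighting_factors(W1, W2, W3_c, W3_N, G_c, G_n_c):
    """Reweighting factors of one localized component c and of the diffuse
    noise, for microphone n, under the flat-response assumptions above.
    W1    : (N, N) broadband fixed beamformer matrix
    W2    : (N,)   broadband adaptive beamformer vector
    W3_c  : scalar spectral-attenuation response in the band of component c
    W3_N  : scalar spectral-attenuation response in the diffuse-noise band
    G_c   : (N,)   steering vector G(theta_c)
    G_n_c : scalar response G_n(theta_c) of microphone n"""
    w1_col1 = W1[:, 0]                          # first column W1_(:,1)
    common = np.abs(W2.conj() @ (W1.conj().T @ G_c))   # |W2^H W1^H G(theta_c)|
    gamma_x_c = np.abs(w1_col1.conj() @ G_c) / np.abs(G_n_c)
    gamma_y_c = common / np.abs(G_n_c)
    gamma_z_c = np.abs(W3_c) * common / np.abs(G_n_c)
    gamma_x_N = np.linalg.norm(w1_col1)         # ||W1_(:,1)||_2
    gamma_y_N = np.linalg.norm(W1 @ W2)         # ||W1 W2||_2
    gamma_z_N = np.abs(W3_N) * gamma_y_N
    return (gamma_x_c, gamma_y_c, gamma_z_c), (gamma_x_N, gamma_y_N, gamma_z_N)
```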
  • The design of the fixed beamformer in an exemplary embodiment will now be described.
  • As explained previously above, the fixed beamformer creates a primary output signal X0(ω) that spatially enhances the front sound source signal, as well as a number of other output signals Xn(ω), n > 0 that serve as "noise references" for the adaptive beamformer. Here, we will first discuss the design of the so-called front source beamformer (FSB), and afterwards we will explain the design of the so-called blocking matrix (BM).
  • Depending on the kind of spatial enhancement one wants to achieve for the front sound source, different fixed beamformer design methods could be employed for the FSB; for example, an array pattern synthesis approach, or a differential or superdirective design method. These methods themselves are known in the art. In the present embodiment, we will adopt a superdirective (SD) design method, which is recommendable when the aim is to maximize the directivity factor of the microphone array - that is, to maximize the array gain in the presence of a diffuse noise field. The frequency-domain SD design equation for the FSB can be found in S. Doclo and M. Moonen ("Superdirective beamforming robust against microphone mismatch," IEEE Trans. Audio Speech Lang. Process., vol. 15, no. 2, pp. 617-631, Feb. 2007):
    $$W_{1,(:,1)}(\omega) = \frac{\left(\tilde{\Phi}_U^N + \mu I_N\right)^{-1} G(\omega,\theta_F)}{G^H(\omega,\theta_F)\left(\tilde{\Phi}_U^N + \mu I_N\right)^{-1} G(\omega,\theta_F)}$$
    where G(ω, θF) denotes the front sound source steering vector
    $$G(\omega,\theta) = \begin{bmatrix} G_0(\omega,\theta) & \cdots & G_{N-1}(\omega,\theta) \end{bmatrix}^T$$
  • $I_N$ represents the N×N identity matrix, µ is a regularization parameter, and $\tilde{\Phi}_U^N$ denotes the normalized diffuse noise correlation matrix, which can be calculated from the joint acoustic and microphone responses as follows,
    $$\tilde{\Phi}_U^N = \begin{bmatrix} \tilde{\Phi}_{U_0,U_0}^N & \cdots & \tilde{\Phi}_{U_0,U_{N-1}}^N \\ \vdots & & \vdots \\ \tilde{\Phi}_{U_{N-1},U_0}^N & \cdots & \tilde{\Phi}_{U_{N-1},U_{N-1}}^N \end{bmatrix}$$
    with
    $$\tilde{\Phi}_{U_m,U_n}^N = \frac{1}{2\pi}\int_0^{2\pi} G_m(\omega,\theta)\,G_n^*(\omega,\theta)\,d\theta$$
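  • A sketch of the SD design at a single frequency is given below (the function and argument names are illustrative); in practice the design is repeated at LFSB/2 frequencies and then transformed to time-domain filters, as noted below:

```python
import numpy as np

def superdirective_fsb(G_F, Phi_N, mu=1e-2):
    """Superdirective FSB weights W1_(:,1)(omega) at one frequency.
    G_F   : (N,) front steering vector G(omega, theta_F)
    Phi_N : (N, N) normalized diffuse noise correlation matrix at omega
    mu    : regularization parameter"""
    A = Phi_N + mu * np.eye(len(G_F))
    numer = np.linalg.solve(A, G_F)    # (Phi + mu I)^-1 G
    denom = G_F.conj() @ numer         # G^H (Phi + mu I)^-1 G
    return numer / denom
```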
  • The directivity factor (DF) and the ratio of the front and back response (FBRR) of the SD beamformer are defined as follows:
    $$\mathrm{DF_{dB}} = 10\log_{10}\left(\frac{1}{2\pi}\int_0^{2\pi}\frac{\left|W_{1,(:,1)}^H(\omega)\,G(\omega,\theta_F)\right|^2}{W_{1,(:,1)}^H(\omega)\,\tilde{\Phi}_U^N\,W_{1,(:,1)}(\omega)}\,d\omega\right)$$
    $$\mathrm{FBRR_{dB}} = 10\log_{10}\left(\frac{\int_0^{2\pi}\left|W_{1,(:,1)}^H(\omega)\,G(\omega,\theta_F)\right|^2\,d\omega}{\int_0^{2\pi}\left|W_{1,(:,1)}^H(\omega)\,G(\omega,\theta_B)\right|^2\,d\omega}\right)$$
  • Whereas the DF is nearly constant with the FSB filter length, the FBRR increases for higher filter lengths and approximately saturates for a length greater than or equal to 128. Note that the frequency-domain SD design is executed at LFSB/2 frequencies that are uniformly distributed in the Nyquist interval, after which the frequency-domain FSB coefficients are transformed to length-LFSB time-domain filters. Experiments have also shown a significant performance gap, both in terms of directivity and FBRR, between the 2-microphone configuration and configurations with more than two microphones.
  • The BM in the fixed beamformer consists of a number of filter-and-sum beamformers that each operate on one particular subset of microphone signals. In this way, a number of noise reference signals is created, in which the power of the desired signal components is maximally reduced relative to the power of these components in the microphone signals. Typically, in an N-microphone configuration, N-1 noise references are created by designing N-1 different filter-and-sum beamformers. However, in some cases it might be preferable to create fewer than N-1 noise references, which then leads to a reduction of the number of input signals xn(t) for the adaptive beamformer. In fact, in this embodiment we employ a BM consisting of only one filter-and-sum beamformer designed using the complete set of available microphone signals. In this way, the number of adaptive filters and hence the computational complexity of the adaptive beamformer can be considerably reduced.
  • In the context of the BM design, we consider the back sound source (if any) to be an undesired signal (which should be cancelled by the adaptive beamformer); hence the BM design reduces to a front-cancelling beamformer (FCB) design. Again, one of several different fixed beamformer design methods can be employed. In this embodiment, we use an array pattern synthesis method, different from existing methods.
  • In general, we can specify the frequency-domain FCB design at a set of angles {θ0, ..., θM-1} by the following linear system of equations:
    $$\underbrace{\begin{bmatrix} G_0^*(\omega,\theta_0) & \cdots & G_{N-1}^*(\omega,\theta_0) \\ \vdots & & \vdots \\ G_0^*(\omega,\theta_{M-1}) & \cdots & G_{N-1}^*(\omega,\theta_{M-1}) \end{bmatrix}}_{G^H(\omega)} W_{1,(:,2)}(\omega) = \underbrace{\begin{bmatrix} P_0^*(\omega) \\ \vdots \\ P_{M-1}^*(\omega) \end{bmatrix}}_{P^*(\omega)}$$
    where Pm(ω), m = 0, ..., M-1 denotes the desired response at frequency ω and angle θm. The least-squares (LS) optimal solution is then given by
    $$W_{1,(:,2)}(\omega) = \left(G(\omega)\,G^H(\omega)\right)^{-1} G(\omega)\,P^*(\omega)$$
  • More specifically, to obtain an FCB design we should specify a zero response in the front direction and a non-zero response in any other direction. Preferably, the latter direction should be the back direction, to avoid the design actually corresponding to a front-back-cancelling beamformer design. As a consequence, the number of equations in the linear system of equations above is M = 2, and the specification angles correspond to θ0 = θF and θ1 = θB. Finally, the desired response vector is equal to P*(ω) = [0, 1]H.
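  • A least-squares sketch of this FCB design at one frequency is shown below (names are illustrative); a pseudo-inverse replaces the plain inverse so that the same expression also yields the minimum-norm solution when the array has more than two microphones (M < N):

```python
import numpy as np

def front_cancelling_bm(G_front, G_back):
    """FCB blocking-matrix filter W1_(:,2)(omega) at one frequency:
    zero response towards the front, unit response towards the back.
    G_front, G_back : (N,) steering vectors G(omega, theta_F), G(omega, theta_B)."""
    GH = np.vstack([G_front.conj(), G_back.conj()])   # 2 x N matrix G^H(omega)
    G = GH.conj().T                                   # N x 2 matrix G(omega)
    P = np.array([0.0, 1.0])                          # desired response P*(omega)
    return np.linalg.pinv(G @ GH) @ (G @ P)           # LS solution (G G^H)^+ G P*
```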
  • With this design, the back response is indeed close to a unity response for most microphone configurations and filter length values. However, the front source response varies heavily according to the microphone configuration and filter length used. An important observation is that at least one microphone pair in an endfire configuration should preferably be included in the array to obtain a satisfactory power reduction of the front sound source component. Concerning the choice of the BM filter length, experiments show that there is no clear threshold effect - that is, the response in the front direction decreases with a nearly constant slope (provided an endfire microphone pair is included). As a consequence, the BM filter length should preferably be chosen according to the desired front sound source power reduction.
  • The design of the adaptive beamformer in an exemplary embodiment will now be described.
  • The adaptive beamformer in the block scheme may be implemented using a generalized sidelobe canceller (GSC) algorithm; a multi-channel Wiener filtering (MWF) algorithm; or any other adaptive algorithm. In this embodiment, we employ the speech-distortion-weighted multi-channel Wiener filtering (SDW-MWF) which includes the GSC and MWF as special cases. Details of this method can be found in S. Doclo, A. Spriet, J. Wouters, and M. Moonen ("Frequency-domain criterion for the speech distortion weighted multichannel wiener filter for robust noise reduction," Speech Commun., vol. 49, no. 7-8, pp. 636-656, Jul.-Aug. 2007, special Issue on Speech Enhancement).
  • The objective of the SDW-MWF is to jointly minimize the energy of the undesired components (B, I, N) and the distortion of the desired component (F) in the enhanced signal Y0(ω). That is,
    $$\min_{W_2(\omega)}\ E\left\{\left|W_2^H(\omega)\left(X^B(\omega)+X^I(\omega)+X^N(\omega)\right)\right|^2\right\} + \frac{1}{\mu}\,E\left\{\left|X_0^F(\omega)-W_2^H(\omega)\,X^F(\omega)\right|^2\right\}$$
    resulting in the adaptive beamformer estimate
    $$W_2(\omega) = \left(\Phi_x^F(\omega)+\mu\,\Phi_x^{B,I,N}(\omega)\right)^{-1}\Phi_x^F(\omega)\,e_0$$
    where
    $$e_0 = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}^T$$
    and the correlation matrices of the desired and undesired components in the adaptive beamformer input signal are defined as
    $$\Phi_x^F(\omega) = E\left\{X^F(\omega)\,\left(X^F(\omega)\right)^H\right\}$$
    $$\Phi_x^{B,I,N}(\omega) = E\left\{\left(X^B(\omega)+X^I(\omega)+X^N(\omega)\right)\left(X^B(\omega)+X^I(\omega)+X^N(\omega)\right)^H\right\}$$
  • The parameter µ can be tuned to trade off energy reduction of the undesired components versus distortion of the desired component. Several recursive implementations of the SDW-MWF filter estimate have been proposed, in which the adaptive SDW-MWF filter update is based on a generalized singular value decomposition (GSVD), a QR decomposition (QRD), a time-domain stochastic gradient method, or a frequency-domain stochastic gradient method. A common feature of these implementations is that the correlation matrices Φx (F)(ω) and Φx (B,I,N)(ω) are explicitly estimated before the SDW-MWF filter estimate is computed.
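  • Once the two correlation matrices have been estimated, the filter estimate itself is a direct per-frequency matrix computation; a sketch (with illustrative names) is:

```python
import numpy as np

def sdw_mwf(Phi_F, Phi_BIN, mu=1.0):
    """SDW-MWF estimate W2(omega) = (Phi_F + mu Phi_BIN)^-1 Phi_F e0 at one
    frequency; e0 selects the primary fixed beamformer output X0.
    Phi_F   : (N, N) correlation matrix of the desired component
    Phi_BIN : (N, N) correlation matrix of the undesired components
    mu      : trade-off parameter (larger mu gives more noise reduction,
              at the cost of more distortion of the desired component)"""
    e0 = np.zeros(Phi_F.shape[0], dtype=Phi_F.dtype)
    e0[0] = 1.0
    return np.linalg.solve(Phi_F + mu * Phi_BIN, Phi_F @ e0)
```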
  • The signal-to-noise ratio (SNR) improvement provided by the SDW-MWF adaptive beamformer has been evaluated in a scenario with two localized sound sources: a front sound source consisting of a male speech signal (θF=0) and a localized interfering sound source consisting of a music signal (θI = 90 degrees).
  • The mean SNR at the microphones is equal to 10 dB. The fixed beamformer is implemented using an SD design for the FSB and a front-cancelling design for the BM, and an evaluation is done both for LFSB = LBM = 64 and for LFSB = LBM = 128. The adaptation of the SDW-MWF algorithm is based on a stochastic gradient frequency-domain implementation, and is controlled by a perfect (manual) voice activity detection (VAD). Two features of the SDW-MWF have been evaluated, namely:
    1) the use of a feedforward filter W2,(1)(ω) to include the fixed beamformer primary output signal X0(ω) as an additional noise reference in the adaptive beamformer; and
    2) the value of the SDW-MWF trade-off parameter 1/µ (where 1/µ = 0 means no penalization of the desired component distortion).
  • Note that in case the desired component distortion is not penalized (1/µ=0), the algorithm without a feedforward filter corresponds to the GSC algorithm, while the algorithm with a feedforward filter is not relevant due to an intolerable speech distortion. The evaluation has shown that the GSC algorithm as well as the SDW-MWF algorithm with a small trade-off parameter (1/µ = 0.01) are well suited for the reduction of the localized interfering sound source power. Moreover, there appears to be no significant influence of the number of microphones and the FSB and BM filter lengths on the adaptive beamformer performance.
  • The design of the Adaptive Spectral Attenuation process in an exemplary embodiment will now be described.
  • The adaptive spectral attenuation block is included in the structure with the aim of reducing the diffuse noise energy in the primary adaptive beamformer output signal. To this end, the short-term magnitude spectra of the reference microphone signal, |U0(ωk,l)|, and the primary and secondary adaptive beamformer output signals, |Y0(ωk,l)| and |Y1(ωk,l)|, are estimated by means of a Discrete Fourier transform (DFT), with k and l denoting the DFT frequency bin and time frame indices. An instantaneous spectral gain function is then calculated as follows,
    $$G_{\mathrm{inst}}(\omega_k,l) = \frac{\left|U_0(\omega_k,l)\right| - \beta_n\,\hat{C}(\omega_k,l)\,\left|\hat{Y}_1(\omega_k,l)\right|}{\left|\hat{Y}_0(\omega_k,l)\right| + \varepsilon}$$
    where the subtraction factor βn ∈ [0,1] determines the amount of spectral attenuation and the regularization factor ε is a small constant which prevents division by zero. Since the secondary adaptive beamformer output signal Y1(ω) is equal to the noise reference X1(ω) at the output of the fixed beamformer, a spectral coherence function C(ωk,l) that relates the magnitude spectra of the diffuse noise components in the primary and secondary fixed beamformer output signals needs to be estimated and taken into account in the equation. The instantaneous gain function of the equation is then lowpass filtered and clipped, before being applied to the speech estimate, that is,
    $$G_{\mathrm{lp}}(\omega_k,l) = (1-\alpha)\,G_{\mathrm{lp}}(\omega_k,l-1) + \alpha\,G_{\mathrm{inst}}(\omega_k,l)$$
    $$G(\omega_k,l) = \max\left(G_{\mathrm{lp}}(\omega_k,l),\,\xi_n\right)$$
    $$\left|Z(\omega_k,l)\right| = G(\omega_k,l)\,\left|Y_0(\omega_k,l)\right|$$
    where α denotes the lowpass filter pole and ξn = 1 - βn is the clipping level. The enhanced signal magnitude spectrum |Z(ωk,l)| is subsequently transformed back to the time domain by applying an inverse DFT (IDFT), and by using the phase spectrum of the primary adaptive beamformer output signal Y0(ωk,l).
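  • One frame of this processing can be sketched as follows (parameter values and argument names are illustrative; the IDFT and overlap-add stage is omitted):

```python
import numpy as np

def spectral_attenuation_frame(U0_mag, Y0_mag, Y1_mag, C, G_lp_prev,
                               beta=0.8, alpha=0.3, eps=1e-12):
    """Adaptive spectral attenuation for one time frame.
    U0_mag, Y0_mag, Y1_mag : (K,) magnitude spectra of u0, y0 and y1
    C                      : (K,) estimated spectral coherence function
    G_lp_prev              : (K,) lowpass-filtered gain of the previous frame"""
    G_inst = (U0_mag - beta * C * Y1_mag) / (Y0_mag + eps)   # instantaneous gain
    G_lp = (1.0 - alpha) * G_lp_prev + alpha * G_inst        # lowpass smoothing
    G = np.maximum(G_lp, 1.0 - beta)                         # clip at xi = 1 - beta
    Z_mag = G * Y0_mag                                       # enhanced magnitude
    return Z_mag, G_lp
```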
  • An exemplary use of the embodiment in an Acoustic Zoom (AZ) application will now be described.
    1) Specification of the time-varying gain factors: In the AZ application, the aim is to keep the level of the undesired sound sources constant, while the level of the desired sound sources should adapt to the camera zoom state. As a consequence, we should set the gain factors for the localized interfering sound source and the diffuse noise as follows,
       $$g_I(t) \equiv 1, \qquad g_N(t) \equiv 1$$
  • From preliminary results with the above zoom-in trajectory for the front sound source level, it was noted that a perceptually better trajectory could be designed. More particularly, a faster level increase at the start of the zoom-in operation would be desired, eventually converging to the same final level at close-up. A perceptually more attractive level trajectory was found to be
    $$g_F(t) = 1 + \left(2^{d_{\mathrm{zoom}}} - 1\right)\frac{1.2^{\,\upsilon_{\mathrm{zoom}} t}}{1.2^{\,d_{\mathrm{zoom}}}}, \qquad 0 \le t \le \frac{d_{\mathrm{zoom}}}{\upsilon_{\mathrm{zoom}}}$$
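  • A small sketch of this trajectory, with illustrative values assumed for the zoom depth d_zoom and zoom speed v_zoom (neither value is specified above), is:

```python
import numpy as np

def front_gain_trajectory(t, d_zoom=3.0, v_zoom=0.3):
    """Front-source gain g_F(t) during the zoom-in phase: starts above unity,
    grows as 1.2**(v_zoom * t), and reaches 2**d_zoom at t = d_zoom / v_zoom
    (the values of d_zoom and v_zoom here are illustrative only)."""
    t = np.clip(t, 0.0, d_zoom / v_zoom)
    return 1.0 + (2.0 ** d_zoom - 1.0) * 1.2 ** (v_zoom * t) / 1.2 ** d_zoom
```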
  • Concerning the specification of the back sound source gain factor, several possibilities exist. A first possibility is to regard the back sound source as an undesired sound source, in which case its level should remain constant. However, since the back sound source is typically very close to the camera, its level should often be reduced to obtain an acceptable balance between the back sound source and the other sound sources. A second possibility is to have the back sound source gain factor follow the inverse trajectory of the front sound source gain factor, possibly combined with a fixed back sound source level reduction. While such an inverse level trajectory would obviously make sense from a physical point of view, it may be perceived as somewhat too artificial, since the front sound source level change is then supported by visual cues, while the back sound source level change is not.
  • Experiments have been performed to demonstrate the performance of the AZ algorithm. In both experiments, the front sound source is a male speech signal corresponding to a camera recording that consists of a far shot phase (5 s), a zoom-in phase (10 s), and a close-up phase (11 s). In addition, the sound field consists of diffuse babble noise and a localized interfering music source at θI = 90 deg. In the first simulation, no back sound source is present, while in the second simulation, a female speech signal is present in the back direction (θB = 180 deg).
  • A 3-microphone array was used, employing microphones 1, 3, and 4 as indicated in Fig. 1. The fixed beamformer consists of a superdirective FSB and a single-noise-reference front-cancelling BM, both with a filter length of 64. The adaptive beamformer is calculated using a GSC algorithm and has a filter length of 128. The desired AZ effect consists in keeping the level of the undesired sound sources (including the back sound source in the second simulation) unaltered, while increasing the level of the front sound source during the zoom-in phase, according to the perceptually optimal trajectory defined above.
  • In these embodiments the values of the re-weighting factors were determined empirically in advance, rather than at run-time (as described previously above).
  • As will be apparent to those skilled in the art, the performance of the method depends in part upon the accuracy to which the reweighting factors can be estimated. The greater the accuracy, the better the performance of the manipulation will be.
  • Fig. 4 is a flowchart summarising a method according to an embodiment. In step 410, audio signals 6 are received from the microphones 4. In step 420, the desired gain factors 8 are input. In step 430, the auxiliary signal generator generates the auxiliary signals. In step 440, the scaling coefficient calculator 30 calculates the scaling coefficients, a(t). Finally, in step 450, the audio synthesis unit 20 applies the scaling coefficients to the generated auxiliary signals and reference audio signals, to synthesise output audio signals 40.
  • While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.
  • For example, it is possible to operate the invention in an embodiment wherein different blocks are used to generate the auxiliary signals. The exemplary blocks described above (fixed or adaptive beamforming, or adaptive spectral modification) can be replaced or supplemented by other methods. Essentially, the auxiliary signal calculation should be such that it exploits the diversity of the individual sound sources in the sound scene. When multiple microphones are used, then exploiting spatial diversity is often the most straightforward option - and this is exploited by the beamformers in the embodiments described above. However, different kinds of diversity could equally be exploited, for example: diversity in the time domain (if not all of the sound sources are concurrently active); diversity in statistics (which could lead to the use of Wiener filtering, independent component analysis, and so on); or diversity in the degree of (non-)stationarity. The optimal choice of auxiliary signal generator will vary according to the application and the characteristics of the audio environment.
  • The ordering of the blocks described in embodiments herein and shown in the drawings is also not limiting on the scope of the invention. Blocks may be eliminated, re-ordered or duplicated.
  • Likewise, although the embodiments described herein have concentrated on monaural or stereo implementation, the invention can of course be implemented with a greater number of audio output signals than just one or two. Those skilled in the art will be readily able to generalise from the description above, to provide an arbitrary number of desired outputs. This may be useful, for example, for multi-channel or surround-sound audio applications.
  • Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.

Claims (14)

  1. An audio-processing device comprising:
    an audio input, for receiving one or more audio signals (6) detected at respective microphones (4), each of the audio signals comprising a mixture of a plurality of components, each component corresponding to a sound source;
    a control input, for receiving, for each sound source, a desired gain factor (8) associated with the source, by which it is desired to amplify the corresponding component;
    an auxiliary signal generator (10, 210, 212, 214), adapted to generate at least one auxiliary signal from the one or more audio signals, the at least one auxiliary signal comprising a different mixture of the components as compared with a reference one of the one or more audio signals, wherein the levels of the components in the at least one auxiliary signal are related to their respective levels in the reference audio signal by known reweighting factors;
    a scaling coefficient calculator (30), adapted to calculate a set of scaling coefficients in dependence upon the desired gain factors and upon the reweighting factors, each scaling coefficient associated with one of the at least one auxiliary signal and optionally the reference audio signal; and
    an audio synthesis unit (20, 220, 320), adapted to synthesize an output audio signal (40) by applying the scaling coefficients to the at least one auxiliary signal and optionally the reference audio signal and to combine the results,
    wherein the scaling coefficients are calculated from the desired gain factors and the reweighting factors such that the synthesized output signal (40) provides the desired gain factor for each component.
  2. A handheld personal electronic device comprising
    a plurality of microphones (4); and
    the audio processing device of claim 1.
  3. The mobile or handheld device of claim 2, wherein the microphones (4) are omni-directional microphones.
  4. A method of processing audio signals comprising:
    receiving (410) one or more audio signals detected at respective microphones, each of the audio signals comprising a mixture of a plurality of components, each component corresponding to a sound source;
    receiving (420), for each sound source, a desired gain factor associated with the source, by which it is desired to amplify the corresponding component;
    generating (430) at least one auxiliary signal from the one or more audio signals, the at least one auxiliary signal comprising a different mixture of the components as compared with a reference one of the one or more audio signals, wherein the levels of the components in the at least one auxiliary signal are related to their respective levels in the reference audio signal by known reweighting factors;
    calculating (440) a set of scaling coefficients in dependence upon the desired gain factors and upon the reweighting factors, each scaling coefficient associated with one of the at least one auxiliary signal and optionally the reference audio signal; and
    synthesizing (450) an output audio signal by applying the scaling coefficients to the at least one auxiliary signal and optionally the reference audio signal and combining the results,
    wherein the scaling coefficients are calculated from the desired gain factors and the reweighting factors such that the synthesized output signal provides the desired gain factor for each component.
  5. The method of claim 4, wherein:
    the desired gain factors; the weighting factors and the scaling coefficients are related by a linear system of equations; and
    the step of calculating the set of scaling coefficients comprises solving the system of equations.
  6. The method of claim 4 or 5, wherein the at least one auxiliary signal is a linear combination of any of:
    one or more of the audio signals;
    one or more temporally shifted versions of the audio signals; and
    one or more filtered versions of the audio signals.
  7. The method of any of claims 4 to 6, wherein the at least one auxiliary signal is generated by at least one of:
    fixed beamforming;
    adaptive beamforming; and
    adaptive spectral modification.
  8. The method of any of claims 4 to 7, wherein:
    a first auxiliary signal is generated by a first method;
    a second auxiliary signal is generated by a second, different method; and
    the second auxiliary signal is generated based on an output of the first method.
  9. The method of any of claims 4 to 8, comprising:
    synthesizing a first output audio signal by applying scaling coefficients to a first reference audio signal and at least one first auxiliary signal and combining the results; and
    synthesizing a second output audio signal by applying scaling coefficients to a second, different reference audio signal and at least one second auxiliary signal and combining the results.
  10. The method of any of claims 4 to 9, wherein the sound sources comprise one or more localised sound sources and a diffuse noise field.
  11. The method of any of claims 4 to 10, wherein the desired gain factors are time-varying.
  12. The method of any of claims 4 to 11,
    wherein the desired gain factors are chosen in dependence upon the state of a visual zoom function.
  13. A computer program comprising computer program code means adapted to perform all the steps of any of claims 4 to 12 when said program is run on a computer.
  14. A computer program as claimed in claim 13 embodied on a computer readable medium.

