EP3121811A1 - Method for performing audio restauration, and apparatus for performing audio restauration


Publication number
EP3121811A1
EP3121811A1
Authority
EP
European Patent Office
Prior art keywords
audio signal
time domain
signal
tensor
input audio
Prior art date
Legal status
Withdrawn
Application number
EP15306212.0A
Other languages
German (de)
French (fr)
Inventor
Cagdas Bilen
Alexey Ozerov
Patrick Perez
Current Assignee
Thomson Licensing SAS
Original Assignee
Thomson Licensing SAS
Priority date
Filing date
Publication date
Application filed by Thomson Licensing SAS filed Critical Thomson Licensing SAS
Priority to EP15306212.0A priority Critical patent/EP3121811A1/en
Priority to EP16714898.0A priority patent/EP3281194B1/en
Priority to PCT/EP2016/057541 priority patent/WO2016162384A1/en
Priority to US15/564,378 priority patent/US20180211672A1/en
Publication of EP3121811A1 publication Critical patent/EP3121811A1/en
Priority to HK18103188.6A priority patent/HK1244946B/en

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00 — Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/005 — Correction of errors induced by the transmission channel, if related to the coding algorithm
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 — Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 — Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 — Noise filtering


Abstract

A method for performing audio inpainting, wherein missing portions in an input audio signal are recovered and a recovered audio signal is obtained, comprises computing a Short-Time Fourier Transform (STFT) on portions of the input audio signal; computing conditional expectations of the source power spectra of the input audio signal, wherein estimated source power spectra $P(f,n,j)$ are obtained and wherein the variance tensor $V$ and the complex STFT coefficients of the input audio signal are used; iteratively re-calculating the variance tensor $V$ from the estimated power spectra $P(f,n,j)$ and re-calculating updated estimated power spectra $P(f,n,j)$; computing an array of STFT coefficients from the resulting variance tensor $V$ according to $\hat{S}(f,n,j) = E\{S(f,n,j) \mid x, I_S, I_L, V\}$; and converting the array of STFT coefficients to the time domain, wherein coefficients $\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_J$ of the recovered audio signal are obtained.

Description

    Field of the invention
  • This invention relates to a method for performing audio restauration and to an apparatus for performing audio restauration. One particular type of audio restauration is audio inpainting.
  • Background
  • The problem of audio inpainting can be defined as that of reconstructing the missing parts of an audio signal [1]. The name "audio inpainting" was given to this problem to draw an analogy with image inpainting, where the goal is to reconstruct missing regions in an image. A particular problem is audio inpainting in the case where some temporal samples of the audio are lost, i.e. samples in the time domain. This is different from some known solutions that focus on lost samples in the time-frequency domain. This problem occurs e.g. in the case of saturation of amplitude (clipping) or interference of high-amplitude impulsive noise (clicking). In such cases, the samples need to be recovered (de-clipping or de-clicking, respectively).
  • There exist methods for audio inpainting problems such as audio de-clipping [1], [2] and de-clicking [1]. In [1], audio inpainting is accomplished by enforcing sparsity of the audio signal in a Gabor dictionary, which can be used both for audio de-clipping and de-clicking. For de-clipping, the approach proposed in [2] similarly relies on sparsity of audio signals in Gabor dictionaries while also optimizing for an adaptive sparsity pattern using the concept of social sparsity. Combined with the constraint that the signal magnitude must be greater than a clipping threshold, the method in [2] is shown to be much more effective than earlier works such as [1].
  • Summary of the Invention
  • The proposed solution is expected not only to perform better than these sparsity-inducing approaches, but also to be computationally less expensive. Furthermore, approaches based on time domain sparse dictionaries, such as the Gabor dictionary, do not inherently yield phase-invariant results, whereas the Non-negative Tensor Factorization (NTF) based model used herein is designed to be phase-invariant. This means that the models employed by the known methods need to be extended, at the expense of performance, in order to be near phase-invariant, whereas the proposed approach has no such drawback.
  • Existing methods [1], [2] usually rely on sparse models (i.e., the signal is represented with few activation coefficients in some dictionary of elementary signals) [1] or locally-structured sparse models (i.e., relations between activation coefficients are locally enforced) [2]. Models exploiting global audio signal structure (e.g., long-term similarity of time or frequency patterns) have not been applied to these problems. According to the present principles, an audio inpainting method applied to recover (short) missing temporal parts is based on a Non-negative Tensor Factorization (NTF) model. This method is more efficient than the known methods [1], [2], since the NTF model exploits global audio signal structure (notably the long-term similarity of frequency patterns) in the time domain. NTF-like models have already been used for missing audio reconstruction in the time-frequency domain [3]. The main difference from these approaches is that they assume the missing parts to be defined in some time-frequency domain, whereas here missing temporal parts (i.e., in the time domain) are considered.
  • An additional problem considered herein, and not considered by earlier works, is performing audio inpainting jointly with source separation. The source separation problem can be defined as separating an audio signal into multiple sources, often with different characteristics, for example separating a music signal into signals from different instruments. When the audio to be inpainted is known to be a mixture of multiple sources and some information about the sources is available (e.g. temporal source activity information [4], [5]), it can be easier to separate the sources while at the same time explicitly modeling the unknown mixture samples as missing. This situation may arise in many real-world scenarios, e.g. when one needs to separate a recording that was clipped, which happens quite often. It was found that a sequential application of inpainting and source separation, in one order or the other, is suboptimal, since the latter stage suffers from the errors produced in the former stage, while within a joint processing these errors may be compensated. Moreover, distortion such as clipping may have a quite harmful impact on the audio signal in the Short-Time Fourier Transform (STFT) domain, possibly destroying the low-rank signal structure and making the NTF modeling poorer. Treating the clipped values as missing within the joint approach avoids this problem. Disclosed herein is a method for audio inpainting that uses a low-rank NTF model to model the audio signals. The disclosed method does not rely on a fixed dictionary but instead on a more general model representing global signal structure, which is also automatically adapted to the reconstructed audio signals. In addition to being naturally extendable to handle the joint inpainting and source separation problem, the disclosed method is also highly parallelizable for faster and more efficient computation.
  • In one embodiment, the present invention relates to a method for performing audio restauration, wherein missing coefficients of an input audio signal are recovered and a recovered audio signal is obtained. The method comprises steps of initializing a variance tensor $V$ such that it is a low-rank tensor that can be composed from component matrices $H, Q, W$ (or initializing said component matrices $H, Q, W$ to obtain the low-rank variance tensor $V$); iteratively applying the following steps until convergence of the component matrices $H, Q, W$: computing conditional expectations of source power spectra of the input audio signal, wherein estimated source power spectra $P(f,n,j)$ are obtained and wherein the variance tensor $V$, known signal values of the input audio signal and time domain information on loss ($I_L$) are input to the computing, and re-calculating the component matrices $H, Q, W$ and the variance tensor $V$ using the estimated source power spectra $P(f,n,j)$ and current values of the component matrices $H, Q, W$; upon convergence of the component matrices $H, Q, W$, computing a resulting variance tensor $V'$, and computing from the resulting variance tensor $V'$, from known signal values $(x, y)$ of the input audio signal and from time domain information on loss ($I_L$), an array of the posterior mean of Short-Time Fourier Transform (STFT) samples ($\hat{S}$) of the recovered audio signal; and converting coefficients of the array of the posterior mean of the STFT samples ($\hat{S}$) to the time domain, wherein coefficients ($\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_J$) of the recovered audio signal are obtained.
  • In one embodiment, a computer readable medium has stored thereon executable instructions that, when executed on a computer, cause the computer to perform a method comprising the steps of the method as disclosed in claim 1.
  • In one embodiment, an apparatus for performing audio inpainting comprises at least one of a hardware component and a hardware processor, and a non-transitory, tangible, computer-readable storage medium tangibly embodying at least one software component, the software component, when executing on the at least one hardware component or hardware processor, causing the steps of the method of claim 1 to be performed.
  • Further objects, features and advantages of the invention will become apparent from a consideration of the following description and the appended claims when taken in connection with the accompanying drawings.
  • Brief description of the drawings
  • Exemplary embodiments of the invention are described with reference to the accompanying drawings, which show in
    • Fig.1 the structure of audio inpainting;
    • Fig.2 more details on an audio inpainting system;
    • Fig.3 a flow-chart of a method; and
    • Fig.4 elements of an apparatus.
    Detailed description of embodiments
  • Fig.1 shows the structure of audio inpainting. It is assumed that the audio signal x to be inpainted is given together with the known temporal positions of the missing samples. For the problem with joint source separation, some prior information about the sources can also be provided. E.g. some samples from individual sources may be provided, simply because they were kept during the audio mixing step, or because some temporal source activity information was provided by a user, e.g. as described in [4], [5]. Additionally, further information on the characteristics of the loss in the signal x can be provided. E.g. for the de-clipping problem, the clipping threshold is given so that the magnitude of the lost signal can be constrained, in one embodiment. Given the signal x, the problem is to find the inpainted signal whose estimated sections are as close as possible to the original signal before the loss (i.e. before clipping or clicking). If some prior information on the sources is available, the problem definition is extended to include joint source separation, so that individual sources are also estimated that are as close as possible to the original sources (before mixing and loss).
  • Throughout this specification, time-domain signals are represented by a letter with two primes, e.g. x″, framed and windowed time-domain signals are denoted by a letter with one prime, e.g. x′, and complex-valued short-time Fourier transform (STFT) coefficients are denoted by a letter with no primes, e.g. x. The following is a single-channel mixing equation in the time domain:
    $$x''_t = \sum_{j=1}^{J} s''_{jt} + a''_t, \qquad t = 1, \ldots, T \tag{1}$$
    where $t = 1, \ldots, T$ is the discrete time index, $j = 1, \ldots, J$ is the source index, and $x''_t$, $s''_{jt}$ and $a''_{jt}$ denote, respectively, mixture, source and quantization noise samples. Moreover, it is assumed that the mixture is only observed on a subset of time indices $\Xi'' \subset \{1, \ldots, T\}$ called the mixture observation support (MOS). For clipped signals this support indicates the indices with magnitude smaller than the clipping threshold.
  • The sources are unknown. It is assumed, however, that it is known which sources are active at which time periods. For example, for multi-instrument music this information corresponds to knowing which instruments are playing at any instant. Furthermore, it is assumed that if the mixture is clipped, the clipping threshold is known.
  • The time domain signals are converted into their windowed-time version using overlapping frames of length M. In this domain, mixing equation (1) reads
    $$x'_{mn} = \sum_{j=1}^{J} s'_{jmn} + a'_{mn}, \qquad m = 1, \ldots, M,\; n = 1, \ldots, N \tag{2}$$
    where $n = 1, \ldots, N$ is the frame index and $m = 1, \ldots, M$ is an index within the frame. We also introduce the set $\Xi' \subset \{1, \ldots, M\} \times \{1, \ldots, N\}$ that is the MOS within the framed representation corresponding to $\Xi''$ in the time domain, and its frame-level restriction $\Xi'_n = \{m \mid (m,n) \in \Xi'\}$. In this specification, the observed clipped mixture in the windowed time domain is denoted $x'_c$ and its restriction to unclipped instants as $x'$, where $x'_n = [x'_{mn}]_{m \in \Xi'_n}$.
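  • As an illustration of this framing step, the following is a minimal numpy sketch; the 50% overlap, the sine window, and the function name frame_signal are assumptions for illustration, since the patent does not fix a particular window or hop size.

```python
import numpy as np

def frame_signal(x, M, hop):
    """Split a time-domain signal x'' into overlapping windowed frames of
    length M; column n of the result is the frame x'_n of eq. (2)."""
    n_frames = 1 + (len(x) - M) // hop
    window = np.sin(np.pi * (np.arange(M) + 0.5) / M)  # sine window, one common choice
    return np.stack([window * x[n * hop : n * hop + M] for n in range(n_frames)], axis=1)

# Example: one second of a 440 Hz tone at 8 kHz, frames of length 1024, 50% overlap
fs = 8000
x_time = np.sin(2 * np.pi * 440.0 * np.arange(fs) / fs)
X_framed = frame_signal(x_time, M=1024, hop=512)
print(X_framed.shape)  # (1024, 14) -> (M, N)
```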
  • Let $U \in \mathbb{C}^{M \times F}$ be the complex-valued Hermitian matrix of the Discrete Fourier Transform (DFT). Applying this transform to eq. (2) yields the STFT domain model:
    $$x_{fn} = \sum_{j=1}^{J} s_{jfn} + a_{fn}, \qquad f = 1, \ldots, F,\; n = 1, \ldots, N \tag{3}$$
    where $f = 1, \ldots, F$ is the frequency bin index, and $x_n = U x'_n$, $s_{jn} = U s'_{jn}$ and $a_n = U a'_n$ are STFT frames (F-length column vectors) obtained from the corresponding time frames (M-length column vectors). For example, $x_n = [x_{fn}]_{f=1,\ldots,F}$ is a mixture STFT frame and $x'_n = [x'_{mn}]_{m=1,\ldots,M}$ is a mixture time frame. The sources are modelled in the STFT domain with a normal distribution, $s_{jfn} \sim N_c(0, v_{jfn})$, where the variance tensor $V = [v_{jfn}]$ has the following low-rank NTF structure:
    $$v_{jfn} = \sum_{k=1}^{K} q_{jk}\, w_{fk}\, h_{nk}, \tag{4}$$
    where $K < \max(J, F, N)$ and all the variables are non-negative reals. This model is parameterized by $\Theta = \{Q, W, H\}$, with $Q = [q_{jk}]_{j,k}$, $W = [w_{fk}]_{f,k}$ and $H = [h_{nk}]_{n,k}$ being, respectively, $J \times K$, $F \times K$ and $N \times K$ non-negative matrices.
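  • To make this low-rank structure concrete, the following is a minimal numpy sketch composing the variance tensor of eq. (4) from H, W and Q; the sizes F, N, J and K are illustrative assumptions, not from the patent.

```python
import numpy as np

F, N, J, K = 513, 200, 2, 8  # illustrative sizes: bins, frames, sources, rank

rng = np.random.default_rng(0)
H = rng.random((N, K))  # temporal activations h_nk
W = rng.random((F, K))  # spectral patterns   w_fk
Q = rng.random((J, K))  # source gains        q_jk

# v_jfn = sum_k q_jk * w_fk * h_nk, stored here as V[f, n, j]
V = np.einsum('fk,nk,jk->fnj', W, H, Q)
print(V.shape)  # (513, 200, 2)
```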
  • The assumed information on which sources are active at which time periods is captured by constraining certain entries of Q and H to be zero [5]. Each of the K components is assigned to a single source through $Q(\Psi_Q) \equiv 0$ for an appropriate set $\Psi_Q$ of indices, and the components of each source are marked as silent through $H(\Psi_H) \equiv 0$ with an appropriate set $\Psi_H$ of indices.
  • Finally, for the sake of simplicity it is assumed that there is no mixture quantization ($a'_{mn} = 0$). Note, however, that assuming a complex-valued normal distribution for this error instead only requires minor changes. The problem at hand is now the estimation of the model parameters $\Theta$ and of the unknown un-clipped sources $\{s_{jn}\}_n$, $j = 1, \ldots, J$, given the observed clipped mixture $x'_c$.
  • Fig.2 shows more details of an exemplary audio inpainting system in a case where prior information on loss $I_L$ and/or prior information on sources $I_S$ is available.
  • In one embodiment, the invention performs audio inpainting by enforcing a low-rank non-negative tensor structure on the covariance tensor of the Short-Time Fourier Transform (STFT) coefficients of the audio signal. It probabilistically estimates the most likely signal $\hat{x}$, given the input audio $x$ and some prior information on the loss in the signal $I_L$, based on two assumptions:
  • The first assumption is that the sources are jointly Gaussian distributed in the Short-Time Fourier Transform (STFT) domain with window size F and number of windows N.
  • The second assumption is that the variance tensor of the Gaussian distribution, $V \in \mathbb{R}_+^{F \times N \times J}$, has a low-rank Non-negative Tensor Factorization (NTF) of rank K such that
    $$V(f,n,j) = \sum_{k=1}^{K} H(n,k)\, W(f,k)\, Q(j,k), \qquad H \in \mathbb{R}_+^{N \times K},\; W \in \mathbb{R}_+^{F \times K},\; Q \in \mathbb{R}_+^{J \times K}$$
  • Both assumptions are usually fulfilled. Estimation of the sources $\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_J$ is further improved if some prior information on the sources $I_S$ is given.
  • In the following, the most general case is described, wherein samples from multiple sources are available. In the case that information on multiple sources is not provided, one can simply assume that there is a single source (J = 1) and that the known samples of the source coincide with the input audio signal. In an exemplary embodiment, an implementation of the invention can be summarized with the following steps (a code sketch of this iteration is given after the list):
    • 1. Initialize the variance tensor $V \in \mathbb{R}_+^{F \times N \times J}$ by random matrices $H \in \mathbb{R}_+^{N \times K}$, $W \in \mathbb{R}_+^{F \times K}$, $Q \in \mathbb{R}_+^{J \times K}$ such that:
      $$V(f,n,j) = \sum_{k=1}^{K} H(n,k)\, W(f,k)\, Q(j,k)$$
    • 2. Until convergence or a maximum number of iterations is reached, repeat:
      • 2.1 Compute the conditional expectations of the source power spectra such that
        $$P(f,n,j) = E\left\{|S(f,n,j)|^2 \mid x, I_S, I_L, V\right\}$$
        where $S \in \mathbb{C}^{F \times N \times J}$ is the array of the STFT coefficients of the sources. This step can be performed for each STFT frame independently, hence providing a significant gain through parallelism. More details on this posterior mean computation can be found below.
      • 2.2 Re-estimate the NTF model parameters $H \in \mathbb{R}_+^{N \times K}$, $W \in \mathbb{R}_+^{F \times K}$, $Q \in \mathbb{R}_+^{J \times K}$ using the multiplicative update (MU) rules minimizing the Itakura-Saito divergence (IS divergence) [6] between the 3-valence tensor of estimated source power spectra $P(f,n,j)$ and the 3-valence tensor of the NTF model approximation $V(f,n,j)$, such that:
        $$Q(j,k) \leftarrow Q(j,k)\, \frac{\sum_{f,n} W(f,k)\, H(n,k)\, P(f,n,j)\, V(f,n,j)^{-2}}{\sum_{f,n} W(f,k)\, H(n,k)\, V(f,n,j)^{-1}}$$
        $$W(f,k) \leftarrow W(f,k)\, \frac{\sum_{j,n} Q(j,k)\, H(n,k)\, P(f,n,j)\, V(f,n,j)^{-2}}{\sum_{j,n} Q(j,k)\, H(n,k)\, V(f,n,j)^{-1}}$$
        $$H(n,k) \leftarrow H(n,k)\, \frac{\sum_{f,j} W(f,k)\, Q(j,k)\, P(f,n,j)\, V(f,n,j)^{-2}}{\sum_{f,j} W(f,k)\, Q(j,k)\, V(f,n,j)^{-1}}$$
        Then update V by
        $$V(f,n,j) = \sum_{k=1}^{K} H(n,k)\, W(f,k)\, Q(j,k)$$
        This can be repeated multiple times.
    • 3. Compute the array of STFT coefficients $\hat{S} \in \mathbb{C}^{F \times N \times J}$ as the posterior mean
      $$\hat{S}(f,n,j) = E\left\{S(f,n,j) \mid x, I_S, I_L, V\right\}$$
      and convert it back into the time domain to recover the estimated sources $\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_J$. Set the estimated signal as $\hat{x} = \sum_{j=1}^{J} \hat{s}_j$. More details on this posterior mean computation can be found below.
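  • The following is a compact numpy sketch of steps 1-3 above. The callbacks e_step_power and e_step_mean stand for the conditional-expectation computations of steps 2.1 and 3 (a per-frame sketch of such a computation is given further below); they, the eps guard and all parameter names are illustrative assumptions rather than part of the patent's description.

```python
import numpy as np

def ntf_mu_sweep(P, H, W, Q, eps=1e-12):
    """One sweep of the multiplicative updates of step 2.2 for the IS
    divergence. P holds the estimated source power spectra P(f,n,j)."""
    def compose():
        return np.einsum('fk,nk,jk->fnj', W, H, Q) + eps
    V = compose()
    Q *= (np.einsum('fnj,fk,nk->jk', P / V**2, W, H)
          / (np.einsum('fnj,fk,nk->jk', 1.0 / V, W, H) + eps))
    V = compose()
    W *= (np.einsum('fnj,jk,nk->fk', P / V**2, Q, H)
          / (np.einsum('fnj,jk,nk->fk', 1.0 / V, Q, H) + eps))
    V = compose()
    H *= (np.einsum('fnj,fk,jk->nk', P / V**2, W, Q)
          / (np.einsum('fnj,fk,jk->nk', 1.0 / V, W, Q) + eps))
    return H, W, Q, compose()

def inpaint(x_frames, obs, e_step_power, e_step_mean, F, N, J, K, n_iter=50, seed=0):
    """Steps 1-3. e_step_power and e_step_mean are caller-supplied callbacks
    for the E-step (e.g. built frame-by-frame from the frame_posterior
    sketch below); obs marks the known (unclipped) samples."""
    rng = np.random.default_rng(seed)
    H, W, Q = rng.random((N, K)), rng.random((F, K)), rng.random((J, K))  # step 1
    V = np.einsum('fk,nk,jk->fnj', W, H, Q)
    for _ in range(n_iter):                      # step 2
        P = e_step_power(x_frames, obs, V)       # step 2.1
        H, W, Q, V = ntf_mu_sweep(P, H, W, Q)    # step 2.2
    return e_step_mean(x_frames, obs, V)         # step 3
```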
  • The following describes some mathematical basics on the above calculations.
  • A tensor is a data structure that can be seen as a higher-dimensional matrix: a matrix is 2-dimensional, whereas a tensor can be N-dimensional. In the present case, V is a 3-dimensional tensor (like a cube) that holds the variances of the jointly Gaussian distribution of the sources.
  • In the low-rank model, a matrix can be represented as the sum of a few rank-1 matrices, each formed as the outer product of two vectors. In the present case, the tensor is similarly represented as the sum of K rank-one tensors, where a rank-one tensor is formed as the outer product of three vectors, e.g. $h_i$, $q_i$ and $w_i$. These vectors are put together to form the matrices H, Q and W. There are K sets of vectors for the K rank-one tensors. Essentially, the tensor is represented by K components, and the matrices H, Q and W describe how the components are distributed along different frames, different STFT frequencies and different sources, respectively.
  • As in a low-rank model for matrices, K is kept small because a small K better captures the characteristics of the data, such as audio data, e.g. music. Hence it is possible to infer unknown characteristics of the signal by using the information that V should be a low-rank tensor. This reduces the number of unknowns and defines an interrelation between different parts of the data.
  • The steps of the above-described iterative algorithm can be described as follows. First, initialize the matrices H, Q and W, and therefore V. Note that it is also possible to initialize V and then obtain the initial matrices H, Q and W from it, since H, Q and W directly define V. After the initialization, V always equals the tensor composed from H, Q and W, so it is a low-rank tensor. If there is only one source, then Q does not exist (or equivalently can be set to a constant), so that V is a low-rank matrix. Note further that H, Q and W may also be called "model parameters" or "low-rank components" herein.
  • Given V, the probability distribution of the signal is known. By looking at the observed part of the signals (the signals are observed only partially), it is possible to estimate the STFT coefficients $\hat{S}$, e.g. by Wiener filtering. This is the posterior mean of the signal. A posterior covariance of the signal is also computed, which will be used below. This step is performed independently for each window of the signal, and it is parallelizable. It is called the expectation step (E-step).
  • The posterior mean $\hat{s}_{jn}$ and posterior covariance $\hat{\Sigma}_{s_{jn} s_{jn}}$ can be computed by
    $$\hat{s}_{jn} = \Sigma_{x'_n s_{jn}}^H\, \Sigma_{x'_n x'_n}^{-1}\, x'_n \tag{13}$$
    $$\hat{\Sigma}_{s_{jn} s_{jn}} = \Sigma_{s_{jn} s_{jn}} - \Sigma_{x'_n s_{jn}}^H\, \Sigma_{x'_n x'_n}^{-1}\, \Sigma_{x'_n s_{jn}} \tag{14}$$
    given the definitions
    $$\Sigma_{s_{jn} s_{jn}} = \operatorname{diag}\left([v_{jfn}]_f\right) \tag{15}$$
    $$\Sigma_{x'_n s_{jn}} = U^H(\Xi'_n)\, \operatorname{diag}\left([v_{jfn}]_f\right) \tag{16}$$
    $$\Sigma_{x'_n x'_n} = U^H(\Xi'_n)\, \operatorname{diag}\left(\left[\textstyle\sum_j v_{jfn}\right]_f\right) U(\Xi'_n) \tag{17}$$
    where $U(\Xi'_n)$ is the $M \times |\Xi'_n|$ matrix of columns from U with index in $\Xi'_n$. For a de-clipping application, it is also known that the estimated mixture must obey
    $$\left[U^H(\bar{\Xi}'_n) \textstyle\sum_j \hat{s}_{jn}\right]_m \operatorname{sign}(x'_{c,mn}) \geq |x'_{c,mn}|, \qquad m \in \bar{\Xi}'_n \tag{18}$$
    where $\bar{\Xi}'_n$ denotes the clipped indices of frame n (the complement of the MOS).
  • This is difficult to enforce directly in the model, since the posterior distribution of the sources under this prior would no longer be Gaussian. As a workaround, suppose that eq. (18) is not satisfied at the indices $\hat{\Xi}'_n$. A simple way to enforce eq. (18) is to directly scale up the magnitude of the sources at the window indices $\hat{\Xi}'_n$ so that eq. (18) is satisfied. A per-frame sketch of the posterior computation follows.
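  • The per-frame E-step of eqs. (13)-(17) can be sketched as follows with numpy; this is a minimal sketch assuming M = F and a unitary DFT matrix, and the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def frame_posterior(x_obs, obs_idx, v, eps=1e-10):
    """Posterior mean (F x J) and posterior covariances (J x F x F) of the
    source STFT coefficients of one frame, given the time samples x_obs
    observed at indices obs_idx (the MOS) and the NTF variances v_jfn of
    this frame, passed as an (F, J) array. Implements eqs. (13)-(17)."""
    F, J = v.shape
    # Unitary inverse-DFT matrix: time frame = B @ STFT frame (M = F assumed)
    B = np.conj(np.fft.fft(np.eye(F))) / np.sqrt(F)
    Bo = B[obs_idx, :]                               # rows at observed time indices
    Sxx = (Bo * v.sum(axis=1)) @ Bo.conj().T         # eq. (17)
    Sxx += eps * np.eye(len(obs_idx))                # numerical guard (an assumption)
    s_hat = np.empty((F, J), dtype=complex)
    Sig_hat = np.empty((J, F, F), dtype=complex)
    for j in range(J):
        Sxs = Bo * v[:, j]                           # eq. (16): |MOS| x F
        s_hat[:, j] = Sxs.conj().T @ np.linalg.solve(Sxx, x_obs)                   # eq. (13)
        Sig_hat[j] = np.diag(v[:, j]) - Sxs.conj().T @ np.linalg.solve(Sxx, Sxs)   # eqs. (14), (15)
    return s_hat, Sig_hat
```

  • The posterior power spectra needed in step 2.1 then follow, per frame n, as P[f, n, j] = abs(s_hat[f, j])**2 + Sig_hat[j, f, f].real.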
  • The clipping constraint can be handled as follows.
  • In order to update the model parameters, one needs to estimate the posterior power spectra of the signal, defined as
    $$\hat{p}_{jfn} = E\left\{|s_{jfn}|^2 \mid x'_n; \Theta\right\}.$$
    For an audio inpainting problem without any further constraints, the posterior signal estimate $\hat{s}_n$ and the posterior covariance matrix $\hat{\Sigma}_{s_n s_n}$ would be sufficient to estimate $\hat{p}_{fn}$, since the posterior distribution of the signal is Gaussian. However, in clipping, the original unknown signal is known to have its magnitude above the clipping threshold outside the MOS, and so should the reconstructed signal frames $\hat{s}'_n = U^H \hat{s}_n$:
    $$\hat{s}'_{mn}\, \operatorname{sign}(x'_{mn}) \geq |x'_{mn}|, \qquad \forall n,\; m \notin \Xi'_n$$
  • This constraint is difficult to enforce directly in the model, since the posterior distribution of the signal under it is no longer Gaussian, which significantly complicates the computation of the posterior power spectra. In the presence of such constraints on the magnitude of the signal, various ways of approaching the problem can be considered:
  • Unconstrained: The simplest way to perform the estimation is to ignore the constraints completely, treating the problem as a more generic audio inpainting in the time domain. Hence, during the iterations, the "constrained" signal is taken simply as the estimated signal, i.e. $\tilde{s}_n = \hat{s}_n$, $n = 1, \ldots, N$, as is the posterior covariance matrix, $\tilde{\Sigma}_{s_n s_n} = \hat{\Sigma}_{s_n s_n}$, $n = 1, \ldots, N$.
  • Ignored projection: Another simple way to proceed is to ignore the constraint during the iterative estimation process and to enforce it at the end as a post-processing of the estimated signal. In this case, the signal is treated the same as the unconstrained case during the iterations.
  • Signal projection: A more advanced approach is to update the estimated signal at each iteration so that its magnitude obeys the clipping constraints. Suppose eq. (18) is not satisfied at the indices in the set $\hat{\Xi}'_n$. We can set $\tilde{s}'_n = \hat{s}'_n$ and then force $\tilde{s}'_n(\hat{\Xi}'_n) = x'_{c,n}(\hat{\Xi}'_n)$. However, this approach does not update the posterior covariance matrix, i.e. $\tilde{\Sigma}_{s_n s_n} = \hat{\Sigma}_{s_n s_n}$, $n = 1, \ldots, N$, which is needed to compute the posterior power spectra of the sources to update the NTF model.
  • Covariance projection: In order to also update the posterior covariance matrix, the posterior mean and the posterior covariance can be re-computed by eqs. (13) and (14), respectively, using $\Xi'_n \cup \hat{\Xi}'_n$ instead of $\Xi'_n$, and $x'_{c,n}(\Xi'_n \cup \hat{\Xi}'_n)$ instead of $x'_n$, in eqs. (13)-(17).
  • If the resulting estimate of the sources violates eq. (18) at additional indices, $\hat{\Xi}'_n$ is extended to include these indices and the computation is repeated.
  • As a result, final source estimates $\tilde{s}$ that satisfy eq. (18) and the corresponding posterior covariance matrix $\tilde{\Sigma}_{s_n s_n}$ are obtained. Note that, in addition to updating the posterior covariance matrix, this approach also updates the entire estimated signal, and not just the signal at the indices of violated constraints. A sketch of this loop follows.
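  • The covariance-projection loop can be sketched per frame as follows, reusing the hypothetical frame_posterior helper from the earlier sketch; max_rounds and the list-based index bookkeeping are illustrative assumptions.

```python
import numpy as np

def covariance_projection(x_clip_frame, obs_idx, clip_idx, v, max_rounds=5):
    """Extend the observation support with clipped indices whose mixture
    estimate violates eq. (18), pinning them to their clipped values, and
    re-run the posterior computation of eqs. (13)-(17)."""
    F = v.shape[0]
    B = np.conj(np.fft.fft(np.eye(F))) / np.sqrt(F)   # time = B @ STFT
    active = list(obs_idx)                            # support Xi'_n, later extended by Xi-hat'_n
    for _ in range(max_rounds):
        idx = np.array(active)
        s_hat, Sig_hat = frame_posterior(x_clip_frame[idx], idx, v)
        mix_time = (B @ s_hat.sum(axis=1)).real       # time-domain mixture estimate
        violated = [m for m in clip_idx if m not in active and
                    mix_time[m] * np.sign(x_clip_frame[m]) < abs(x_clip_frame[m])]
        if not violated:                              # eq. (18) satisfied everywhere
            break
        active += violated                            # extend the support and repeat
    return s_hat, Sig_hat
```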
  • Therefore, the posterior power spectra $\tilde{p}$, which will be used to update the NTF model as described in the following, can be computed as
    $$\tilde{p}_{fn} = E\left\{|s_{fn}|^2 \mid x'_n; \Theta\right\} \approx |\tilde{s}_{fn}|^2 + \tilde{\Sigma}_{s_n s_n}(f,f) \tag{19}$$
  • Once the posterior mean and covariance are computed, they are used to compute the posterior power spectra $\tilde{p}$. This is needed to update the model parameters, i.e. H, Q and W.
  • The NMF model parameters can be re-estimated using the multiplicative update (MU) rules minimizing the IS divergence between the matrix of estimated signal power spectra $\tilde{P} = [\tilde{p}_{fn}]$ and the NMF model approximation $V = W H^T$, i.e. $v_{fn} = \sum_{k=1}^{K} w_{fk} h_{nk}$ (12):
    $$D_{IS}(\tilde{P} \mid V) = \sum_{f,n} d_{IS}(\tilde{p}_{fn} \mid v_{fn})$$
    where
    $$d_{IS}(x \mid y) = \frac{x}{y} - \log\frac{x}{y} - 1$$
    is the IS divergence, and $\tilde{p}_{fn}$ and $v_{fn}$ are specified by (19) and (12), respectively. Hence the model parameters can be updated as
    $$w_{fk} \leftarrow w_{fk}\, \frac{\sum_n h_{nk}\, \tilde{p}_{fn}\, v_{fn}^{-2}}{\sum_n h_{nk}\, v_{fn}^{-1}}$$
    $$h_{nk} \leftarrow h_{nk}\, \frac{\sum_f w_{fk}\, \tilde{p}_{fn}\, v_{fn}^{-2}}{\sum_f w_{fk}\, v_{fn}^{-1}}$$
  • It may be advantageous to repeat this step more than once (e.g. 2-10 times) in order to reach a better estimate. This is called the maximization step (M-step). Once the model parameters H, Q and W are updated, all the steps (from estimating the STFT coefficients $\hat{S}$ onwards) can be repeated until some convergence is reached, in an embodiment. After convergence is reached, in an embodiment, the posterior mean of the STFT coefficients is converted into the time domain to obtain an audio signal as the final result.
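  • For reference, the IS-divergence objective that these MU sweeps decrease can be written compactly; the following is a minimal numpy sketch, with eps as an illustrative guard against division by zero.

```python
import numpy as np

def is_divergence(P, V, eps=1e-12):
    """Itakura-Saito divergence D_IS(P || V) = sum_{f,n} x/y - log(x/y) - 1."""
    R = (P + eps) / (V + eps)
    return float(np.sum(R - np.log(R) - 1.0))

# Example: the divergence is zero when the model matches the spectra exactly
P = np.abs(np.random.default_rng(0).normal(size=(4, 3))) + 0.1
print(is_divergence(P, P))  # ~0.0
```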
  • The approximation of $\hat{S}$ and P, as described above, is based on the following basic idea. An exact computation of P normally relies on the assumption that the signal is Gaussian distributed with zero mean. When the distribution is Gaussian, the posterior mean and posterior variance of the signal are enough to compute P. However, when some constraints exist, such as information on loss $I_L$, the distribution is no longer Gaussian. With the true distribution, an exact computation of $P(f,n,j) = E\{|S(f,n,j)|^2 \mid x, I_S, I_L, V\}$ is computationally not viable. According to the present principles, the posterior estimate $\hat{S}(f,n,j)$ is computed, and then the time domain signal is projected onto the subspace satisfying the information on loss $I_L$. After that, it is assumed that the modified values (the values of $\hat{S}$ not obeying $I_L$) are known for that iteration. When these values are assumed to be fixed at their current values, the rest of the unknowns can be assumed to be Gaussian again, and the corresponding posterior mean and posterior variance can be computed. By using this, P can also be computed. Note that the values that are assumed to be known are only an approximation, so that P is also an approximation. However, P is altogether much more accurate than if the information on loss $I_L$ were ignored.
  • For information on loss $I_L$, one example is the clipping threshold: if the clipping threshold thr is known, the unknown values $s_u$ of the time domain signal are known to satisfy $s_u > thr$ if $s_u > 0$, and $s_u < -thr$ if $s_u < 0$. Other examples of information on loss $I_L$ are the sign of the unknown value, an upper limit on the signal magnitude (essentially the opposite of the first example), and/or the quantized value of the unknown signal, which yields the constraint $thr_2 < s_u < thr_1$. All of these are constraints in the time domain (a sketch of encoding them as bounds follows below). No other method is known that can enforce them in a low-rank NTF/NMF model imposed on the time-frequency distribution of the signal. At least one or more of the above examples, in any combination, can be used as information on loss $I_L$.
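  • As an illustration, the clipping example of $I_L$ can be encoded as elementwise lower/upper bounds on the time domain samples; this minimal numpy sketch, including the (lo, hi) representation and the function name, is an assumption for illustration only.

```python
import numpy as np

def clipping_bounds(x, clipped, thr):
    """Bounds implied by the clipping example of I_L: a sample clipped at
    +thr lies in (thr, inf), one clipped at -thr lies in (-inf, -thr);
    observed samples are pinned to their known values."""
    lo = np.where(clipped & (x > 0), thr, -np.inf)
    hi = np.where(clipped & (x < 0), -thr, np.inf)
    lo = np.where(~clipped, x, lo)   # known samples: lo = hi = x
    hi = np.where(~clipped, x, hi)
    return lo, hi

# Example: a signal clipped at thr = 0.8
x = np.array([0.3, 0.8, -0.8, 0.5])
clipped = np.array([False, True, True, False])
print(clipping_bounds(x, clipped, 0.8))  # lo=[0.3, 0.8, -inf, 0.5], hi=[0.3, inf, -0.8, 0.5]
```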
  • For information on sources $I_S$, one example is information about which sources are active or silent at some of the time instants. Another example is the number of components of which each source is composed in the low-rank representation. A further example is specific information on the harmonic structure of the sources, which can introduce stronger constraints on the low-rank tensor or on the matrix. These constraints are often easier to apply to the STFT coefficients, directly to the low-rank variance tensor of the STFT coefficients, or directly to the model, i.e. to H, Q and W.
  • One advantage of the invention is enabling efficient recovery of missing portions in audio signals that resulted from effects such as clipping and clicking.
  • A second advantage of the invention is the possibility of jointly performing inpainting and source separation tasks without the need for additional steps or components in the methodology. This enables utilizing additional information on the components of the audio signal for better inpainting performance.
  • Further, a third advantage is making use of the NTF model and hence efficiently exploiting the global structure of an audio signal for an improved inpainting performance.
  • A fourth advantage of the invention is that it allows joint audio inpainting and source separation, as described below.
  • Further, the Non-negative Tensor Factorization (NTF) or Non-negative Matrix Factorization (NMF) model can be applied to improve dequantization of a quantized signal. As mentioned above, quantized signals can be handled by treating quantization noise as Gaussian. In the case where there are no other time domain losses, handling noisy signals with a low-rank NTF/NMF model is known. But since the present principles introduce a way to handle time domain constraints (with $I_L$), this provides an opportunity to handle quantized signals in a better way. More specifically, when the quantization step sizes are known, the quantized time domain signals are known to obey constraints such that
    quant_level_low < s < quant_level_high
    where the upper and lower bounds (quant_level_low/high) are known. Hence, it is possible to enforce this constraint while applying the low-rank NMF/NTF model (see the sketch below).
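  • A minimal sketch of deriving these bounds, assuming a uniform quantizer with a known step size; the quantizer choice and the function name are illustrative assumptions.

```python
import numpy as np

def quantization_bounds(x_quantized, step):
    """For a uniform quantizer with known step size, the original sample is
    known to lie within half a step of its quantized value, giving
    quant_level_low < s < quant_level_high."""
    quant_level_low = x_quantized - step / 2.0
    quant_level_high = x_quantized + step / 2.0
    return quant_level_low, quant_level_high

# Example: step 0.1, quantized value 0.4 -> the true sample lies in (0.35, 0.45)
lo, hi = quantization_bounds(np.array([0.4]), 0.1)
print(lo, hi)  # [0.35] [0.45]
```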
  • Fig.3 shows, in one embodiment, a flow-chart of a method 30 for performing audio inpainting, wherein missing portions in an input audio signal are recovered and a recovered audio signal is obtained. The method comprises initializing 31 a variance tensor $V$ such that it is a low-rank tensor that can be composed from component matrices $H, Q, W$, or initializing said component matrices $H, Q, W$ to obtain the low-rank variance tensor $V$; computing 32 conditional expectations of source power spectra of the input audio signal, wherein estimated source power spectra $P(f,n,j)$ are obtained and wherein the variance tensor $V$, known signal values $x, y$ of the input audio signal and time domain information on loss $I_L$ are input to the computing; iteratively re-calculating 33 the component matrices $H, Q, W$ and the variance tensor $V$ using the estimated source power spectra $P(f,n,j)$ and current values of the component matrices $H, Q, W$; and, upon detecting convergence 34 of the component matrices $H, Q, W$ or upon reaching a predefined maximum number of iterations, computing 35 a resulting variance tensor $V'$, further computing 36, from the resulting variance tensor $V'$, known signal values $x, y$ of the input audio signal and time domain information on loss $I_L$, an array of the posterior mean of Short-Time Fourier Transform (STFT) samples $\hat{S}$ of the recovered audio signal, and converting 37 coefficients of the array of the posterior mean of the STFT samples $\hat{S}$ to the time domain, wherein coefficients $\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_J$ of the recovered audio signal are obtained.
  • In one embodiment, the estimated source power spectra $P(f,n,j)$ are obtained according to $P(f,n,j) = E\{|S(f,n,j)|^2 \mid x, I_S, I_L, V\}$, with $I_S$ being time domain information on sources.
  • In one embodiment, the time domain information on sources $I_S$ comprises at least one of: information about which sources are active or silent at a particular time instant, information about the number of components of which each source is composed in the low-rank representation, and specific information on a harmonic structure of the sources.
  • In one embodiment, the time domain information on loss IL comprises at least one of: a clipping threshold, a sign of an unknown value in the input audio signal, an upper limit for the signal magnitude, and the quantized value of an unknown signal in the input audio signal.
  • In one embodiment, the variance tensor $V$ is initialized by random matrices $H \in \mathbb{R}_+^{N \times K}$, $W \in \mathbb{R}_+^{F \times K}$, $Q \in \mathbb{R}_+^{J \times K}$, as explained above.
  • In one embodiment, the variance tensor V is initialized by values derived from known samples of the input audio signal.
  • In one embodiment, the input audio signal is a mixture of multiple audio sources, and the method further comprises receiving 38 side information comprising quantized random samples of the multiple audio signals, and performing 39 source separation, wherein the multiple audio signals from said mixture of multiple audio sources are separately obtained.
  • In one embodiment, the STFT coefficients are windowed time domain samples S. In one embodiment, the input audio signal contains quantization noise, wherein wrongly quantized coefficients take the position of the missing coefficients, wherein the quantization levels are used as further constraints in said time domain information on loss IL , and wherein the recovered audio signal is a de-quantized audio signal.
  • Fig.4 shows, in one embodiment, an apparatus 40 for performing audio restauration, wherein missing portions in an input audio signal are recovered and a recovered audio signal is obtained. The apparatus comprises a processor 41 and a memory 42 storing instructions that, when executed on the processor, cause the apparatus to perform a method comprising initializing a variance tensor $V$ such that it is a low-rank tensor that can be composed from component matrices $H, Q, W$, or initializing said component matrices $H, Q, W$ to obtain the low-rank variance tensor $V$; iteratively applying the following steps until convergence of the component matrices $H, Q, W$:
    computing 32 conditional expectations of source power spectra of the input audio signal, wherein estimated source power spectra $P(f,n,j)$ are obtained and wherein the variance tensor $V$, known signal values $x, y$ of the input audio signal and time domain information on loss ($I_L$) are input to the computing, and re-calculating 33 the component matrices $H, Q, W$ and the variance tensor $V$ using the estimated source power spectra $P(f,n,j)$ and current values of the component matrices $H, Q, W$;
    upon convergence of the component matrices $H, Q, W$, computing a resulting variance tensor $V'$, and computing, from the resulting variance tensor $V'$, known signal values $x, y$ of the input audio signal and time domain information on loss $I_L$, an array of the posterior mean of Short-Time Fourier Transform (STFT) samples $\hat{S}$ of the recovered audio signal; and converting 37 coefficients of the array of the posterior mean of the STFT samples $\hat{S}$ to the time domain, wherein coefficients $\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_J$ of the recovered audio signal are obtained.
  • In one embodiment, the estimated source power spectra $P(f,n,j)$ are obtained according to $P(f,n,j) = E\{|S(f,n,j)|^2 \mid x, I_S, I_L, V\}$, with $I_S$ being time domain information on sources.
  • In one embodiment, the time domain information on loss comprises at least one of: a clipping threshold, a sign of an unknown value in the input audio signal, an upper limit for the signal magnitude, and the quantized value of an unknown signal in the input audio signal.
  • In one embodiment, the input audio signal is a mixture of multiple audio sources, and the instructions, when executed on the processor, further cause the apparatus to receive 38 side information comprising quantized random samples of the multiple audio signals, and to perform 39 source separation, wherein the multiple audio signals from said mixture of multiple audio sources are separately obtained.
  • In one embodiment, the input audio signal contains quantization noise, wherein wrongly quantized coefficients take the position of the missing coefficients, wherein the quantization levels are used as further constraints in said time domain information on loss IL , and wherein the recovered audio signal is a de-quantized audio signal.
  • In one embodiment, an apparatus for performing audio restauration, wherein missing coefficients of an input audio signal are recovered and a recovered audio signal is obtained, comprises
    first computing means for initializing 31 a variance tensor $V$ such that it is a low-rank tensor that can be composed from component matrices $H, Q, W$, or for initializing said component matrices $H, Q, W$ to obtain the low-rank variance tensor $V$; second computing means for computing 32 conditional expectations of source power spectra of the input audio signal, wherein estimated source power spectra $P(f,n,j)$ are obtained and wherein the variance tensor $V$, known signal values $x, y$ of the input audio signal and time domain information on loss $I_L$ are input to the computing; calculating means for iteratively re-calculating 33 the component matrices $H, Q, W$ and the variance tensor $V$ using the estimated source power spectra $P(f,n,j)$ and current values of the component matrices $H, Q, W$; detection means for detecting 34 convergence of the component matrices $H, Q, W$ or for detecting that a predefined maximum number of iterations is reached;
    third computing means for computing 35, upon said convergence of the component matrices $H, Q, W$ or upon reaching said predefined maximum number of iterations, a resulting variance tensor $V'$; fourth computing means for computing 36, from the resulting variance tensor $V'$, known signal values $x, y$ of the input audio signal and time domain information on loss $I_L$, an array of the posterior mean of Short-Time Fourier Transform (STFT) samples $\hat{S}$ of the recovered audio signal; and converter means for converting 37 coefficients of the array of the posterior mean of the STFT samples $\hat{S}$ to the time domain, wherein coefficients $\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_J$ of the recovered audio signal are obtained. The coefficients $\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_J$ of the recovered audio signal can be used e.g. to reproduce or store the recovered audio signal.
  • Usually, the invention leads to a low-rank tensor structure in the power spectrogram of the reconstructed signal.
  • The use of the verb "comprise" and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. Furthermore, the use of the article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. Several "means" may be represented by the same item of hardware. Furthermore, the invention resides in each and every novel feature or combination of features. As used herein, a "digital audio signal" or "audio signal" does not describe a mere mathematical abstraction, but instead denotes information embodied in or carried by a physical medium capable of detection by a machine or apparatus. This term includes recorded or transmitted signals, and should be understood to include conveyance by any form of encoding, including pulse code modulation (PCM), but not limited to PCM.
  • While there has been shown, described, and pointed out fundamental novel features of the present invention as applied to preferred embodiments thereof, it will be understood that various omissions and substitutions and changes in the apparatus and method described, in the form and details of the devices disclosed, and in their operation, may be made by those skilled in the art without departing from the spirit of the present invention. It is expressly intended that all combinations of those elements that perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Substitutions of elements from one described embodiment to another are also fully intended and contemplated.
  • Each feature disclosed in the description and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination. Features may, where appropriate be implemented in hardware, software, or a combination of the two. Connections may, where applicable, be implemented as wireless connections or wired, not necessarily direct or dedicated, connections. Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims. In one embodiment, an apparatus is at least partially implemented in hardware by using at least one silicon component.

Claims (15)

  1. A method (30) for performing audio restauration, wherein missing coefficients of an input audio signal are recovered and a recovered audio signal is obtained, comprising steps of
    - initializing (31) a variance tensor V such that it is a low rank tensor that can be composed from component matrices H,Q,W , or initializing said component matrices H , Q , W to obtain the low rank variance tensor V ;
    - iteratively applying the following steps, until convergence of the component matrices H , Q , W :
    i. computing (32) conditional expectations of source power spectra of the input audio signal, wherein estimated source power spectra P (f,n,j) are obtained and wherein the variance tensor V , known signal values (x,y) of the input audio signal and time domain information on loss ( IL ) are input to the computing;
    ii. re-calculating (33) the component matrices H,Q,W and the variance tensor V using the estimated source power spectra P (f,n,j) and current values of the component matrices H,Q,W ;
    - upon convergence (34) of the component matrices H, Q, W, computing (35) a resulting variance tensor V', and computing (36) from the resulting variance tensor V', known signal values (x,y) of the input audio signal and time domain information on loss (IL), an array of a posterior mean of Short Time Fourier Transform (STFT) samples (S) of the recovered audio signal; and
    - converting (37) coefficients of the array of the posterior mean of the STFT samples (S) to the time domain, wherein coefficients (ŝ1, ŝ2, ..., ŝJ) of the recovered audio signal are obtained.
  2. The method according to claim 1, wherein in the step of computing (32) conditional expectations of the source power spectra of the input audio signal, the estimated source power spectra P(f,n,j) are obtained according to P(f,n,j) = E{|S(f,n,j)|² | x, IS, IL, V}, with IS being time domain information on sources.
  3. The method according to claim 2, wherein the time domain information on sources (IS) comprises at least one of: information about which sources are active or silent at a particular time instant, information about the number of components of which each source is composed in the low rank representation, and specific information on a harmonic structure of the sources.
  4. The method according to one of the claims 1-3, wherein the time domain information on loss ( IL ) comprises at least one of: a clipping threshold, a sign of an unknown value in the input audio signal, an upper limit for the signal magnitude, and the quantized value of an unknown signal in the input audio signal.
  5. The method according to one of the claims 1-4, wherein the variance tensor V is computed from matrices $H \in \mathbb{R}_+^{N \times K}$, $W \in \mathbb{R}_+^{F \times K}$, $Q \in \mathbb{R}_+^{J \times K}$ of rank K according to
    $$V(f,n,j) = \sum_{k=1}^{K} H(n,k)\, W(f,k)\, Q(j,k).$$
  6. The method according to one of the claims 1-5, wherein the variance tensor V is initialized by random matrices $H \in \mathbb{R}_+^{N \times K}$, $W \in \mathbb{R}_+^{F \times K}$, $Q \in \mathbb{R}_+^{J \times K}$ according to
    $$V(f,n,j) = \sum_{k=1}^{K} H(n,k)\, W(f,k)\, Q(j,k).$$
  7. The method according to one of the claims 1-6, wherein the variance tensor V is initialized by values derived from known samples of the input audio signal.
  8. The method according to one of the claims 1-7, wherein the input audio signal is a mixture of multiple audio sources, further comprising steps of
    - receiving (38) side information comprising quantized random samples of the multiple audio signals; and
    - performing (39) source separation, wherein the multiple audio signals from said mixture of multiple audio sources are separately obtained.
  9. The method according to one of the claims 1-8, wherein the STFT coefficients are windowed time domain samples (S).
  10. The method according to one of the claims 1-9, wherein the input audio signal contains quantization noise, wherein wrongly quantized coefficients take the position of the missing coefficients, wherein the quantization levels are used as further constraints in said time domain information on loss ( IL ), and wherein the recovered audio signal is a de-quantized audio signal.
  11. An apparatus (40) for performing audio restauration, wherein missing coefficients of an input audio signal are recovered and a recovered audio signal is obtained, the apparatus comprising a processor (41) and a memory (42) storing instructions that, when executed on the processor, cause the apparatus to perform a method comprising
    - initializing a variance tensor V such that it is a low rank tensor that can be composed from component matrices H,Q,W , or initializing said component matrices H , Q , W to obtain the low rank variance tensor V ;
    - iteratively applying the following steps, until convergence of the component matrices H , Q , W :
    i. computing (32) conditional expectations of source power spectra of the input audio signal, wherein estimated source power spectra P (f,n,j) are obtained and wherein the variance tensor V , known signal values (x, y) of the input audio signal and time domain information on loss ( IL ) are input to the computing;
    ii. re-calculating (33) the component matrices H,Q,W and the variance tensor V using the estimated source power spectra P (f,n,j) and current values of the component matrices H,Q,W;
    - upon convergence of the component matrices H, Q, W, computing a resulting variance tensor V', and computing from the resulting variance tensor V', known signal values (x,y) of the input audio signal and time domain information on loss (IL), an array of a posterior mean of Short Time Fourier Transform (STFT) samples (S) of the recovered audio signal; and
    - converting (37) coefficients of the array of the posterior mean of the STFT samples (S) to the time domain, wherein coefficients (ŝ1, ŝ2, ..., ŝJ) of the recovered audio signal are obtained.
  12. The apparatus according to claim 11, wherein the estimated source power spectra P(f,n,j) are obtained according to
    P(f,n,j) = E{|S(f,n,j)|² | x, IS, IL, V}, with IS being time domain information on sources.
  13. The apparatus according to one of the claims 11-12, wherein the time domain information on loss comprises at least one of: a clipping threshold, a sign of an unknown value in the input audio signal, an upper limit for the signal magnitude, and the quantized value of an unknown signal in the input audio signal.
  14. The apparatus according to one of the claims 11-13, wherein the input audio signal is a mixture of multiple audio sources, the instructions when executed on the processor further cause the apparatus to
    - receive (38) side information comprising quantized random samples of the multiple audio signals; and
    - perform (39) source separation, wherein the multiple audio signals from said mixture of multiple audio sources are separately obtained.
  15. The apparatus according to one of the claims 11-14, wherein the input audio signal contains quantization noise, wherein wrongly quantized coefficients take the position of the missing coefficients, wherein the quantization levels are used as further constraints in said time domain information on loss ( IL ), and wherein the recovered audio signal is a de-quantized audio signal.
EP15306212.0A 2015-04-10 2015-07-24 Method for performing audio restauration, and apparatus for performing audio restauration Withdrawn EP3121811A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP15306212.0A EP3121811A1 (en) 2015-07-24 2015-07-24 Method for performing audio restauration, and apparatus for performing audio restauration
EP16714898.0A EP3281194B1 (en) 2015-04-10 2016-04-06 Method for performing audio restauration, and apparatus for performing audio restauration
PCT/EP2016/057541 WO2016162384A1 (en) 2015-04-10 2016-04-06 Method for performing audio restauration, and apparatus for performing audio restauration
US15/564,378 US20180211672A1 (en) 2015-04-10 2016-04-06 Method for performing audio restauration, and apparatus for performing audio restauration
HK18103188.6A HK1244946B (en) 2015-04-10 2018-03-06 Method for performing audio restauration, and apparatus for performing audio restauration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP15306212.0A EP3121811A1 (en) 2015-07-24 2015-07-24 Method for performing audio restauration, and apparatus for performing audio restauration

Publications (1)

Publication Number Publication Date
EP3121811A1 true EP3121811A1 (en) 2017-01-25

Family

ID=53776524

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15306212.0A Withdrawn EP3121811A1 (en) 2015-04-10 2015-07-24 Method for performing audio restauration, and apparatus for performing audio restauration

Country Status (1)

Country Link
EP (1) EP3121811A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110853457A (en) * 2019-10-31 2020-02-28 中国科学院自动化研究所南京人工智能芯片创新研究院 Interactive music teaching guidance method

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
A. ADLER; V. EMIYA; M. JAFARI; M. ELAD; R. GRIBONVAL; M. D. PLUMBLEY: "Audio inpainting", IEEE TRANSACTIONS ON AUDIO, SPEECH AND LANGUAGE PROCESSING, vol. 20, no. 3, 2012, pages 922 - 932
A. OZEROV; C. FEVOTTE; R. BLOUET; J.-L. DURRIEU: "Multichannel nonnegative tensor factorization with structured constraints for user-guided audio source separation", IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP'11, May 2011 (2011-05-01), pages 257 - 260
C. FEVOTTE; N. BERTIN; J.-L. DURRIEU: "Nonnegative matrix factorization with the Itakura-Saito divergence. With application to music analysis", NEURAL COMPUTATION, vol. 21, no. 3, March 2009 (2009-03-01), pages 793 - 830
CAGDAS BILEN ET AL: "Audio Inpainting, Source Separation, Audio Compression. All with a Unified Framework Based on NTF Model", MISSDATA 2015, 18 June 2015 (2015-06-18), pages 1 - 2, XP055216560, Retrieved from the Internet <URL:https://hal.inria.fr/hal-01171843/document> [retrieved on 20150928] *
CAGDAS BILEN ET AL: "Audio Inpainting, Source Separation, Audio Compression. All with a Unified Framework Based on NTF Model", MISSDATA 2015, 18 June 2015 (2015-06-18), Rennes, France, XP055216546, Retrieved from the Internet <URL:http://arxiv.org/abs/1502.06919> [retrieved on 20150928] *
CAGDAS BILEN, ALEXEY OZEROV, PATRICK PEREZ: "Joint Audio Inpainting and Source Separation", 5 June 2015 (2015-06-05), XP002754817, Retrieved from the Internet <URL:https://hal.inria.fr/hal-01160438/document> [retrieved on 20160226] *
KAI SIEDENBURG; MATTHIEU KOWALSKI; MONIKA DÖRFLER: "Audio Declipping with Social Sparsity", PROC. IEEE INT. CONF. ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), 2014
N. Q. K. DUONG; A. OZEROV; L. CHEVALLIER: "Temporal annotation-based audio source separation using weighted nonnegative matrix factorization", PROC. IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS (ICCE-BERLIN, September 2014 (2014-09-01)
SMARAGDIS, P.; B. RAJ; M. SHASHANKA: "Missing data imputation for time-frequency representations of audio signals", JOURNAL OF SIGNAL PROCESSING SYSTEMS, August 2010 (2010-08-01)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110853457A (en) * 2019-10-31 2020-02-28 中国科学院自动化研究所南京人工智能芯片创新研究院 Interactive music teaching guidance method
CN110853457B (en) * 2019-10-31 2021-09-21 中科南京人工智能创新研究院 Interactive music teaching guidance method

Similar Documents

Publication Publication Date Title
Le Roux et al. Deep NMF for speech separation
US8751227B2 (en) Acoustic model learning device and speech recognition device
Weninger et al. Discriminative NMF and its application to single-channel source separation.
US9824683B2 (en) Data augmentation method based on stochastic feature mapping for automatic speech recognition
Smaragdis et al. Supervised and semi-supervised separation of sounds from single-channel mixtures
US8433567B2 (en) Compensation of intra-speaker variability in speaker diarization
US9812150B2 (en) Methods and systems for improved signal decomposition
US10192568B2 (en) Audio source separation with linear combination and orthogonality characteristics for spatial parameters
Bilen et al. Audio declipping via nonnegative matrix factorization
CN110164465B (en) Deep-circulation neural network-based voice enhancement method and device
US20140114650A1 (en) Method for Transforming Non-Stationary Signals Using a Dynamic Model
US11562765B2 (en) Mask estimation apparatus, model learning apparatus, sound source separation apparatus, mask estimation method, model learning method, sound source separation method, and program
Wu et al. The theory of compressive sensing matching pursuit considering time-domain noise with application to speech enhancement
Mogami et al. Independent low-rank matrix analysis based on complex Student's t-distribution for blind audio source separation
Al-Tmeme et al. Underdetermined convolutive source separation using GEM-MU with variational approximated optimum model order NMF2D
US10904688B2 (en) Source separation for reverberant environment
EP3550565B1 (en) Audio source separation with source direction determination based on iterative weighting
Kwon et al. Target source separation based on discriminative nonnegative matrix factorization incorporating cross-reconstruction error
EP3281194B1 (en) Method for performing audio restauration, and apparatus for performing audio restauration
EP3121811A1 (en) Method for performing audio restauration, and apparatus for performing audio restauration
Hoffmann et al. Using information theoretic distance measures for solving the permutation problem of blind source separation of speech signals
Kang et al. NMF-based speech enhancement incorporating deep neural network.
Badiezadegan et al. A wavelet-based thresholding approach to reconstructing unreliable spectrogram components
US11676619B2 (en) Noise spatial covariance matrix estimation apparatus, noise spatial covariance matrix estimation method, and program
Badeau et al. Nonnegative matrix factorization

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20170726