EP3036739A1 - Enhanced estimation of at least one target signal

Enhanced estimation of at least one target signal

Info

Publication number
EP3036739A1
Authority
EP
European Patent Office
Prior art keywords
signal
phase
estimation
amplitude
discrete
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP14753072.9A
Other languages
German (de)
French (fr)
Inventor
Gernot Kubin
Rahim Saeidi
Pejman Mowlaee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Technische Universitaet Graz
Original Assignee
Technische Universitaet Graz
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Technische Universitaet Graz filed Critical Technische Universitaet Graz
Priority to EP14753072.9A priority Critical patent/EP3036739A1/en
Publication of EP3036739A1 publication Critical patent/EP3036739A1/en
Withdrawn legal-status Critical Current

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208: Noise filtering
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, characterised by the type of extracted parameters


Abstract

Method for estimation of at least one signal of interest (s1(t), s1(n)) from at least one discrete-time signal (y(n)), said method comprising the steps of: a) transforming the at least one discrete-time signal (y(n)) into a frequency domain to obtain a complex spectrum (I) of the at least one discrete-time signal (y(n)); b) performing a phase-unaware amplitude estimation on the complex spectrum (I) to obtain an estimated amplitude spectrum of the at least one signal of interest (s1(t), s1(n)); c) performing a phase estimation on the complex spectrum (I), said phase estimation being an amplitude-aware phase estimation using an input signal (sin(n)) to obtain an estimated phase spectrum of the at least one signal of interest (s1(t), s1(n)), wherein the result of the amplitude estimation of the preceding step b) is used as an input signal (sin(n)); d) performing an amplitude estimation on the complex spectrum (I), said amplitude estimation being a phase-aware amplitude estimation using the result of the phase estimation of step c) to obtain an enhanced complex spectrum (II) of the at least one signal of interest (s1(t), s1(n)).

Description

ENHANCED ESTIMATION OF AT LEAST ONE TARGET SIGNAL
Field of the invention and description of prior art
The present invention relates to a method for estimation of at least one signal of interest from at least one discrete-time signal. Furthermore, the invention relates to a device for carrying out a method according to the invention.
In many applications, signals of interest are corrupted by noise sources and/or other signals. Therefore, depending on the requirements of a given application, efforts have been taken to reduce the level of noise and/or other signals to a tolerable level.
In case of the signal of interest (target signal) being a continuous-time signal to be processed by digital data processing, the target signal is usually measured and transformed into a quantized discrete-time signal. Provided that the sampling rate and the quantization levels were chosen properly, the quantized discrete-time signal comprises the desired target signal and, as an undesirable effect, also noise sources and/or other signals.
Conventional phase-unaware amplitude estimation methods separate the target signal (signal of interest) from noise and/or other signals by applying a frequency-dependent gain function (mask) on the observed noisy amplitude spectrum. Examples of such gain functions are the Wiener filter (as a soft mask) and the binary mask. The noise reduction capability obtained by conventional methods is limited since they only modify the amplitude or the phase individually.
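As an illustration of such conventional, phase-unaware gain functions, the following Python sketch applies either a Wiener soft mask or a binary mask to a noisy short-time spectrum. The speech and noise power estimates (`speech_psd`, `noise_psd`) are assumed to be supplied by some external noise tracker; they and the function names are placeholders of this sketch, not part of the patent text.

```python
import numpy as np

def wiener_soft_mask(speech_psd, noise_psd, eps=1e-12):
    """Frequency-dependent Wiener gain (soft mask): S / (S + N) per bin."""
    return speech_psd / (speech_psd + noise_psd + eps)

def binary_mask(speech_psd, noise_psd):
    """Binary mask: keep only bins where the target dominates the noise."""
    return (speech_psd > noise_psd).astype(float)

def phase_unaware_enhancement(noisy_spectrum, speech_psd, noise_psd, soft=True):
    """Modify the amplitude only; the noisy phase is reused unaltered."""
    gain = wiener_soft_mask(speech_psd, noise_psd) if soft else binary_mask(speech_psd, noise_psd)
    amplitude = gain * np.abs(noisy_spectrum)                  # enhanced amplitude spectrum
    return amplitude * np.exp(1j * np.angle(noisy_spectrum))   # noisy phase kept unchanged
```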
Summary of the invention
It is an object of the present invention to provide an enhanced method for the estimation of at least one target signal.
In a first aspect of the invention, this aim is achieved by means of above-mentioned method, comprising the following steps:
a) transforming the at least one discrete-time signal into a frequency domain to obtain a complex spectrum of the at least one discrete-time signal;
b) performing a phase-unaware amplitude estimation on the complex spectrum to obtain an estimated amplitude spectrum of the at least one signal of interest;
c) performing a phase estimation on the complex spectrum, said phase estimation being an amplitude-aware phase estimation using an input signal to obtain an estimated phase spectrum of the at least one signal of interest, wherein the result of the amplitude estimation of the preceding step b) is used as an input signal;
d) performing an amplitude estimation on the complex spectrum, said amplitude estimation being a phase-aware amplitude estimation using the result of the phase estimation of step c) to obtain an enhanced complex spectrum of the at least one signal of interest.
By virtue of this approach according to the invention it is possible to perform accurate estimations of at least one target signal comprised in at least one discrete-time signal even under adverse conditions, i.e. highly correlated noise sources and/or other signals. For example, the signal of interest can be any target signal included in the at least one discrete-time signal. This approach according to the invention pushes the limits of conventional speech enhancement methods by introducing a synergistic interaction between the amplitude estimation and phase estimation stages.
The amplitude estimation on the complex spectrum in step b) is performed irrespective of the phase spectrum of the signal of interest. Such an amplitude estimation forms a "phase-unaware amplitude estimation", referring to any conventional amplitude estimation method which is performed irrespective of the phase spectrum of at least the at least one signal of interest.
Preferably, the result of the phase-unaware amplitude estimation (according to step b)) is only used as an input signal in step c) if step c) directly follows step b).
Steps c) and d) according to the invention are based upon certain conditions. Amplitude-aware phase estimation according to step c) requires at least an estimation of the amplitude spectrum (signal magnitude spectrum) of the at least one signal of interest and preferably an estimation of the amplitude spectrum of the vector sum of all other sources. In particular, the amplitude-aware phase estimator (and a method for amplitude-aware phase estimation) has been derived from the non-patent literature "Phase estimation for signal reconstruction in single-channel speech separation" (P. Mowlaee, R. Saeidi, and R. Martin, in Proceedings of the International Conference on Spoken Language Processing, 2012, in particular see chapter 3: "Proposed Algorithm for Phase Estimation") and "STFT phase improvement for single channel speech enhancement" (M. Krawczyk and T. Gerkmann, in International Workshop on Acoustic Signal Enhancement; Proceedings of IWAENC, 2012, pp. 1-4). The phase-aware amplitude estimator (in particular, a method for phase-aware amplitude estimation) has been derived from the non-patent literature "On phase importance in parameter estimation in single-channel speech enhancement" (P. Mowlaee and R. Saeidi, in IEEE International Conference on Acoustics, Speech and Signal Processing, May 2013, pp. 7462-7466, in particular see chapter 2 "Gain Function for MMSE Amplitude Speech Estimation" and section 2.2 "Proposed gain function given prior clean phase spectrum") and "MMSE-optimal spectral amplitude estimation given the STFT-phase" (T. Gerkmann and M. Krawczyk, Signal Processing Letters, IEEE, vol. 20, no. 2, pp. 129-132, Feb. 2013). The enhanced complex spectrum of the at least one signal of interest obtained in step d) represents an estimate of the at least one signal of interest.
The at least one discrete-time signal can stem from any source or from an interaction of sources, for example a noisy speech signal or the superposition of several speech and/or noise signals. The at least one discrete-time signal could be obtained by observation, measurement and/or calculation.
According to a development of the invention, the amplitude estimation on the complex spectrum in step b) is performed by a frequency-dependent time-frequency mask, in particular by Wiener filtering of the complex spectrum.
In a further development of the invention, after step d) the steps c) and d) are repeated iteratively, wherein the result of the phase-aware amplitude estimation of the preceding step d) is used as the input signal in the repeated step c). Therefore, a loop is closed by a feedback from an output of a phase-aware amplitude estimator to an input of an amplitude-aware phase estimator. Previous iterative speech enhancement methods aimed at improving the spectral amplitude estimates only within the iterations. In these methods, neither a phase enhancement stage nor a combined synthesis-analysis stage was used within the feedback loop for the iterations. Instead, the noisy phase was used in signal reconstruction. No phase information was taken into account to update the signal parameters of an enhanced target signal. A synergistic effect in this closed loop according to the invention stems from the fact that a better amplitude estimation assists the phase estimation and a better phase estimation assists the amplitude estimation. These improvements can be continued by alternating between the two estimators multiple times until a sufficient quality of the joint amplitude and phase estimates is obtained.
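The feedback loop described above can be sketched as follows. The two estimator callables stand in for the amplitude-aware phase estimator of step c) and the phase-aware amplitude estimator of step d) (e.g. the methods cited above) and are hypothetical placeholders; only the control flow of steps b) to d) and the feedback from d) back to c) is taken from the text.

```python
import numpy as np

def iterative_enhancement(noisy_spectrum, noise_psd, speech_psd,
                          phase_estimator, amplitude_estimator,
                          num_iterations=6):
    """Sketch of steps b)-d) with the iterative c) <-> d) feedback loop.

    `phase_estimator(noisy_spectrum, amplitude)` and
    `amplitude_estimator(noisy_spectrum, phase, noise_psd)` are hypothetical
    callables standing in for the estimators referenced in the text.
    """
    # Step b): phase-unaware amplitude estimation, e.g. a Wiener gain.
    gain = speech_psd / (speech_psd + noise_psd + 1e-12)
    amplitude = gain * np.abs(noisy_spectrum)

    phase = np.angle(noisy_spectrum)  # initial value, replaced in step c)
    for _ in range(num_iterations):
        # Step c): amplitude-aware phase estimation of the signal of interest.
        phase = phase_estimator(noisy_spectrum, amplitude)
        # Step d): phase-aware amplitude estimation using the phase of step c).
        amplitude = amplitude_estimator(noisy_spectrum, phase, noise_psd)

    # Enhanced complex spectrum of the at least one signal of interest.
    return amplitude * np.exp(1j * phase)
```

A single pass (num_iterations=1) corresponds to the open-loop sequence of steps b) to d); larger values realize the feedback described above.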
In yet another development of the invention, the consistency between the phase and amplitude estimations of the enhanced complex spectrum (as the enhanced complex spectrum provides an input in step c)) of the at least one signal of interest is monitored according to a comparison criterion, with X being a matrix composed of a complex time-frequency representation of the enhanced complex spectrum. At least one quality index is established to measure the inconsistency of the complex time-frequency representations obtained at each loop iteration, the quality index being calculated for the i-th loop iteration, and the loop iterations are stopped at least when the quality index falls below a predefined threshold. Establishing a quality index and comparing it with a defined threshold makes it possible to measure the decrease of the amount of inconsistency observed between the phase and amplitude estimates obtained in each iteration before feedback to the phase estimation. Therefore, the iterations can be stopped when the quality index falls below the predefined threshold, allowing fast and efficient processing of the transformed signal.
Advantageously, a predefined threshold value is used which is especially suited as a comparison criterion for the quality index. According to another development of the invention, the iterations are stopped at least after a predefined number of iterations, in particular after five, six or seven iterations. This limits the number of iterations and therefore the computational effort. It is also possible to combine this with the above-mentioned comparison criterion and to limit the number of iterations in case the quality index does not fall below the threshold.
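The quality-index expressions themselves were rendered as figures in the original publication and are not reproduced here. The following sketch therefore shows one plausible instantiation of such an inconsistency-based stopping rule, assumed for illustration only: the mismatch between a complex time-frequency representation and its re-analysis after one synthesis-analysis round trip, combined with an arbitrary threshold and iteration cap.

```python
import numpy as np
from scipy.signal import stft, istft

def inconsistency_index(X, fs=16000, nperseg=512, noverlap=384):
    """Illustrative quality index: relative mismatch between X and the
    spectrum re-analyzed after synthesis (assumed formula, not the patent's)."""
    _, x = istft(X, fs=fs, nperseg=nperseg, noverlap=noverlap)
    _, _, X_reanalyzed = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    # Crop to a common number of frames in case the round trip adds a frame.
    frames = min(X.shape[1], X_reanalyzed.shape[1])
    diff = X_reanalyzed[:, :frames] - X[:, :frames]
    return np.linalg.norm(diff) / (np.linalg.norm(X[:, :frames]) + 1e-12)

def should_stop(X_i, iteration, threshold=1e-3, max_iterations=7):
    """Stop when the quality index falls below a threshold (value here is
    arbitrary) or after a predefined number of iterations (e.g. five to seven)."""
    return inconsistency_index(X_i) < threshold or iteration >= max_iterations
```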
In a variant of the invention the transformation method in step a) is a spectro-temporal transformation, in particular an STFT, a Wavelet transformation or sinusoidal signal modelling. For example, by replacing the STFT representation with a sinusoidal model, the dimensionality of the signal features can be greatly reduced, hence lowering the computational effort. On the other hand, replacing the STFT with other time-frequency transformations, including Wavelet or Wigner-Ville time-frequency representations for amplitude estimation, or Chirplet signal transformations and complex Wavelet transformations for the representation of both amplitude and phase, makes it possible to analyze different frequency bands with non-uniform resolution, which is advantageous when applied to audio or speech signals.
Preferably, the at least one discrete-time signal can be a bio-medical, radar, image or video signal. In this case, the complex time-frequency representation X can be either one- or multidimensional. The matrix X is typically composed of frames as rows and frequency bins as columns (the number of rows is often larger than the number of columns). For speech signals, its values span a wide dynamic range (80 dB). For bio-medical signals, the dynamic range is often much lower as the signal is sparse in time-frequency.
Alternatively, the method according to the invention is especially suited if the at least one discrete-time signal is an audio signal.
In a further development of the invention the at least one discrete-time signal comprises at least one speech signal. The speech signal can be the target signal, which is true for many everyday speech-related applications, in particular for automatic speech recognition (ASR) applications. In a challenging scenario the at least one discrete-time signal can comprise two or even more speech signals. The target signal is then represented by one speech signal to be separated from the accompanying signals. Furthermore, the at least one discrete-time signal can be derived from a single-channel signal. Single-channel signals are common in many applications as they rely on a signal obtained by a single microphone (cell phones, headsets, ...), but they usually provide less information than multi-channel devices. Therefore, the requirements on signal enhancement are very high, especially in the case of single-channel speech separation (SCSS). Since the method according to the invention provides strongly enhanced target signals, it is exceptionally well suited to be applied to single-channel signals.
Alternatively, the at least one discrete-time signal can be derived from a multi-channel signal. Additional information provided by at least a second measurement device can then be processed to give an extraordinarily accurate estimation of the at least one target signal.
Of course, the method according to the invention is also suited to estimate two or more target signals.
In a second aspect of the invention the aim to provide an enhanced method for the estimation of at least one target signal is achieved by means of a device for carrying out a method according to any of the preceding claims.
Brief description of the drawings
The specific features and advantages of the present invention will be better understood through the following description. In the following, the present invention is described in more detail with reference to exemplary embodiments (which are not to be construed as limitative) shown in the drawings, which show:
Fig. 1 a schematic block-diagram illustrating the object of the invention,
Fig.2 an exemplary schematic block-diagram of a state of the art multi-sensor speech enhancement method,
Fig.3 exemplary state of the art modifications of Fig.2,
Fig.4 an exemplary schematic block-diagram of a variant of the invention,
Fig.5 an exemplary schematic block-diagram of another variant of the invention,
Fig. 6 a schematic block-diagram of the block "New Enhancement" according to the invention shown in fig.4 and 5,
Fig. 7 a detailed schematic block-diagram of the stopping rule block shown in fig.4,
Fig.8 a schematic block-diagram of a typical single-channel separation algorithm based on amplitude estimation on a complex spectrum of a noisy signal, described in the cited non-patent literature "Phase estimation for signal reconstruction in single-channel speech separation" in detail, said amplitude estimation being performed phase-unaware,
Fig.9 a schematic block-diagram of amplitude-aware phase estimation described in the non-patent literature "Phase estimation for signal reconstruction in single-channel speech separation" in detail,
Fig. 10 two schematic block-diagrams of two different single-channel speech separation algorithms described in the cited non-patent literature "On phase importance in parameter estimation in single-channel speech enhancement" in detail.
Detailed description of the invention
Fig. 1 shows a schematic block-diagram illustrating the object of the invention. Given an exemplary continuous-time signal y(t) which includes, for example, two different signals s1(t) and s2(t) (and/or, correspondingly, s1(n) and s2(n)), it is an object of the invention to separate the signals by providing an enhanced complex spectrum (see
Fig. 3 and 4; the symbol index t refers to the continuous-time domain and n refers to the discrete-time domain) of the at least one signal of interest s1. This allows estimates of the signals s1(t) and/or s2(t) (and/or, correspondingly, s1(n) and s2(n)) to be provided. Assuming that
the signal s1 is the signal of interest and the signal s2 represents, for example, interfering
noise (the signal s2 could stem from any other source or from a superposition of sources), a typical approach to estimate the signal of interest consists of transforming the continuous-time signal y(t) into a quantized discrete-time signal y(n) by applying an analog-to-digital converter 1 on the continuous-time signal y(t). As a next step, a signal estimation device 2 processes the discrete-time signal y(n) using a priori information to provide an estimate of at least the signal of interest. In the given example an estimate of the signal s2 representing noise is provided as well.
Conventional speech enhancement methods modify the amplitude of the noisy signal while they directly copy the noisy phase for the purpose of signal reconstruction (see Figure 1(a) shown in the stated non-patent literature "On phase importance in parameter estimation in single-channel speech enhancement", and similarly Fig. 10 (a) of the present application). Within the scope of this disclosure, such methods are referred to as phase-unaware amplitude estimation methods. In contrast, in the method according to the invention, an estimated phase is exploited in the speech enhancement method (see Fig. 10 (b)). This category is referred to as phase-aware speech enhancement, where the enhancement process is accomplished with the knowledge of an estimate of the unknown phase spectrum of the signal of interest.
The terminology of amplitude-aware phase estimation refers to the task of phase estimation given the input noisy data as well as an estimation of the amplitude spectrum of the signal of interest which can be provided by a conventional phase-unaware amplitude estimation method (an example for a phase-unaware amplitude estimation method is denoted by block C shown in Figure 4).
Fig. 2 shows an exemplary schematic block-diagram of a state-of-the-art multi-sensor speech enhancement method (which can be applied by a signal estimation device 2 according to Fig. 1) to be applied on M discrete-time signals obtained from a number of M sensors, said speech enhancement method being composed of three stages, i.e. analysis, modification and synthesis. The analysis stage might consist of different signal representations including the short-time Fourier transformation (STFT), sinusoidal modeling, polyphase filter banks, Mel-frequency cepstral analysis and/or any other suitable transformation applicable on at least one discrete-time signal. The discrete-time signals obtained from the number of M sensors are thereby transformed into a complex format providing the amplitude and phase parts of the signals. Furthermore, the analysis stage is required to decompose the complex signals into a number of N different frequency channels, hence N x M samples are provided for the modification stage. The output of the analysis stage can be exemplified by the complex spectrum representation. The modification stage known from the state of the art can be performed in two ways: a) amplitude enhancement, in which any frequency-dependent gain function serving as amplitude estimator (e.g. a Wiener filter as a common choice) is employed together with a noise estimator given either by a reference microphone or a noise tracking method, while the noisy phase is directly copied to reconstruct the enhanced signal, or b) phase enhancement, in which the noisy phase is often directly copied to synthesize the enhanced output signal. Finally, the synthesis stage is applied on the resulting N x M samples of the modification stage to reconstruct enhanced signals, in particular enhanced speech signals.
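A minimal single-sensor instance of this analysis-modification-synthesis chain can be sketched with SciPy's STFT as the analysis stage; the `modify` callable, the sampling rate and the window parameters are assumptions of this sketch, not prescriptions of the patent.

```python
import numpy as np
from scipy.signal import stft, istft

def analysis_modification_synthesis(y, modify, fs=16000, nperseg=512, noverlap=384):
    """Analysis (STFT) -> modification -> synthesis (inverse STFT).

    `modify` is a hypothetical callable acting on the complex spectrogram,
    e.g. a phase-unaware gain or the enhancement according to the invention.
    """
    # Analysis: decompose y(n) into N frequency channels per frame.
    _, _, Y = stft(y, fs=fs, nperseg=nperseg, noverlap=noverlap)
    # Modification: operate on amplitude and/or phase of the complex samples.
    Y_enhanced = modify(Y)
    # Synthesis: reconstruct an enhanced time-domain signal.
    _, y_enhanced = istft(Y_enhanced, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return y_enhanced
```

Plugging the phase-unaware mask from the earlier sketch in as `modify` reproduces variant a); the enhancement according to the invention would be inserted at the same point.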
Fig. 3 exemplifies the state-of-the-art modification stage of Fig. 2 (if not stated otherwise in the description of the figures, same reference signs describe same features). The M discrete-time signals obtained from the number of M sensors are analyzed in block A, providing N x M samples in a complex format as described in Fig. 2. In block 3 the amplitude part contained in the complex format of the samples is exploited (block 4 exploits the phase part contained in the complex format of the samples). The samples are processed through amplitude or phase enhancement stages, wherein the amplitude enhancement stage is provided with a noise estimate, and finally synthesized in block S to provide an enhanced signal, in particular an enhanced speech signal. The modifications of the samples can be categorized into four different groups.
- A first group provides an estimate for the clean speech spectral amplitude based on a noise estimate from a noise tracker or a reference sensor and a speech estimate using a decision-directed method (see US 2009/0163168 A1 and Y. Ephraim and D. Malah, "Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator", IEEE Trans. Acoust., Speech, Signal Processing, vol. 32, no. 6, pp. 1109-1121, Dec 1984). The noisy phase is directly used unaltered when reconstructing a time-domain enhanced speech signal at an output. This group can be represented in Fig. 3 by an amplitude switch ASW being in a position F2 and a phase switch PSW being in a position P3.
- A second group (ASW in position F2 and PSW in position P4) refers to phase-enhancement-only methods. For example, the cited non-patent literature xx and "On phase importance in parameter estimation in single-channel speech enhancement" suggested employing Griffin and Lim iterations to estimate the signal phase for signal reconstruction given the Wiener-filtered amplitude spectrum, using synthesis-analysis in iterations (a minimal sketch of such iterations is given after this list). - A third group (ASW in position P1 and PSW in position P4) refers to phase-enhancement-only methods used with the noisy amplitude. The phase estimation often requires strong assumptions, such as knowledge of the exact onsets and fundamental frequency of the clean signals, in particular speech signals, and of previous frame phase values.
- A fourth group (ASW in position F2 and PSW in position P4, but, in contrast to the second group, without iterations) refers to a method assuming that a clean spectral amplitude is available, the spectral amplitude being estimated in a phase-aware way in an open-loop configuration.
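As a sketch of the Griffin and Lim style synthesis-analysis iterations mentioned for the second group (illustrative only; sampling rate, window parameters and iteration count are assumptions of this sketch): the magnitude is held fixed at the Wiener-filtered estimate while the phase is refined by repeated synthesis and re-analysis.

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim_phase(target_magnitude, noisy_phase, num_iterations=30,
                      fs=16000, nperseg=512, noverlap=384):
    """Estimate a phase consistent with a given (e.g. Wiener-filtered)
    magnitude spectrogram via synthesis-analysis iterations."""
    phase = noisy_phase.copy()                      # start from the noisy phase
    for _ in range(num_iterations):
        spectrum = target_magnitude * np.exp(1j * phase)
        _, x = istft(spectrum, fs=fs, nperseg=nperseg, noverlap=noverlap)       # synthesis
        _, _, reanalyzed = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)   # analysis
        frames = min(phase.shape[1], reanalyzed.shape[1])
        phase[:, :frames] = np.angle(reanalyzed[:, :frames])  # keep phase, discard magnitude
    return phase
```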
Fig. 4 shows an exemplary schematic block-diagram of a variant of the invention. Block A and block S represent analysis and synthesis blocks as described in Fig. 2 and 3, wherein block A is provided with at least one discrete-time signal y(n). Block A transforms the at least one discrete-time signal y(n), for example by an N-point Fourier transform (any other time-frequency transformation providing amplitude and phase spectra suffices for the method according to the invention), into a frequency domain to obtain a complex spectrum, i.e. Y(k)·e^(jψ_y(k)) = Σ_{n=0…N-1} y(n)·e^(−j2πkn/N), where n = [0 ... N-1], N is the window size, and Y(k) and ψ_y(k) are the k-th frequency components of the magnitude and phase spectrum of y(n), respectively.
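For a single windowed frame, this analysis step amounts to nothing more than the following decomposition into magnitude and phase; the window length and the Hann window are arbitrary example choices of this sketch.

```python
import numpy as np

N = 512                                      # window size (example choice)
frame = np.random.randn(N) * np.hanning(N)   # one windowed frame of y(n)

Y = np.fft.fft(frame, n=N)                   # N-point Fourier transform
magnitude = np.abs(Y)                        # |Y(k)|: amplitude spectrum
phase = np.angle(Y)                          # psi_y(k): phase spectrum

# The complex spectrum is recovered exactly from the two parts:
assert np.allclose(Y, magnitude * np.exp(1j * phase))
```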
In contrast to the signal modification and enhancement methods described in Fig. 3, an enhanced signal modification method according to the invention is provided, which is described in detail in Fig. 6. A block "New Enhancement" is provided with a noise estimate, N x M samples, and, depending on the switching position of a loop switch LSW,
- with an amplitude estimate of the amplitude part of the complex spectrum of at least one corresponding signal of interest (preferably from noise and/or other signals as well) provided by a conventional enhancement block C (loop switch LSW in position P1) or
- with an output signal of the new enhancement method, which is looped back as an input signal (loop switch LSW in position P2).
The conventional enhancement block C represents any phase-unaware amplitude estimator or phase-unaware amplitude estimation method (or any amplitude estimation method performed irrespective of the phase spectrum of at least the signal of interest) which separates the signal of interest from noise and/or other signals, for example by applying a frequency-dependent gain function (mask) on the observed noisy amplitude spectrum. Examples of such gain functions are the Wiener filter (as a soft mask) and the binary mask. The noise reduction capability obtained by such conventional methods is limited since they only modify the amplitude or the phase individually. Preferably, both the block C and the block "New Enhancement" are provided with a noise estimate. Block C performs an amplitude estimation on the complex spectrum to obtain an estimated amplitude spectrum of the at least one signal of interest (preferably of noise and/or other signals as well).
The method according to the invention comprises a step b) including performing a phase-unaware amplitude estimation on the complex spectrum to obtain an estimated amplitude spectrum (see output P1 of the block C) of the at least one signal of interest. This can be achieved by connecting the switch LSW to P1, where the conventional speech enhancement is included in the loop. The conventional method (phase-unaware amplitude estimation) applied within block C enhances the amplitude only, providing an input signal representing an initial amplitude estimate of the signal, which is required for the phase estimation in the following step. The amplitude-aware phase estimation requires such an initially enhanced amplitude estimate of the signal of interest.
Furthermore, Fig. 4 shows a block "stopping rule" (stopping criterion), which provides a criterion to stop the feedback loop. The block "New Enhancement" is first provided with the amplitude estimate of the complex spectrum from the conventional block C. The output of the block "New Enhancement" can be looped back as an input signal (the input signal can be in complex format) for the block "New Enhancement" in a following iteration. The block "New Enhancement" is described in more detail in Fig. 6 and provides an enhanced complex spectrum of the at least one signal of interest (and, correspondingly, of the other signals), which can be used to reconstruct an estimate of the signal of interest.
Fig. 5 shows an exemplary schematic block-diagram of another variant of the invention, wherein the feedback loop differs from the variant shown in Fig. 4. Herein, the output of the block "New Enhancement" is synthesized in block S and analyzed in a following analysis block A before being looped back as an input signal to the block "New Enhancement", provided that the loop switch LSW is in position P2. This allows monitoring the consistency between the phase and amplitude estimations of the enhanced complex spectrum of the at least one signal of interest according to the comparison criterion described above: X is a matrix composed of a complex time-frequency representation of the enhanced complex spectrum, at least one quality index is established to measure the inconsistency of the complex time-frequency representations obtained in each loop iteration, the quality index being calculated for the i-th loop iteration, and the loop iterations are stopped at least when the quality index falls below a predefined threshold (preferably the threshold mentioned above).
Fig. 6 shows a schematic block-diagram of the block "New Enhancement" according to the invention shown in Fig. 4 and 5. Herein, two blocks are shown processing the N x M samples described in the preceding figures. Generally, a block "amplitude-aware phase estimation" performs a phase estimation on the complex spectrum, said phase estimation being an amplitude-aware phase estimation using the input signal and the noisy discrete-time signal y(n) (see input (1) of Fig. 6 representing the noisy complex spectrum) to obtain an estimated phase spectrum of the at least one signal of interest, wherein the result of the phase-unaware amplitude estimation of the conventional enhancement block C (see Fig. 4 and 5) is used as said input signal. The block "amplitude-aware phase estimation" provides an enhanced phase estimate of the at least one signal of interest (preferably an enhanced phase estimate of the noise or any other signal as well) to a following block "phase-aware amplitude estimator". Within the block "phase-aware amplitude estimator", an amplitude estimation on the complex spectrum is performed, said amplitude estimation being a phase-aware amplitude estimation using the result of the phase estimation of the block "amplitude-aware phase estimation" to obtain an enhanced complex spectrum of the at least one signal of interest.
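To make the two stages of Fig. 6 tangible, the following sketch uses deliberately simple stand-ins, not the MMSE estimators of the cited non-patent literature: the phase stage picks one candidate phase from the triangle geometry of Y = S1 + S2 given the two amplitude estimates (law of cosines), and the amplitude stage projects the noisy spectrum onto the direction of the estimated phase. All function names are hypothetical.

```python
import numpy as np

def geometric_phase_candidate(noisy_spectrum, amp_s1, amp_s2):
    """Amplitude-aware phase estimate for s1: one of the two candidates
    allowed by |Y|, |S1|, |S2| when Y = S1 + S2 (law of cosines).
    Simple stand-in, not the estimator of the cited literature."""
    mag_y = np.abs(noisy_spectrum)
    cos_delta = (mag_y**2 + amp_s1**2 - amp_s2**2) / (2 * mag_y * amp_s1 + 1e-12)
    delta = np.arccos(np.clip(cos_delta, -1.0, 1.0))
    return np.angle(noisy_spectrum) + delta   # the '+' candidate; a '-' candidate also exists

def projected_amplitude(noisy_spectrum, phase_s1):
    """Phase-aware amplitude estimate for s1: project Y onto the estimated
    clean-phase direction and clip negative values. Illustrative only."""
    return np.maximum(np.real(noisy_spectrum * np.exp(-1j * phase_s1)), 0.0)

def new_enhancement(noisy_spectrum, amp_s1, amp_s2):
    """One pass through the 'New Enhancement' block of Fig. 6 (sketch)."""
    phase_s1 = geometric_phase_candidate(noisy_spectrum, amp_s1, amp_s2)
    amp_enhanced = projected_amplitude(noisy_spectrum, phase_s1)
    return amp_enhanced * np.exp(1j * phase_s1)   # enhanced complex spectrum
```

Resolving the sign ambiguity of the phase candidate and replacing both stand-ins by the MMSE estimators of the cited literature is where the actual method differs from this sketch.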
Fig. 7 shows a detailed schematic block-diagram of the block "stopping rule" (stopping criterion) shown in Fig.4. A block "consistency check" is provided with
- the estimated phase spectrum of the at least one signal of interest derived from the block "amplitude-aware phase estimation" (see Fig. 6) and with
- the estimated amplitude spectrum of the at least one signal of interest derived from the block "phase-aware Amplitude Estimator" (see Fig. 6).
The consistency of the enhanced complex spectrum can either be assumed to converge after a certain number of iterations (for example five, six or seven iterations), or an inconsistency criterion can be applied (for example the quality index mentioned above) limiting the number of iterations.
Fig. 8 shows a schematic block-diagram of a typical single-channel separation algorithm based on amplitude estimation on a complex spectrum of a noisy signal, described in the cited non-patent literature "Phase estimation for signal reconstruction in single-channel speech separation", said amplitude estimation being performed phase-unaware. Herein, a signal y comprises two signals s1 and s2 to be separated, wherein the amplitude estimates together with the noisy phase are applied to reconstruct the clean signals s1 and s2.
Fig. 9 shows a schematic block-diagram of amplitude-aware phase estimation. In contrast to Fig. 8, the signal reconstruction is provided with phase information corresponding to the signals s1 and s2, respectively. A minimum mean square error (MMSE) phase estimation block is shown, which is provided with the amplitude estimates and the signal y, said phase estimation being amplitude-aware and providing phase estimates for the signals s1 and s2.
A detailed description of the algorithm is given in the cited non-patent literature "Phase estimation for signal reconstruction in single-channel speech separation". Fig. 10 shows two schematic block-diagrams of two different single-channel speech separation algorithms. A typical method to estimate a clean speech amplitude (corresponding to the estimates of Fig. 8 and 9) is shown in (a), wherein the amplitude estimation (within the block "Gain function") is not provided with any phase information. Within the scope of this specification such an amplitude estimation is referred to as being phase-unaware. In contrast, Fig. 10 (b) provides an example of a phase-aware amplitude estimation (block "Gain function"), wherein the amplitude estimation is based at least on the magnitude spectrum Y of the signal y and on the phase spectrum of a speech signal x. Taking the phase spectrum of the speech signal x into account to calculate the clean speech amplitude makes the amplitude estimation phase-aware. A detailed description of the algorithm is given in the non-patent literature "On phase importance in parameter estimation in single-channel speech enhancement".
Of course, the terms phase-aware amplitude estimation and amplitude-aware phase estimation defined herein do not relate to speech signals only. In fact, phase-aware amplitude estimation and amplitude-aware phase estimation are applicable to a plurality of signals, and the speech signals described in the non-patent literature "On phase importance in parameter estimation in single-channel speech enhancement" and "Phase estimation for signal reconstruction in single-channel speech separation" merely represent one utilization of phase-aware amplitude estimation and amplitude-aware phase estimation, respectively. Therefore, the invention is not limited to the examples given in this specification and can be adjusted in any manner known to a person skilled in the art.

Claims
1. Method for estimation of at least one signal of interest from at least one discrete-time signal (y(n)), said method comprising the steps of
a) transforming the at least one discrete-time signal (y(n)) into a frequency domain to obtain a complex spectrum of the at least one discrete-time signal (y(n));
b) performing a phase-unaware amplitude estimation on the complex spectrum to obtain an estimated amplitude spectrum of the at least one signal of interest;
c) performing a phase estimation on the complex spectrum, said phase estimation being an amplitude-aware phase estimation using an input signal to obtain an estimated phase spectrum of the at least one signal of interest, wherein the result of the amplitude estimation of the preceding step b) is used as an input signal;
d) performing an amplitude estimation on the complex spectrum, said amplitude estimation being a phase-aware amplitude estimation using the result of the phase estimation of step c) to obtain an enhanced complex spectrum of the at least one signal of interest.
2. Method of claim 1, wherein in step b) the amplitude estimation on the complex spectrum is performed by a frequency-dependent time-frequency mask, in particular by Wiener filtering of the complex spectrum.
3. Method of claim 1 or 2, wherein after step d) the steps c) and d) are repeated iteratively, wherein as input signal in the repeated step c) the result of the phase-aware amplitude estimation of the preceding step d) is used.
4. Method of any of the claims 1 to 3, wherein the consistency between the phase and amplitude estimations of the enhanced complex spectrum of the at least one signal of interest is monitored according to a comparison criterion on X, X being a matrix composed of a complex time-frequency representation of the enhanced complex spectrum, wherein at least one quality index is established to measure the inconsistency of the complex time-frequency representations obtained by each loop-iteration, the i-th quality index being calculated for the i-th loop iteration, and the loop-iterations are stopped at least when a quality index gets lower than a predefined threshold.
5. Method of claim 4, wherein the threshold is
6. Method of any of the claims 3 to 5, wherein the iterations are stopped at least after a predefined number of iterations, in particular after five, six or seven iterations.
7. Method of any of the claims 1 to 6, wherein the transformation method in step a) is a spectro-temporal transformation, in particular STFT, Wavelet or sinusoidal signal modeling.
8. Method of any of the claims 1 to 7, wherein the at least one discrete-time signal (y(n)) is a bio-medical, radar, image or video signal.
9. Method of any of the claims 1 to 7, wherein the at least one discrete-time signal (y(n)) is an audio signal.
10. Method of claim 9, wherein the at least one discrete-time signal (y(n)) comprises at least one speech signal.
11. Method of claims 9 or 10, wherein the at least one discrete-time signal (y(n)) is derived from a single channel signal.
12. Method of claims 9 or 10, wherein the at least one discrete-time signal (y(n)) is derived from a multi channel signal.
13. Device for carrying out a method according to any of the preceding claims.
EP14753072.9A 2013-08-23 2014-08-19 Enhanced estimation of at least one target signal Withdrawn EP3036739A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP14753072.9A EP3036739A1 (en) 2013-08-23 2014-08-19 Enhanced estimation of at least one target signal

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP13181563.1A EP2840570A1 (en) 2013-08-23 2013-08-23 Enhanced estimation of at least one target signal
EP14753072.9A EP3036739A1 (en) 2013-08-23 2014-08-19 Enhanced estimation of at least one target signal
PCT/EP2014/067667 WO2015024940A1 (en) 2013-08-23 2014-08-19 Enhanced estimation of at least one target signal

Publications (1)

Publication Number Publication Date
EP3036739A1 true EP3036739A1 (en) 2016-06-29

Family

ID=49115345

Family Applications (2)

Application Number Title Priority Date Filing Date
EP13181563.1A Withdrawn EP2840570A1 (en) 2013-08-23 2013-08-23 Enhanced estimation of at least one target signal
EP14753072.9A Withdrawn EP3036739A1 (en) 2013-08-23 2014-08-19 Enhanced estimation of at least one target signal

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP13181563.1A Withdrawn EP2840570A1 (en) 2013-08-23 2013-08-23 Enhanced estimation of at least one target signal

Country Status (2)

Country Link
EP (2) EP2840570A1 (en)
WO (1) WO2015024940A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113903355B (en) * 2021-12-09 2022-03-01 北京世纪好未来教育科技有限公司 Voice acquisition method and device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090163168A1 (en) * 2005-04-26 2009-06-25 Aalborg Universitet Efficient initialization of iterative parameter estimation
US7492814B1 (en) * 2005-06-09 2009-02-17 The U.S. Government As Represented By The Director Of The National Security Agency Method of removing noise and interference from signal using peak picking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2015024940A1 *

Also Published As

Publication number Publication date
EP2840570A1 (en) 2015-02-25
WO2015024940A1 (en) 2015-02-26


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20160314

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20161012