JP2013054258A - Sound source separation device and method, and program - Google Patents

Sound source separation device and method, and program

Info

Publication number
JP2013054258A
JP2013054258A (application JP2011193517A)
Authority
JP
Japan
Prior art keywords
signal
target
covariance matrix
observation signal
sound source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2011193517A
Other languages
Japanese (ja)
Other versions
JP5568530B2 (en)
Inventor
Mehrez Souden
ソウデン メレツ
Shoko Araki
章子 荒木
Keisuke Kinoshita
慶介 木下
Tomohiro Nakatani
智広 中谷
Hiroshi Sawada
宏 澤田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP2011193517A
Publication of JP2013054258A
Application granted
Publication of JP5568530B2
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Circuit For Audible Band Transducer (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a sound source separation device capable of effectively suppressing additive noise included in an input signal.

SOLUTION: In the sound source separation device, a feature vector ψ(t) characterizing each time-frequency bin of a multichannel observation signal y(t) is classified into N+1 components, attributed to each of N target sound sources and to the additive noise, and the posterior probability of each target sound source and of the additive noise is estimated by maximum likelihood. A covariance matrix ^R_xnxn of the n-th target signal and a covariance matrix ^R_yy of the multichannel observation signal y(t) are then calculated, and a generalized multichannel Wiener filter w_n^(β) for recovering the target signal is computed by determining the unnecessary components other than the n-th target signal contained in the observation signal on the basis of ^R_xnxn and ^R_yy. Finally, an estimate ^~S_n^(β) of the n-th target signal is output on the basis of the multichannel observation signal y(t), the generalized multichannel Wiener filter w_n^(β), and the posterior probability of each target sound source.

Description

The present invention relates to a sound source separation apparatus that extracts each target signal with high accuracy when the input signal contains a plurality of target signals and additive noise, and to a method and a program therefor.

When acoustic signals are recorded in an environment containing a plurality of target sound sources, a mixture in which the target signals overlap one another is often observed. When the target sound source of interest is a speech signal, the intelligibility of the target speech is greatly reduced by the other source signals superimposed on it. As a result, it becomes difficult to extract the characteristics of the original target speech signal (hereinafter, the target signal), and the recognition rate of an automatic speech recognition (hereinafter, speech recognition) system drops significantly. When additive noise is present in addition to the target signals, the degradation of intelligibility and of the recognition rate becomes even larger. To prevent this degradation, a method is needed that restores the intelligibility of each target signal by separating the plurality of target signals from one another.

This elemental technology for separating a plurality of target signals can be used in a variety of acoustic signal processing systems: for example, hearing aids that extract the target signal from sound recorded in a real environment to improve audibility, video-conferencing systems that improve speech intelligibility by extracting the target signal, speech recognition systems used in real environments, human-machine dialogue devices in machine control interfaces, and music information processing systems for music retrieval and transcription.

FIG. 9 shows an example of the functional configuration of a conventional sound source separation device 900 disclosed in, for example, Non-Patent Documents 1 and 2; its operation is briefly described below. The sound source separation device 900 comprises a feature vector calculation unit 90, a speech presence probability calculation unit 91, and a single-channel (1ch) filtering unit 92.

The feature vector calculation unit 90 calculates a feature vector that characterizes each time-frequency bin of the multichannel input signal. The speech presence probability calculation unit 91 takes this feature vector as input and, for each time-frequency bin, calculates the presence probability of each of the N target sound sources contained in the input signal. The presence probabilities are obtained by maximum likelihood estimation of the parameters of a mixture model with N components. The 1ch filtering unit 92 converts the presence probability computed by the speech presence probability calculation unit 91 into a binary value, 0 (no signal present) or 1 (signal present), and multiplies the value of each time-frequency bin of the input signal by it, thereby computing an estimate of the target signal of each target sound source. With this method, the plurality of target signals contained in the input signal can be recovered.
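By way of illustration, the binary masking performed by the 1ch filtering unit 92 can be sketched as follows (a minimal sketch, not the disclosed implementation; the array layout and the hard assignment of each bin to its most probable source are assumptions):

```python
import numpy as np

def binary_mask_separation(Y, presence_prob):
    """Conventional binary masking in the STFT domain.

    Y             : complex STFT of one input channel, shape (K, T)
    presence_prob : presence probability of each of N sources,
                    shape (N, K, T), entries in [0, 1]
    Returns the N masked spectrograms, shape (N, K, T).
    """
    # Convert soft presence probabilities to a hard 0/1 mask:
    # each time-frequency bin is assigned to the most probable source.
    winner = np.argmax(presence_prob, axis=0)          # (K, T)
    masks = np.stack([(winner == n) for n in range(presence_prob.shape[0])])
    return masks * Y[np.newaxis, :, :]                 # (N, K, T)
```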

Non-Patent Document 1: H. Sawada, S. Araki, and S. Makino, "Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment," IEEE Trans. Audio, Speech and Lang. Process., vol. 19, pp. 516-527, March 2011.
Non-Patent Document 2: H. Sawada, S. Araki, and S. Makino, "A two-stage frequency domain blind source separation method for underdetermined convolutive mixtures," in Proc. IEEE WASPAA, 2007, pp. 139-142.

In the conventional method, however, the input signal was not assumed to contain additive noise. When additive noise is contained in the input signal it therefore cannot be suppressed, and the target signals cannot be recovered effectively.

The present invention has been made in view of this problem, and its object is to provide a sound source separation device, method, and program that, even when additive noise is contained in the input signal, appropriately suppress the additive noise and recover the target signal of each of a plurality of target sound sources.

The sound source separation device of the present invention comprises a feature vector calculation unit, a speech/noise presence probability calculation unit, a speech/noise feature calculation unit, a speech estimation filter calculation unit, and a multichannel filtering unit. The feature vector calculation unit calculates a feature vector characterizing each time-frequency bin of the multichannel observation signal by normalizing the complex-domain observation signal by its norm. The speech/noise presence probability calculation unit takes the feature vector as input, classifies it into N+1 components attributed to the N target sound sources and to the additive noise, and estimates the posterior probability of each target sound source and of the additive noise by maximum likelihood. The speech/noise feature calculation unit takes as input the posterior probabilities of the target signals, the posterior probability of the additive noise, and the multichannel observation signal, and calculates the covariance matrix of the n-th target signal and the covariance matrix of the multichannel observation signal. The speech estimation filter calculation unit takes as input the covariance matrix of the n-th target signal and the covariance matrix of the multichannel observation signal, determines the unnecessary components other than the n-th target signal contained in the observation signal, and calculates a generalized multichannel Wiener filter that recovers the target signal. The multichannel filtering unit takes as input the multichannel observation signal, the generalized multichannel Wiener filter, and the posterior probability of each target sound source, and outputs an estimate of the n-th target signal.

According to the sound source separation device of the present invention, the multichannel observation signal is classified into components attributed to each of the N target sound sources and a component attributed to the additive noise and is processed accordingly, so the additive noise can be suppressed effectively. The specific effects confirmed in evaluation experiments are described later.

FIG. 1 shows an example of the functional configuration of a sound source separation device 100 of the present invention.
FIG. 2 shows the operation flow of the sound source separation device 100.
FIG. 3 shows the functional configuration of a speech/noise presence probability calculation unit 20.
FIG. 4 shows the operation flow of the speech/noise presence probability calculation unit 20.
FIG. 5 shows an example of the functional configuration of a speech/noise feature calculation unit 30.
FIG. 6 shows the operation flow of the speech/noise feature calculation unit 30.
FIG. 7 shows signal waveforms before speech separation: (a) the clean speech of speaker 1, (b) the clean speech of speaker 2, and (c) the waveform of the mixed signal.
FIG. 8 shows speech waveforms after separation: (a) speaker 1 separated by the conventional method, (b) speaker 2 separated by the conventional method, (c) speaker 1 separated by the method of the present invention, and (d) speaker 2 separated by the method of the present invention.
FIG. 9 shows an example of the functional configuration of a conventional sound source separation device 900.

Embodiments of the present invention are described below with reference to the drawings. The same reference numerals are given to the same components across the drawings, and their description is not repeated. Before describing the embodiment, the observation signal is modeled.

[Modeling of the observation signal]
Assume a situation in which the observation signal contains both target signals originating from N (N ≥ 1) point sound sources and additive noise. In this case, the multichannel observation signal y(k,t) observed with M microphones is expressed in the complex spectral domain, after windowing with a short-time analysis window and applying the short-time Fourier transform, as in equation (1):

  y(k,t) = Σ_{n=1…N} x_n(k,t) + v(k,t)   (1)

Here, t is the time frame index and k is the frequency index. The observation signal y(k,t) is the mixture of the M microphone signals, y(k,t) = [Y_1(k,t) … Y_M(k,t)]^T, and x_n(k,t) is the signal component carrying the n-th channel response, x_n(k,t) = h_n(k)S_n(k,t), where S_n(k,t) is the n-th target signal.

The channel response between the n-th sound source and the microphones is h_n(k) = [H_1n(k) … H_Mn(k)]^T. The additive noise component is v(k,t) = [V_1(k,t) … V_M(k,t)]^T. In the present invention, the target additive noise is assumed to vary sufficiently slowly compared with the other sound sources, and the channel responses are assumed to be time-invariant.

Since each process of the present invention is performed separately for each frequency k, the frequency index k is omitted as appropriate in the following description for simplicity.
A sparseness assumption is also introduced for the observation signal: in a given time-frequency bin, sound originating from at most one point sound source is present, and no sound originating from the other point sound sources is present. The observation signal is then modeled as in equation (2):

  y(t) = x_n(t) + v(t)   (2)

That is, it is assumed that only the additive noise and the sound originating from the n-th target sound source are present in a given time-frequency bin. Alternatively, as in equation (3), it is assumed that no sound originating from any point sound source is present and only the noise is present:

  y(t) = v(t)   (3)

With this sparseness assumption, each time-frequency bin can be roughly classified according to whether its characteristics are attributable to one of the N target sound sources or to the noise alone. The following embodiment is described on the premise that the observation signal y(t) is modeled as above.

FIG. 1 shows an example of the functional configuration of the sound source separation device 100 of the present invention; its operation flow is shown in FIG. 2. The sound source separation device 100 comprises a feature vector calculation unit 10, a speech/noise presence probability calculation unit 20, a speech/noise feature calculation unit 30, a speech estimation filter calculation unit 40, and a multichannel filtering unit 50. The functions of each unit of the sound source separation device 100 are realized by loading a predetermined program into a computer comprising, for example, a ROM, a RAM, and a CPU, and having the CPU execute the program.

The feature vector calculation unit 10 calculates a feature vector ψ(t) characterizing each time-frequency bin of the multichannel observation signal y(t) by normalizing the complex-domain observation signal by its norm (step S10). The speech/noise presence probability calculation unit 20 takes the feature vector ψ(t) as input, classifies it into N+1 components attributed to the N target sound sources and to the additive noise, and estimates the posterior probability of each target sound source and of the additive noise by maximum likelihood (step S20).

The speech/noise feature calculation unit 30 takes as input the posterior probability of the target signal of each target sound source, the posterior probability of the additive noise, and the multichannel observation signal y(t), and calculates the covariance matrix ^R_xnxn of the n-th target signal and the covariance matrix ^R_yy of the multichannel observation signal y(t) (step S30). The speech estimation filter calculation unit 40 takes as input the covariance matrix ^R_xnxn of the n-th target signal and the covariance matrix ^R_yy of the multichannel observation signal y(t), determines the unnecessary components other than the n-th target signal contained in the observation signal, and calculates a generalized multichannel Wiener filter w_n^(β) that recovers the target signal (step S40). The multichannel filtering unit 50 takes as input the multichannel observation signal y(t), the generalized multichannel Wiener filter w_n^(β), and the posterior probability of each target sound source, and outputs an estimate ^~S_n^(β) of the n-th target signal (step S50). The control unit 60 controls the time-sequential operation of the units described above. Note that marks such as ^ and ~ are correctly placed directly above the variables, as written in the figures and equations.

In the conventional binary mask processing described in the background art, which quantizes the speech presence probability to the two values 1/0, the additive noise in time-frequency bins where a target signal is present could not be removed. In contrast, with the method of this embodiment, the additive noise contained in the observation signal can be suppressed and each of the N target signals can be separated and extracted.

The functions of each unit of the sound source separation device 100 are described in more detail below.
[Feature vector calculation unit]
Each of the multichannel observation signals y(t) observed with the M microphones is converted into a complex spectral-domain signal by short-time Fourier transform processing. The vector of the complex spectra from the first microphone, Y_1(k,t), to the M-th microphone, Y_M(k,t), is y(t) = [Y_1(k,t) … Y_M(k,t)]^T.
This complex-domain multichannel observation signal y(t) is normalized by its norm, and the feature vector ψ(t) is calculated by equation (4):

  ψ(t) = y(t) / ‖y(t)‖   (4)
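By way of illustration, equation (4) can be sketched in numpy as follows (the STFT array layout is an assumption):

```python
import numpy as np

def feature_vectors(Y):
    """Compute psi(t) = y(t) / ||y(t)|| for every time-frequency bin.

    Y : complex multichannel STFT, shape (M, K, T)
        (M microphones, K frequency bins, T frames).
    Returns psi with the same shape, unit-norm across channels.
    """
    norm = np.linalg.norm(Y, axis=0, keepdims=True)   # (1, K, T)
    return Y / np.maximum(norm, 1e-12)                # avoid division by zero
```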

[Speech/noise presence probability calculation unit]
FIG. 3 shows a more specific example of the functional configuration of the speech/noise presence probability calculation unit 20; its operation flow is shown in FIG. 4. The speech/noise presence probability calculation unit 20 comprises a cluster classification means 201, an initialization means 202, an expectation calculation means 203, a maximization means 204, and a convergence determination means 205. Taking the feature vector ψ(t) as input, the speech/noise presence probability calculation unit 20 automatically classifies ψ(t) into clusters attributed to the N "target signal + additive noise" components and to the "additive noise" component, estimates the posterior probability p[C_n|ψ(t),θ] of each cluster using the expectation-maximization (EM) algorithm, and outputs it as the speech presence probability.

The cluster classification means 201 models the feature vector ψ(t) with probability density functions using equation (5) (step S201). That is, the cluster classification means 201 classifies the feature vector ψ(t) into components attributed to each of the N target sound sources and a component attributed to the additive noise, and models them with N+1 probability density functions.

  [Equation (5)]

The parameters θ_n characterizing the density functions are denoted θ_n = {a_n, σ_n}, where a_n is the mean of the n-th cluster C_n and σ_n² is its variance.

  [Equation (6)]

Here, the parameter θ of the mixture distribution is θ = {a_1, σ_1, …, a_{N+1}, σ_{N+1}}, and the weight parameter α_n of the n-th component satisfies the constraints Σ_n α_n = 1 and 0 ≤ α_n ≤ 1.

The initialization means 202 initializes each mixture distribution parameter θ_n with random numbers (step S202).
The expectation calculation means 203 calculates the expected value (E-step) using equation (7) (step S203):

  [Equation (7)]

Here, (q) denotes the iteration count of the EM algorithm.

The maximization means 204 calculates the covariance matrix R of the feature vectors ψ(t) using equation (8) and performs an eigenvalue decomposition of R:

  [Equation (8)]

It then substitutes the eigenvector corresponding to the largest eigenvalue into the mean parameter a_n^(q), updates the variance parameter σ_n² with equation (9), and updates the mixture weight parameter α_n with equation (10) (M-step, step S204):

  [Equations (9) and (10)]

The convergence determination means 205 repeats steps S203 and S204 until the update widths of the variance parameters σ_n² and the mixture weight parameters α_n become sufficiently small (convergence in step S205). By iterating the EM algorithm until convergence, the presence probability of each signal component in each time-frequency bin can be calculated. The posterior probability p[C_n|ψ(t),θ], or simply p[C_n|ψ(t)] (n = 1, …, N+1), of cluster C_n obtained by this processing satisfies the property in equation (11):

  [Equation (11)]

Equation (11) states that the probability that the n-th signal is present in a given time-frequency bin is completely determined by the feature vector ψ(t). Since these speech presence probabilities are calculated independently at each frequency k, a permutation problem arises in which a given n-th signal component has different cluster indices at different frequencies. To align the same signal across frequencies, a conventional method (for example, Non-Patent Document 1) can be used to solve this permutation problem.
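By way of illustration, the EM iteration of steps S202-S205 can be sketched as follows. This is a sketch only: the component density of equation (5) is not reproduced here, so the log-likelihood is passed in as a function, and the variance and weight updates shown are assumed forms; the mean update (principal eigenvector of the posterior-weighted covariance of ψ(t)) follows the description above:

```python
import numpy as np

def em_presence_probabilities(psi, n_clusters, log_lik, n_iter=50, tol=1e-6):
    """Posterior probabilities p[C_n | psi(t)] at one frequency bin.

    psi        : feature vectors, shape (T, M), unit norm (equation (4))
    n_clusters : N + 1 (N target sources plus one noise cluster)
    log_lik    : function (psi, a_n, var_n) -> log p[psi(t) | C_n, theta_n],
                 shape (T,); stands in for the density of equation (5)
    """
    T, M = psi.shape
    rng = np.random.default_rng(0)
    # Step S202: random initialization of means, variances, and weights.
    a = rng.standard_normal((n_clusters, M)) + 1j * rng.standard_normal((n_clusters, M))
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    var = np.ones(n_clusters)
    alpha = np.full(n_clusters, 1.0 / n_clusters)

    post = np.full((T, n_clusters), 1.0 / n_clusters)
    for _ in range(n_iter):
        # Step S203, E-step (equation (7)): cluster posteriors per frame.
        logw = np.stack([np.log(alpha[n]) + log_lik(psi, a[n], var[n])
                         for n in range(n_clusters)], axis=1)
        logw -= logw.max(axis=1, keepdims=True)
        post = np.exp(logw)
        post /= post.sum(axis=1, keepdims=True)

        # Step S204, M-step (equations (8)-(10)).
        old_var, old_alpha = var.copy(), alpha.copy()
        for n in range(n_clusters):
            # Posterior-weighted covariance of psi (equation (8)).
            R = (post[:, n, None, None]
                 * psi[:, :, None] * psi[:, None, :].conj()).sum(axis=0)
            R /= post[:, n].sum()
            w_eig, v_eig = np.linalg.eigh(R)
            a[n] = v_eig[:, -1]  # principal eigenvector -> mean parameter
            # Assumed form of the variance update (equation (9) not shown):
            var[n] = np.real(w_eig[:-1].sum()) / max(M - 1, 1)
        alpha = post.mean(axis=0)  # assumed form of equation (10)

        # Step S205: stop when parameter updates are sufficiently small.
        if (np.abs(var - old_var).max() < tol
                and np.abs(alpha - old_alpha).max() < tol):
            break
    return post
```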

[Speech/noise feature calculation unit]
FIG. 5 shows a more specific example of the functional configuration of the speech/noise feature calculation unit 30; its operation flow is shown in FIG. 6. The speech/noise feature calculation unit 30 comprises an observation signal covariance matrix calculation means 301, an additive noise covariance matrix calculation means 302, and a target signal covariance matrix calculation means 303.
The observation signal covariance matrix calculation means 301 calculates the covariance matrix R_yy of the multichannel observation signal y(t), given by equation (12):

  R_yy = E{ y(t) y^H(t) }   (12)

In the actual calculation, the value obtained by multiplying the multichannel observation signal vector y(t) by its Hermitian transpose y^H(t) is averaged over the total number of observation frames T (equation (13), step S301):

  ^R_yy = (1/T) Σ_{t=1…T} y(t) y^H(t)   (13)
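A direct numpy rendering of equation (13), per frequency bin (same assumed array layout as above):

```python
import numpy as np

def observation_covariance(Y):
    """^R_yy of equation (13) for every frequency bin.

    Y : complex multichannel STFT, shape (M, K, T)
    Returns R_yy, shape (K, M, M).
    """
    M, K, T = Y.shape
    # Average over frames: R_yy[k] = (1/T) * sum_t y(k,t) y(k,t)^H
    return np.einsum('mkt,nkt->kmn', Y, Y.conj()) / T
```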

Next, the method of calculating the covariance matrix of each target signal, which does not contain the additive noise component, is described. Considering that the speech/noise presence probability calculation unit 20 classified the multichannel observation signal y(t) into N+1 clusters, the covariance matrix R_yy of the observation signal in equation (12) can be decomposed into a sum over the clusters, as in equation (14):

  [Equation (14)]

The n-th integral term is given by equation (15):

  [Equation (15)]

The covariance matrix of the n-th cluster can thus be expressed as the sum of the covariance matrix R_vv of the additive noise and the covariance matrix R_xnxn of the n-th target signal. Since there are only N target sound sources, the (N+1)-th cluster captures the characteristics of the additive noise; that is, the covariance matrix R_{N+1} of the (N+1)-th cluster represents the covariance matrix of the additive noise (R_{N+1} = R_vv):

  R_n = R_xnxn + R_vv   (16)

  R_{N+1} = R_vv   (17)

Since the additive noise targeted in this embodiment is assumed to vary sufficiently slowly compared with the target sound sources, the additive noise components contained in the 1st to N-th clusters and the additive noise component observed in the (N+1)-th cluster can be regarded as having sufficiently similar characteristics. Therefore, the covariance matrix ^R_vv of the additive noise and the covariance matrix ^R_xnxn of each target signal can be calculated as follows:

  ^R_vv = (1/T) Σ_{t=1…T} p[C_{N+1}|y(t)] y(t) y^H(t)   (18)

  ^R_xnxn = (1/T) Σ_{t=1…T} p[C_n|y(t)] y(t) y^H(t) - ^R_vv   (19)

The additive noise covariance matrix calculation means 302 takes as input the multichannel observation signal y(t) and the posterior probabilities p[C_n|ψ(t),θ] of the target sound sources, multiplies the observation vector y(t) by its Hermitian transpose y^H(t) and by the posterior probability p[C_{N+1}|y(t)] of the additive noise, and averages the result over the total number of observation frames T to calculate the covariance matrix ^R_vv of the additive noise (equation (18), step S302).

The target signal covariance matrix calculation means 303 takes as input the multichannel observation signal y(t), the posterior probabilities p[C_n|ψ(t),θ] of the target sound sources, and the additive noise covariance matrix ^R_vv; it multiplies the observation vector y(t) by its Hermitian transpose y^H(t) and by the posterior probability p[C_n|y(t)] of each target sound source, averages the result over the total number of observation frames T, and subtracts the additive noise covariance matrix ^R_vv to calculate the covariance matrix ^R_xnxn of each target signal (equation (19), step S303).
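Equations (18) and (19) can likewise be sketched per frequency bin (same assumed layout; `post` is the cluster posterior from the EM step above):

```python
import numpy as np

def noise_and_target_covariances(Y, post):
    """^R_vv and ^R_xnxn of equations (18)-(19) per frequency bin.

    Y    : complex multichannel STFT, shape (M, K, T)
    post : cluster posteriors p[C_n | psi(t)], shape (N + 1, K, T);
           the last cluster (index N) is the additive noise.
    Returns (R_vv of shape (K, M, M), R_x of shape (N, K, M, M)).
    """
    M, K, T = Y.shape
    # Posterior-weighted frame averages: (1/T) sum_t p[C_n|.] y y^H
    R_all = np.einsum('nkt,mkt,lkt->nkml', post, Y, Y.conj()) / T
    R_vv = R_all[-1]                         # equation (18)
    R_x = R_all[:-1] - R_vv[np.newaxis]      # equation (19)
    return R_vv, R_x
```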

[Speech estimation filter calculation unit]
The speech estimation filter calculation unit 40 takes as input the covariance matrix R_yy of the multichannel observation signal y(t) and the covariance matrix ^R_xnxn of each target signal, and estimates the signal component attributable to the n-th target signal in the minimum mean square error sense.

The minimum mean square error estimate of the n-th target signal component is given by equation (20):

  [Equation (20)]

Equation (20) is derived by introducing the sparseness assumption. The posterior probability of the n-th cluster in that equation has the effect of smoothly masking the minimum mean square error estimate E{~S_n(t)|y(t),C_n}. The second term on the right-hand side is equivalent to finding the multichannel Wiener filter w that minimizes the following squared error ε_n(w):

  [Equation (21)]

The filter w that minimizes ε_n(w) is derived in the usual way by solving the following Yule-Walker equation:

  R_yy w = R_xnxn u_1   (22)

Here, when the n-th target signal as observed at the first microphone is to be recovered, u_1 = [1 0 … 0]^T. Furthermore, the filter of equation (22) can be generalized to a filter w_n^(β), in which β adjusts how strongly the components other than the n-th target sound source are suppressed, as in equation (24):

  w_n^(β) = (R_xnxn + β·R_un)^{-1} R_xnxn u_1   (24)

Here, R_un, the covariance of the unnecessary components other than the n-th target signal, is calculated as:

  R_un = R_yy - R_xnxn   (25)

The speech estimation filter calculation unit 40 obtains the covariance R_un of the unnecessary components other than the n-th target signal from the covariance matrix R_yy of the multichannel observation signal y(t) and the covariance matrix ^R_xnxn of the target signal, and computes the generalized multichannel Wiener filter that recovers the target signal with equation (24).
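Under the reading of equations (24) and (25) given above, the filter computation can be sketched per frequency bin as follows (the diagonal loading term `eps` is an added assumption for numerical stability, not part of the disclosure):

```python
import numpy as np

def generalized_mwf(R_yy, R_x, beta=1.0, ref_mic=0, eps=1e-6):
    """Generalized multichannel Wiener filter w_n^(beta), equation (24).

    R_yy : observation covariance, shape (K, M, M)
    R_x  : covariance of the n-th target signal, shape (K, M, M)
    beta : 1.0 -> multichannel Wiener filter, 0.0 -> MVDR-like limit
    Returns filters w, shape (K, M).
    """
    K, M, _ = R_yy.shape
    u = np.zeros(M); u[ref_mic] = 1.0        # select the reference microphone
    R_un = R_yy - R_x                        # unwanted components, eq. (25)
    A = R_x + beta * R_un + eps * np.eye(M)  # eps: diagonal loading (assumption)
    return np.linalg.solve(A, R_x @ u)       # w = A^{-1} R_x u
```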

[Multichannel filtering unit]
The multichannel filtering unit 50 takes as input the multichannel observation signal y(t), the generalized multichannel Wiener filter w_n^(β), and the posterior probability p[C_n|ψ(t),θ] of each target signal, filters the observation with equation (26), and outputs the estimate of the n-th target signal:

  ^~S_n^(β)(t) = p[C_n|ψ(t),θ] · (w_n^(β))^H y(t)   (26)
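Applying the filter and the smooth posterior mask of equation (26) might then look as follows (a sketch under the same assumptions; `post_n` is the posterior of the n-th source cluster):

```python
import numpy as np

def apply_filter(Y, w, post_n):
    """Estimate of the n-th target signal, equation (26), all bins at once.

    Y      : complex multichannel STFT, shape (M, K, T)
    w      : filter for the n-th source, shape (K, M)
    post_n : posterior p[C_n | psi(t)], shape (K, T)
    Returns the single-channel estimate, shape (K, T).
    """
    # w^H y per time-frequency bin, then smooth masking by the posterior.
    filtered = np.einsum('km,mkt->kt', w.conj(), Y)
    return post_n * filtered
```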

[Evaluation experiment]
An evaluation experiment was conducted to assess the performance of the sound source separation device 100 of the present invention, under the following conditions.
Two target signals were used (N = 2), with data from 12 male and 12 female speakers extracted at random from the TIMIT database. Three mixing conditions were simulated: two female speakers, two male speakers, and one female and one male speaker. The two speakers were located 2 m from the microphone array and 160 degrees apart from each other, and were mixed at comparable levels (SIR: Signal-to-Interference Ratio = 0 dB).

As the additive noise, babble noise extracted from the NOISEX database was used, added so that the SNR (Signal-to-Noise Ratio) of each microphone signal was 5 to 20 dB. For the present invention, a multichannel Wiener filter (β = 1 in equation (24)) and an MVDR filter (Minimum Variance Distortionless Response, β = 0 in equation (24)) were constructed and compared with the conventional techniques of Non-Patent Documents 1 and 2. Two conditions for the number of microphones were prepared: 8 and 16.

Table 1 shows the SNR comparison results and Table 2 shows the SIR comparison results.

  [Table 1: SNR comparison]

  [Table 2: SIR comparison]

As the comparison results in Tables 1 and 2 make clear, the sound source separation method of the present invention showed higher performance regardless of the number of microphones.

FIGS. 7 and 8 show these evaluation results as signal waveforms. FIG. 7 shows the waveforms before processing: (a) the clean speech of speaker 1, (b) the clean speech of speaker 2, and (c) the mixture of the two speakers' speech and noise. FIG. 8 shows the signal waveforms after source separation: (a) and (b) are the waveforms of speakers 1 and 2 separated by the conventional method, and (c) and (d) are the waveforms of speakers 1 and 2 separated by the source separation method of the present invention. Comparing the waveforms around the 4-second mark, where the speakers' signals pause, shows that the SNR is better when the sources are separated by the method of the present invention. The source separation method of the present invention thus makes it possible to extract the target signals while effectively suppressing the additive noise.

When the processing means of the sound source separation device 100 described above are realized by a computer, the processing contents of the functions that each device should have are described by a program, and by executing this program on the computer, the processing means of each device are realized on the computer.

The program describing these processing contents can be recorded on a computer-readable recording medium. The computer-readable recording medium may be of any kind, for example a magnetic recording device, an optical disc, a magneto-optical recording medium, or a semiconductor memory. Specifically, for example, a hard disk device, a flexible disk, or a magnetic tape can be used as the magnetic recording device; a DVD (Digital Versatile Disc), DVD-RAM (Random Access Memory), CD-ROM (Compact Disc Read Only Memory), or CD-R (Recordable)/RW (ReWritable) as the optical disc; an MO (Magneto-Optical disc) as the magneto-optical recording medium; and an EEP-ROM (Electronically Erasable and Programmable Read-Only Memory) as the semiconductor memory.

The program is distributed, for example, by selling, transferring, or lending a portable recording medium such as a DVD or CD-ROM on which the program is recorded. The program may also be distributed by storing it in the recording device of a server computer and transferring it from the server computer to other computers via a network.

Each means may be configured by executing a predetermined program on a computer, or at least part of the processing contents may be realized in hardware.

Claims (5)

1. A sound source separation device comprising:
a feature vector calculation unit that calculates a feature vector characterizing each time-frequency bin of a multichannel observation signal by normalizing the complex-domain observation signal by its norm;
a speech/noise presence probability calculation unit that takes the feature vector as input, classifies the feature vector into N+1 components attributed to N target sound sources and to additive noise, and estimates the posterior probability of each target sound source and of the additive noise by maximum likelihood;
a speech/noise feature calculation unit that takes as input the posterior probability of the target signal of each target sound source, the posterior probability of the additive noise, and the multichannel observation signal, and calculates the covariance matrix of the n-th target signal and the covariance matrix of the multichannel observation signal;
a speech estimation filter calculation unit that takes as input the covariance matrix of the n-th target signal and the covariance matrix of the multichannel observation signal, determines the unnecessary components other than the n-th target signal contained in the observation signal, and calculates a generalized multichannel Wiener filter that recovers the target signal; and
a multichannel filtering unit that takes as input the multichannel observation signal, the generalized multichannel Wiener filter, and the posterior probability of each target sound source, and outputs an estimate of the n-th target signal.

2. The sound source separation device according to claim 1, wherein the speech/noise feature calculation unit comprises:
an observation signal covariance matrix calculation means that calculates the covariance matrix R_yy of the multichannel observation signal y(t) by averaging, over the total number of observation frames T, the product of the multichannel observation signal vector y(t) and its Hermitian transpose y^H(t);
an additive noise covariance matrix calculation means that takes as input the multichannel observation signal y(t) and the posterior probabilities p[C_n|ψ(t),θ] of the target sound sources, and calculates the covariance matrix ^R_vv of the additive noise by averaging, over the total number of observation frames T, the product of the observation vector y(t), its Hermitian transpose y^H(t), and the posterior probability p[C_{N+1}|y(t)] of the additive noise; and
a target signal covariance matrix calculation means that takes as input the multichannel observation signal y(t), the posterior probabilities p[C_n|ψ(t),θ] of the target sound sources, and the additive noise covariance matrix ^R_vv, and calculates the covariance matrix ^R_xnxn of each target signal by averaging, over the total number of observation frames T, the product of the observation vector y(t), its Hermitian transpose y^H(t), and the posterior probability p[C_n|y(t)] of each target sound source, and subtracting the additive noise covariance matrix ^R_vv.

3. A sound source separation method comprising:
a feature vector calculation step of calculating a feature vector characterizing each time-frequency bin of a multichannel observation signal by normalizing the complex-domain observation signal by its norm;
a speech/noise presence probability calculation step of taking the feature vector as input, classifying the feature vector into N+1 components attributed to N target sound sources and to additive noise, and estimating the posterior probability of each target sound source and of the additive noise by maximum likelihood;
a speech/noise feature calculation step of taking as input the posterior probability of the target signal of each target sound source, the posterior probability of the additive noise, and the multichannel observation signal, and calculating the covariance matrix of the n-th target signal and the covariance matrix of the multichannel observation signal;
a speech estimation filter calculation step of taking as input the covariance matrix of the n-th target signal and the covariance matrix of the multichannel observation signal, determining the unnecessary components other than the n-th target signal contained in the observation signal, and calculating a generalized multichannel Wiener filter that recovers the target signal; and
a multichannel filtering step of taking as input the multichannel observation signal, the generalized multichannel Wiener filter, and the posterior probability of each target sound source, and outputting an estimate of the n-th target signal.

4. The sound source separation method according to claim 3, wherein the speech/noise feature calculation step comprises:
an observation signal covariance matrix calculation step of calculating the covariance matrix R_yy of the multichannel observation signal y(t) by averaging, over the total number of observation frames T, the product of the multichannel observation signal vector y(t) and its Hermitian transpose y^H(t);
an additive noise covariance matrix calculation step of taking as input the multichannel observation signal y(t) and the posterior probabilities p[C_n|ψ(t),θ] of the target sound sources, and calculating the covariance matrix ^R_vv of the additive noise by averaging, over the total number of observation frames T, the product of the observation vector y(t), its Hermitian transpose y^H(t), and the posterior probability p[C_{N+1}|y(t)] of the additive noise; and
a target signal covariance matrix calculation step of taking as input the multichannel observation signal y(t), the posterior probabilities p[C_n|ψ(t),θ] of the target sound sources, and the additive noise covariance matrix ^R_vv, and calculating the covariance matrix ^R_xnxn of each target signal by averaging, over the total number of observation frames T, the product of the observation vector y(t), its Hermitian transpose y^H(t), and the posterior probability p[C_n|y(t)] of each target sound source, and subtracting the additive noise covariance matrix ^R_vv.

5. A program for causing a computer to function as the sound source separation device according to claim 1 or 2.
JP2011193517A 2011-09-06 2011-09-06 Sound source separation device, method and program thereof Active JP5568530B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011193517A JP5568530B2 (en) 2011-09-06 2011-09-06 Sound source separation device, method and program thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011193517A JP5568530B2 (en) 2011-09-06 2011-09-06 Sound source separation device, method and program thereof

Publications (2)

Publication Number Publication Date
JP2013054258A true JP2013054258A (en) 2013-03-21
JP5568530B2 JP5568530B2 (en) 2014-08-06

Family

ID=48131281

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011193517A Active JP5568530B2 (en) 2011-09-06 2011-09-06 Sound source separation device, method and program thereof

Country Status (1)

Country Link
JP (1) JP5568530B2 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006510069A (en) * 2002-12-11 2006-03-23 ソフトマックス,インク System and method for speech processing using improved independent component analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
H. Sawada, S. Araki, and S. Makino, "Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment," IEEE Trans. Audio, Speech and Lang. Process., vol. 19, pp. 516-527, March 2011. DOI: 10.1109/TASL.2010.2051355 *
Y. Iso, S. Araki, S. Makino, T. Nakatani, H. Sawada, T. Yamada, and A. Nakamura, "A study on source separation of speech mixtures recorded under high reverberation," Proceedings of the Spring 2011 Meeting of the Acoustical Society of Japan (CD-ROM), pp. 643-646, March 2011. *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014157261A (en) * 2013-02-15 2014-08-28 Nippon Telegr & Teleph Corp <Ntt> Sound source separating device, sound source separating method, and program
JP2015040934A (en) * 2013-08-21 2015-03-02 日本電信電話株式会社 Sound source separation device, and method and program of the same
JP2016194657A (en) * 2015-04-01 2016-11-17 日本電信電話株式会社 Sound source separation device, sound source separation method, and sound source separation program
JP2017090853A (en) * 2015-11-17 2017-05-25 株式会社東芝 Information processing device, information processing method, and program
JP2018141922A (en) * 2017-02-28 2018-09-13 日本電信電話株式会社 Steering vector estimation device, steering vector estimating method and steering vector estimation program
JP2018146610A (en) * 2017-03-01 2018-09-20 日本電信電話株式会社 Mask estimation device, mask estimation method and mask estimation program
CN110914899A (en) * 2017-07-19 2020-03-24 日本电信电话株式会社 Mask calculation device, cluster weight learning device, mask calculation neural network learning device, mask calculation method, cluster weight learning method, and mask calculation neural network learning method
CN110914899B (en) * 2017-07-19 2023-10-24 日本电信电话株式会社 Mask calculation device, cluster weight learning device, mask calculation neural network learning device, mask calculation method, cluster weight learning method, and mask calculation neural network learning method
CN111009256A (en) * 2019-12-17 2020-04-14 北京小米智能科技有限公司 Audio signal processing method and device, terminal and storage medium
US11284190B2 (en) 2019-12-17 2022-03-22 Beijing Xiaomi Intelligent Technology Co., Ltd. Method and device for processing audio signal with frequency-domain estimation, and non-transitory computer-readable storage medium
CN111028857A (en) * 2019-12-27 2020-04-17 苏州蛙声科技有限公司 Method and system for reducing noise of multi-channel audio and video conference based on deep learning
CN111028857B (en) * 2019-12-27 2024-01-19 宁波蛙声科技有限公司 Method and system for reducing noise of multichannel audio-video conference based on deep learning
CN111262590A (en) * 2020-01-21 2020-06-09 中国科学院声学研究所 Underwater acoustic communication information source and channel joint decoding method

Also Published As

Publication number Publication date
JP5568530B2 (en) 2014-08-06


Legal Events

Date | Code | Title
20130829 | A621 | Written request for application examination (JAPANESE INTERMEDIATE CODE: A621)
20140226 | A977 | Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007)
20140311 | A131 | Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
20140422 | A521 | Request for written amendment filed (JAPANESE INTERMEDIATE CODE: A523)
20140513 | A131 | Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
20140527 | A521 | Request for written amendment filed (JAPANESE INTERMEDIATE CODE: A523)
(no date) | TRDD | Decision of grant or rejection written
20140617 | A01 | Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01)
20140623 | A61 | First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61)
(no date) | R150 | Certificate of patent or registration of utility model (Ref document number: 5568530; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150)