JP5089651B2 - Speech recognition device, acoustic model creation device, method thereof, program, and recording medium - Google Patents

Speech recognition device, acoustic model creation device, method thereof, program, and recording medium

Info

Publication number
JP5089651B2
Authority
JP
Japan
Prior art keywords
likelihood
gmm
speech
predetermined range
feature amount
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2009138987A
Other languages
Japanese (ja)
Other versions
JP2010286586A (en)
Inventor
哲 小橋川
太一 浅見
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP2009138987A priority Critical patent/JP5089651B2/en
Publication of JP2010286586A publication Critical patent/JP2010286586A/en
Application granted granted Critical
Publication of JP5089651B2 publication Critical patent/JP5089651B2/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Description

The present invention relates to a speech recognition apparatus that produces few recognition errors even when unknown non-stationary noise is input, an acoustic model creation apparatus that creates an accurate acoustic model even when unknown non-stationary noise is input, methods therefor, a program, and a recording medium.

In recent years, advances in speech recognition technology based on statistical methods have made highly accurate speech recognition possible in quiet environments. In real environments, however, recognition performance degrades due to the presence of noise, particularly unknown non-stationary noise.

FIG. 9 shows the functional configuration of a conventional speech recognition apparatus 900. The speech recognition apparatus 900 includes an A/D conversion unit 10, a feature analysis unit 20, a speech recognition processing unit 30, an acoustic model parameter memory 40, and a language model parameter memory 50.

The A/D conversion unit 10 converts the input analog speech signal into a discrete digital signal, for example at a sampling frequency of 16 kHz. The feature analysis unit 20 takes the discretized speech digital signal as input and computes a speech feature O_t for each frame, where one frame consists of, for example, 320 digital samples (20 ms). The speech feature O_t is computed, for example, by mel-frequency cepstral coefficient (MFCC) analysis.
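As a non-authoritative illustration (not part of the patent text), frame-wise MFCC analysis of the kind described above could be sketched in Python as follows; the use of librosa and the specific parameter values are assumptions made only for this example.

```python
import librosa  # assumed MFCC implementation; any equivalent library would do

def extract_features(wav_path):
    # Load the waveform and resample to 16 kHz, matching the example above.
    signal, sr = librosa.load(wav_path, sr=16000)
    # 20 ms frames = 320 samples at 16 kHz; a 10 ms hop is a common choice.
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=12,
                                n_fft=320, hop_length=160)
    return mfcc.T  # row t is the speech feature O_t of frame t
```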

The speech recognition processing unit 30 takes the speech feature O_t as input, refers to the acoustic model recorded in the acoustic model parameter memory 40 and the language model recorded in the language model parameter memory 50, and outputs as the speech recognition result the recognition candidate whose sum of acoustic model likelihood and language model likelihood is highest.

In the conventional speech recognition apparatus 900, a method of correcting the likelihood of the acoustic model was used to cope with sudden noise, a kind of unknown non-stationary noise. The acoustic model is represented by an HMM (Hidden Markov Model), and normal distributions are widely used for its output probability distributions. The likelihood, which is the logarithm of the output probability, is a quadratic function and decreases with the square of the deviation from the distribution mean. This difference between the characteristics of the output probability and those of the likelihood is considered one cause of recognition errors due to sudden noise. An idea for correcting the difference is disclosed in, for example, Non-Patent Literature 1.

FIG. 10 illustrates the likelihood correction idea of Non-Patent Literature 1. The upper left of FIG. 10 shows the output probability distributions of phoneme |a| and phoneme |o|; the lower left shows the likelihood characteristic of each phoneme. The horizontal axis in FIG. 10 is the speech feature. When a speech feature y_s is observed, the output probability of phoneme |o| is higher than that of phoneme |a|, and the likelihoods shown on the lower left follow the same trend.

However, when the speech feature y_s changes to y_o under the influence of additive noise or the like, both output probabilities become small; in the likelihoods shown on the lower left, however, not only is the relative order of phoneme |a| and phoneme |o| reversed, but the quadratic curves also open a large gap between them. A large difference in likelihood arising even though the difference in output probability is small is thus considered to contribute to recognition errors.

Therefore, to improve robustness against sudden noise, Non-Patent Literature 1 adds a small positive correction constant ε to the observed data distribution N(y) and uses the likelihood of that value (Equation (1)), thereby avoiding the problem that a small difference in the linear output probability becomes a large difference in likelihood.

corrected likelihood = log( N(y) + ε )    (1)

Here, N(y) is the distribution of the speech features of the observed data, that is, the distributions of phoneme |a| and phoneme |o| in FIG. 10, and ε is a correction constant.
The processing that adds the correction constant ε is performed in the speech recognition processing unit 30. As shown in the lower right of FIG. 10, this processing reduces the difference in likelihood. Because the amount by which the likelihood changes when sudden noise occurs is thereby kept small, an effect of suppressing recognition errors can be expected.
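For illustration only (this snippet is not part of the patent), the correction of Equation (1) amounts to a one-line change to the log-likelihood; it assumes that the density N(y) has already been evaluated.

```python
import numpy as np

def corrected_log_likelihood(density_value, epsilon=1e-6):
    # Conventional likelihood: log N(y).  Corrected: log(N(y) + epsilon),
    # which bounds the likelihood from below by log(epsilon) so that a small
    # density difference cannot turn into an arbitrarily large likelihood gap.
    return np.log(density_value + epsilon)
```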

Non-Patent Literature 1: Hitoshi Yamamoto, Koichi Shinoda, and Shigeki Sagayama, "Speech Recognition Robust against Sudden Noise by Likelihood Correction of Normal Distributions," Proceedings of the Autumn Meeting of the Acoustical Society of Japan, 1-9-10, pp. 19-20, 2002.

The conventional approach of introducing a correction constant ε carries the risk that recognition accuracy degrades depending on how the constant is set. The optimal constant differs with the data to be recognized and cannot be determined uniformly. If the constant is fixed uniformly, the strength of the correction varies from phoneme model to phoneme model, and there is a concern that it may interfere even in cases where the correct likelihood would otherwise be obtained.

The present invention has been made in view of these points, and its object is to provide a speech recognition apparatus and method that reduce recognition errors by excluding data whose likelihood falls outside a certain range, an acoustic model creation apparatus and method based on the same idea, and a program and a recording medium.

The speech recognition apparatus of the present invention comprises a feature analysis unit, a GMM likelihood calculation unit, a GMM likelihood determination unit, and a speech recognition processing unit. The feature analysis unit analyzes the speech features of the input speech digital signal frame by frame. The GMM likelihood calculation unit matches a GMM (Gaussian Mixture Model) against the speech features and calculates a GMM likelihood for each frame. The GMM likelihood determination unit determines whether the GMM likelihood is within a predetermined range and outputs the determination result together with the GMM likelihood. The speech recognition processing unit takes the speech features, the GMM likelihood, and the determination result as input; for frames within the predetermined range it performs speech recognition based on the acoustic likelihood corresponding to the speech features, and for frames outside the predetermined range it performs speech recognition using an acoustic likelihood based on the GMM likelihood.

The acoustic model creation apparatus of the present invention comprises a learning processing unit together with the same GMM likelihood calculation unit and GMM likelihood determination unit as above; the learning processing unit generates a trained acoustic model while excluding frames whose determination result is out of range from the statistics calculation of the acoustic model.

The speech recognition apparatus and acoustic model creation apparatus of the present invention use a likelihood obtained from a GMM, a Gaussian mixture model that covers most phonemes and therefore has wide variance. This reduces the likelihood reversal that occurs at the tails of the distributions and the growth of likelihood differences, which were problems in the past.
That is, for frames judged out of range by the GMM likelihood determination unit, the acoustic likelihood is replaced by the GMM likelihood computed by the GMM likelihood calculation unit, so the acoustic likelihood is stabilized when sudden noise is input. As a result, the accuracy of the speech recognition process and of the acoustic model training process is improved.

FIG. 1 schematically shows one state constituting a phoneme model.
FIG. 2 shows an example of a phoneme model.
FIG. 3 shows an example functional configuration of the speech recognition apparatus 100 of the present invention.
FIG. 4 shows the operation flow of the speech recognition apparatus 100.
FIG. 5 shows the relationship between speech features and GMM likelihood using the GMM.
FIG. 6 shows the operation flow of the speech recognition apparatus 100'.
FIG. 7 shows an example functional configuration of the acoustic model creation apparatus 200 of the present invention.
FIG. 8 shows the operation flow of the acoustic model creation apparatus 200.
FIG. 9 shows the functional configuration of the conventional speech recognition apparatus 900.
FIG. 10 shows the likelihood correction idea disclosed in Non-Patent Literature 1.

Before describing embodiments of the present invention, the idea behind the invention is explained.
[Idea of the Invention]
To explain the idea of the invention, the acoustic model is described first. The phoneme models constituting an acoustic model are built from a probabilistic chain of roughly three states, and each state is represented as a Gaussian mixture. FIG. 1 shows a state s composed of, for example, three normal distributions N(μ_1, U_1), N(μ_2, U_2), N(μ_3, U_3) with weight coefficients c_1, c_2, c_3 when the number of mixture components is three. μ is a mean vector and U is a covariance matrix.

FIG. 2 shows, as an example, a conceptual diagram of a phoneme model composed of three states. This example is a so-called left-to-right HMM in which three states s_1 (first state), s_2 (second state), and s_3 (third state) are arranged in sequence; the probabilistic chain over the states (the state transition probabilities) consists of the self-transitions a_11, a_22, a_33 and the transitions a_12, a_23, a_34 to the next state. The combination of phoneme models with the highest likelihood over this state transition sequence is output as the speech recognition result, and the set of phoneme models is the acoustic model.
The output probability P(s, O_t) obtained from state s is given by Equation (2).

P(s, O_t) = Σ_{m=1}^{M_s} c_ms N(O_t; μ_ms, U_ms)    (2)

Here, O_t is the speech feature of frame t, N(O_t; μ_ms, U_ms) is the probability computed from the normal distribution with mean vector μ_ms and covariance matrix U_ms, c_ms is the mixture weight coefficient, and M_s is the number of component distributions belonging to state s. The sum over the states of the logarithms of this output probability P(s, O_t) and of the state transition probabilities described above is the acoustic likelihood.
Under the approach described in the background art of adding the correction constant ε to the distribution of speech features, the acoustic likelihood can fluctuate greatly when sudden noise is input, as is clear from the explanation above, and this has been a cause of recognition errors.
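A minimal sketch of Equation (2) in Python follows (an illustration, not part of the patent); it evaluates the log output probability of one state assuming diagonal covariance matrices, and the same computation can serve for the GMM likelihood introduced below.

```python
import numpy as np

def state_log_prob(o_t, weights, means, variances):
    """log P(s, O_t) = log sum_m c_ms N(O_t; mu_ms, U_ms) for one state,
    assuming diagonal covariances (each row of `variances` holds a diagonal)."""
    o_t = np.asarray(o_t, dtype=float)
    d = o_t.size
    log_terms = []
    for c, mu, var in zip(weights, means, variances):
        log_gauss = -0.5 * (d * np.log(2 * np.pi)
                            + np.sum(np.log(var))
                            + np.sum((o_t - mu) ** 2 / var))
        log_terms.append(np.log(c) + log_gauss)
    # log-sum-exp over the mixture components for numerical stability
    m = max(log_terms)
    return m + np.log(sum(np.exp(t - m) for t in log_terms))
```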

In contrast to that conventional method, the speech recognition method of the present invention matches the GMM against the speech features and calculates a GMM likelihood before the speech recognition process, and then determines whether the GMM likelihood is within a predetermined range. If the GMM likelihood is within the predetermined range, the acoustic likelihood based on the speech features is obtained during the speech recognition process and recognition is performed using that acoustic likelihood.
Conversely, if the GMM likelihood is outside the predetermined range, for example because sudden noise has been input, the GMM likelihood obtained from the GMM is substituted for the acoustic likelihood in the speech recognition process, so the acoustic likelihood does not change greatly.

Therefore, unlike the conventional method, a small difference in output probability for a speech feature is neither reversed nor turned into a large difference in likelihood. According to the idea of the present invention the acoustic likelihood can thus be stabilized, and as a result misrecognition can be reduced. The same idea can also be applied to an acoustic model creation apparatus.
Embodiments of the present invention are described below with reference to the drawings. The same reference numerals are assigned to the same components across the drawings, and their description is not repeated.

FIG. 3 shows an example functional configuration of the speech recognition apparatus 100 of the present invention, and FIG. 4 shows its operation flow. The speech recognition apparatus 100 comprises an A/D conversion unit 10, a feature analysis unit 20, an acoustic model parameter memory 40, a GMM likelihood calculation unit 60, a GMM likelihood determination unit 70, a speech recognition processing unit 80, a language model parameter memory 50, and a control unit 90. The speech recognition apparatus 100 is realized by loading a predetermined program into a computer composed of, for example, a ROM, a RAM, and a CPU, and having the CPU execute the program.

Compared with the conventional speech recognition apparatus 900, the speech recognition apparatus 100 is new in that it includes the GMM likelihood calculation unit 60 and the GMM likelihood determination unit 70. The operation of the speech recognition processing unit 80 also differs from that of the conventional speech recognition processing unit 30. The other functional components are the same as in the speech recognition apparatus 900, so the following description focuses on the differing parts.

The A/D conversion unit 10 converts the input analog speech signal into a discrete digital signal, for example at a sampling frequency of 16 kHz (step S10, FIG. 4). The feature analysis unit 20 takes the discretized speech digital signal as input and computes a speech feature O_t for each frame, where one frame consists of a predetermined number of digital samples (for example 20 ms) (step S20).
The GMM likelihood calculation unit 60 matches the GMM against the speech feature O_t and calculates the GMM likelihood for each frame (step S60). The GMM is a Gaussian mixture model trained on all phonemes in the training data of the acoustic model (excluding silence in some cases). In this example the GMM is recorded in the acoustic model parameter memory 40.

FIG. 5 schematically shows how the GMM likelihood is obtained by matching the GMM against the speech feature O_t, assuming that the distribution of the GMM likelihood is close to a normal distribution. The horizontal axis is the speech feature O_t and the vertical axis is the GMM likelihood.
The GMM likelihood is computed as the logarithm of the output probability obtained with Equation (2) above; in this case the GMM is represented, as in Equation (2), by mixture weight coefficients c_ms, mean vectors μ_ms, and covariance matrices U_ms. As shown in FIG. 5, the GMM likelihoods corresponding to speech features y_s and y_o are computed.

The GMM likelihood determination unit 70 determines whether the GMM likelihood is within a predetermined range and outputs the determination result (step S70). The predetermined range is, for example, the range from the minimum to the maximum of the GMM likelihood distribution shown in FIG. 5, that is, the range between the lower and upper bounds of the GMM likelihood over the training speech data. The GMM likelihood determination unit 70 can thus filter out frames affected by sudden noise or other conditions not covered by the training data.

If the GMM likelihood is within the predetermined range (Y in step S70), the speech recognition processing unit 80 obtains the acoustic likelihood corresponding to the speech feature O_t (step S801) and performs speech recognition based on that acoustic likelihood (step S802). If the GMM likelihood is outside the predetermined range (N in step S70), speech recognition is performed using the GMM likelihood in place of the acoustic likelihood (step S803).
The above operations are repeated until all frames have been processed (N in step S90). The control unit 90 controls the operation and the repetition of each unit of the speech recognition apparatus 100.
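The per-frame decision of steps S70 and S801 through S803 might be sketched as follows; this is an assumption-laden illustration rather than the patent's implementation, and the two callables are hypothetical stand-ins for the computations of the GMM likelihood calculation unit 60 and the speech recognition processing unit 80.

```python
def frame_acoustic_score(o_t, gmm_log_likelihood, acoustic_log_likelihood,
                         lower, upper):
    # gmm_log_likelihood / acoustic_log_likelihood: callables taking O_t and
    # returning a log-likelihood (hypothetical stand-ins for units 60 and 80).
    g = gmm_log_likelihood(o_t)                  # step S60
    if lower <= g <= upper:                      # step S70: within range?
        return acoustic_log_likelihood(o_t)      # steps S801-S802
    return g                                     # step S803: substitute GMM likelihood
```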

According to the speech recognition apparatus 100, the GMM likelihood obtained from the speech feature O_t and the GMM is used to judge whether the speech feature O_t deviates greatly from the set of features seen in training. When a speech feature O_t that is not contained in the set of training data, such as sudden noise, is input, the acoustic likelihood is replaced with the GMM likelihood before speech recognition is performed. Therefore, even when sudden noise or the like is input, the acoustic likelihood can be stabilized, and as a result misrecognition can be suppressed.

The predetermined range may be defined only by the lower bound of the GMM likelihood distribution of FIG. 5, or, as described above, by both the upper and lower bounds; either is acceptable. When the GMM likelihood determination unit 70 also checks the upper bound, the acoustic likelihood of a frame exceeding the upper bound is likewise replaced by its GMM likelihood. Because that GMM likelihood is obtained from a GMM with a wide distribution covering most phonemes, its value does not change drastically, and the likelihood therefore does not become unstable.

Although the predetermined range has been described as the range between the lower and upper bounds of the likelihood over all training speech features, the invention is not limited to this example. For example, treating the distribution of GMM likelihoods at acoustic model training time as a normal distribution, predetermined-range setting means 601 provided in the GMM likelihood calculation unit 60 may compute and set the predetermined range as μ ± 2σ (upper limit = μ + 2σ, lower limit = μ − 2σ) from the mean μ and standard deviation σ of the GMM likelihood obtained in advance. In this way the predetermined range can be set to an arbitrary range derived from the GMM likelihood values observed in training. Alternatively, an arbitrary predetermined range based on the mean μ and standard deviation σ of the GMM likelihood may be set in advance and recorded in the acoustic model parameter memory; in that case the predetermined-range setting means 601 may be omitted.
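A sketch of the predetermined-range setting means 601 under the normality assumption just described (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def set_predetermined_range(training_gmm_log_likelihoods, k=2.0):
    # Treat the training-time GMM likelihoods as roughly normal and use
    # mu +/- k*sigma as the predetermined range (k = 2 in the example above).
    mu = float(np.mean(training_gmm_log_likelihoods))
    sigma = float(np.std(training_gmm_log_likelihoods))
    return mu - k * sigma, mu + k * sigma   # (lower limit, upper limit)
```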

Upper/lower-limit setting means 701 may also be provided in the GMM likelihood determination unit 70 so that the GMM likelihood of a frame outside the predetermined range is set to the predetermined upper or lower limit; in other words, the GMM likelihood may be clipped to those limits. Clipping narrows the range of likelihood values further.
FIG. 6 shows the operation flow of a speech recognition apparatus 100' that clips GMM likelihoods outside the predetermined range to the upper and lower limits. Except for the GMM likelihood determination step (step S70), it is the same as the speech recognition apparatus 100.
The GMM likelihood determination unit 70' determines whether the GMM likelihood is within the predetermined range (step S701). If it is within the range (Y in step S701), the operations from step S801 onward are performed, in which the acoustic likelihood is obtained from the speech feature and speech recognition is carried out.

If the GMM likelihood is outside the predetermined range, it is determined whether it is at or below the lower limit (step S702) or at or above the upper limit (step S704). Based on the result, the upper/lower-limit setting means 701 sets the GMM likelihood to the lower or upper limit and outputs it to the speech recognition processing unit 80 (steps S703, S705).
Although FIG. 6 shows an example in which both the upper and lower limits are applied, only one of them may be applied instead.
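The clipping performed by the upper/lower-limit setting means 701 in steps S702 through S705 reduces to a simple clamp; a minimal sketch (illustrative only):

```python
def clip_gmm_likelihood(g, lower, upper):
    if g <= lower:      # steps S702-S703: clip to the lower limit
        return lower
    if g >= upper:      # steps S704-S705: clip to the upper limit
        return upper
    return g            # within range: leave unchanged
```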

FIG. 7 shows an example functional configuration of the acoustic model creation apparatus 200 of the present invention, and FIG. 8 shows its operation flow. The acoustic model creation apparatus 200 comprises a feature analysis unit 20, a GMM likelihood calculation unit 60, an acoustic model parameter memory 40, a GMM likelihood determination unit 70, a learning processing unit 90, a post-learning acoustic model memory 95, and a control unit 96. The acoustic model creation apparatus 200 is realized by loading a predetermined program into a computer composed of, for example, a ROM, a RAM, and a CPU, and having the CPU execute the program.

The feature analysis unit 20, the GMM likelihood calculation unit 60, and the GMM likelihood determination unit 70 of the acoustic model creation apparatus 200 are the same as those of the speech recognition apparatus 100.
The learning processing unit 90 takes the training labels, the speech features, the GMM likelihood, and the determination result as input; for frames whose GMM likelihood is within the predetermined range it performs acoustic model training by associating the speech features with the training labels (step S90). Frames outside the predetermined range (N in step S70) are excluded from the statistics calculation of the acoustic model, discarded as abnormal frames, and processing moves on to the next frame (step S98).
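The frame filtering performed by the learning processing unit 90 might look as follows; this is an illustrative sketch only, with a callable standing in for the GMM likelihood computation of unit 60, and the kept frames would then feed the usual accumulation of training statistics.

```python
def filter_training_frames(frames, labels, gmm_log_likelihood, lower, upper):
    # gmm_log_likelihood: callable returning the frame's GMM log-likelihood
    # (hypothetical stand-in for the GMM likelihood calculation unit 60).
    kept = []
    for o_t, label in zip(frames, labels):
        g = gmm_log_likelihood(o_t)             # step S60
        if lower <= g <= upper:                 # step S70
            kept.append((o_t, label))           # step S90: use for training statistics
        # else: step S98 -- discard the abnormal frame
    return kept
```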

The above operations are repeated until all frames have been processed (N in step S97). The control unit 96 controls the operation and the repetition of each unit of the acoustic model creation apparatus 200. When the predetermined range is set based on the mean μ and standard deviation σ of the GMM likelihood, the mean μ and standard deviation σ updated by training are recorded in the post-learning acoustic model memory 95; the learning processing unit 90 may also update the predetermined range in conjunction with the updated mean μ and standard deviation σ and record that value in the post-learning acoustic model memory 95. Likewise, when the predetermined range is set based on the upper and lower limits of the GMM likelihood, those limits are recorded in the post-learning acoustic model memory 95, and the learning processing unit 90 may update the predetermined range in conjunction with the updated limits and record that value in the post-learning acoustic model memory 95.

Since the acoustic model creation apparatus 200 also includes the GMM likelihood calculation unit 60 and the GMM likelihood determination unit 70 and trains the acoustic model while excluding frames outside the predetermined range, the acoustic model can be created without being affected by sudden noise or the like. This makes it possible to create a cleaner, more accurate acoustic model.

According to the speech recognition apparatus 100 described above, the GMM likelihood obtained from the GMM, which covers most phonemes and has the widest variance, is substituted for the acoustic likelihood of frames outside the predetermined range, so the acoustic likelihood does not change greatly even when sudden noise or the like is input; that is, the acoustic likelihood is stabilized. Moreover, since the predetermined range is determined from the GMM likelihood of the GMM at training time, no development data are needed to determine that range.
According to the acoustic model creation apparatus 200, abnormal frames are removed during training, so the possibility of abnormal distributions being generated is reduced, which enables the creation of a more accurate acoustic model.

The method and apparatus of the present invention are not limited to the embodiments described above and can be modified as appropriate without departing from the spirit of the invention. For example, the GMM may be trained on data that includes silence, or only speech features may be recorded so that only speech segments are used for training. When only speech segments are used for training, an acoustic model that is also used for purposes such as speech segment detection in the preprocessing of speech recognition can be used as-is in the acoustic model parameter memory 40.

The processes described in the above method and apparatus are not only executed in time series in the order described, but may also be executed in parallel or individually according to the processing capability of the apparatus executing the processes or as needed.
When the processing means of the above apparatuses are realized by a computer, the processing content of the functions each apparatus should have is described by a program, and the processing means of each apparatus are realized on the computer by executing this program on the computer.

The program describing this processing content can be recorded on a computer-readable recording medium. The computer-readable recording medium may be of any kind, for example a magnetic recording device, an optical disc, a magneto-optical recording medium, or a semiconductor memory. Specifically, for example, a hard disk device, a flexible disk, or a magnetic tape can be used as a magnetic recording device; a DVD (Digital Versatile Disc), DVD-RAM (Random Access Memory), CD-ROM (Compact Disc Read Only Memory), or CD-R (Recordable)/RW (ReWritable) as an optical disc; an MO (Magneto Optical disc) as a magneto-optical recording medium; and an EEP-ROM (Electronically Erasable and Programmable Read Only Memory) as a semiconductor memory.

The program is distributed, for example, by selling, transferring, or lending a portable recording medium such as a DVD or CD-ROM on which the program is recorded. The program may also be distributed by storing it in a storage device of a server computer and transferring it from the server computer to other computers over a network.
Each means may be configured by executing a predetermined program on a computer, or at least part of the processing content may be realized in hardware.

Claims (10)

1. A speech recognition apparatus comprising:
a feature analysis unit that analyzes speech features of an input speech digital signal frame by frame;
a GMM likelihood calculation unit that matches a GMM (Gaussian Mixture Model) against the speech features and calculates a GMM likelihood for each frame;
a GMM likelihood determination unit that determines whether the GMM likelihood is within a predetermined range and outputs a determination result; and
a speech recognition processing unit that takes the speech features, the GMM likelihood, and the determination result as input, performs speech recognition processing for frames within the predetermined range based on an acoustic likelihood corresponding to the speech features, and performs speech recognition processing for frames outside the predetermined range using an acoustic likelihood based on the GMM likelihood.
2. The speech recognition apparatus according to claim 1, wherein the predetermined range is the likelihood distribution range of the GMM (Gaussian Mixture Model) of the trained acoustic model.
3. The speech recognition apparatus according to claim 1, wherein the GMM likelihood calculation unit comprises predetermined-range setting means for setting the predetermined range using as input the mean μ of the GMM likelihood and the standard deviation σ of the GMM likelihood.
4. A speech recognition method comprising:
a feature analysis step in which a feature analysis unit analyzes speech features of an input speech digital signal frame by frame;
a GMM likelihood calculation step in which a GMM likelihood calculation unit matches a GMM (Gaussian Mixture Model) against the speech features and calculates a GMM likelihood for each frame;
a GMM likelihood determination step in which a GMM likelihood determination unit determines whether the GMM likelihood is within a predetermined range and outputs a determination result; and
a speech recognition processing step in which a speech recognition processing unit takes the speech features, the GMM likelihood, and the determination result as input, performs speech recognition processing for frames within the predetermined range based on an acoustic likelihood corresponding to the speech features, and performs speech recognition processing for frames outside the predetermined range using an acoustic likelihood based on the GMM likelihood.
5. An acoustic model creation apparatus comprising:
a feature analysis unit that analyzes speech features of an input speech digital signal frame by frame;
a GMM likelihood calculation unit that matches a GMM (Gaussian Mixture Model) against the speech features and calculates a GMM likelihood for each frame;
a GMM likelihood determination unit that determines whether the GMM likelihood is within a predetermined range and outputs the determination result and the GMM likelihood; and
a learning processing unit that takes training labels, the speech features, the GMM likelihood, and the determination result as input, performs acoustic model training based on the speech features for frames within the predetermined range, and generates a trained acoustic model while excluding frames outside the predetermined range from the statistics calculation of the acoustic model.
6. The acoustic model creation apparatus according to claim 5, wherein the predetermined range is the likelihood distribution range of the GMM (Gaussian Mixture Model) of the trained acoustic model.
7. The acoustic model creation apparatus according to claim 5, wherein the GMM likelihood calculation unit comprises predetermined-range setting means for setting the predetermined range using as input the mean μ of the GMM likelihood and the standard deviation σ of the GMM likelihood.
8. An acoustic model creation method comprising:
a feature analysis step in which a feature analysis unit analyzes speech features of an input speech digital signal frame by frame;
a GMM likelihood calculation step in which a GMM (Gaussian Mixture Model) is matched against the speech features and a GMM likelihood is calculated for each frame;
a GMM likelihood determination step in which a GMM likelihood determination unit determines whether the GMM likelihood is within a predetermined range and outputs the determination result and the GMM likelihood; and
a learning processing step in which a learning processing unit takes training labels, the speech features, the GMM likelihood, and the determination result as input, performs acoustic model training based on the speech features for frames within the predetermined range, and generates a trained acoustic model while excluding frames outside the predetermined range from the statistics calculation of the acoustic model.
9. An apparatus program for causing a computer to function as the apparatus according to any one of claims 1 to 3 or any one of claims 5 to 7.
10. A computer-readable recording medium on which the apparatus program according to claim 9 is recorded.
JP2009138987A 2009-06-10 2009-06-10 Speech recognition device, acoustic model creation device, method thereof, program, and recording medium Expired - Fee Related JP5089651B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009138987A JP5089651B2 (en) 2009-06-10 2009-06-10 Speech recognition device, acoustic model creation device, method thereof, program, and recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2009138987A JP5089651B2 (en) 2009-06-10 2009-06-10 Speech recognition device, acoustic model creation device, method thereof, program, and recording medium

Publications (2)

Publication Number Publication Date
JP2010286586A JP2010286586A (en) 2010-12-24
JP5089651B2 true JP5089651B2 (en) 2012-12-05

Family

ID=43542345

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2009138987A Expired - Fee Related JP5089651B2 (en) 2009-06-10 2009-06-10 Speech recognition device, acoustic model creation device, method thereof, program, and recording medium

Country Status (1)

Country Link
JP (1) JP5089651B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014043476A1 (en) * 2012-09-14 2014-03-20 Dolby Laboratories Licensing Corporation Multi-channel audio content analysis based upmix detection
CN102945670B (en) * 2012-11-26 2015-06-03 河海大学 Multi-environment characteristic compensation method for voice recognition system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003022093A (en) * 2001-07-09 2003-01-24 Nippon Hoso Kyokai <Nhk> Method, device, and program for voice recognition
JP4860962B2 (en) * 2004-08-26 2012-01-25 旭化成株式会社 Speech recognition apparatus, speech recognition method, and program
JP2007072143A (en) * 2005-09-07 2007-03-22 Advanced Telecommunication Research Institute International Voice recognition device and program
JP2008257042A (en) * 2007-04-06 2008-10-23 Nippon Telegr & Teleph Corp <Ntt> Speech signal level display device and its method

Also Published As

Publication number Publication date
JP2010286586A (en) 2010-12-24

Similar Documents

Publication Publication Date Title
US7693713B2 (en) Speech models generated using competitive training, asymmetric training, and data boosting
US8019602B2 (en) Automatic speech recognition learning using user corrections
US7689419B2 (en) Updating hidden conditional random field model parameters after processing individual training samples
US11527259B2 (en) Learning device, voice activity detector, and method for detecting voice activity
JP2010032792A (en) Speech segment speaker classification device and method therefore, speech recognition device using the same and method therefore, program and recording medium
JP6452591B2 (en) Synthetic voice quality evaluation device, synthetic voice quality evaluation method, program
JP2010230868A (en) Pattern recognition device, pattern recognition method, and program
US20040006470A1 (en) Word-spotting apparatus, word-spotting method, and word-spotting program
WO2018163279A1 (en) Voice processing device, voice processing method and voice processing program
US20090094022A1 (en) Apparatus for creating speaker model, and computer program product
JP5089651B2 (en) Speech recognition device, acoustic model creation device, method thereof, program, and recording medium
JP4960845B2 (en) Speech parameter learning device and method thereof, speech recognition device and speech recognition method using them, program and recording medium thereof
JP6216809B2 (en) Parameter adjustment system, parameter adjustment method, program
JP6027754B2 (en) Adaptation device, speech recognition device, and program thereof
JP5191500B2 (en) Noise suppression filter calculation method, apparatus, and program
JP5852550B2 (en) Acoustic model generation apparatus, method and program thereof
JP5914119B2 (en) Acoustic model performance evaluation apparatus, method and program
JP5427140B2 (en) Speech recognition method, speech recognition apparatus, and speech recognition program
JP2011039434A (en) Speech recognition device and feature value normalization method therefor
JP4922377B2 (en) Speech recognition apparatus, method and program
JP4729078B2 (en) Voice recognition apparatus and method, program, and recording medium
JP5538350B2 (en) Speech recognition method, apparatus and program thereof
JP4981850B2 (en) Voice recognition apparatus and method, program, and recording medium
JP5166195B2 (en) Acoustic analysis parameter generation method and apparatus, program, and recording medium
JP2010250161A (en) Difference-utilizing type identification-learning device and method therefor, and program

Legal Events

Date Code Title Description

2011-07-20  RD03  Notification of appointment of power of attorney (JAPANESE INTERMEDIATE CODE: A7423)
2011-10-12  A621  Written request for application examination (JAPANESE INTERMEDIATE CODE: A621)
2012-08-27  A977  Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007)
            TRDD  Decision of grant or rejection written
2012-09-04  A01   Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01)
2012-09-11  A61   First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61)
            FPAY  Renewal fee payment (event date is renewal date of database); payment until 2015-09-21; year of fee payment: 3
            R150  Certificate of patent or registration of utility model; ref document number: 5089651; country of ref document: JP (JAPANESE INTERMEDIATE CODE: R150)
            S531  Written request for registration of change of domicile (JAPANESE INTERMEDIATE CODE: R313531)
            R350  Written notification of registration of transfer (JAPANESE INTERMEDIATE CODE: R350)
            LAPS  Cancellation because of no payment of annual fees