JPH0357777B2 - - Google Patents

Info

Publication number
JPH0357777B2
JPH0357777B2 (application JP59130876A / JP13087684A)
Authority
JP
Japan
Prior art keywords
training
sounds
sound
speech
detector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
JP59130876A
Other languages
Japanese (ja)
Other versions
JPS6111021A (en)
Inventor
Toyozo Sugimoto
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Institute of Advanced Industrial Science and Technology AIST
Original Assignee
Agency of Industrial Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency of Industrial Science and Technology filed Critical Agency of Industrial Science and Technology
Priority to JP59130876A priority Critical patent/JPS6111021A/en
Publication of JPS6111021A publication Critical patent/JPS6111021A/en
Publication of JPH0357777B2 publication Critical patent/JPH0357777B2/ja
Granted legal-status Critical Current


Description

[Detailed Description of the Invention]

Industrial Field of Application

The present invention relates to a speech training device for persons with speech impairments, and in particular to a speech training device that uses articulatory information from other speech organs in addition to the acoustic speech wave.

Structure of the Conventional Example and Its Problems

It is estimated that several hundred thousand people in Japan have speech disabilities, caused by hearing impairment and other conditions, that prevent them from speaking freely. Because of the shortage of speech trainers and the long duration of training, it is difficult to say that sufficient training time is being secured. An effective speech training device is therefore desired, but no training device has yet been obtained that can classify and recognize spoken sounds and indicate whether they are correct.

A device closely related to speech training devices is the speech recognition device, which has been the subject of active research and development in recent years. Most ordinary speech recognition devices use only the information in the speech wave, but from the speech wave alone it is difficult to accurately detect and mutually discriminate consonants such as /p, t, k/, /b, d, g/, and /m, n, η/. For this reason, a pronunciation feature extraction device has been proposed (see Japanese Patent Application Laid-Open No. 58-150997) that uses information such as vocal cord vibration, nasal vibration, oral airflow, and the presence or absence of tongue-palate contact in addition to the speech wave.

Fig. 1 shows the decision criteria for phoneme classification in this conventional pronunciation feature extraction device. In Fig. 1, the voiced sounds /b, d, g, m, n, η/ have vocal cord vibration (+), while the voiceless sounds /p, t, k, h/ do not (−). The nasals /m, n, η/ have nasal vibration (+), while /p, t, k, b, d, g, h/ do not (−). The plosives /p, t, k, b, d, g/ and the fricative /h/ show oral airflow (+), while /m, n, η/ do not (−). Further, the rate of change of the oral airflow velocity is large (+) for /p, t, b, d/ but small (−) for /k, g, h/. As for palatal contact, /p, b, m, h/ show no closure pattern, /t, d, n/ show a front-tongue closure pattern, and /k, g, η/ show a back-tongue closure pattern. It is therefore assumed that phonemes can be classified according to Fig. 1.
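The Fig. 1 criteria amount to a lookup from five articulatory features to a phoneme. The sketch below encodes the criteria as just described; the feature names, their ordering, and the `ng` spelling for /η/ are my own choices for illustration, not notation from the patent.

```python
# Feature order: (voicing, nasal vibration, oral airflow,
#                 airflow rate-of-change, palatal contact pattern)
# "+" / "-" follow the Fig. 1 convention; contact is "none"/"front"/"back".
FIG1_CRITERIA = {
    "p":  ("-", "-", "+", "+", "none"),
    "t":  ("-", "-", "+", "+", "front"),
    "k":  ("-", "-", "+", "-", "back"),
    "b":  ("+", "-", "+", "+", "none"),
    "d":  ("+", "-", "+", "+", "front"),
    "g":  ("+", "-", "+", "-", "back"),
    "m":  ("+", "+", "-", "-", "none"),
    "n":  ("+", "+", "-", "-", "front"),
    "ng": ("+", "+", "-", "-", "back"),   # /η/
    "h":  ("-", "-", "+", "-", "none"),
}

def classify(features):
    """Return the phoneme whose Fig. 1 pattern matches exactly, else None."""
    for phoneme, pattern in FIG1_CRITERIA.items():
        if pattern == features:
            return phoneme
    return None
```

The problems listed next in the text arise precisely because real detector output does not always reproduce these idealized patterns.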

However, experiments show that with the above arrangement:

(1) the voiced plosives /b, d, g/ have a strong tendency toward nasalization, so nasal vibration may register as present (+);

(2) the back-tongue closure in /k, g, η/ may, depending on the individual, not be detected at all, in which case no closure is registered;

(3) for /k, g/, whose point of plosive articulation is in the back-tongue region, detection of the oral airflow velocity is somewhat less stable than for /p, t, b/, and the pair [velocity, rate of change] may vary among [+, +], [+, −], [−, −], and so on. The same tendency, though less frequent, is seen for /d/.

Because of these phenomena, the criteria of Fig. 1 produce confusions such as /p/ with /k/, /d/ with /n/, and /g/ with /η/ and /m/, and it was found that they are difficult to use as-is for speech training.

Object of the Invention

The object of the present invention is to overcome these difficulties in applying the conventional speech recognition device and to provide a speech training device that, in keeping with actual speech training practice, can classify and recognize spoken sounds and indicate whether they are correct.

Structure of the Invention

The present invention is a speech training device comprising detectors for the speech wave, nasal vibration, laryngeal vibration, plosiveness, and closed contact between the palate and the tongue, together with a classification/recognition unit and a training-sound designation unit. By performing classification that attends only to phonemes phonologically similar to the designated training sound, it eliminates the difficulties of speech recognition while realizing classification suited to actual training.

Description of an Embodiment

Fig. 2 is a block diagram showing one embodiment of the present invention. In Fig. 2, 1 is a speech wave detector for obtaining information on the presence and onset of sound; 2 is a nasal vibration detector attached near the center of the nasal wall for obtaining nasality information; 3 is a laryngeal vibration detector attached near the vocal cords for obtaining voiced/voiceless information; 4 is a plosiveness detector that obtains burst information, for example by detecting the rate of change of flow velocity from an oral airflow detector placed in front of the mouth; 5 is a tongue closure detector that obtains closed-contact information from a palatal contact detector mounted on the palate inside the mouth in order to detect tongue-palate contact; 6 is a classification/recognition unit that classifies the spoken sound from the information detected by 1 to 5; 7 is a correctness display unit that shows the result from the classification/recognition unit; 8 is a training-sound designation unit that accepts the designation of the training sound prior to training; and 9 is a control unit that controls the classification/recognition unit 6, the correctness display unit 7, and the training-sound designation unit 8.

The operation of the classification/recognition unit 6 in the embodiment thus constructed will now be explained in detail with examples.

Speech training does not practice the various phonemes at random; it proceeds systematically, taking into account the similarity, contrast, and difficulty of the phonemes. Fig. 3 shows one example arrangement of the pa-ba-ma chart, the most basic and important training target in speech training; training starts from /p/ and, as proficiency grows, moves to the adjacent sounds along the rows and columns.

Fig. 3 shows common properties along its rows and columns. In terms of place of articulation, /p, b, m/ are bilabials, /t, d, n/ are alveolars, and /k, g, η/ are velars. In terms of sound type, /p, t, k/ are voiceless plosives, /b, d, g/ are voiced plosives, and /m, n, η/ are nasals. As these facts suggest, in actual speech training as well, confusions arise mostly between sounds adjacent along a row or column, and confusions between distant sounds are rare. For example, an attempt at /p/ may come out as /t/ or /b/, but it very rarely comes out as /k/, /m/, or any other sound. Hence, in keeping with actual training, when /p/ is being trained it suffices to classify the utterance as /p/ (correct), /t/ or /b/ (an error toward an adjacent sound), or /?/ (any other error), that is, into the classes /p/, /t/, /b/, /?/, and fully effective training can be conducted without performing the difficult discrimination between /p/ and /k/.
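The restriction of the candidate set to row and column neighbours described above can be sketched as follows. This is a hypothetical illustration: the chart layout follows the Fig. 3 description (rows are voiceless plosives, voiced plosives, nasals), with `ng` standing in for /η/.

```python
# Pa-ba-ma chart: columns are places of articulation
# (bilabial, alveolar, velar); rows are manner/voicing classes.
PABAMA = [["p", "t", "k"],
          ["b", "d", "g"],
          ["m", "n", "ng"]]

def candidates(target):
    """The target sound plus its row/column neighbours in the chart.
    Anything outside this set would be reported as '?' (other)."""
    for r, row in enumerate(PABAMA):
        if target in row:
            c = row.index(target)
            cands = {target}
            if c > 0: cands.add(row[c - 1])          # left neighbour
            if c < 2: cands.add(row[c + 1])          # right neighbour
            if r > 0: cands.add(PABAMA[r - 1][c])    # neighbour above
            if r < 2: cands.add(PABAMA[r + 1][c])    # neighbour below
            return cands
    raise ValueError(f"{target!r} is not in the chart")
```

For /p/ this yields {p, t, b}, matching the four-way /p/, /t/, /b/, /?/ classification described in the text; the further lumping of some neighbours (e.g. /p, k/ during /t/ training) is a refinement on top of this.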

Similarly, if the sound being trained is known, training is possible without discriminating /g/ from /m/ or /η/ from /m/.

Next, /d, g/ have a strong tendency toward nasalization and can at times show nasal vibration comparable to that of a nasal, which had been a source of errors. Experiments showed, however, that in nasalized /d, g/ strong nasal vibration very rarely continues for 100 ms or more, whereas in nasals strong nasal vibration continues for 120 ms or more in almost all cases. By using this fact, confusion of /d/ with /n/ and of /g/ with /η/ can be almost entirely eliminated.
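The reported durations suggest a simple duration discriminator. The 100 ms and 120 ms figures come from the text; the function itself and the treatment of the in-between band as "uncertain" are my own illustration.

```python
NASALIZED_MAX_MS = 100  # nasalized /d, g/ rarely sustain strong nasal vibration longer
NASAL_MIN_MS = 120      # true nasals almost always sustain it at least this long

def nasal_decision(strong_nasal_ms):
    """Classify a segment showing strong nasal vibration by its duration."""
    if strong_nasal_ms <= NASALIZED_MAX_MS:
        return "nasalized plosive"  # e.g. a nasalized /d/ or /g/
    if strong_nasal_ms >= NASAL_MIN_MS:
        return "nasal"              # e.g. /n/ or /ng/
    return "uncertain"              # the 100-120 ms band the text leaves open
```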

Based on this approach, Fig. 4 shows, for the training of each sound in Fig. 3, the possible decision results and the combination of separation functions used to reach them.

In training /t/, /p/ and /k/ are not discriminated from each other; the decision result is simply /p, k/. This is because, as noted above, training can still be conducted with /p, k/ lumped together as an error toward adjacent sounds; because, from the standpoint of training, it is preferable not to issue a wrong correct/incorrect judgment even at the cost of classification performance; and because the classification/recognition unit can then be realized more simply. The same applies to /b, g/ in training /d/ and to /m, η/ in training /n/.

Fig. 5 illustrates the decision criteria for realizing each separation function of Fig. 4. For example, the voiceless plosive /p/ has no nasal vibration, is plosive, is voiceless, and has no closed tongue contact. Separation between columns uses tongue-closure contact information: the condition separating /p, b, m/ from /t, d, n/ is the absence versus presence of tongue-closure contact, and the condition separating /t, d, n/ from /k, g, η/ is its presence versus absence. The condition separating the row /p, t, k/ from /b, d, g/ is voiceless versus voiced. The condition separating the row /b, d, g/ from /m, n, η/ is a nasalization tendency versus strong, long-lasting nasal vibration.

Each separation function also takes the characteristics that both candidates share as its common basic conditions; when the common basic conditions are not satisfied, the classification result is "?" (other).

Fig. 6 summarizes, on the basis of Fig. 5, the common basic conditions and the separating condition for each separation function.
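A hedged sketch of one such separation function, p/t, in the style just described: both phones share the conditions no nasal vibration, burst present, and voiceless, and the separating condition is tongue-palate closure. The field names and the exact condition set are assumptions inferred from the Fig. 5 description of /p/ and /t/, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    nasal: bool    # nasal vibration present (detector 2)
    plosive: bool  # burst detected from airflow rate of change (detector 4)
    voiced: bool   # laryngeal vibration present (detector 3)
    closure: bool  # tongue-palate closed contact (detector 5)

def separate_p_t(d: Detection):
    """p/t separation: check the common basic conditions first, then the
    separating condition. Returns 'p', 't', or ('?', reason)."""
    if d.nasal:
        return ("?", "nasal vibration present")
    if not d.plosive:
        return ("?", "no burst")
    if d.voiced:
        return ("?", "voiced")
    return "t" if d.closure else "p"
```

A violated common condition short-circuits to "?" with its reason, mirroring how the embodiment reports a violation message rather than forcing a p-versus-t decision.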

The operation of the speech training device of this embodiment, equipped with the classification/recognition unit constructed as above, will now be explained with reference to a flowchart.

Fig. 7 shows an example of training /p/. The explanation refers to Figs. 2 and 7.

(a) First, prior to training, the training-sound designation unit 8 designates which sound is to be trained. If /p/ is designated, the classification/recognition unit 6 is notified via the control unit 9 that training of /p/ is about to begin.

(b) When the training sound is then uttered, the classification/recognition unit 6 receives the information from detectors 1 to 5 and begins the classification of /p/.

(c) In the classification of /p/, the p/t separation function is invoked first, and the basic conditions and separating condition shown in Fig. 6 are examined. If a basic condition is violated, the violated condition is retained and processing proceeds to the p/b separation of step (d). Otherwise the separating condition is examined: if there is tongue-closure contact, the utterance is judged /t/, and the correctness display unit 7 shows, via the control unit 9, the judged sound /t/ together with the reason "tongue closure present" ("shita heisa ari").

(d) In the p/b separation function, the conditions of Fig. 6 are examined. If a basic condition is violated, the violation is retained and processing proceeds to step (f). The separating condition checks for voicing: if the utterance is voiced, it is judged /b/, and the correctness display unit 7 shows, via the control unit 9, the judged sound /b/ together with the reason "voiced" ("yūsei").

(e) It is then checked whether the p/t separation function recorded any violated condition; if so, processing proceeds to step (f). If there was no violation, the utterance is correct, and /p/ is shown on the correctness display unit 7.

(f) Utterances that violated a basic condition are handled in this processing step. The correctness display unit 7 shows /?/ together with the content of the violation. The separation functions invoked in training /p/ are the p/t and p/b separation functions, and their basic-condition violations are of four kinds: "nasal sound" ("bion"), "no plosiveness" ("haretsusei nashi"), "voiced" ("yūsei"), and "tongue closure present" ("shita heisa ari").
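Steps (a) through (f) can be sketched as a single decision routine for /p/ training. The detector names, the English message strings, and the precise ordering of checks are illustrative assumptions based on one reading of the flowchart, not the patent's implementation.

```python
def train_p(d):
    """One /p/ training trial.
    d: dict of detector booleans {'nasal', 'plosive', 'voiced', 'closure'}.
    Returns (judged sound, reason(s))."""
    # (c) p/t separation. Common basic conditions: no nasal vibration,
    # burst present, voiceless. Separating condition: tongue closure.
    pt_viol = []
    if d["nasal"]:       pt_viol.append("nasal sound")
    if not d["plosive"]: pt_viol.append("no plosiveness")
    if d["voiced"]:      pt_viol.append("voiced")
    if not pt_viol and d["closure"]:
        return ("t", "tongue closure present")
    # (d) p/b separation. Common basic conditions: no nasal vibration,
    # burst present, no tongue closure. Separating condition: voicing.
    pb_viol = []
    if d["nasal"]:       pb_viol.append("nasal sound")
    if not d["plosive"]: pb_viol.append("no plosiveness")
    if d["closure"]:     pb_viol.append("tongue closure present")
    if not pb_viol and d["voiced"]:
        return ("b", "voiced")
    # (e)/(f): correct /p/ only if no basic condition was violated anywhere.
    all_viol = pt_viol + [v for v in pb_viol if v not in pt_viol]
    if all_viol:
        return ("?", all_viol)
    return ("p", "correct")
```

Note that a voiced utterance violates a p/t basic condition but is still caught by the p/b separating condition and reported as /b/, matching step (d) in the text.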

Fig. 8 shows the kinds of basic-condition violation messages presented for each training sound.

Effects of the Invention

The present invention comprises a classification/recognition unit that classifies the spoken sound from the outputs of the detectors, and this unit selects a specific separation function according to the training sound designated by the training-sound designation unit. Compared with recognizing hard-to-recognize speech sounds without knowing what will be input, it can classify and recognize them accurately and simply and display whether they are correct, so a high-performance speech training device can be provided at low cost, and its practical value is high.

[Brief Description of the Drawings]

Fig. 1 is a diagram showing the decision criteria for phoneme classification in the conventional pronunciation feature extraction device; Fig. 2 is a block diagram showing a speech training device in one embodiment of the present invention; Fig. 3 is an arrangement diagram showing an example of the pa-ba-ma chart; Fig. 4 is a correspondence diagram showing, for each training sound in the embodiment, the kinds of decision results and the kinds of separation functions used to obtain them; Fig. 5 is a diagram showing the decision criteria for each separation function; Fig. 6 is a diagram organizing the common basic conditions and separating conditions on the basis of Fig. 5; Fig. 7 is a flowchart showing an operation example of the embodiment; and Fig. 8 is a correspondence diagram showing the kinds of violation messages for each training sound.

1: speech wave detector; 2: nasal vibration detector; 3: laryngeal vibration detector; 4: plosiveness detector; 5: tongue closure detector; 6: classification/recognition unit; 7: correctness display unit; 8: training-sound designation unit; 9: control unit.

Claims (1)

[Claims]

1. A speech training device comprising: a speech wave detector; a nasal vibration detector; a laryngeal vibration detector; a plosiveness detector; a tongue closure detector that detects closed-contact information between the tongue and the hard palate; a classification/recognition unit that has a plurality of separation functions corresponding to training sounds and classifies the spoken sound from the outputs of said detectors; a training-sound designation unit that designates the training sound; and a correctness display unit that shows the result of the classification, wherein the classification/recognition unit selects a specific separation function according to the training sound designated by the training-sound designation unit and performs classification therewith.
JP59130876A 1984-06-26 1984-06-26 Speaking exercise apparatus Granted JPS6111021A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP59130876A JPS6111021A (en) 1984-06-26 1984-06-26 Speaking exercise apparatus


Publications (2)

Publication Number Publication Date
JPS6111021A JPS6111021A (en) 1986-01-18
JPH0357777B2 true JPH0357777B2 (en) 1991-09-03

Family

ID=15044757

Family Applications (1)

Application Number Title Priority Date Filing Date
JP59130876A Granted JPS6111021A (en) 1984-06-26 1984-06-26 Speaking exercise apparatus

Country Status (1)

Country Link
JP (1) JPS6111021A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS55149993A (en) * 1979-05-12 1980-11-21 Rion Co Electroparatograph display system
JPS58150997A (en) * 1982-03-03 1983-09-07 工業技術院長 Speech feature extractor




Legal Events

Date Code Title Description
EXPY Cancellation because of completion of term