JPS6111021A - Speaking exercise apparatus - Google Patents

Speaking exercise apparatus

Info

Publication number
JPS6111021A
JPS6111021A (application numbers JP59130876A / JP13087684A)
Authority
JP
Japan
Prior art keywords
training
sound
speech
detector
classification recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP59130876A
Other languages
Japanese (ja)
Other versions
JPH0357777B2 (en)
Inventor
杉本 豊三
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Institute of Advanced Industrial Science and Technology AIST
Original Assignee
Agency of Industrial Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency of Industrial Science and Technology filed Critical Agency of Industrial Science and Technology
Priority to JP59130876A priority Critical patent/JPS6111021A/en
Publication of JPS6111021A publication Critical patent/JPS6111021A/en
Publication of JPH0357777B2 publication Critical patent/JPH0357777B2/ja
Granted legal-status Critical Current

Abstract

(57) [Abstract] This publication contains application data from before electronic filing, so no abstract data is recorded.

Description

DETAILED DESCRIPTION OF THE INVENTION

Industrial Field of Application: The present invention relates to a speech training device for persons with speech impairments, and more particularly to a speech training device that uses not only the speech wave but also articulatory information from other speech organs.

Structure of the Conventional Example and Its Problems: The number of people in Japan whose speech is impaired by hearing loss or other causes is estimated at several hundred thousand, yet because of a shortage of speech therapists and the long duration of training, it is difficult to say that sufficient training time is being secured. The development of an effective speech training device is therefore desired, but no training device has yet been obtained that can classify and recognize spoken sounds and display whether they are correct.

A device closely related to speech training devices is the speech recognition device, which has been actively researched and developed in recent years. Most ordinary speech recognition devices use only the information in the speech wave, but from the speech wave alone it is difficult to accurately detect and mutually distinguish consonants such as /p, t, k/, /b, d, g/, and /m, n, ŋ/. For this reason, a pronunciation feature extraction device has been proposed (see Japanese Unexamined Patent Publication No. 58-150997) that uses, in addition to the speech wave, information such as vocal-cord vibration, nasal vibration, oral airflow, and the presence or absence of contact between the tongue and the palate.

FIG. 1 shows the decision criteria by which the conventional pronunciation feature extraction device performs phoneme classification. In FIG. 1, the voiced sounds /b, d, g, m, n, ŋ/ have vocal-cord vibration (present) and the voiceless sounds /p, t, k, h/ do not (absent). The nasals /m, n, ŋ/ have nasal vibration (present), while /p, t, k, b, d, g, h/ do not (absent). The plosives /p, t, k, b, d, g/ and the fricative /h/ show oral airflow (present), while /m, n, ŋ/ do not (absent). Further, the rate of change of the oral airflow velocity is large (present) for /p, t, b, d/ and small (absent) for /k, g, h/. As for palatal contact, /p, b, m, h/ form no closure pattern, /t, d, n/ form an anterior closure pattern, and /k, g, ŋ/ form a posterior closure pattern. Phonemes are thus held to be classifiable according to FIG. 1.
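Read as data, the FIG. 1 criteria amount to a lookup from binary articulatory features to a phoneme. The sketch below encodes the five features exactly as described in the paragraph above; the Python names and the tuple encoding are ours, not the patent's, and "ng" stands for /ŋ/:

```python
# Feature tuple: (vocal-cord vibration, nasal vibration, oral airflow,
#                 large airflow rate-of-change, palatal closure pattern)
# Values follow the FIG. 1 description; closure is "none"/"front"/"back".
FIG1_FEATURES = {
    "p":  (False, False, True,  True,  "none"),
    "t":  (False, False, True,  True,  "front"),
    "k":  (False, False, True,  False, "back"),
    "h":  (False, False, True,  False, "none"),
    "b":  (True,  False, True,  True,  "none"),
    "d":  (True,  False, True,  True,  "front"),
    "g":  (True,  False, True,  False, "back"),
    "m":  (True,  True,  False, False, "none"),
    "n":  (True,  True,  False, False, "front"),
    "ng": (True,  True,  False, False, "back"),
}

def classify(features):
    """Return the phonemes whose FIG. 1 feature row matches exactly."""
    return [p for p, row in FIG1_FEATURES.items() if row == features]
```

Under this table every phoneme has a distinct feature row, which is precisely the assumption that the experiments described next show to be unreliable.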

However, experiments show that with the above configuration:

(1) The voiced plosives /b, d, g/ have a strong tendency toward nasalization, so nasal vibration may be detected as present.

(2) The posterior closure of /k, g, ŋ/ may, owing to individual differences, not be detected at all, in which case no closure is registered.

(3) For /k, g/, whose plosive articulation point lies at the back of the tongue, detection of the oral airflow velocity is somewhat less stable than for /p, t, b/, and the pair [velocity, rate of change] may vary among [present, present], [present, absent], [absent, absent], and the like; a similar, though less frequent, tendency is seen for /d/.

Because of these phenomena, the decision criteria of FIG. 1 produce confusions such as /p/ with /k/, /d/ with /n/, and /g/ with /ŋ/ and /m/, and it was found difficult to use the device as-is for speech training.

Object of the Invention: The object of the present invention is to eliminate the above difficulties in applying the conventional speech recognition device and to provide a speech training device that, in keeping with actual speech training, can classify and recognize spoken sounds and display whether they are correct.

Structure of the Invention: The present invention is a speech training device comprising detectors for the speech wave, nasal vibration, laryngeal vibration, plosiveness, and closure contact between the palate and the tongue, together with a classification recognition unit and a training sound designation unit. By performing classification recognition that focuses only on the phonemes phonetically similar to the designated training sound, the device eliminates the difficulties of speech recognition while realizing classification recognition suited to actual training.

Description of an Embodiment: FIG. 2 is a block diagram showing one embodiment of the present invention. In FIG. 2, 1 is a speech wave detector for obtaining information on the presence and onset of sound; 2 is a nasal vibration detector attached near the center of the nose to obtain nasal information; 3 is a laryngeal vibration detector attached near the vocal cords of the larynx to obtain voiced/voiceless information; 4 is a plosiveness detector that obtains plosion information, for example by detecting the rate of change of the flow velocity from an oral airflow detector placed in front of the mouth; 5 is a tongue closure detector that obtains closure contact information from a palatal contact detector fitted to the palate in the oral cavity to detect contact between the tongue and the palate; 6 is a classification recognition unit that classifies and recognizes spoken sounds from the detection information of 1 to 5 above; 7 is a correctness judgment result display unit that shows the results of the classification recognition unit; 8 is a training sound designation unit that accepts the designation of a training sound prior to training; and 9 is a control unit that controls the classification recognition unit 6, the correctness judgment result display unit 7, and the training sound designation unit 8.

The operation of the classification recognition unit 6 in the embodiment configured as described above is explained in detail below with an example.

Speech training does not practice the various phonemes in random order; it proceeds systematically, taking into account phoneme similarity, opposition, and difficulty. FIG. 3 shows one arrangement of the pa-ba-ma table, the most basic and important training target in speech training; training begins with /p/ and, as proficiency grows, moves to the adjacent sounds along the rows and columns.

The sounds in FIG. 3 share common features along its rows and columns. In place of articulation, /p, b, m/ are bilabials, /t, d, n/ are alveolars, and /k, g, ŋ/ are velars. In manner, /p, t, k/ are voiceless plosives and /b, d, g/ are voiced plosives.

/m, n, ŋ/ are nasals. As these common features suggest, in actual speech training as well, confusion arises readily between sounds adjacent along a row or column, while confusion between distant sounds is rare. For example, an attempt at /p/ may come out as /t/ or /b/, but it very seldom comes out as /k/, /m/, or any other sound. In keeping with actual training practice, then, training of /p/ can use a classification into /p/ (correct), /t/ or /b/ (an error toward an adjacent sound), and /?/ (other errors), or alternatively into the four classes /p/, /t/, /b/, /?/.

Performing such a classification makes sufficiently effective training possible without attempting the difficult discrimination between /p/ and /k/.

Similarly, if the speech sound being trained is known, training is possible without performing the classification between /g/ and /m/ or between /ŋ/ and /m/.
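The restriction to row and column neighbours of the pa-ba-ma table can be made concrete. A sketch in which the table layout follows FIG. 3 as described, with "ng" standing for /ŋ/:

```python
# The pa-ba-ma table as described: rows by manner, columns by place.
PABAMA = [
    ["p", "t", "k"],    # voiceless plosives
    ["b", "d", "g"],    # voiced plosives
    ["m", "n", "ng"],   # nasals
]

def confusion_candidates(target):
    """Target plus its row/column neighbours: the only sounds the
    classifier needs to tell apart while `target` is being trained."""
    for r, row in enumerate(PABAMA):
        if target in row:
            c = row.index(target)
            break
    cands = {target}
    if c > 0: cands.add(PABAMA[r][c - 1])
    if c < 2: cands.add(PABAMA[r][c + 1])
    if r > 0: cands.add(PABAMA[r - 1][c])
    if r < 2: cands.add(PABAMA[r + 1][c])
    return cands
```

For /p/ this yields {/p/, /t/, /b/}, matching the example above; the difficult /p/ versus /k/ discrimination never has to be made.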

Next, /d, g/ have a strong tendency toward nasalization and at times show nasal vibration comparable to that of a nasal, which had been a source of errors. Experiments showed, however, that in nasalized /d, g/ strong nasal vibration very rarely continues for 100 ms or longer, whereas in nasals strong nasal vibration continues for 120 ms or longer in almost all cases. By using this fact, confusion between /d/ and /n/ and between /g/ and /ŋ/ can be almost entirely eliminated.
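The duration finding suggests a simple decision rule. In the sketch below the 110 ms cut-off is our own choice, placed between the two reported bounds; the patent does not name a single threshold value:

```python
# Nasalized /d, g/ rarely sustain strong nasal vibration for >= 100 ms,
# while true nasals almost always sustain it for >= 120 ms, so any
# threshold between those bounds separates the two groups well.
NASAL_DURATION_THRESHOLD_MS = 110  # assumed value, not from the patent

def is_true_nasal(strong_nasal_vibration_ms):
    """Distinguish a nasal (/n/, /ng/) from a nasalized /d/ or /g/."""
    return strong_nasal_vibration_ms >= NASAL_DURATION_THRESHOLD_MS
```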

Based on the above approach, FIG. 4 shows, for the training of each sound in FIG. 3, the kinds of judgment results produced by classification recognition and the combination of separation functions used to reach them.

In the training of /t/, /p/ and /k/ are not classified apart; the judgment result is left as "/p, k/". This is because, as noted above, training can still be carried out with /p, k/ grouped together as an error toward an adjacent sound; because, from the standpoint of training, it is preferable not to issue an incorrect correctness judgment even at some cost in classification performance; and because the classification recognition unit can then be realized simply. The same applies to /b, g/ in the training of /d/ and to /m, ŋ/ in the training of /n/.

FIG. 5 illustrates the decision criteria that realize each separation function of FIG. 4. For example, the voiceless plosive /p/ has no nasal vibration, is plosive, is voiceless, and involves no closure contact of the tongue.

Tongue closure contact information provides the separation conditions between columns. That is, the judgment condition separating /p, b, m/ from /t, d, n/ is the absence versus presence of tongue closure contact, and the condition separating /t, d, n/ from /k, g, ŋ/ is the presence versus absence of tongue closure contact. The judgment condition separating the rows /p, t, k/ and /b, d, g/ is voiceless versus voiced.

The judgment condition separating the rows /b, d, g/ and /m, n, ŋ/ is the nasalization tendency versus strong, long-lasting nasal vibration.

Further, each separation function takes the characteristics shared by its two candidates as common basic conditions; if the common basic conditions are not satisfied, the classification result is "?" (other).

FIG. 6 summarizes, for each separation function and on the basis of FIG. 5, the common basic conditions and the separation judgment conditions.
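A separation function of this kind can be sketched as a pair of checks: common basic conditions whose violation yields "?", then a separation judgment choosing between the two candidates. The concrete p/b conditions below follow the description (no tongue closure, plosive; judged by voicing); the dict keys and message strings are our own stand-ins:

```python
def separate(frame, basic_conditions, judge, if_true, if_false):
    """Generic separation function: '?' plus the violated condition when a
    common basic condition fails, otherwise one of the two candidates."""
    for message, pred in basic_conditions:
        if not pred(frame):
            return "?", message  # kept for the result display unit
    return (if_true if judge(frame) else if_false), None

# p/b separation: both candidates are oral plosives without tongue
# closure; the separating judgment is voicing.
P_B_BASICS = [
    ("no plosiveness", lambda f: f["plosive"]),
    ("tongue closure present", lambda f: not f["tongue_closure"]),
]

def p_b_separation(frame):
    return separate(frame, P_B_BASICS, lambda f: f["voiced"], "b", "p")
```

The same `separate` helper would serve the other entries of FIG. 6 with different condition lists.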

The operation of the speech training device of this embodiment, equipped with the classification recognition unit configured as above, is explained below with reference to a flowchart.

FIG. 7 shows an example of the procedure when training /p/.

The following explanation refers to FIGS. 2 and 7.

(a) First, before training, the sound to be trained is designated through the training sound designation unit 8. If /p/ is designated, the classification recognition unit 6 is notified via the control unit 9 that training of /p/ is about to begin.

(b) When the training sound is then uttered, the classification recognition unit 6 receives the information from the detectors 1 to 5 and begins the classification recognition of /p/.

(c) In the classification recognition of /p/, the p/t separation function is activated first, and the basic conditions and separation judgment conditions shown in FIG. 6 are checked. If a basic condition is violated, the violated condition is stored and processing proceeds to the p/b separation of step (d).

Next, the separation judgment condition is checked: if there is tongue closure contact, the sound is judged to be /t/, and the judgment result /t/ is shown on the correctness judgment result display unit 7 via the control unit 9, together with the judgment reason "シタヘイサ アリ" (tongue closure present).

(d) In the p/b separation function, the conditions of FIG. 6 are checked. If a basic condition is violated, the violated condition is stored and processing proceeds to step (f). The separation judgment condition then checks for voicing: if the utterance is voiced, it is judged to be /b/, and the judgment result /b/ is shown on the correctness judgment result display unit 7 via the control unit 9, together with the judgment reason "ユウセイ" (voiced).

(e) The separation functions are checked for stored violated conditions; if there was a violation, processing proceeds to step (f). If there was no violation, the correct answer /p/ is shown on the correctness judgment result display unit 7.

(f) This step handles utterances that violated a basic condition. The correctness judgment result display unit 7 shows /?/ together with the content of the violation. The separation functions activated in the training of /p/ are the p/t and p/b separation functions, and their basic condition violation messages are of four kinds: "ビオン" (nasal sound), "ハレツセイ ナシ" (no plosiveness), "ユウセイ" (voiced), and "シタヘイサ アリ" (tongue closure present).

The kinds of basic condition violation messages shown for each training sound are listed in FIG. 8.
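Steps (a) to (f) can be put together as one judgment routine for /p/ training. This is a simplified sketch: the detector record is a plain dict, the message strings are English stand-ins for the katakana messages of FIG. 8, and the exact branching of the FIG. 7 flowchart is condensed:

```python
def judge_p(frame):
    """Classification flow for /p/ training, following steps (c)-(f):
    run the p/t then p/b separation judgments, collect basic-condition
    violations, and produce (judged sound, display message)."""
    violations = []

    # (c) p/t separation: basic conditions, then the closure judgment.
    if frame["nasal"]:
        violations.append("nasal vibration")
    if not frame["plosive"]:
        violations.append("no plosiveness")
    elif frame["tongue_closure"]:
        return "t", "tongue closure present"

    # (d) p/b separation: the separating judgment is voicing.
    if frame["voiced"]:
        return "b", "voiced"

    # (e)/(f): correct /p/ only if no basic condition was violated.
    if violations:
        return "?", "; ".join(violations)
    return "p", "correct"
```

A voiceless, non-nasal plosive with no tongue closure thus comes back as the correct /p/, while a closure or voicing error is reported as the adjacent sound together with its reason, as in the display examples above.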

Effects of the Invention: The speech training device of the present invention comprises detectors that obtain information on the speech wave, nasal vibration, laryngeal vibration, plosiveness, and the presence or absence of closure contact between the palate and the tongue, together with a classification recognition unit, a training sound designation unit, a correctness judgment result display unit, and a control unit. By designating the training sound before training and performing classification recognition focused on that training sound, the device can classify and recognize spoken sounds and display whether they are correct, and its effect on speech training is extremely large.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram showing the decision criteria for phoneme classification in a conventional pronunciation feature extraction device; FIG. 2 is a block diagram showing a speech training device in one embodiment of the present invention; FIG. 3 is an arrangement diagram showing an example of the pa-ba-ma table; FIG. 4 is a correspondence diagram showing, in the embodiment of the present invention, the kinds of judgment results for each training sound and the kinds of separation functions used to obtain those judgments; FIG. 5 is a diagram showing the decision criteria realizing each separation function; FIG. 6 is a diagram organizing, on the basis of FIG. 5, the common basic conditions and separation judgment conditions; FIG. 7 is a flowchart showing an operation example of the embodiment; and FIG. 8 is a correspondence diagram showing the kinds of violation messages for each training sound.

1: speech wave detector; 2: nasal vibration detector; 3: laryngeal vibration detector; 4: plosiveness detector; 5: tongue closure detector; 6: classification recognition unit; 7: correctness judgment result display unit; 8: training sound designation unit; 9: control unit.

Patent applicant: Director of the Agency of Industrial Science and Technology

Claims (1)

[Claim 1] A speech training device comprising: a speech wave detector; a nasal vibration detector; a laryngeal vibration detector; a plosiveness detector; a tongue closure detector that detects closure contact information between the tongue and the hard palate; a classification recognition unit that classifies spoken sounds; a training sound designation unit that designates a training sound; a correctness judgment result display unit that shows the result of the classification recognition; and a control unit that controls the classification recognition unit, the training sound designation unit, and the correctness judgment result display unit; wherein a training sound is designated before training begins, and classification recognition focused on that training sound is performed.
JP59130876A 1984-06-26 1984-06-26 Speaking exercise apparatus Granted JPS6111021A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP59130876A JPS6111021A (en) 1984-06-26 1984-06-26 Speaking exercise apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP59130876A JPS6111021A (en) 1984-06-26 1984-06-26 Speaking exercise apparatus

Publications (2)

Publication Number Publication Date
JPS6111021A true JPS6111021A (en) 1986-01-18
JPH0357777B2 JPH0357777B2 (en) 1991-09-03

Family

ID=15044757

Family Applications (1)

Application Number Title Priority Date Filing Date
JP59130876A Granted JPS6111021A (en) 1984-06-26 1984-06-26 Speaking exercise apparatus

Country Status (1)

Country Link
JP (1) JPS6111021A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS55149993A (en) * 1979-05-12 1980-11-21 Rion Co Electroparatograph display system
JPS58150997A (en) * 1982-03-03 1983-09-07 工業技術院長 Speech feature extractor


Also Published As

Publication number Publication date
JPH0357777B2 (en) 1991-09-03

Similar Documents

Publication Publication Date Title
US20060004567A1 (en) Method, system and software for teaching pronunciation
US20060136225A1 (en) Pronunciation assessment method and system based on distinctive feature analysis
JPH075807A (en) Device for training conversation based on synthesis
WO2007037356A1 (en) Pronunciation diagnosis device, pronunciation diagnosis method, recording medium, and pronunciation diagnosis program
US20060058996A1 (en) Word competition models in voice recognition
Selouani et al. Alternative speech communication system for persons with severe speech disorders
CN107610691B (en) English vowel sounding error correction method and device
JPH06110494A (en) Pronounciation learning device
Louko et al. Issues in collecting and transcribing speech samples
JP2844817B2 (en) Speech synthesis method for utterance practice
JP2003177779A (en) Speaker learning method for speech recognition
JPS6111021A (en) Speaking exercise apparatus
Lertwongkhanakool et al. An automatic real-time synchronization of live speech with its transcription approach
Arslan Foreign accent classification in American English
Do et al. Vietnamese Text-To-Speech system with precise tone generation
Azizah AN ANALYSIS OF STUDENTS’ERROR IN PRONOUNCING PLOSIVE VOICELESS CONSONANTS AT THE SIXTH SEMESTER OF ENGLISH EDUCATION RADEN INTAN STATE ISLAMIC UNIVERSITY OF LAMPUNG IN THE ACADEMIC YEAR OF 2018/2019
JP3621624B2 (en) Foreign language learning apparatus, foreign language learning method and medium
Sun Analysis and interpretation of glide characteristics in pursuit of an algorithm for recognition
Datta et al. Time Domain Representation of Speech Sounds
Alotaibi et al. A new look at the automatic mapping between Arabic distinctive phonetic features and acoustic cues
Aji et al. Mispronunciation of English Consonant Sound [Θ] in the Medial Position by the Students of Smk Grafika Surakarta
Khairunnisa et al. Minimal Pairs: An Analysis Of College Students' English Pronunciation Errors
JPS5895399A (en) Voice message identification system
Kantner et al. An apologia of a new phonetic classification
Huckvale Word recognition from tiered phonological models

Legal Events

Date Code Title Description
EXPY Cancellation because of completion of term