JP2001051580A - Voice learning device - Google Patents

Voice learning device

Info

Publication number
JP2001051580A
JP2001051580A JP11224610A JP22461099A JP2001051580A JP 2001051580 A JP2001051580 A JP 2001051580A JP 11224610 A JP11224610 A JP 11224610A JP 22461099 A JP22461099 A JP 22461099A JP 2001051580 A JP2001051580 A JP 2001051580A
Authority
JP
Japan
Prior art keywords
voice
teacher
learner
learning
voices
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP11224610A
Other languages
Japanese (ja)
Inventor
Keisuke Takamori
圭介 高森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NYUUTON KK
Original Assignee
NYUUTON KK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NYUUTON KK filed Critical NYUUTON KK
Priority to JP11224610A priority Critical patent/JP2001051580A/en
Publication of JP2001051580A publication Critical patent/JP2001051580A/en
Pending legal-status Critical Current

Links

Landscapes

  • Electrically Operated Instructional Devices (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a voice learning device which, when teaching a learner by comparing the learner's recorded voice with pre-recorded teachers' voices, first determines the teacher whose pronunciation is closest to the learner's own and thereafter lets the learner study with that teacher as the model.
SOLUTION: A learner voice storage part 2 temporarily holds the input learner's voice. A teacher voice storage part 3 stores in advance, for each of a prescribed number of English sentences, the voices of plural foreign teachers A-J whose voice characteristics differ from one another. A teacher text storage part 4 stores the same sentences in advance as text data. A feature extraction part 6 extracts voice features from the voice held in the learner voice storage part 2 and from one teacher voice selected from the voices of teachers A-J held in the teacher voice storage part 3. A feature comparison part 7 compares the extracted features, and a teacher decision part 8 determines from the comparison results the one teacher whose voice features are closest to the learner's.

Description

DETAILED DESCRIPTION OF THE INVENTION

[Technical Field of the Invention] The present invention relates to a voice learning device, and more particularly to a novel voice learning device that, when teaching by comparing a recorded learner's voice with pre-recorded teachers' voices, takes the learner's individual voice characteristics into account so that learning proceeds effectively.

[Prior Art] Voice learning devices that use a personal computer are known; for example, language learning devices in which the pronunciation, that is, the voice, of a foreign teacher is stored in advance in the teaching material have already been put to practical use. The advantage of using a computer in language learning is that, by providing functions for repeatedly playing back the foreign teacher's pronunciation and for comparing the learner's recorded pronunciation with the teacher's by ear, a learner working alone can raise the effect of listening and speaking practice.

[Problems to Be Solved by the Invention] According to one view, the expressions indispensable for everyday communication amount to roughly 500 sentences, and it is said that mastering the listening and speaking of those sentences completely is enough for daily life. The learner therefore listens to the foreign teacher's pronunciation over and over, grasps the meaning, and at the same time imitates the pronunciation he or she hears as closely as possible, thereby improving listening and speaking. The characteristics of the voice a learner produces, however, differ from person to person in timbre, pitch range (high or low) and so on, depending on sex, age and the like, and are unique to that learner. When, for example, a learner practices until he or she can pronounce a sentence exactly as a model foreign teacher does, the burden on the learner is smaller, and learning is expected to proceed more effectively, if the teacher's voice characteristics closely resemble the learner's. Conventional language learning devices, however, gave no such consideration. An object of the present invention is therefore to improve on this point and to provide a voice learning device which, when teaching by comparing the learner's recorded voice with pre-recorded teachers' voices, first determines the teacher whose pronunciation is closest to the learner's own and thereafter lets learning proceed with that teacher as the model.

[Means for Solving the Problems] To achieve the above object, a voice learning device according to the present invention is a device that teaches by comparing a recorded learner's voice with pre-stored teachers' voices, and comprises: a learner voice storage unit that holds the input learner's voice; a teacher voice storage unit that stores in advance the voices of a plurality of teachers whose voice characteristics differ from one another; a feature extraction unit that extracts voice features from the learner's voice and from the voice of one teacher arbitrarily selected from the plurality of teachers; a feature comparison unit that compares the extracted features; a teacher decision unit that, for at least one sample sentence, compares the learner's voice features with each teacher's voice features and determines the one teacher whose voice features are most similar to the learner's; and a learning control unit that thereafter uses the decided teacher's voice as the model against which the learner's voice is compared during learning. According to the present invention, speech recognition techniques are applied to compare the learner's voice features with the teachers' voice features and to determine first the one teacher whose voice most resembles the learner's, so the learner can study on his or her own with a teacher whose voice characteristics are closest to his or her own, which enhances the voice learning effect.
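The claimed arrangement can be pictured as a handful of cooperating components with the concrete feature and comparison methods plugged in from outside. The Python sketch below is only an illustration of that decomposition; the class and attribute names (SpeechLearningDevice, TeacherVoiceStore, and so on) are assumptions introduced here, not terms from the patent.

```python
# Illustrative sketch only; all names are assumptions, not the patent's.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

Feature = List[float]  # e.g. a relative-amplitude envelope sampled over time


@dataclass
class TeacherVoiceStore:
    """Teacher voice storage unit: pre-recorded voices of several teachers
    with mutually different voice characteristics, per sentence."""
    voices: Dict[str, Dict[str, List[float]]] = field(default_factory=dict)
    # voices[teacher_id][sentence_id] -> PCM samples


@dataclass
class LearnerVoiceStore:
    """Learner voice storage unit: temporarily holds the latest recording."""
    samples: List[float] = field(default_factory=list)


class SpeechLearningDevice:
    """Wires together the units named in this section."""

    def __init__(self,
                 teachers: TeacherVoiceStore,
                 extract: Callable[[List[float]], Feature],    # feature extraction unit
                 compare: Callable[[Feature, Feature], float]  # feature comparison unit
                 ):
        self.teachers = teachers
        self.learner = LearnerVoiceStore()
        self.extract = extract
        self.compare = compare        # convention here: higher score = more similar
        self.model_teacher: Optional[str] = None  # set by the teacher decision step
```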

[Embodiments of the Invention] Preferred embodiments of the present invention are described below with reference to the drawings. An example of English learning is used, but the invention is not limited to it. FIG. 1 is a functional block diagram of a voice learning device according to one embodiment of the invention, consisting of a voice input unit 1, a learner voice storage unit 2, a teacher voice storage unit 3, a teacher text storage unit 4, a teacher voice/text control unit 5, a feature extraction unit 6, a feature comparison unit 7, a teacher decision unit 8, a learning control unit 9, an external input unit 10, a voice output unit 11, and a text output unit 12. The learner's voice is input through the voice input unit 1, and the learner voice storage unit 2 holds the input voice temporarily. The teacher voice storage unit 3 stores in advance, for each of a prescribed number of English sentences, ten recordings pronounced by ten foreign teachers whose voice characteristics differ from one another, here teacher A, teacher B, teacher C, ..., teacher J. The teacher text storage unit 4 stores the same prescribed number of sentences in advance as text data. The teacher voice/text control unit 5 calls up and outputs, at random or simultaneously, either or both of a teacher's recording held in the teacher voice storage unit 3 and the corresponding text held in the teacher text storage unit 4; the call commands are issued by the learning control unit 9 described later.
The feature extraction unit 6 extracts voice features from the voice held in the learner voice storage unit 2 and from the voice of one teacher selected from the voices of the ten teachers A to J held in the teacher voice storage unit 3. Any feature extraction method based on known speech recognition techniques can be used; for example, features such as the time distribution of the relative-amplitude voice waveform are extracted. The feature comparison unit 7 compares the extracted features; this comparison can be carried out, for example, by DP (Dynamic Programming) matching or another pattern matching method. The teacher decision unit 8 determines, from the comparison results of the feature comparison unit 7, the one teacher whose voice features are most similar to the learner's. The learning control unit 9 supervises the operation of the whole device as described below. The external input unit 10 is used to start and stop the device and to enter commands for its functions from outside, the voice output unit 11 plays back the learner's and the teachers' voices, and the text output unit 12 outputs the text stored in the teacher text storage unit either synchronously or asynchronously with the voice output.
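The paragraph above deliberately leaves both the feature and the comparison open (any known speech-recognition feature; DP matching or another pattern matching method). As one possible reading, the sketch below derives a relative-amplitude envelope, that is, the time distribution of |amplitude| normalised to its peak, and aligns two envelopes with a plain dynamic-programming (DTW-style) match; the frame length, the normalisation and the absolute-difference cost are assumptions, not values given in the patent.

```python
import numpy as np


def relative_amplitude_envelope(samples: np.ndarray, frame: int = 400) -> np.ndarray:
    """Mean |amplitude| per frame, scaled so the loudest frame is 1.0
    (a 'relative amplitude' time distribution)."""
    n = len(samples) - len(samples) % frame
    frames = np.abs(samples[:n]).reshape(-1, frame)
    env = frames.mean(axis=1)
    peak = env.max() if len(env) else 0.0
    return env / peak if peak > 0 else env


def dp_matching_cost(a: np.ndarray, b: np.ndarray) -> float:
    """Textbook dynamic-programming (DTW) alignment of two envelopes.
    Lower cost means the two voices are more alike in this feature."""
    inf = float("inf")
    cost = np.full((len(a) + 1, len(b) + 1), inf)
    cost[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[len(a), len(b)] / (len(a) + len(b))  # length-normalised
```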
An example of the operation of the voice learning device according to this embodiment is as follows. When the device is started from the external input unit 10, the learning control unit 9 instructs the teacher voice/text control unit 5 to call up one sample sentence (or several) from the teacher text storage unit 4, for example "Nice to see you.", and the called sentence is presented on a computer display (not shown) through the text output unit 12. The learner reads the sentence and pronounces "Nice to see you."; the voice is input through the voice input unit 1, held temporarily in the learner voice storage unit 2, and passed to the feature extraction unit 6. Meanwhile, the learning control unit 9 instructs the teacher voice/text control unit 5 to call the recording of the same sample sentence "Nice to see you." from the teacher voice storage unit 3, starting with teacher A, and to pass it to the feature extraction unit 6. The feature extraction unit 6 extracts, for the learner's voice and for teacher A's voice, the time distribution of the relative voice-amplitude waveform as shown in FIG. 2 and outputs it to the feature comparison unit 7. The feature comparison unit 7 compares the time distributions of the two relative-amplitude waveforms: a predetermined tolerance width W is set around teacher A's relative-amplitude waveform T, and the unit measures quantitatively to what extent the learner's relative-amplitude waveform stays within that range, that is, how well the two waveforms match (the matching degree), and outputs the result to the teacher decision unit 8.
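A direct reading of the tolerance-width comparison just described, in the same illustrative style: the matching degree is taken here as the fraction of the learner's envelope that stays within ±W of the teacher's waveform T. The linear resampling used to bring the two envelopes to a common length is an assumption; the patent does not specify how the time axes are aligned.

```python
import numpy as np


def matching_degree(teacher_env: np.ndarray, learner_env: np.ndarray,
                    width: float = 0.1) -> float:
    """Fraction of the learner's relative-amplitude envelope lying within
    +/- width of the teacher's envelope T (1.0 = entirely inside the band)."""
    # Assumed alignment: resample the learner envelope to the teacher's length.
    x_learner = np.linspace(0.0, 1.0, num=len(learner_env))
    x_teacher = np.linspace(0.0, 1.0, num=len(teacher_env))
    learner_on_teacher_axis = np.interp(x_teacher, x_learner, learner_env)
    inside = np.abs(learner_on_teacher_axis - teacher_env) <= width
    return float(inside.mean())
```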
The learning control unit 9 then instructs the teacher voice/text control unit 5 to repeat the same procedure for teacher B, and the feature extraction unit 6 and feature comparison unit 7 measure the matching degree between the learner's voice features and teacher B's, again outputting it to the teacher decision unit 8. In the same way the matching degrees for teacher C through teacher J are measured in turn and output to the teacher decision unit 8. The teacher decision unit 8 then selects, from teachers A to J, the teacher with the best matching degree and notifies the learning control unit 9 that this teacher is to be the model. In this way the one teacher whose voice characteristics most resemble the learner's is determined. Suppose, for example, that teacher C is chosen. From then on the learning control unit 9 controls the teacher voice/text control unit 5 so that only teacher C's recordings are called from the teacher voice storage unit 3, and ordinary voice learning proceeds on the basis of teacher C's voice. That is, the learner reads the texts presented in sequence, or repeats after teacher C's pronunciation played through the voice output unit 11; the input voice is held temporarily in the learner voice storage unit 2; the feature extraction unit 6 and feature comparison unit 7 compare the learner's voice with teacher C's; and pass/fail judgments matched to the learner's level, hints for correcting pronunciation, repeated playback of teacher C's voice and the like are carried out. To raise the effect of this ordinary voice learning dramatically, the method the present inventor proposed earlier in Japanese Patent Application Laid-Open No. Hei 9-101737, in which questions are posed according to proficiency and the ones answered incorrectly are practiced repeatedly until mastered, can also be adopted. The embodiment above uses English learning as its example, but the invention can of course be applied widely to learning devices that use voice. The voice learning device according to the invention can also be realized easily by running, on a personal computer equipped with a microphone, a computer program that implements the functions described above. The invention is not limited to the embodiment described above, and it goes without saying that various modifications are possible within the scope of the technical idea set out in the claims.
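Pulling the steps of this walk-through together: the calibration pass scores one sample sentence against every stored teacher and keeps the best match, and ordinary lessons then compare the learner only against that teacher. The sketch below reuses the illustrative names introduced above (with a higher-is-better comparison such as matching_degree); the pass threshold is an arbitrary placeholder, not a value from the patent.

```python
def choose_model_teacher(device: "SpeechLearningDevice", sentence_id: str,
                         learner_samples) -> str:
    """Teacher decision step: score the learner's recording of one sample
    sentence against teachers A..J and keep the closest-sounding one."""
    learner_feat = device.extract(learner_samples)
    best_teacher, best_score = None, float("-inf")
    for teacher_id, sentences in device.teachers.voices.items():
        teacher_feat = device.extract(sentences[sentence_id])
        score = device.compare(learner_feat, teacher_feat)
        if score > best_score:
            best_teacher, best_score = teacher_id, score
    device.model_teacher = best_teacher
    return best_teacher


def practice(device: "SpeechLearningDevice", sentence_id: str,
             learner_samples, pass_threshold: float = 0.8):
    """Ordinary lesson after calibration: compare only against the chosen
    teacher and return a pass/fail flag together with the raw score."""
    teacher_samples = device.teachers.voices[device.model_teacher][sentence_id]
    score = device.compare(device.extract(learner_samples),
                           device.extract(teacher_samples))
    return score >= pass_threshold, score
```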

[Effects of the Invention] As described above, with the voice learning device according to the present invention, speech recognition techniques are applied to compare the learner's voice features with the teachers' voice features and to determine first the one teacher whose voice most resembles the learner's, so the learner can study on his or her own with a teacher whose voice characteristics are closest to his or her own, which enhances the voice learning effect.

[Brief Description of the Drawings]

FIG. 1 is a functional block diagram of a voice learning device according to an embodiment of the present invention.

FIG. 2 is a graph explaining voice feature extraction and comparison.

[Explanation of Reference Numerals]

1 voice input unit, 2 learner voice storage unit, 3 teacher voice storage unit, 6 feature extraction unit, 7 feature comparison unit, 8 teacher decision unit, 9 learning control unit

Claims (2)

[Claims]
1. A voice learning device that teaches by comparing a recorded learner's voice with pre-stored teachers' voices, comprising: a learner voice storage unit that holds the input learner's voice; a teacher voice storage unit that stores in advance the voices of a plurality of teachers whose voice characteristics differ from one another; a feature extraction unit that extracts voice features from the learner's voice and from the voice of one teacher arbitrarily selected from the plurality of teachers; a feature comparison unit that compares the extracted features; a teacher decision unit that, for at least one sample sentence, compares the learner's voice features with each teacher's voice features and determines the one teacher whose voice features are most similar to the learner's; and a learning control unit that thereafter uses the decided teacher's voice as the model against which the learner's voice is compared during learning.
2. The voice learning device according to claim 1, wherein the voice feature is the time distribution of a relative-amplitude voice waveform.
JP11224610A 1999-08-06 1999-08-06 Voice learning device Pending JP2001051580A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP11224610A JP2001051580A (en) 1999-08-06 1999-08-06 Voice learning device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP11224610A JP2001051580A (en) 1999-08-06 1999-08-06 Voice learning device

Publications (1)

Publication Number Publication Date
JP2001051580A true JP2001051580A (en) 2001-02-23

Family

ID=16816433

Family Applications (1)

Application Number Title Priority Date Filing Date
JP11224610A Pending JP2001051580A (en) 1999-08-06 1999-08-06 Voice learning device

Country Status (1)

Country Link
JP (1) JP2001051580A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003295754A (en) * 2002-04-05 2003-10-15 Hitachi Ltd Sign language teaching system and program for realizing the system
JP2006178334A (en) * 2004-12-24 2006-07-06 Yamaha Corp Language learning system
JP2006184813A (en) * 2004-12-28 2006-07-13 Advanced Telecommunication Research Institute International Foreign language learning system
JP2012078768A (en) * 2010-09-06 2012-04-19 Nippon Telegr & Teleph Corp <Ntt> Person matching device, method and program
JP7521869B2 (en) 2017-03-25 2024-07-24 スピーチェイス エルエルシー Teaching and assessing spoken language skills through fine-grained assessment of human speech
CN110556095A (en) * 2018-05-30 2019-12-10 卡西欧计算机株式会社 Learning device, robot, learning support system, learning device control method, and storage medium
CN110288977A (en) * 2019-06-29 2019-09-27 联想(北京)有限公司 A kind of data processing method, device and electronic equipment
CN110288977B (en) * 2019-06-29 2022-05-31 联想(北京)有限公司 Data processing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
Eskenazi Using automatic speech processing for foreign language pronunciation tutoring: Some issues and a prototype
Eskenazi Using a computer in foreign language pronunciation training: What advantages?
US6865533B2 (en) Text to speech
US6847931B2 (en) Expressive parsing in computerized conversion of text to speech
KR20010013236A (en) Reading and pronunciation tutor
Michael Automated Speech Recognition in language learning: Potential models, benefits and impact
Himmelmann Prosody in language documentation
KR101967849B1 (en) Foreign language acquisition practice method through the combination of shadowing and speed listening based on the processes of mother language acquisition, apparatus and computer readable program medium thereof
JP2001051580A (en) Voice learning device
JP6792091B1 (en) Speech learning system and speech learning method
US20090291419A1 (en) System of sound representaion and pronunciation techniques for english and other european languages
JP5248365B2 (en) Memory support system, memory support program, and memory support method
Jayakumar et al. Enhancing speech recognition in developing language learning systems for low cost Androids
JPH10268753A (en) Computer-readable recording medium recording chinese learning program, and chinese learning device
JP2006139162A (en) Language learning system
JP2001051587A (en) Device and method for leaning foreign language, and computer-readable recording medium on which foreign language learning program is recorded
TWI657421B (en) Method, system and non-transitory computer-readable recording medium for supporting listening
JPS616732A (en) Vocal training device
JP2001282098A (en) Foreign language learning device, foreign language learning method and medium
Prudnikova et al. Difficulties in Conducting Listening Comprehension in Modern English Language
JP2001337594A (en) Method for allowing learner to learn language, language learning system and recording medium
JP2000172162A (en) Language practice system
JP2023029751A (en) Speech information processing device and program
Evans The use of the language laboratory for phonetics at advanced levels of English learning
KR101228909B1 (en) Electronic Dictionary Device and Method on Providing Sounds of Words

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20060801

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20071220

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20080108

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20080507