JPH0420998A - Voice synthesizing device - Google Patents

Voice synthesizing device

Info

Publication number
JPH0420998A
Authority
JP
Japan
Prior art keywords
word
character
memory
words
sentence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2126009A
Other languages
Japanese (ja)
Other versions
JP3071804B2 (en)
Inventor
Yuichi Kojima
裕一 小島
Hiroo Kitagawa
博雄 北川
Tetsuya Sakayori
哲也 酒寄
Junko Komatsu
小松 順子
Nobuhide Yamazaki
山崎 信英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to JP2126009A priority Critical patent/JP3071804B2/en
Publication of JPH0420998A publication Critical patent/JPH0420998A/en
Application granted granted Critical
Publication of JP3071804B2 publication Critical patent/JP3071804B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Abstract

PURPOSE: To properly read a word having plural meanings by providing a subject (topic) extracting part and a word selecting part that correctly distinguish words having the same written form. CONSTITUTION: A teletext reception part receives and detects a TV radio wave to extract teletext data. Picture element data in the teletext data and character patterns generated by a character generating part 2 are temporarily stored in a pattern memory 3 and output to a TV screen 4. The character generating part 2 receives character codes in the teletext broadcast and sends them to the memory 3. A musical sound generating part 5 receives musical sound codes, generates musical sounds, and outputs them from a speaker 6. A sentence segmenting part 8 analyzes the connection of sentences on a character page memory 7, segments sentences in reading order, and sends them to a language processing part 10, where a morpheme analyzing part 11 uses a word dictionary 14 to determine words by morpheme analysis. Frequencies of appearance of the determined words' classifications are stored in a word classification memory 15. When plural candidate words remain as the result of analysis, one is selected by a word selecting part 12 and subjected to phonological conversion by a phonological processing part 13, and a voice is synthesized by a voice synthesizing part 9 and output from the speaker.

Description

[Detailed Description of the Invention]

Technical Field

The present invention relates to a speech synthesis device.

Prior Art

Sentences contain words that have more than one possible reading. Conventionally, such words were disambiguated one sentence at a time, by analyzing the syntax and checking the semantic connections between word meanings to determine each word's reading. For example, "A Method for Automatically Disambiguating Homographs Using Hierarchical Word Attributes" (M. Miyazaki, Transactions of the Institute of Electronics and Communication Engineers, '89/3, Vol. J6g-D, No. 3, pp. 392-399) disambiguates homographs (words with the same written form but different readings) using three techniques: hierarchical word attributes that extend semantic classification, dependency analysis within compound nouns, and dependency analysis between phrases (bunsetsu). However, there are words whose reading cannot be determined by analyzing a single sentence alone. This is especially true of abbreviations: in a sentence such as "私はSFが好きです。" ("I like SF."), the abbreviation "SF" could mean San Francisco or science fiction, and the intended meaning cannot always be determined.

Objective

The present invention was made in view of the circumstances described above, and its particular object is to realize a speech synthesis device capable of choosing the correct reading for a word that has multiple meanings.

Configuration

To achieve the above object, the present invention provides (1) a rule-based speech synthesis device which takes as input a sentence of mixed kanji and kana, converts it into phonological information representing the readings and prosodic information such as accent, selects phonemes from a phoneme dictionary in accordance with that information, and combines them sequentially according to fixed rules to synthesize arbitrary speech, the device having a topic extractor and a word selector, wherein the topic extractor accumulates the information that the word selector uses to select words, and the word selector, in accordance with the information accumulated by the topic extractor, selects the reading of any word in the text to be read aloud for which multiple readings are conceivable. Further, in (1): (2) the topic extractor performs topic extraction by counting the frequency of each category of word appearing in the text to be read aloud; (3) topic extraction is performed using a thesaurus of the words appearing in the text to be read aloud; and (4) the input means for the text to be read aloud is a teletext receiver. Embodiments of the present invention are described below.

FIG. 1 is a block diagram for explaining one embodiment of the speech synthesis device according to the present invention. In the figure, 1 is a teletext receiving section, 2 is a character generating section, 3 is a pattern memory, 4 is a television screen, 5 is a musical tone generating section, 6 is a speaker, 7 is a character page memory, 8 is a sentence extracting section, 9 is a speech synthesis section, and 10 is a language processing section. The language processing section 10 comprises a morphological analysis section 11, a word selection section 12, a phonological processing section 13, a word dictionary 14, a word classification memory 15, and so on. Various text input means are possible, such as a character recognition device or a word processor, and the input is not limited to teletext; here, however, an embodiment is described that uses as its text input means the code transmission (hybrid) teletext system in operation in Japan.

FIGS. 1 to 3 illustrate an embodiment in which topic extraction is performed by counting the frequency of each word category. In FIG. 1, the topic extractor of the claims corresponds to the word classification memory, and the word selector corresponds directly to the word selection section. The embodiment is described below with reference to the figures.

The teletext receiving section 1 receives and demodulates the television signal and extracts the teletext data. The pattern memory 3 temporarily stores the pixel data contained in the teletext data and the character patterns generated by the character generating section 2, and outputs them to the television screen 4. The character generating section 2 receives the character codes in the teletext broadcast and sends them to the pattern memory 3. The musical tone generating section 5 receives the musical tone codes in the teletext broadcast, generates the corresponding tones, and outputs them through the speaker 6. These parts are the same as in a conventional teletext receiver, so a detailed description is omitted.

The character page memory 7 stores only the character codes in the teletext data, page by page, with addresses corresponding to the display coordinates at which the characters are shown on the television screen.

The sentence extracting section 8 analyzes how the text on the character page memory 7 is connected, cuts out sentences in reading order, and sends them to the language processing section 10.

In the language processing section 10, the morphological analysis section 11 first performs morphological analysis on the sentences received from the sentence extracting section 8, using the word dictionary 14 to determine the words (FIG. 2 shows the structure of the word dictionary 14).

As shown in FIG. 2, the word dictionary 14 registers, in addition to the written form, reading, and accent of each word, a word classification indicating the word's rough meaning.

The word classification memory 15 counts and stores the number of occurrences of each word classification for the words determined by morphological analysis. The stored contents are not erased when the analysis of one sentence is finished; they are retained as information for analyzing the following sentences.

Since the topic may change over the course of a long text, the influence of old topics is prevented from lingering by, for example, erasing the entire contents of the word classification memory each time a new paragraph is detected.

When the morphological analysis leaves more than one candidate word for a single written form, the word selection section 12 selects among them.

The word selection section 12 looks up the word classification of each candidate, obtains the number of occurrences of the corresponding classification from the word classification memory, and selects the word whose classification has the highest count.

FIG. 3 shows an example of analysis by this method. In the example text "NYの大会に出席した。ついでにSFの大会にも顔を出した。" ("I attended a convention in NY. While I was at it, I also dropped in on an SF convention."), two readings are conceivable for the word "SF": SF as a place name (San Francisco) and SF as literature (science fiction). The occurrence counts stored in the word classification memory are therefore examined for each of these word classifications.

The word "NY" has already added one to the place-name classification. The place-name count is therefore the higher one, and the place name San Francisco is selected as the reading of "SF".
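As a rough illustrative sketch (not part of the patent text), the frequency-counting selection described above can be written as follows; the dictionary entries, category names, and romanized readings are invented for the example:

```python
from collections import Counter

# Hypothetical word dictionary: written form -> candidate (reading, category) pairs.
WORD_DICT = {
    "NY": [("niyuuyooku", "place")],
    "SF": [("sanfuranshisuko", "place"), ("esuefu", "literature")],
}

class TopicExtractor:
    """Counts occurrences of each word category (the 'word classification memory')."""
    def __init__(self):
        self.counts = Counter()

    def observe(self, category):
        self.counts[category] += 1

    def reset(self):
        # e.g. called whenever a new paragraph is detected, so old topics fade
        self.counts.clear()

def select_reading(word, topics):
    """Pick the candidate whose category has been seen most often so far."""
    candidates = WORD_DICT[word]
    best = max(candidates, key=lambda rc: topics.counts[rc[1]])
    topics.observe(best[1])  # the chosen word also feeds the topic counts
    return best[0]

topics = TopicExtractor()
topics.observe("place")                 # "NY" in the first sentence counted as a place name
print(select_reading("SF", topics))     # -> sanfuranshisuko ('place' outweighs 'literature')
```

A `reset()` at each paragraph boundary corresponds to the memory-clearing step the text describes for long texts whose topic shifts.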

For the determined word sequence, the phonological processing section 13 performs the phonological conversions needed for pronunciation, such as gemination, voicing, and accent shifts, and sends a phonetic symbol string to the speech synthesis section 9. The speech synthesis section 9 selects phonemes from the phoneme dictionary based on the phonetic symbol string, combines them sequentially according to the rules, and outputs the speech through the speaker 6.

FIGS. 4 to 6 illustrate an embodiment in which a thesaurus is used for topic extraction. In FIG. 4, everything except the language processing section 10 is the same as in FIG. 1, so the parts common with FIG. 1 are omitted from the drawing. As shown in FIG. 5, each word has broader (superordinate) words and narrower (subordinate) words, and the word dictionary 14 is structured so that the broader and narrower words of a given word can be looked up.

The word memory 16 stores the words that appeared in the sentences preceding the one currently being processed. When morphological analysis yields multiple candidate words, the word selection section 12 computes, for each candidate, the strength of its relationship to the words stored in the word memory 16, and selects a word accordingly. As a measure of relationship strength, the number of branches traversed between two words in FIG. 5 can be used as the distance between the words, for example.

FIG. 6 shows an example of analysis (example text: "夏に較べて、冬は寒い。私は1月の間、布団からでることがつらかった。", "Winter is colder than summer. During January it was hard for me to get out of bed."). Two readings are conceivable for "1月": "hitotsuki" (a period of one month) and "ichigatsu" (the month name January). Summing the distances to each of the words "夏" (summer) and "冬" (winter) in the preceding sentence shows that "ichigatsu" is closer to the words appearing in the earlier text, so the reading "ichigatsu" is selected.
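Again purely as a sketch under invented assumptions (the miniature thesaurus below is made up for illustration, not taken from the patent's FIG. 5), the distance measure can be computed as a shortest-path search over the thesaurus graph, and the candidate with the smallest total distance to the previously seen words is chosen:

```python
from collections import deque

# Invented miniature thesaurus: symmetric adjacency between a word and its
# broader/narrower terms. Each edge counts as one "branch" of distance.
EDGES = {
    "time":       ["period", "month-name"],
    "period":     ["time", "hitotsuki"],
    "hitotsuki":  ["period"],
    "month-name": ["time", "ichigatsu", "season"],
    "ichigatsu":  ["month-name"],
    "season":     ["month-name", "summer", "winter"],
    "summer":     ["season"],
    "winter":     ["season"],
}

def distance(a, b):
    """Number of thesaurus branches on the shortest path from a to b (BFS)."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")  # unrelated words

def select_by_distance(candidates, prior_words):
    """Choose the candidate with the smallest total distance to prior words."""
    return min(candidates, key=lambda c: sum(distance(c, w) for w in prior_words))

# "1月": 'hitotsuki' (one-month period) vs 'ichigatsu' (January);
# the previous sentence contained "summer" and "winter".
print(select_by_distance(["hitotsuki", "ichigatsu"], ["summer", "winter"]))
```

With these invented edges, "ichigatsu" reaches "summer" and "winter" through the month-name/season branch in fewer steps than "hitotsuki" does through the period/time branch, reproducing the selection the text describes.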

Effects

As is clear from the above description, according to the present invention, providing a speech synthesis device with a topic extracting section and a word selecting section for disambiguating words with identical written forms makes it possible to choose readings according to the topic. When the frequency of each category of word appearing in the input text is used for topic extraction, existing dictionaries can be reused with relatively little effort, so the device is easy to implement. When a word thesaurus is used, words that differ in meaning but are related can be handled through the notion of inter-word distance; relationships with more words can then be examined than when only word categories are used, allowing more precise processing. Furthermore, when teletext is used as the input text for the speech synthesis device, the many abbreviations in teletext broadcasts make it necessary to disambiguate identically written abbreviations, so this topic-extraction method is expected to be highly effective.

[Brief Description of the Drawings]

FIGS. 1 to 3 are diagrams for explaining an embodiment of the present invention in which topic extraction is performed by counting the frequency of each word category; FIGS. 4 to 6 are diagrams for explaining an embodiment of the present invention in which a thesaurus is used for topic extraction.

1: teletext receiving section; 2: character generating section; 3: pattern memory; 4: television screen; 5: musical tone generating section; 6: speaker; 7: character page memory; 8: sentence extracting section; 9: speech synthesis section; 10: language processing section; 11: morphological analysis section; 12: word selection section; 13: phonological processing section; 14: word dictionary; 15: word classification memory; 16: word memory.

Patent applicant: Ricoh Co., Ltd.

Example text (FIG. 3): "NYの大会に出席した。ついでにSFの大会にも顔を出した。" ("I attended a convention in NY. While I was at it, I also dropped in on an SF convention.")

Claims (1)

[Claims]

1. A rule-based speech synthesis device which takes as input a sentence of mixed kanji and kana, converts it into phonological information representing the readings and prosodic information such as accent, selects phonemes from a phoneme dictionary in accordance with that information, and combines them sequentially according to fixed rules to synthesize arbitrary speech, the device comprising a topic extractor and a word selector, wherein the topic extractor accumulates information used by the word selector to select words, and the word selector, in accordance with the information accumulated by the topic extractor, selects the reading of a word in the text to be read aloud for which multiple readings are conceivable.
JP2126009A 1990-05-16 1990-05-16 Speech synthesizer Expired - Fee Related JP3071804B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2126009A JP3071804B2 (en) 1990-05-16 1990-05-16 Speech synthesizer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2126009A JP3071804B2 (en) 1990-05-16 1990-05-16 Speech synthesizer

Publications (2)

Publication Number Publication Date
JPH0420998A true JPH0420998A (en) 1992-01-24
JP3071804B2 JP3071804B2 (en) 2000-07-31

Family

ID=14924472

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2126009A Expired - Fee Related JP3071804B2 (en) 1990-05-16 1990-05-16 Speech synthesizer

Country Status (1)

Country Link
JP (1) JP3071804B2 (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05224687A (en) * 1992-02-18 1993-09-03 Nippon Telegr & Teleph Corp <Ntt> Japanese pronounced word converting and editing process system
JPH06289890A (en) * 1993-03-31 1994-10-18 Sony Corp Natural language processor
JPH07191687A (en) * 1993-12-27 1995-07-28 Toshiba Corp Natural language processor and its method
JPH08272392A (en) * 1995-03-30 1996-10-18 Sanyo Electric Co Ltd Voice output device
JPH10222187A (en) * 1996-12-04 1998-08-21 Just Syst Corp Device and method for preparing speech text and computer-readable recording medium with program stored for executing its preparation process
JPH10254470A (en) * 1997-03-13 1998-09-25 Fujitsu Ten Ltd Text voice synthesizer
JP2000010579A (en) * 1998-06-19 2000-01-14 Nec Corp Speech synthesizer and computer readable recording medium
JP2002023782A (en) * 2000-07-13 2002-01-25 Sharp Corp Voice synthesizer and method therefor, information processor, and program recording medium
WO2006085565A1 (en) * 2005-02-08 2006-08-17 Nippon Telegraph And Telephone Corporation Information communication terminal, information communication system, information communication method, information communication program, and recording medium on which program is recorded
US8126712B2 (en) 2005-02-08 2012-02-28 Nippon Telegraph And Telephone Corporation Information communication terminal, information communication system, information communication method, and storage medium for storing an information communication program thereof for recognizing speech information

Also Published As

Publication number Publication date
JP3071804B2 (en) 2000-07-31

Similar Documents

Publication Publication Date Title
KR100403293B1 (en) Speech synthesizing method, speech synthesis apparatus, and computer-readable medium recording speech synthesis program
RU2319221C1 (en) Method for identification of natural speech pauses in a text string
US7496498B2 (en) Front-end architecture for a multi-lingual text-to-speech system
US7263488B2 (en) Method and apparatus for identifying prosodic word boundaries
EP1668628A1 (en) Method for synthesizing speech
EP1213705A2 (en) Method and apparatus for speech synthesis without prosody modification
JPH09503316A (en) Language synthesis
KR970029143A (en) Text Recognition Translation System and Voice Recognition Translation System
WO2004066271A1 (en) Speech synthesizing apparatus, speech synthesizing method, and speech synthesizing system
JPH0420998A (en) Voice synthesizing device
KR20000071227A (en) Method and system for audibly outputting multi-byte characters to a visually-impaired users
US6088666A (en) Method of synthesizing pronunciation transcriptions for English sentence patterns/words by a computer
JP2000352990A (en) Foreign language voice synthesis apparatus
JPH10228471A (en) Sound synthesis system, text generation system for sound and recording medium
KR100554950B1 (en) Method of selective prosody realization for specific forms in dialogical text for Korean TTS system
CN1629933B (en) Device, method and converter for speech synthesis
JPH0916190A (en) Text reading out device
JPH03214197A (en) Voice synthesizer
JPH05134691A (en) Method and apparatus for speech synthesis
JP2859674B2 (en) Teletext receiver
EP1777697A2 (en) Method and apparatus for speech synthesis without prosody modification
JPH11212586A (en) Voice synthesizer
KR100334127B1 (en) Automatic translation apparatus and method thereof
JP3219678B2 (en) Speech synthesizer
JP2622834B2 (en) Text-to-speech converter

Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees