JPH0642158B2 - Speech synthesizer - Google Patents

Speech synthesizer

Info

Publication number
JPH0642158B2
JPH0642158B2 (application JP58205227A / JP20522783A)
Authority
JP
Japan
Prior art keywords
articulatory
waveform
voice
phoneme
string
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
JP58205227A
Other languages
Japanese (ja)
Other versions
JPS6097396A (en)
Inventor
Katsunobu Fushikida (伏木田 勝信)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
Nippon Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Electric Co Ltd filed Critical Nippon Electric Co Ltd
Priority to JP58205227A priority Critical patent/JPH0642158B2/en
Priority to EP84113186A priority patent/EP0144731B1/en
Priority to DE8484113186T priority patent/DE3473956D1/en
Publication of JPS6097396A publication Critical patent/JPS6097396A/en
Publication of JPH0642158B2 publication Critical patent/JPH0642158B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)

Description

[Detailed Description of the Invention] The present invention relates to a speech synthesizer.

Conventionally, there is known a method (see reference (1) below) in which natural speech waveforms corresponding to CV or VC units (where C denotes a consonant and V a vowel), each having a duration of about half a single syllable, are prepared in advance, and a speech waveform is generated by editing and synthesizing the waveforms corresponding to the CV or VC units according to an input character string.

(1) Mitome and Fushikida, "An Arbitrary-Word Synthesis Method by Pitch-Synchronous Interpolation of CV and VC Waveforms," Acoustical Society of Japan, Speech Study Group Materials, S82-06 (1982-04). In this method, however, although high-quality synthesized speech is obtained, the amount of information in the speech waveforms corresponding to the CV and VC units is large, and it has the drawback of requiring a storage device of enormous capacity.

An object of the present invention is to provide a speech synthesizer in which the memory capacity required for the speech data, such as the speech waveforms to be prepared in advance, is comparatively small.

According to the present invention, there is obtained a speech synthesizer of the type that converts an input character string into speech, characterized by comprising: means for converting the input character string into a string of articulatory symbols, each corresponding to a time interval shorter than the duration of a phoneme, such that an articulatory symbol can be substituted by the speech waveform of a portion with a similar manner of articulation taken from speech waveforms uttered according to a different phoneme sequence; a storage circuit that stores the speech waveforms corresponding to the articulatory symbols; and means for generating a speech waveform corresponding to the input character string by retrieving the corresponding speech waveforms from the storage circuit according to the converted articulatory symbol string and editing and synthesizing them.

The feature of the present invention is that speech waveforms corresponding to articulatory symbols are prepared in advance as unit speech data, and the unit speech data are edited and synthesized according to the articulatory symbol string converted from the input character string, so that the memory capacity required for the unit speech data to be prepared in advance is comparatively small.

As shown in FIG. 1(a), the human articulatory organs include the vocal cords, tongue, lips, and velum, and various speech sounds are produced as these organs are controlled by nerve pulse signals. Accordingly, if the movements of the articulatory organs are similar, similar speech waveforms are produced. Moreover, it is clear that if the articulatory parameter values representing the movements of these organs are close, the resulting speech waveforms are also similar.

In the conventional CV/VC waveform-editing synthesis method, a speech waveform is prepared for each CV or VC name, but viewed in terms of the movement of the articulatory parameters these waveforms contain considerable redundancy. For example, in that method the waveform for /ka/ and the waveform for /ga/ were prepared separately. The movements of the articulatory organs for /ka/ and /ga/, however, are very similar: for the consonants /k/ and /g/ the position of the constriction formed by the tongue and palate is almost the same, and the main difference is whether the vocal cords vibrate during the consonant portion (voiced versus unvoiced). Consequently, in the voiced interval that follows the unvoiced interval of the consonant /k/ in /ka/ (the interval of transition into the steady part of the vowel /a/, corresponding to 103 in FIG. 1(b)), the articulatory parameter values are almost identical to those of /ga/, and this interval can be approximated using the corresponding partial waveform of /ga/ (corresponding to 108 in FIG. 1(b)). In general, for the pairs /kV/ and /gV/, /tV/ and /dV/, and /pV/ and /bV/ (where V denotes a vowel), it is likewise clear that the waveforms of the portions transitioning into the vowel can be shared.
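The waveform sharing described above can be sketched as a lookup table in which articulatorily similar segments point to the same stored waveform. The segment names and waveform identifiers below are illustrative assumptions, not the patent's actual inventory:

```python
# Sketch: articulatorily similar transitions share one stored waveform.
# Segment names and waveform IDs here are hypothetical illustrations.

# A single stored waveform serves the voiced transition into the vowel
# for both the /ka/ and /ga/ contexts (cf. regions 103 and 108 in
# Fig. 1(b)); likewise for a /ta/-/da/ pair.
SEGMENT_TO_WAVEFORM_ID = {
    "k(g)a": "wav_108",   # transition after unvoiced /k/ reuses...
    "(g)a": "wav_108",    # ...the voiced transition taken from /ga/
    "t(d)a": "wav_210",
    "(d)a": "wav_210",
}

def stored_waveform_count(table):
    """Number of distinct waveforms that must actually be stored."""
    return len(set(table.values()))

# Four segment symbols, but only two waveforms need to be stored.
assert stored_waveform_count(SEGMENT_TO_WAVEFORM_ID) == 2
```

Only the distinct table values must be stored, which is exactly where the memory saving comes from.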

Here, as illustrated in FIG. 1(b), the units are defined with the manner of articulation taken into account. A time interval that is shorter than a CV or VC waveform and whose waveform can be substituted by a speech waveform uttered according to a different phoneme sequence is called an articulatory segment, and the speech waveform used in an articulatory segment is called an articulatory-segment waveform.

As is clear from the above example, articulatory segments with the same manner of articulation are represented by the same articulatory symbol. If the articulatory-segment waveform corresponding to each articulatory symbol is prepared in advance, and a character string given as input is converted into an articulatory symbol string, then synthesis of arbitrary speech (synthesis by rule) is possible with a comparatively small speech-waveform memory by editing and synthesizing, according to the symbol string, the articulatory-segment waveforms corresponding to the symbols. To convert the input character string into an articulatory symbol string, one can, for example, first convert the input character string into a phoneme string and then convert the phoneme string into an articulatory symbol string using a table (dictionary) of articulatory symbol strings prepared in advance for phoneme strings.

For example, when the character string 「貸車」 is input, it is first converted into the phoneme string /kasya/ and then into the articulatory symbol string /*-k(g)a-a(z)-s(i)-(h)ya/. Here, * denotes a vocal-tract closure interval (silence); k denotes an unvoiced plosive portion; (g)a and a(z) denote parts of the transition portions of /ga/ and /az/, respectively; s(i) denotes the unvoiced fricative portion of /si/; and (h)ya is the articulatory symbol denoting the voiced transition portion of /hya/.
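The two-stage conversion of this example can be sketched as a pair of dictionary lookups. This is a minimal sketch whose tables hold only the entries needed for the 「貸車」 example; a practical system would carry a full dictionary and context-dependent conversion rules:

```python
# Sketch of the two-stage conversion: character string -> phoneme
# string -> articulatory symbol string. The tables contain only what
# this single example needs and are illustrative assumptions.

WORD_TO_PHONEMES = {
    "貸車": "kasya",
}

# Phoneme string -> articulatory symbols, following the example
# /*-k(g)a-a(z)-s(i)-(h)ya/ from the description.
PHONEMES_TO_SYMBOLS = {
    "kasya": ["*", "k", "(g)a", "a(z)", "s(i)", "(h)ya"],
}

def to_articulatory_symbols(text):
    """Convert an input character string into its articulatory symbol string."""
    phonemes = WORD_TO_PHONEMES[text]
    return PHONEMES_TO_SYMBOLS[phonemes]

assert to_articulatory_symbols("貸車") == ["*", "k", "(g)a", "a(z)", "s(i)", "(h)ya"]
```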

This example will now be described in more detail with reference to FIGS. 1(b), (c), and (d). FIGS. 1(b) and (c) show examples of speech waveforms, including those used in this example. FIG. 1(d) shows the order in which the articulatory-segment waveforms are connected when this example (「貸車」) is edited and synthesized by the method of the present application. As shown in (d), the speech waveform for 「貸車」 is obtained by connecting the articulatory-segment waveforms of (b) and (c) in the order 101, 102, 108, 110, 112, 117.
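The connection order of FIG. 1(d) can be sketched as indexing a waveform store by segment number and joining the pieces in sequence. The sample values below are dummy placeholders standing in for real sampled speech:

```python
# Sketch: build the output by concatenating stored articulatory-segment
# waveforms in the order given by Fig. 1(d). The sample values are
# dummy placeholders, not real speech data.

WAVEFORM_STORE = {
    101: [0.0, 0.0],        # closure (silence)
    102: [0.3, -0.2],       # unvoiced burst of /k/
    108: [0.5, 0.4, -0.1],  # voiced transition into /a/ (shared with /ga/)
    110: [0.2, 0.1],
    112: [0.1, -0.3],
    117: [0.4, 0.0],
}

def synthesize(order):
    """Concatenate the stored segment waveforms in the given order."""
    out = []
    for seg_id in order:
        out.extend(WAVEFORM_STORE[seg_id])
    return out

wave = synthesize([101, 102, 108, 110, 112, 117])
assert len(wave) == 13  # total samples of all six segments
```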

As this example also makes clear, the conventional CV/VC waveform-editing synthesis method requires all of the waveforms in FIGS. 1(b) and (c) to be prepared, whereas in the present method the waveforms 103, 114, 115, and so on can be substituted by other articulatory-segment waveforms and therefore need not be prepared in advance, so a speech synthesizer can be realized with a storage circuit of comparatively small capacity.

In the above description, an articulatory-segment symbol written with parentheses, such as C(V) or (C)V, corresponds to the waveform obtained by removing the phoneme portion enclosed in parentheses from the speech waveform of CV.

The articulatory-segment waveforms to be prepared in advance can be obtained, for example, by using phonetic knowledge to excise the waveforms of the corresponding intervals from speech waveforms uttered by a human speaker.

As described above, the present invention uses unit speech waveforms that are shorter in time than the unit speech waveforms (CV, VC, and the like) of the CV/VC waveform-editing synthesis method. Not only is the waveform memory requirement small, but because the movements of the articulatory organs are accurately reflected, high-quality synthesized speech can be obtained.

Next, the present invention will be described in detail with reference to the drawings.

FIG. 2 is a block diagram showing an embodiment of the present invention. First, a character string is input via the character-string input terminal 201 to the character-to-phoneme-symbol conversion circuit 202. The character-to-phoneme-symbol conversion circuit 202 converts the character string into a phoneme string and outputs it via the phoneme-string data transmission path 203 to the phoneme-string-to-articulatory-symbol-string conversion circuit 204. The phoneme-string-to-articulatory-symbol-string conversion circuit 204 converts the phoneme string into the sequence of articulatory symbols assigned to each articulatory segment and outputs it via the articulatory-symbol-string transmission path 205 to the waveform editing and synthesizing circuit 206. The waveform editing and synthesizing circuit 206 generates, according to the articulatory symbol string, the address data of the articulatory-segment waveforms corresponding to the articulatory symbols and outputs the address data to the articulatory-segment waveform storage circuit 207. In accordance with the address data, the articulatory-segment waveform storage circuit 207 outputs the corresponding articulatory-segment waveforms via the waveform data transmission path 209 to the waveform editing and synthesizing circuit 206. The waveform editing and synthesizing circuit 206 edits and synthesizes the articulatory-segment waveforms and outputs the synthesized waveform via the synthesized-waveform output terminal 210. As a method of editing and synthesizing the speech waveform, for example, the junctions between the articulatory-segment waveforms may be interpolated pitch-synchronously; since this method is described in detail in reference (1), its description is omitted here.
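The pipeline of FIG. 2 maps naturally onto a chain of small functions, sketched below with hypothetical tables. The simple linear cross-fade at the junctions is only a stand-in for the pitch-synchronous interpolation of reference (1), and the waveform samples are dummy values:

```python
# Sketch of the Fig. 2 pipeline: circuit 202 (text -> phonemes),
# circuit 204 (phonemes -> articulatory symbols), circuits 206/207
# (symbol -> waveform lookup, editing, and synthesis). All tables are
# illustrative assumptions; only the first three symbols of the
# 「貸車」 example are carried here for brevity.

TEXT_TO_PHONEMES = {"貸車": "kasya"}                    # circuit 202
PHONEMES_TO_SYMBOLS = {"kasya": ["*", "k", "(g)a"]}     # circuit 204
SYMBOL_TO_WAVEFORM = {                                  # circuit 207
    "*": [0.0, 0.0, 0.0],
    "k": [0.4, -0.3, 0.1],
    "(g)a": [0.5, 0.2, -0.1],
}

def crossfade_join(a, b, overlap=1):
    """Join two waveforms, linearly blending `overlap` samples at the seam.
    A crude stand-in for the pitch-synchronous interpolation of ref. (1)."""
    head = a[:-overlap]
    mixed = [(x + y) / 2 for x, y in zip(a[-overlap:], b[:overlap])]
    return head + mixed + b[overlap:]

def synthesize(text):                                   # circuit 206
    symbols = PHONEMES_TO_SYMBOLS[TEXT_TO_PHONEMES[text]]
    out = SYMBOL_TO_WAVEFORM[symbols[0]]
    for sym in symbols[1:]:
        out = crossfade_join(out, SYMBOL_TO_WAVEFORM[sym])
    return out

wave = synthesize("貸車")
# Each join consumes one sample of overlap: 3 + 3 + 3 - 2 = 7 samples.
assert len(wave) == 7
```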

Although the above description assumed that articulatory-segment waveforms corresponding to the articulatory symbols are edited and synthesized, it is clear that the same memory-saving effect is obtained with methods that perform speech synthesis using so-called feature parameters, such as formant parameters, corresponding to the articulatory symbols.

[Brief Description of the Drawings]

FIGS. 1(a) and (b) are explanatory diagrams of the articulatory parameters: FIG. 1(a) is a conceptual diagram of the articulatory organs, and FIG. 1(b) shows an example of articulatory segments. FIG. 2 is a block diagram showing an embodiment of the present invention. In the figures, 201 is a character-string input terminal; 202, a character-to-phoneme-symbol conversion circuit; 203, a phoneme-string data transmission path; 204, a phoneme-string-to-articulatory-symbol-string conversion circuit; 205, an articulatory-symbol-string transmission path; 206, a waveform editing and synthesizing circuit; 207, an articulatory-segment waveform storage circuit; 208, an address data transmission path; 209, a waveform data transmission path; and 210, a synthesized-waveform output terminal.

Claims (1)

[Claims]

[Claim 1] A speech synthesizer of the type that converts an input character string into speech, comprising: means for converting the input character string into a string of articulatory symbols, each corresponding to a time interval shorter than the duration of a phoneme, such that an articulatory symbol can be substituted by the speech waveform of a portion with a similar manner of articulation taken from speech waveforms uttered according to a different phoneme sequence; a storage circuit that stores the speech waveforms corresponding to the articulatory symbols; and means for generating a speech waveform corresponding to the input character string by retrieving the corresponding speech waveforms from the storage circuit according to the converted articulatory symbol string and editing and synthesizing them.
JP58205227A 1983-11-01 1983-11-01 Speech synthesizer Expired - Lifetime JPH0642158B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP58205227A JPH0642158B2 (en) 1983-11-01 1983-11-01 Speech synthesizer
EP84113186A EP0144731B1 (en) 1983-11-01 1984-11-02 Speech synthesizer
DE8484113186T DE3473956D1 (en) 1983-11-01 1984-11-02 Speech synthesizer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP58205227A JPH0642158B2 (en) 1983-11-01 1983-11-01 Speech synthesizer

Publications (2)

Publication Number Publication Date
JPS6097396A JPS6097396A (en) 1985-05-31
JPH0642158B2 true JPH0642158B2 (en) 1994-06-01

Family

ID=16503507

Family Applications (1)

Application Number Title Priority Date Filing Date
JP58205227A Expired - Lifetime JPH0642158B2 (en) 1983-11-01 1983-11-01 Speech synthesizer

Country Status (3)

Country Link
EP (1) EP0144731B1 (en)
JP (1) JPH0642158B2 (en)
DE (1) DE3473956D1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02141054A (en) * 1988-11-21 1990-05-30 Nec Home Electron Ltd Terminal equipment for personal computer communication
JP3070127B2 (en) * 1991-05-07 2000-07-24 Meidensha Corp Accent component control method of speech synthesizer
DE19610019C2 (en) 1996-03-14 1999-10-28 Data Software Gmbh G Digital speech synthesis process
JP4265501B2 2004-07-15 2009-05-20 Yamaha Corp Speech synthesis apparatus and program
JP5782751B2 (en) * 2011-03-07 2015-09-24 Yamaha Corp Speech synthesizer

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE2531006A1 (en) * 1975-07-11 1977-01-27 Deutsche Bundespost Speech synthesis system from diphthongs and phonemes - uses time limit for stored diphthongs and their double application
JPS5331561A (en) * 1976-09-04 1978-03-24 Mitsukawa Shiyouichi Method of manufacturing S-shaped springs
DE3105518A1 (en) * 1981-02-11 1982-08-19 Heinrich-Hertz-Institut für Nachrichtentechnik Berlin GmbH, 1000 Berlin METHOD FOR SYNTHESIS OF LANGUAGE WITH UNLIMITED VOCUS, AND CIRCUIT ARRANGEMENT FOR IMPLEMENTING THE METHOD
JPS6017120B2 (en) * 1981-05-29 1985-05-01 Matsushita Electric Industrial Co Ltd Phoneme piece-based speech synthesis method
JPS5868099A (en) * 1981-10-19 1983-04-22 Fujitsu Ltd Voice synthesizer
US4601052A (en) * 1981-12-17 1986-07-15 Matsushita Electric Industrial Co., Ltd. Voice analysis composing method
NL8200726A (en) * 1982-02-24 1983-09-16 Philips Nv DEVICE FOR GENERATING THE AUDITIVE INFORMATION FROM A COLLECTION OF CHARACTERS.
JPS5972494A (en) * 1982-10-19 1984-04-24 Toshiba Corp Rule synthesization system

Also Published As

Publication number Publication date
DE3473956D1 (en) 1988-10-13
JPS6097396A (en) 1985-05-31
EP0144731A2 (en) 1985-06-19
EP0144731A3 (en) 1985-07-03
EP0144731B1 (en) 1988-09-07

Similar Documents

Publication Publication Date Title
JP3408477B2 (en) Semisyllable-coupled formant-based speech synthesizer with independent crossfading in filter parameters and source domain
US5400434A (en) Voice source for synthetic speech system
JPS62160495A (en) Voice synthesization system
JPH031200A (en) Regulation type voice synthesizing device
JP3732793B2 (en) Speech synthesis method, speech synthesis apparatus, and recording medium
JPH0642158B2 (en) Speech synthesizer
JP4510631B2 (en) Speech synthesis using concatenation of speech waveforms.
JP5175422B2 (en) Method for controlling time width in speech synthesis
JP2577372B2 (en) Speech synthesis apparatus and method
JPS5972494A (en) Rule synthesization system
JP3089940B2 (en) Speech synthesizer
JP2586040B2 (en) Voice editing and synthesis device
JPS5880699A (en) Voice synthesizing system
JP2573586B2 (en) Rule-based speech synthesizer
JP2573585B2 (en) Speech spectrum pattern generator
JPH0572599B2 (en)
JPS6295595A (en) Voice response system
JPS60113299A (en) Voice synthesizer
JPH09325788A (en) Device and method for voice synthesis
JP2001166787A (en) Voice synthesizer and natural language processing method
JPH0553595A (en) Speech synthesizing device
JPS58168096A (en) Multi-language voice synthesizer
JPS63285597A (en) Phoneme connection type parameter rule synthesization system
JPH11224096A (en) Method and device for speech synthesis
JPH06138894A (en) Device and method for voice synthesis