JPH032799A - Pitch pattern coupling system for voice synthesizer - Google Patents

Pitch pattern coupling system for voice synthesizer

Info

Publication number
JPH032799A
JPH032799A, JP1136365A, JP13636589A
Authority
JP
Japan
Prior art keywords
pitch
mora
phoneme
interpolation processing
target value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP1136365A
Other languages
Japanese (ja)
Inventor
Kazuya Hasegawa
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meidensha Corp
Meidensha Electric Manufacturing Co Ltd
Original Assignee
Meidensha Corp
Meidensha Electric Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Meidensha Corp, Meidensha Electric Manufacturing Co Ltd filed Critical Meidensha Corp
Priority to JP1136365A priority Critical patent/JPH032799A/en
Publication of JPH032799A publication Critical patent/JPH032799A/en
Pending legal-status Critical Current

Links

Abstract

PURPOSE: To improve the naturalness of synthesized speech by performing interpolation, from the center point or centroid of the steady part of the current mora (where the pitch target value is interpolated with the mora's own pitch time constant) to the next mora, based on the pitch target value and pitch time constant of the next mora. CONSTITUTION: After the intonation and accent type of an input sentence are determined, an interpolation processing unit 5 reads out phoneme parameters in the phoneme order of the sentence. For interpolation of the current pitch pattern, the pitch data P1 to P3 and PA and the pitch time constants DP1 to DP3 and DPA of the mora following the current phoneme are taken in. The center point of the steady part V2 of the mora is used as the target value of the current pitch data P1 to P3 and PA, and the pitch frequency is interpolated from the transition part V1 to that center point. A highly natural synthesized voice is thus obtained.

Description

DETAILED DESCRIPTION OF THE INVENTION

A. Field of Industrial Application
The present invention relates to a speech synthesis device based on synthesis by rule, and in particular to a method of joining pitch patterns.

B. Summary of the Invention
In a speech synthesis device that obtains pitch patterns by interpolation from the pitch target value and pitch time constant of each phoneme, the present invention sets the pitch target value at the center or centroid of the steady part of each mora. Interpolation on this basis joins the pitch patterns in a way that improves the naturalness of the synthesized speech.

C. Prior Art
A speech synthesis device based on synthesis by rule divides an input character string into words and phrases by syntactic analysis, determines the intonation and accent of each, decomposes the words and phrases into syllables and further into phonemes, obtains the source wave and articulation-filter parameters for each syllable or phoneme, and produces synthesized speech as the response of the articulation filter to the source wave.

A speech synthesis device of this kind has, for example, the configuration shown in Fig. 3. A Japanese language processing unit 1 segments the input Japanese text into phrases and converts it to its phonetic reading by reference to a dictionary.

A sentence processing unit 2 assigns intonation to the sentence, and an accent processing unit 3 assigns accents to the syllables that make up the sentence and its phrases.

For example, as shown in Fig. 4, for the input sentence "The cherry blossoms at the school bloomed beautifully," the sentence intonation falls from its onset with, e.g., a logarithmic characteristic according to the number of syllables, while the phrase accent type is determined by the words and phrases. The intonation and accent type are combined, and breath-group intonation, rounding by filtering, pauses, and the like are added to obtain the composite intonation.
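The composition described above can be sketched as a declining baseline with per-syllable accent values added on top. The logarithmic decay shape and the numeric values below are illustrative assumptions, not taken from the patent, and breath-group intonation, smoothing, and pauses are omitted:

```python
import math

def composite_intonation(n_syllables, f_top=140.0, decay=0.15, accents=None):
    """Composite intonation: a logarithmically declining baseline plus
    a per-syllable accent value (in Hz) added on top of it."""
    accents = accents or [0.0] * n_syllables
    contour = []
    for i in range(n_syllables):
        baseline = f_top - decay * f_top * math.log(1 + i)
        contour.append(baseline + accents[i])
    return contour

# Accent raising the second and third syllables of a six-syllable stretch.
c = composite_intonation(6, accents=[0.0, 20.0, 20.0, 0.0, 0.0, 0.0])
```

This is only a sketch of the shape of Fig. 4; the actual device derives the accent type from the word and phrase structure.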

A phoneme processing unit 4 decomposes each input syllable (e.g. "SA") into phonemes, the units of vowels and consonants, by referring to data in a syllable parameter storage unit 4-1 that defines the correspondence between syllables and phonemes; the syllable "SA", for example, is decomposed into the phonemes "S" and "A".

An interpolation processing unit 5 extracts, for each phoneme in the phoneme string from the phoneme processing unit 4, the phoneme parameters from a phoneme parameter storage unit 5-1 and the source-wave pattern from a source parameter storage unit 5-2, and obtains the source waveform and articulation data from these by interpolation. As shown in Fig. 5, for a consonant the phoneme parameters divide each phoneme into three voicing time bands O1 to O3 and give, for each band, a duration t1 to t3, a pitch P1 to P3 (the repetition frequency of the source wave), a source-wave energy E1 to E3, a source-wave pattern G1 to G3, and pitch and energy time constants DP1 to DP3 and DE1 to DE3, yielding discrete source-wave data. A vowel is treated as a single band OA with a pitch PA, pitch time constant DPA, energy EA, energy time constant DEA, and source-wave pattern GA. The source-wave patterns G1 to G3 and GA correspond, for example, to the patterns shown in Fig. 6; for each pattern several tens of sample-data sequences are prepared in the source parameter storage unit 5-2, from which the source-wave samples are read out. The energies E1 to E3 and EA determine the level of the source wave, i.e. the loudness, and the pitches P1 to P3 and PA determine the frequency, i.e. the pitch of the sound. Each of these values is a single value per time band O1 to O3 or OA; across the bands and across phoneme boundaries the time constants DP1 to DP3, DPA, DE1 to DE3, and DEA are applied, and the interpolation processing unit 5 produces a continuous source-wave data sequence by interpolation.
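The parameter layout of Fig. 5 can be represented as a small data structure. The field and class names here are ours, introduced only for illustration; the patent specifies the parameters, not any particular encoding:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TimeBand:
    """One voicing time band (O1..O3 for a consonant, OA for a vowel)."""
    duration: float   # t: continuation time of the band
    pitch: float      # P: repetition frequency of the source wave
    pitch_tc: float   # DP: pitch time constant
    energy: float     # E: source-wave level
    energy_tc: float  # DE: energy time constant
    pattern: int      # G: index of the source-wave pattern (Fig. 6)

@dataclass
class PhonemeParams:
    """Phoneme parameters: three bands for a consonant, one for a vowel."""
    symbol: str
    bands: List[TimeBand] = field(default_factory=list)

    @property
    def is_vowel(self) -> bool:
        return len(self.bands) == 1

# The vowel "A" as a single band (values are illustrative).
a = PhonemeParams("A", [TimeBand(80.0, 120.0, 4.0, 1.0, 4.0, 0)])
```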

For example, the consonant pitches P1 to P3 are given as target values for the respective bands O1 to O3, as shown in Fig. 7, and the pitch P within each band is interpolated n times, changing as shown by the solid or broken lines according to the magnitude of the time constants DP1 to DP3.

This interpolation follows a recurrence relation (the equation itself is illegible in the reproduction) in which P_nk is the k-th pitch control value, DP is the pitch time constant, P_n is the current pitch target value, and P_(n-1) is the previous pitch target value; the calculation is iterated to obtain the successive pitch values P_nk, P_n(k+1), and so on.
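The recurrence itself did not survive reproduction, but a first-order lag toward the target value (the usual form for a time-constant-driven interpolation like the one described) can be sketched as follows. The step form `p += (target - p) / time_constant` is an assumption, not the patent's exact formula:

```python
def interpolate_pitch(p_start, p_target, time_constant, n_steps):
    """First-order interpolation of pitch toward a target value.

    Each step moves the current pitch a fraction 1/time_constant of the
    remaining distance to the target, so a larger time constant gives a
    slower, smoother approach (cf. the solid vs. broken curves of Fig. 7).
    """
    p = p_start
    values = []
    for _ in range(n_steps):
        p += (p_target - p) / time_constant  # assumed step form
        values.append(p)
    return values

# A small time constant converges quickly; a large one lags behind.
fast = interpolate_pitch(100.0, 120.0, time_constant=2.0, n_steps=8)
slow = interpolate_pitch(100.0, 120.0, time_constant=8.0, n_steps=8)
```

Whatever its exact form, the patent's recurrence shares this behavior: a single target per band, with the time constant controlling how steeply the pitch approaches it.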

The phoneme parameter storage unit 5-1 also stores, as shown in Fig. 5, the cross-sectional-area parameters of an acoustic-tube model and their time constants DA1 to DA3 and DAA. These parameters define the vocal-tract articulation equivalent filter: the human vocal tract (about 17 cm in a male) is modeled as seventeen concatenated acoustic tubes of 1 cm length, and for each time band the cross-sectional areas of the tubes are given as A1-1 to A17-1, A1-2 to A17-2, and A1-3 to A17-3. These parameters, together with the acoustic-tube time constants, are supplied to an articulation calculation unit 6, which performs the articulation computation on the source wave.
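The patent does not spell out the articulation computation, but a concatenated-tube model of this kind is commonly driven via reflection coefficients at the junctions between adjacent sections, as in the classic Kelly-Lochbaum vocal-tract model. The following is an assumed illustration; the area values are invented, not taken from Fig. 5:

```python
def reflection_coefficients(areas):
    """Reflection coefficients at the junctions of concatenated tubes.

    For adjacent sections with cross-sectional areas A[i] and A[i+1],
    the junction reflection coefficient is
    (A[i] - A[i+1]) / (A[i] + A[i+1]).
    """
    return [(a - b) / (a + b) for a, b in zip(areas, areas[1:])]

# Seventeen 1 cm sections approximating a ~17 cm vocal tract
# (illustrative area values in cm^2).
areas = [2.6, 2.6, 2.6, 1.6, 1.3, 1.0, 1.0, 1.3, 1.6, 2.6,
         4.0, 6.5, 8.0, 7.0, 5.0, 3.3, 1.6]
k = reflection_coefficients(areas)  # 16 junction coefficients
```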

The articulation calculation unit 6 computes the radiated speech waveform produced when the source wave is applied to the acoustic tubes having these cross-sectional-area parameters; the waveform data are converted to an analog signal by a D/A converter 7, and synthesized speech is obtained from a speech output device 8.

The pitch patterns obtained for the individual phonemes or syllables are then joined according to the input sentence into the pitch pattern of a phrase or word. In this joining, as illustrated by the vowel pitch patterns of Fig. 8, the pitch pattern of the steady part V2 of each phoneme is obtained by interpolation from the pitch target values of the vowels "a", "o", and "i" and their respective time constants taken from the phoneme parameters. The trailing transition part V3 simply carries over the boundary value (broken line) with the steady part V2, while the leading transition part V1 is obtained by interpolating from the preceding phoneme's transition part V3 toward the phoneme's own pitch target value. The marks "<" in the figure indicate the points at which the pitch change toward the next mora begins.

D. Problems to Be Solved by the Invention
In the conventional pitch pattern joining method, the point at which the pitch change toward the next mora begins coincides with the mora boundary, so the pitch change at the boundary is a large, nonlinear one. The change becomes still larger when the accent differs between the moras on either side of the boundary. The smoothness of the synthesized speech around the mora boundaries (transition parts V1 and V3) is therefore impaired, which in turn degrades its naturalness.

The object of the present invention is to provide a joining method for pitch patterns that improves the naturalness of the synthesized speech.

E. Means for Solving the Problems and Their Action
To achieve the above object, in a speech synthesis device that obtains a pitch target value and a pitch time constant from each phoneme parameter of an input sentence, interpolates the pitch frequency of each mora, and thereby determines each mora's pitch pattern in turn, the present invention places the pitch target value at the center point or centroid of the steady part of the mora itself and interpolates up to that point with the mora's own pitch time constant; from the center point or centroid to the next mora, interpolation is performed with the next mora's pitch target value and pitch time constant. At each mora boundary the pitch target value and pitch time constant are thus the same on both sides, so the pitch frequency changes smoothly across the boundary.

F. Embodiment
Fig. 1 is a flowchart of the interpolation processing in one embodiment of the present invention. In step S1, after the intonation and accent type of the input sentence have been determined, the interpolation processing unit 5 reads out the phoneme parameters (Fig. 5) in the phoneme order of the sentence. In step S2, for interpolation of the current mora's pitch pattern, the pitch data P1 to P3 and PA and the pitch time constants DP1 to DP3 and DPA of the phoneme forming the next mora are taken in alongside the current phoneme's parameters. In step S3, the current mora's pitch pattern is interpolated: the center point of the steady part V2 of the mora is taken as the target of the mora's own pitch values P1 to P3 and PA, and the pitch frequency is interpolated from the transition part V1 up to that center point.

In step S4, the pitch pattern from the mora's own steady-part center point to the transition part V3 is interpolated using the next mora's pitch as the target value and the next mora's pitch time constant. In step S5 the source wave is established, as in the conventional method, from the interpolated pitch pattern and the source-wave pattern.
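Steps S3 and S4 can be sketched as follows: the contour approaches the current mora's own target up to the steady-part center, then switches to the next mora's target and time constant. The function and parameter names are ours, and the first-order step form is an assumption about the recurrence, not the patent's exact formula:

```python
def join_moras(p_start, p_self, dp_self, p_next, dp_next,
               n_to_center, n_after):
    """Pitch contour of one mora joined to the next (cf. steps S3/S4).

    Up to the steady-part center the contour approaches the mora's own
    target p_self with its own time constant dp_self; from the center on
    it approaches the next mora's target p_next with the next mora's
    time constant dp_next, so the mora boundary itself introduces no
    change of target or time constant and hence no discontinuity.
    """
    contour = []
    p = p_start
    for _ in range(n_to_center):   # transition V1 -> center of V2
        p += (p_self - p) / dp_self
        contour.append(p)
    for _ in range(n_after):       # center of V2 -> through V3 into next mora
        p += (p_next - p) / dp_next
        contour.append(p)
    return contour

# A vowel at 110 Hz joined toward a following vowel at 130 Hz.
c = join_moras(100.0, 110.0, 3.0, 130.0, 4.0, n_to_center=10, n_after=10)
```

Because the second loop starts at the steady-part center rather than at the mora boundary, both sides of the boundary are generated with the same target and time constant, which is precisely the smoothing the invention claims.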

In the pitch pattern obtained by this interpolation, the pitch target value lies at the center of the steady part V2, as illustrated in Fig. 2, and the pitch in the transition parts V3 and V1 follows the frequency change given by the next mora's pitch target value and pitch time constant. For the phoneme "o", for example, the pattern from the transition part V1 to the center of the steady part V2 is interpolated with the phoneme's own pitch time constant toward its own pitch as the target; from the center of V2 through the transition part V3 the interpolation takes the pitch of the next mora's phoneme "i" as the target together with that mora's time constant, and the transition part V1 of the next mora "i" is produced by the same interpolation.

Accordingly, at the transition parts V1 and V3 where moras join, the pitch interpolation is performed with the same target value and time constant on both sides, so the moras are joined by a pitch pattern whose pitch frequency varies smoothly.

When a consonant lies between the phonemes, the same interpolation is applied to the vowels, and for the consonant the pitch pattern interpolated from its phoneme parameters is inserted following the vowel's pitch pattern.

In the embodiment the pitch target value is placed at the temporal center of the steady part V2, but it may instead be placed at the centroid of the mora (the point that halves its area).

The energy of the phoneme parameters can likewise be interpolated with an energy target value set at the center point or centroid of the steady part V2, giving a smooth energy change at the mora boundaries as well.

G. Effects of the Invention
As described above, according to the present invention the pitch target value is placed at the center or centroid of the steady part and interpolation up to that point uses the mora's own pitch time constant, while from the center or centroid onward interpolation uses the next mora's pitch target value and time constant. The boundary between moras therefore becomes a pitch pattern change governed by the same target value and time constant on both sides, and the smooth change at the boundary yields synthesized speech of high naturalness.

[Brief Description of the Drawings]

Fig. 1 is a flowchart of one embodiment of the present invention; Fig. 2 shows the pitch pattern of the embodiment; Fig. 3 is a configuration diagram of the speech synthesis device; Fig. 4 is an intonation waveform diagram; Fig. 5 is a data diagram of the phoneme parameters; Fig. 6 shows the waveforms of the source-wave patterns; Fig. 7 is a pitch characteristic diagram of the interpolation processing; and Fig. 8 shows the conventional pitch pattern.

1: Japanese language processing unit; 2: sentence processing unit; 3: accent processing unit; 4: phoneme processing unit; 4-1: syllable parameter storage unit; 5: interpolation processing unit; 5-1: phoneme parameter storage unit; 5-2: source parameter storage unit; 6: articulation calculation unit; 7: D/A converter; 8: speech output device.

(Amendment cover sheet residue: dated January 12; Japanese Patent Application No. 1-136365 (1989); title of invention: "Pitch pattern coupling method for speech synthesizer.")

Claims (1)

[Claims]
(1) In a speech synthesis device that obtains a pitch target value and a pitch time constant from each phoneme parameter of an input sentence, interpolates the pitch frequency of each mora, and thereby determines each mora's pitch pattern in turn, a pitch pattern joining method for a speech synthesizer characterized in that the pitch target value is placed at the center point or centroid of the steady part of the mora itself and interpolation up to that point is performed with the mora's own pitch time constant, while interpolation from the steady-part center point or centroid to the next mora is performed with the next mora's pitch target value and pitch time constant.
JP1136365A 1989-05-30 1989-05-30 Pitch pattern coupling system for voice synthesizer Pending JPH032799A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1136365A JPH032799A (en) 1989-05-30 1989-05-30 Pitch pattern coupling system for voice synthesizer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1136365A JPH032799A (en) 1989-05-30 1989-05-30 Pitch pattern coupling system for voice synthesizer

Publications (1)

Publication Number Publication Date
JPH032799A true JPH032799A (en) 1991-01-09

Family

ID=15173465

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1136365A Pending JPH032799A (en) 1989-05-30 1989-05-30 Pitch pattern coupling system for voice synthesizer

Country Status (1)

Country Link
JP (1) JPH032799A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05232993A (en) * 1991-11-19 1993-09-10 Philips Gloeilampenfab:Nv Device for generating announce information


Similar Documents

Publication Publication Date Title
Flanagan et al. Synthetic voices for computers
JPH031200A (en) Regulation type voice synthesizing device
WO1997034291A1 (en) Microsegment-based speech-synthesis process
JPH01284898A (en) Voice synthesizing device
JPH03273280A (en) Voice synthesizing system for vocal exercise
JPH032799A (en) Pitch pattern coupling system for voice synthesizer
JP3742206B2 (en) Speech synthesis method and apparatus
JPH0580791A (en) Device and method for speech rule synthesis
Iyanda et al. Development of a Yorúbà Textto-Speech System Using Festival
Urakova THE COMBINATION OF WORDS IS THE PHONETIC PHENOMENA
de Jesus et al. Speech coding and synthesis using parametric curves.
JPH09292897A (en) Voice synthesizing device
Flach Interface design for speech synthesis systems
JPH0667685A (en) Speech synthesizing device
JPH032798A (en) Intonation control system of voice synthesizer
O'Shaughnessy Recent progress in automatic text-to-speech synthesis
JPH032800A (en) Intonation control system for voice synthesizer
JPH09325788A (en) Device and method for voice synthesis
JPH032797A (en) Intonation control system for voice synthesizer
JPH01112297A (en) Voice synthesizer
JPH032796A (en) Intonation control system for voice synthesizer
Shi A speech synthesis-by-rule system for Modern Standard Chinese
JPH1078795A (en) Speech synthesizing device
Rudzicz Speech Synthesis
JPH07140999A (en) Device and method for voice synthesis