EP0107945B1 - Dispositif pour la synthèse de la parole - Google Patents

Dispositif pour la synthèse de la parole

Info

Publication number
EP0107945B1
Authority
EP
European Patent Office
Prior art keywords
data
vowel
parameter data
consonant
speech
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
EP83306228A
Other languages
German (de)
English (en)
Other versions
EP0107945A1 (fr)
Inventor
Tsuneo Nitta
Norimasa Nomura
Kazuo Sumita
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp filed Critical Toshiba Corp
Publication of EP0107945A1
Application granted
Publication of EP0107945B1
Expired

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems
    • G10L13/08Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination

Definitions

  • This invention relates to a speech synthesizing apparatus for synthesizing speech in accordance with input character strings.
  • Various speech synthesizing apparatuses for synthesizing speech on the basis of sentence data applied as character strings have become known.
  • In such apparatuses, various speech segments of predetermined units are registered in advance, in the form of acoustic parameter data, in a speech segment file, and the corresponding acoustic parameter data is selectively read out from this speech segment file in accordance with the input phoneme data string.
  • The speech data is then synthesized on the basis of the acoustic parameter data thus read out, in accordance with a predetermined synthesizing rule.
  • Since the speech is synthesized in accordance with a predetermined synthesizing rule, a desired sentence can be generated at a desired speaking speed.
  • Such an apparatus for synthesizing speech by rule is mainly divided, depending on the format of the speech segments registered in the speech segment file, into a V-C-V synthesizing apparatus, which uses a chain consisting of vowel, consonant and vowel as a speech segment of one unit, and a C-V synthesizing apparatus, which uses a monosyllable consisting of a consonant and a vowel as a speech segment of one unit (a toy illustration of the two segmentation schemes is sketched after this list).
  • Reference characters V and C used herein represent a vowel segment and a consonant segment, respectively.
  • Fig. 1 is a schematic block diagram of a conventional speech synthesizing apparatus.
  • This speech synthesizing apparatus includes a phoneme converting circuit 2 for converting an input character code string into a phoneme data string including accent information in accordance with a predetermined phoneme conversion rule and accent rule, a speech segment file 4 in which a plurality of speech segments in the form of monosyllables are stored, an interpolating circuit 6 which sequentially reads out the speech characteristic parameter data of the corresponding speech segments from the speech segment file 4 in accordance with the phoneme data string from the phoneme converting circuit 2 and then interpolates these speech characteristic parameter data, and a speech synthesizer circuit 8 for generating speech data by filter-processing the parameter data from the interpolating circuit 6.
  • In an apparatus of this kind for synthesizing speech data by rules, the phonemes must of course be converted with high accuracy to obtain more natural, high-quality speech, but it is also necessary to obtain speech characteristic parameters which represent, with high fidelity, the characteristics of the speech generated by a human being. For example, when speech is generated continuously, a certain monosyllable in this speech may be coarticulated by the monosyllables before and after it.
  • When a C1-V1 monosyllable is not coarticulated in this way, the acoustic energy pattern (speech characteristic parameter) of the speech segment of this monosyllable exhibits the inherent characteristics of the consonant C1 and vowel V1 with high fidelity, as schematically shown in Fig. 2.
  • However, the acoustic energy pattern (speech characteristic parameter) of the speech segment of the C1-V1 monosyllable will be changed as shown in Figs. when this monosyllable is coarticulated by the subsequent C2-V2 monosyllable and becomes a C11-V11 monosyllable, or when it is coarticulated by the subsequent C3-V3 monosyllable and becomes a C12-V12 monosyllable. Therefore, in order to generate speech which is more natural, of higher quality and as similar as possible to speech actually produced by a human being, the coarticulation between successive speech segments must be taken into account. With a conventional speech synthesizing apparatus, however, only unnatural speech is obtained, because speech is generated by simply coupling the phonemes regardless of the influence of coarticulation.
  • EP-A-58130 discloses discrete sound elements corresponding to consonant portions, steady-state vowel portions and transition elements.
  • These transition elements are composed of a combination of a consonant portion and a coarticulated vowel, and it is thus necessary to prepare a large number of such transition elements in order to synthesize natural speech.
  • The present invention provides a speech synthesizing apparatus comprising a data generation circuit for generating phoneme string data; memory means in which consonant and vowel characteristic parameter data representative of consonant and vowel segments are stored and which has a consonant segment file, in which a plurality of consonant characteristic parameter data representative of a plurality of consonant segments, each of which has a consonant portion and a transient segment to a vowel segment, are stored, and a vowel segment file, in which a plurality of vowel characteristic parameter data representative of a plurality of steady-state vowel segments are stored; control means for allowing the corresponding consonant and vowel characteristic parameter data to be generated from said memory means in accordance with said phoneme string data; and synthesizing means for synthesizing a speech signal on the basis of said consonant and vowel characteristic parameter data from said memory means; and including a parameter data series generation circuit for generating a series of consonant and vowel characteristic parameter data on the basis of the consonant and vowel characteristic parameter data from said consonant and vowel segment files, and a synthesis circuit for synthesizing the speech signal on the basis of the parameter data series from said parameter data series generation circuit; characterised in that said vowel segment file further stores a plurality of vowel characteristic parameter data representative of a plurality of coarticulated vowel segments, each of said steady-state and coarticulated vowel segments being formed of parameter data of a single block, said control means generates time length data indicative of the vowel duration in accordance with the phoneme string data from said data generation circuit, and said parameter data series generation circuit includes a repetition circuit which fetches the vowel characteristic parameter data from said vowel segment file a number of times corresponding to said time length data (a schematic data-structure sketch of the two segment files is given after this list).
  • Since each consonant characteristic parameter data stored in the consonant segment file represents a consonant segment including a consonant portion and a transient segment to the vowel segment, it is possible to easily obtain interpolated characteristic parameter data between this consonant characteristic parameter data and the succeeding vowel characteristic parameter data read out from the vowel segment file, thereby making it possible to synthesize speech clearly and naturally even for a coarticulated monosyllable.
  • Consonant segments, each including a consonant portion and a transient segment which changes from this consonant portion to a vowel segment, are registered as consonant segments C in the consonant segment file.
  • Vowel segments, including steady-state and coarticulated vowel segments, are registered as vowel segments V in the vowel segment file.
  • Figs. 5A and 5B show the waveforms of the second [a]-sound of the speech [hakata] and of the [a]-sound of the speech [kiai], respectively.
  • Fig. 6A shows the power spectrum in frame A of the [a]-sound shown in Fig. 5A.
  • Fig. 6B shows the power spectrum in frame B of the [a]-sound shown in Fig. 5B.
  • The power spectrum of the [a]-sound of [kiai], which is strongly affected by coarticulation, differs from the power spectrum of the second [a]-sound of [hakata], which is affected by coarticulation only slightly.
  • For this reason, the speech characteristic parameters representing the power spectra of the different kinds of [a]-sound are registered in the vowel segment file in accordance with the degree of the influence of coarticulation.
  • Figs. 7A to 7C show the speech signal, power spectra and power sequence of the monosyllable "go" as it was uttered.
  • Fig. 7D indicates the similarity between the power spectrum having the maximum power in the power sequence shown in Fig. 7C and the other power spectra.
  • A time point t1 is determined as the boundary point between consonant and vowel; in this example, the time point t1 is determined as the point at which the similarity becomes smaller than a predetermined value when the similarity between the power spectrum having the maximum power and the power spectra appearing sequentially in the direction of the consonant is calculated one after another.
  • The speech characteristic parameter data representing the power spectra generated during the period from the start of the consonant up to the time point t1, in this example the power spectra of three frames, is registered as consonant segment data in the consonant segment file.
  • The speech characteristic parameter data representing the power spectrum of one frame generated a predetermined number of frames after the time point t1, preferably the power spectrum having the maximum power, is registered as vowel segment data in the vowel segment file (a sketch of this boundary-detection and registration procedure is given after this list).
  • The formats of the speech characteristic parameters to be registered in the consonant and vowel segment files are determined in accordance with the speech synthesizing apparatus to be used.
  • In one case, the speech characteristic parameter is determined by the formant frequency, its bandwidth and voiced/unvoiced information.
  • In another case, the speech characteristic parameter is determined by the linear prediction coefficients and voiced/unvoiced information.
  • Fig. 8 shows a block diagram of a speech synthesizing apparatus for synthesizing speech by rule as an embodiment of the present invention.
  • This speech synthesizing apparatus includes a consonant segment file 10, a vowel segment file 12, a phoneme converting circuit 14, and a control circuit 16 for generating output data such as consonant segment address data, vowel segment address data, pitch data, etc., in response to the output data from the phoneme converting circuit 14.
  • A plurality of speech characteristic parameter data, respectively representing a plurality of consonant segments each of which has a consonant portion and a transient segment, are stored in the consonant segment file 10.
  • A plurality of speech characteristic parameter data, respectively representing a plurality of steady-state and coarticulated vowel segments, are stored in the vowel segment file 12.
  • The phoneme converting circuit 14 reads out the corresponding phoneme string data and accent data from a phoneme dictionary and an accent dictionary (not shown) on the basis of the character code string corresponding to a word, clause or sentence, and then supplies them to the control circuit 16.
  • A phoneme converting circuit of this kind is described, for example, in "Letter-to-Sound Rules for Automatic Translation of English Text to Phonetics" by Honey S. Elovitz et al. of the Naval Research Laboratory (IEEE Trans. ASSP-24, No. 6, Dec. 1976, p. 446).
  • The control circuit 16 supplies the consonant segment address data and vowel segment address data to the consonant segment file 10 and the vowel segment file 12, respectively, in accordance with the phoneme string data from the phoneme converting circuit 14. At the same time, the control circuit 16 writes the time data corresponding to the time duration of the vowel to be generated and the accent data from the phoneme converting circuit 14 into a random access memory (RAM) 16A.
  • The segment address data are determined in accordance with not only the phoneme data indicative of the monosyllable concerned, but also, for example, the phoneme data representing the succeeding monosyllable from the phoneme converting circuit 14.
  • The speech characteristic parameter data from the consonant segment file 10 is supplied to a first input port of an interpolation circuit 18, while the speech characteristic parameter data from the vowel segment file 12 is supplied to a second input port of the interpolation circuit 18 and to a repetition circuit 20.
  • The interpolation circuit 18 calculates a predetermined number of speech characteristic parameter data on the basis of the speech characteristic parameter data indicative of the consonant segment, constituted by the power spectra of three frames from the consonant segment file 10, and the speech characteristic parameter data indicative of the vowel segment, constituted by the power spectrum of one frame from the vowel segment file 12.
  • The calculated speech parameter data respectively represent a corresponding number of vowel segments, each having the spectrum of one frame, interpolated between the input consonant and vowel segments.
  • The repetition circuit 20 repeatedly fetches the speech characteristic parameter data from the vowel segment file 12, the number of fetches corresponding to the vowel time duration data stored in the RAM 16A (the assembly of this frame sequence is sketched after this list).
  • The speech characteristic parameter data from the interpolation circuit 18 and the repetition circuit 20 are supplied, in this order, through a switch 24 to a buffer register 22.
  • The speech characteristic parameter data from this buffer register 22 is supplied to an interpolation circuit 26.
  • On the basis of the speech characteristic parameter data of two successive frames from the buffer register 22, this interpolation circuit 26 interpolates a predetermined number of speech characteristic parameter data between these two frames.
  • The speech characteristic parameter data from this interpolation circuit 26 are sequentially supplied to a speech synthesizer 28.
  • This speech synthesizer 28 sequentially filter-processes the speech characteristic parameter data from the interpolation circuit 26 according to the pitch period data, generated by a pitch generation circuit 30 in accordance with the accent data in the RAM 16A, and thereby generates a speech signal.
  • In operation, the phoneme converting circuit 14 supplies the phoneme string data and accent data to the control circuit 16 in accordance with the input character code series.
  • The control circuit 16 writes the time length data representing the time duration of the vowel to be generated and the pitch data regarding the speech generating pitch into the RAM 16A on the basis of the phoneme data and the accent data from the phoneme converting circuit 14, respectively.
  • The control circuit 16 supplies the consonant segment address data and vowel segment address data corresponding to the phoneme string data from the phoneme converting circuit 14 to the consonant segment file 10 and the vowel segment file 12, respectively.
  • The control circuit 16 simultaneously generates a switch control signal to set the switch 24 into its first switching position.
  • For example, the control circuit 16 supplies the consonant and vowel segment address data corresponding to the consonant segment [g] and the vowel segment [o] to the consonant and vowel segment files 10 and 12, respectively, on the basis of the phoneme data corresponding to the two successive monosyllables of [goma] generated from the phoneme converting circuit 14. As a result, the first to third speech characteristic parameter data, corresponding to the power spectra of the three frames indicative of the consonant segment [g] in Fig. 9, are read out from the consonant segment file 10.
  • The fourth speech characteristic parameter data, corresponding to the power spectrum of one frame indicative of the vowel [o], is read out from the vowel segment file 12.
  • The interpolation circuit 18 calculates the fifth to eighth speech characteristic parameter data, indicative of the power spectra of a predetermined number of frames (four frames in this example) between the consonant segment [g] and the vowel segment [o] shown in Fig. 9, on the basis of the third speech characteristic parameter data read out from the consonant segment file 10 and the fourth speech characteristic parameter data read out from the vowel segment file 12.
  • This interpolation circuit 18 then supplies the 1st to 3rd speech characteristic parameter data from the consonant segment file 10, the 5th to 8th speech characteristic parameter data thus calculated, and the 4th speech characteristic parameter data from the vowel segment file 12, in this order, to the buffer register 22 through the switch 24 in response to the interpolation control signal from the control circuit 16.
  • The switch 24 is then set into its second switching position by the switching control signal from the control circuit 16.
  • The control circuit 16 then supplies a number of control pulses corresponding to the vowel time duration data stored in the RAM 16A to the repetition circuit 20 and, through an OR gate 32, to the buffer register 22.
  • The repetition circuit 20 fetches the speech characteristic parameter data from the vowel segment file 12 a corresponding number of times in response to the control pulses from the control circuit 16, and sequentially supplies them to the buffer register 22.
  • In this manner, speech characteristic parameter data representing power spectra similar to those shown in Fig. 7B are stored in the buffer register 22, as shown in Fig. 9.
  • In Fig. 9, the power spectra shown by the solid lines correspond to the speech characteristic parameter data read out from the consonant and vowel segment files 10 and 12, and the power spectra shown by the broken lines represent the power spectra calculated by the interpolation circuit 18 and the power spectra generated from the repetition circuit 20.
  • The control circuit 16 supplies the interpolation control signal through the OR gate 32 to the buffer register 22 and also supplies it to the interpolation circuit 26, thereby allowing the speech characteristic parameter data in the buffer register 22 to be sequentially sent to the interpolation circuit 26.
  • The interpolation circuit 26 then creates a predetermined number of interpolated speech characteristic parameter data on the basis of the speech characteristic parameter data of two successive frames sent from the buffer register 22 and sequentially supplies them to the speech synthesizer 28.
  • The control circuit 16 simultaneously reads out the accent data stored in the RAM 16A and supplies it to the pitch generation circuit 30, thereby allowing this pitch generation circuit 30 to generate the pitch period data.
  • The speech synthesizer 28 synthesizes the speech signal, including the pitch information, in accordance with the speech characteristic parameter data from the interpolation circuit 26 and the pitch period data from the pitch generation circuit 30, and then outputs the synthesized speech signal (a hedged sketch of one possible synthesizer of this kind is given after this list).
  • In the above embodiment, the repetition circuit 20 is constituted in such a manner that it fetches the vowel characteristic parameter data from the vowel segment file 12 in response to the control pulses from the control circuit 16.
  • However, this repetition circuit 20 may be modified such that a high-level signal is generated from the control circuit 16 over the period of time corresponding to the time length data, and the repetition circuit 20 fetches the vowel characteristic parameter data from the vowel segment file 12 at a fixed interval in response to this high-level signal.
  • Similarly, although in the above embodiment the vowel characteristic parameter data each representing a one-frame power spectrum are stored in the vowel segment file 12, vowel characteristic parameter data each representing a plurality of power spectra can also be stored in this vowel segment file.
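
The V-C-V and C-V segmentation schemes mentioned above can be illustrated with a short, purely illustrative sketch. The function names, the toy phoneme alphabet and the example word are assumptions made for this illustration only; the patent does not prescribe any particular representation.

```python
# Illustrative only: splitting a phoneme string into C-V or V-C-V units.
# The single-letter phoneme alphabet and the word "goma" are assumptions.

VOWELS = set("aiueo")

def cv_units(phonemes: str) -> list[str]:
    """Split into consonant+vowel monosyllables, e.g. 'goma' -> ['go', 'ma']."""
    units, current = [], ""
    for p in phonemes:
        current += p
        if p in VOWELS:              # a C-V unit ends on its vowel
            units.append(current)
            current = ""
    if current:
        units.append(current)        # trailing consonant, if any
    return units

def vcv_units(phonemes: str) -> list[str]:
    """Split into vowel-consonant-vowel chains, e.g. 'goma' -> ['go', 'oma']."""
    vowel_positions = [i for i, p in enumerate(phonemes) if p in VOWELS]
    units = [phonemes[:vowel_positions[0] + 1]] if vowel_positions else []
    for start, end in zip(vowel_positions, vowel_positions[1:]):
        units.append(phonemes[start:end + 1])
    return units

print(cv_units("goma"))   # ['go', 'ma']
print(vcv_units("goma"))  # ['go', 'oma']
```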
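
The consonant and vowel segment files described above can be pictured, under stated assumptions, as two small keyed stores: a consonant entry holds a few frames covering the consonant portion plus its transient toward the following vowel, while a vowel entry holds a single steady-state or coarticulated frame. The dictionary keys, field names, frame counts and placeholder values below are assumptions; the description only states that segment addresses depend on the monosyllable concerned and on the succeeding monosyllable.

```python
# A minimal sketch of the two segment files, assuming each "frame" is a short
# vector of speech characteristic parameter data (e.g. a power spectrum or
# linear prediction coefficients).  Keys and field names are assumptions.
from dataclasses import dataclass

Frame = list[float]  # one frame of characteristic parameter data

@dataclass
class ConsonantSegment:
    frames: list[Frame]   # consonant portion plus transient toward the vowel

@dataclass
class VowelSegment:
    frame: Frame          # one steady-state or coarticulated vowel frame

# Consonant file: assumed to be addressed by (consonant, following vowel),
# since the stored transient already points toward a particular vowel.
consonant_file: dict[tuple[str, str], ConsonantSegment] = {}

# Vowel file: assumed to be addressed by (vowel, context), so that differently
# coarticulated variants of the same vowel (the two [a]-sounds of [hakata]
# and [kiai]) can coexist.
vowel_file: dict[tuple[str, str], VowelSegment] = {}

# Example registration for the monosyllable [go]; the numbers are placeholders.
consonant_file[("g", "o")] = ConsonantSegment(frames=[[0.1] * 8, [0.2] * 8, [0.3] * 8])
vowel_file[("o", "steady")] = VowelSegment(frame=[0.5] * 8)

def look_up(consonant: str, vowel: str, context: str):
    """Return the parameter data the control circuit would address."""
    return consonant_file[(consonant, vowel)], vowel_file[(vowel, context)]
```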
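
The determination of the boundary point t1 and the registration of the segment data can be sketched as follows. The use of cosine similarity, the threshold value and the fixed frame offset before the vowel frame is picked are assumptions; the description only requires that the similarity between the maximum-power spectrum and the spectra appearing toward the consonant be calculated frame by frame, and that t1 be the point where it falls below a predetermined value.

```python
# A sketch, under assumptions, of locating the consonant/vowel boundary t1
# for an uttered monosyllable such as "go" and of splitting off the data to
# be registered in the consonant and vowel segment files.
import numpy as np

def find_boundary(spectra: np.ndarray, powers: np.ndarray, threshold: float = 0.9) -> int:
    """spectra: (n_frames, n_bins) power spectra; powers: (n_frames,) frame powers.
    Returns the index t1 of the boundary frame between consonant and vowel."""
    ref = int(np.argmax(powers))                # frame with maximum power (vowel nucleus)
    for t in range(ref, -1, -1):                # walk back toward the consonant onset
        a, b = spectra[t], spectra[ref]
        similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        if similarity < threshold:
            return t                            # similarity dropped below the threshold: t1
    return 0

def register_segments(spectra, powers, skip: int = 2):
    """Split one uttered monosyllable into consonant-file and vowel-file data."""
    spectra = np.asarray(spectra, dtype=float)
    t1 = find_boundary(spectra, np.asarray(powers, dtype=float))
    consonant_frames = spectra[: t1 + 1]        # consonant portion + transient, up to t1
    # one frame a fixed number of frames after t1 (ideally the maximum-power frame)
    vowel_frame = spectra[min(t1 + skip, len(spectra) - 1)]
    return consonant_frames, vowel_frame
```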
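
The frame sequence delivered to the buffer register 22 for a monosyllable such as [go] can be sketched as below. Linear interpolation and the helper names are assumptions; the description only states that a predetermined number of frames (four in the [go] example) is calculated between the last consonant frame and the vowel frame, that the data is forwarded in the order consonant frames, interpolated frames, vowel frame, and that the vowel frame is then repeated according to the vowel time duration data. The second function mirrors the role of the frame-to-frame interpolation circuit 26.

```python
# A sketch, under assumptions, of assembling the parameter frame sequence for
# one monosyllable (roughly the work of the interpolation circuit 18, the
# switch 24, the repetition circuit 20 and the interpolation circuit 26).
import numpy as np

def interpolate(start: np.ndarray, end: np.ndarray, n: int) -> list[np.ndarray]:
    """n linearly interpolated frames strictly between start and end."""
    return [start + (end - start) * (k / (n + 1)) for k in range(1, n + 1)]

def assemble_monosyllable(consonant_frames, vowel_frame, n_interp: int = 4, n_repeat: int = 5):
    """Order of delivery to the buffer register: consonant frames, interpolated
    frames, vowel frame, then the vowel frame repeated for the vowel duration."""
    consonant_frames = [np.asarray(f, dtype=float) for f in consonant_frames]
    vowel_frame = np.asarray(vowel_frame, dtype=float)
    frames = list(consonant_frames)                                       # 1st-3rd frames
    frames += interpolate(consonant_frames[-1], vowel_frame, n_interp)    # 5th-8th frames
    frames.append(vowel_frame)                                            # 4th frame
    frames += [vowel_frame] * n_repeat          # repetition circuit: vowel time duration
    return frames

def smooth(frames, n: int = 2):
    """Frame-level interpolation corresponding to the interpolation circuit 26:
    a predetermined number of frames is inserted between each successive pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out += interpolate(a, b, n)
    out.append(frames[-1])
    return out
```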
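
The patent leaves the internal structure of the speech synthesizer 28 open beyond stating that it filter-processes the parameter data according to pitch period data, and it mentions linear prediction coefficients with voiced/unvoiced information only as one possible parameter format. The sketch below therefore assumes an LPC-type synthesizer excited by an impulse train at the pitch period for voiced frames and by noise for unvoiced frames, with a constant pitch period for simplicity; none of these choices are fixed by the patent text.

```python
# A sketch assuming an LPC-type synthesizer: each frame carries linear
# prediction coefficients and a voiced/unvoiced flag; the excitation is a
# pulse train at the given pitch period for voiced frames, noise otherwise.
import numpy as np

def synthesize(frames, pitch_period: int, frame_len: int = 160) -> np.ndarray:
    """frames: iterable of (lpc_coefficients, voiced_flag) pairs; returns a waveform."""
    rng = np.random.default_rng(0)
    out, phase = [], 0
    for coeffs, voiced in frames:
        a = np.asarray(coeffs, dtype=float)           # a[k] multiplies s[n-k-1]
        if voiced:
            excitation = np.zeros(frame_len)
            excitation[phase::pitch_period] = 1.0     # impulse train at the pitch period
            phase = (phase - frame_len) % pitch_period  # carry the pulse phase over
        else:
            excitation = 0.1 * rng.standard_normal(frame_len)
        p = len(a)
        signal = np.zeros(frame_len + p)              # first p samples: zero history
        for n in range(frame_len):
            past = signal[n:n + p][::-1]              # s[n-1] ... s[n-p]
            signal[n + p] = excitation[n] + a @ past  # all-pole synthesis filter
        out.append(signal[p:])
    return np.concatenate(out)

# Usage with two placeholder frames (coefficient values are arbitrary but stable).
waveform = synthesize([([0.5, -0.2], True), ([0.3, 0.1], False)], pitch_period=80)
```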


Claims (6)

1. A speech synthesizing apparatus, comprising: a data generation circuit (14) for generating phoneme string data; memory means (10 and 12) in which consonant and vowel characteristic parameter data representative of consonant and vowel segments are stored and which has a consonant segment file (10), in which a plurality of consonant characteristic parameter data representative of a plurality of consonant segments, each of which has a consonant portion and a transient segment to a vowel segment, are stored, and a vowel segment file (12), in which a plurality of vowel characteristic parameter data representative of a plurality of steady-state vowel segments are stored; control means (16) for allowing the corresponding consonant and vowel characteristic parameter data to be generated from said memory means (10 and 12) in accordance with said phoneme string data; and synthesizing means (18, 20, 22, 24, 26, 28 and 30) for synthesizing a speech signal on the basis of said consonant and vowel characteristic parameter data from said memory means (10 and 12); and including a parameter data series generation circuit (18, 20 and 24) for generating a series of consonant and vowel characteristic parameter data on the basis of the consonant and vowel characteristic parameter data obtained from said consonant and vowel characteristic parameter data from said consonant and vowel segment files (10 and 12), and a synthesis circuit (22, 26, 28 and 30) for synthesizing the speech signal on the basis of the parameter data series from said parameter data series generation circuit (18, 20 and 24), characterised in that said vowel segment file (12) further comprises a plurality of vowel characteristic parameter data representative of a plurality of coarticulated vowel segments, each of said steady-state and coarticulated vowel segments being formed of parameter data of a single block, said control means (16) generates time length data indicative of the vowel duration in accordance with the phoneme string data from said data generation circuit (14), and said parameter data series generation circuit (18, 20, 24) includes a repetition circuit (20) which fetches from said vowel segment file (12) the vowel characteristic parameter data a number of times corresponding to said time length data.
2. A speech synthesizing apparatus according to claim 1, characterised in that said parameter data series generation circuit further comprises: an interpolation circuit (18) for calculating a predetermined number of interpolated characteristic parameter data on the basis of the consonant and vowel characteristic parameter data from said consonant and vowel segment files (10 and 12); and a data selection circuit (24) for sequentially and selectively supplying the characteristic parameter data from said interpolation circuit (18) and from said repetition circuit (20) to said synthesis circuit (22, 26, 28 and 30).
3. A speech synthesizing apparatus according to claim 2, characterised in that said data selection circuit is a switching circuit (24) whose switching position is controlled in accordance with a switching control signal from said control means (16).
4. A speech synthesizing apparatus according to claim 2, characterised in that said data generation circuit (14) generates accent data together with said phoneme string data and said control means (16) generates pitch data in accordance with said accent data, and in that said synthesis circuit (22, 26, 28 and 30) synthesizes the speech signal on the basis of the parameter data series from said parameter data series generation circuit (18, 20 and 24) and of the pitch data from said control means (16).
5. A speech synthesizing apparatus according to claim 2, characterised in that said synthesis circuit comprises: an interpolator (26) which receives the parameter data series from the parameter data series generation circuit (18, 20 and 24) and calculates a predetermined number of interpolated parameter data on the basis of two successive parameter data; and a synthesizing unit (28) for synthesizing the speech signal on the basis of the parameter data from said interpolator (26).
6. A speech synthesizing apparatus according to claim 5, characterised in that said data generation circuit (14) generates accent data together with said phoneme string data and said control means (16) generates pitch data in accordance with said accent data, and in that said synthesis circuit (22, 26, 28 and 30) synthesizes the speech signal on the basis of the parameter data series from said parameter data series generation circuit (18, 20 and 24) and of the pitch data from said control means (16).
EP83306228A 1982-10-19 1983-10-14 Dispositif pour la synthèse de la parole Expired EP0107945B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP183410/82 1982-10-19
JP57183410A JPS5972494A (ja) 1982-10-19 1982-10-19 規則合成方式

Publications (2)

Publication Number Publication Date
EP0107945A1 EP0107945A1 (fr) 1984-05-09
EP0107945B1 true EP0107945B1 (fr) 1987-03-18

Family

ID=16135290

Family Applications (1)

Application Number Title Priority Date Filing Date
EP83306228A Expired EP0107945B1 (fr) 1982-10-19 1983-10-14 Dispositif pour la synthèse de la parole

Country Status (3)

Country Link
EP (1) EP0107945B1 (fr)
JP (1) JPS5972494A (fr)
DE (1) DE3370390D1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0642158B2 (ja) * 1983-11-01 1994-06-01 日本電気株式会社 音声合成装置
JPH0756598B2 (ja) * 1984-07-25 1995-06-14 株式会社日立製作所 音声合成装置の音声合成方式
JPH0833744B2 (ja) * 1986-01-09 1996-03-29 株式会社東芝 音声合成装置
JP2577372B2 (ja) * 1987-02-24 1997-01-29 株式会社東芝 音声合成装置および方法
DK46493D0 (da) * 1993-04-22 1993-04-22 Frank Uldall Leonhard Metode for signalbehandling til bestemmelse af transientforhold i auditive signaler
US5978764A (en) * 1995-03-07 1999-11-02 British Telecommunications Public Limited Company Speech synthesis

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3975587A (en) * 1974-09-13 1976-08-17 International Telephone And Telegraph Corporation Digital vocoder
DE2531006A1 (de) * 1975-07-11 1977-01-27 Deutsche Bundespost System zur synthese von sprache im zeitbereich aus doppellauten und lautelementen
DE3105518A1 (de) * 1981-02-11 1982-08-19 Heinrich-Hertz-Institut für Nachrichtentechnik Berlin GmbH, 1000 Berlin Verfahren zur synthese von sprache mit unbegrenztem wortschatz und schaltungsanordnung zur durchfuehrung des verfahrens

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
H.S. Elovitz et al., "Letter-to-Sound Rules for Automatic Translation of English Text to Phonetics", IEEE Trans. ASSP-24, No. 6, Dec. 1976, p. 446 *

Also Published As

Publication number Publication date
DE3370390D1 (en) 1987-04-23
EP0107945A1 (fr) 1984-05-09
JPS5972494A (ja) 1984-04-24

Similar Documents

Publication Publication Date Title
US4862504A (en) Speech synthesis system of rule-synthesis type
US4692941A (en) Real-time text-to-speech conversion system
EP0886853B1 (fr) Procede de synthese vocale a base de microsegments
US4685135A (en) Text-to-speech synthesis system
US4398059A (en) Speech producing system
EP0059880A2 (fr) Dispositif pour la synthèse de la parole à partir d'un texte
US5633984A (en) Method and apparatus for speech processing
EP0239394B1 (fr) Dispositif de synthèse de la parole
US5463715A (en) Method and apparatus for speech generation from phonetic codes
EP0107945B1 (fr) Dispositif pour la synthèse de la parole
US6970819B1 (en) Speech synthesis device
EP0144731B1 (fr) Synthétiseur de parole
US6829577B1 (en) Generating non-stationary additive noise for addition to synthesized speech
van Rijnsoever A multilingual text-to-speech system
JP3771565B2 (ja) 基本周波数パタン生成装置、基本周波数パタン生成方法、及びプログラム記録媒体
JP2703253B2 (ja) 音声合成装置
KR100202539B1 (ko) 음성합성방법
JPH0594199A (ja) 残差駆動型音声合成装置
JPS5914752B2 (ja) 音声合成方式
JP2573586B2 (ja) 規則型音声合成装置
JP2573585B2 (ja) 音声スペクトルパタン生成装置
JP2573587B2 (ja) ピッチパタン生成装置
JPS58168096A (ja) 複数言語音声合成装置
JPS63174100A (ja) 音声規則合成方式
JPH055116B2 (fr)

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19831024

AK Designated contracting states

Designated state(s): DE FR GB NL

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: KABUSHIKI KAISHA TOSHIBA

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB NL

REF Corresponds to:

Ref document number: 3370390

Country of ref document: DE

Date of ref document: 19870423

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
REG Reference to a national code

Ref country code: GB

Ref legal event code: 746

Effective date: 19980909

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 19981009

Year of fee payment: 16

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 19981016

Year of fee payment: 16

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 19981023

Year of fee payment: 16

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 19981028

Year of fee payment: 16

REG Reference to a national code

Ref country code: FR

Ref legal event code: D6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 19991014

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20000501

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 19991014

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20000630

NLV4 Nl: lapsed or annulled due to non-payment of the annual fee

Effective date: 20000501

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20000801

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST