EP1638080B1 - A text-to-speech system and method - Google Patents

A text-to-speech system and method

Info

Publication number
EP1638080B1
EP1638080B1 (application EP05107389A)
Authority
EP
European Patent Office
Prior art keywords
speech
phonetic
speaker
phonetic transcriptions
segments
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Not-in-force
Application number
EP05107389A
Other languages
German (de)
French (fr)
Other versions
EP1638080A2 (en)
EP1638080A3 (en)
Inventor
Christel Amato
Hubert Crepy
Stephane Revelin
Claire Waast-Richard
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp
Priority to EP05107389A
Publication of EP1638080A2
Publication of EP1638080A3
Application granted
Publication of EP1638080B1
Legal status: Not-in-force
Anticipated expiration

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00: Speech synthesis; Text to speech systems
    • G10L 13/08: Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)
  • Document Processing Apparatus (AREA)

Description

    Technical field
  • The present invention relates generally to a Text-To-Speech system and method, and more particularly to such a system and method based on concatenative technology.
  • Background
  • Text-To-Speech (TTS) systems generate synthetic speech simulating natural speech from an input text. TTS systems based on concatenative technology usually comprise three components: a Speaker Database, a TTS Engine and a Front-End.
  • The Speaker Database is first created by recording a large number of sentences uttered by a speaker (the speaker utterances). Those utterances are transcribed into elementary phonetic units that are extracted from the recordings as speech samples (or segments), which constitute the speaker database of speech segments. It is to be appreciated that each database created is speaker-specific.
  • The Front-End is generally based on linguistic rules and is the first component used at runtime. It takes an input text and normalizes it to generate, through a phonetizer, one phonetic transcription for each word of the input text. It is to be appreciated that the Front-End is speaker independent.
  • The TTS engine then selects, for the complete phonetic transcription of the input text, the appropriate speech segments from a speaker database and concatenates them to generate synthetic speech. The TTS engine may use any of the available speaker databases (or voices), but only one at a time.
  • As mentioned above, the Front-End is speaker independent and generates the same phonetic transcriptions even if databases of speech segments from different speakers (i.e. different "voices") are being used. But in reality, speakers (even professional ones) do differ in their way of speaking and pronouncing words, at least because of dialectal or speaking style variations. For example, the word "tomato" may be pronounced [tom ah toe] or [tom hey toe].
  • Current Front-End systems predict phonetic forms using speaker-independent statistical models or rules. Ideally, the phonetic forms output by the Front-End should match the speaker's pronunciation style. Otherwise, the target phonetic forms prescribed by the Front-End do not find good matches in the speaker database, resulting in a degraded output signal.
  • In the case of a rule-based Front-End, the rules are in most cases created by expert linguists. For speaker adaptation, each time a new voice (i.e. a TTS system with a new speaker database) is created, the expert would have to manually adapt the rules to the speaker's speaking style. This may be very time consuming.
  • In the case of a statistical Front-End, a new one dedicated to the speaker must be trained, which is also time consuming.
  • Thus, current speaker-independent Front-End systems force pronunciations which are not necessarily natural for the recorded speakers. Such mismatches have a very negative impact on the final signal quality, as they require many concatenations and signal processing adjustments.
  • It would therefore be desirable to have a Text-To-Speech system whose final signal quality is not degraded by mismatches between the Front-End phonetic transcriptions and the recorded speech segments. The present invention offers such a solution.
  • Summary of the invention
  • Accordingly, the main object of the invention is to provide a Text-To-Speech system and method which greatly improve the quality of the synthesized speech by reducing the number of artefacts between speech segments, thereby also saving considerable processing time.
  • To summarize, when a sequence of phones is prescribed by the Front-End, there are different sequences of speech segments that can be used to synthesize this phonetic sequence, i.e. several hypotheses. The TTS engine selects the appropriate segments by operating a dynamic programming algorithm which scores each hypothesis with a cost function based on several criteria. The sequence of segments which gets the lowest cost is then selected. When the phonetic transcription provided by the Front-End to the TTS engine at runtime closely matches the recorded speaker's pronunciation style, it is easier for the engine to find a matching segment sequence in the speaker database, and less signal processing is required to smoothly splice the segments together. In this setup, the search algorithm evaluates several possible phonetic transcriptions for each word instead of only one, then computes the best cost for each possibility. In the end, the chosen phonetic transcription is the one which yields the lowest concatenative cost. For example, the Front-End may phonetize "tomato" into the two possibilities [tom ah toe] and [tom hey toe]. The one that matches the recorded speaker's speaking style is likely to bear a lower concatenation cost, and will therefore be chosen by the engine for synthesis.
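  • To make this selection principle concrete, here is a minimal Python sketch (not from the patent; `segment_db`, `concatenation_cost` and the greedy inner search are illustrative stand-ins for the actual dynamic programming engine):

```python
def best_sequence_cost(phones, segment_db, concatenation_cost):
    """Greedy stand-in for the segment search: phone by phone, pick the
    candidate segment that is cheapest to join to the previous choice."""
    total, previous = 0.0, None
    for phone in phones:
        candidates = segment_db.get(phone, [])
        if not candidates:
            return float("inf")   # this voice has no sample for the phone
        seg = min(candidates, key=lambda s: concatenation_cost(previous, s))
        total += concatenation_cost(previous, seg)
        previous = seg
    return total

def choose_transcription(alternates, segment_db, concatenation_cost):
    """Return the alternate transcription with the lowest total cost,
    e.g. [tom ah toe] vs [tom hey toe] for the word "tomato"."""
    return min(alternates,
               key=lambda p: best_sequence_cost(p, segment_db,
                                                concatenation_cost))
```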
  • In a preferred embodiment, the invention operates in a computer implemented Text-To-Speech system comprising at least a speaker database that has been previously created from user recordings, a Front-End system to receive an input text, and a Text-To-Speech engine. In particular, the Front-End system generates multiple phonetic transcriptions for each word of the input text, and the TTS engine uses a cost function to select which phonetic transcription is the most appropriate when searching for the speech segments within the speaker database to be concatenated and synthesized.
  • More generally, a computer system for generating synthetic speech comprises the following (a short code sketch of actions (i) to (iii) follows the list):
    • (a) a speaker database to store speech segments;
    • (b) a front-end interface to receive an input text made of a plurality of words;
    • (c) an output interface to audibly output the synthetic speech; and
    • (d) computer readable program means executable by the computer for performing actions, including:
      • (i) creating a plurality of phonetic transcriptions for each word of the input text;
      • (ii) computing a cost score for each phonetic transcription by operating a cost function on the plurality of speech segments; and
      • (iii) sorting the plurality of phonetic transcriptions according to the computed cost scores.
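  • As a continuation of the previous sketch (reusing the illustrative `best_sequence_cost` helper, which is an assumption rather than the patent's own code), actions (ii) and (iii) amount to scoring each created transcription and sorting:

```python
def rank_transcriptions(alternates, segment_db, concatenation_cost):
    """Compute a cost score for each phonetic transcription created in
    action (i), then sort them, best (cheapest) first."""
    scored = [(best_sequence_cost(p, segment_db, concatenation_cost), p)
              for p in alternates]
    return [phones for cost, phones in sorted(scored, key=lambda x: x[0])]
```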
  • In a commercial form, the computer readable program means is embodied on a program storage device readable by a computer.
  • Another object of the invention is to provide a method as defined in the method claims.
  • Brief description of the drawings
  • The above and other objects, features and advantages of the invention will be better understood by reading the following more particular description of the invention in conjunction with the accompanying drawings wherein:
    • ■ Figure 1 is a general view of the system of the present invention;
    • ■ Figure 2 is a flow chart of the main steps to generate a synthetic speech as defined by the present invention;
    • ■ Figure 3 shows an illustrative curve of the cost function;
    • ■ Figures 4-a and 4-b exemplify the preferred segments selection in a first-pass approach;
    • ■ Figure 5 exemplifies the preferred segments selection in a one-pass approach.
    Detailed description of the invention
  • A Text-To-Speech (TTS) system according to the invention is illustrated in Figure 1. The general system 100 comprises a speaker database 102 to contain speaker recordings and a Front-End block 104 to receive an input text. A cost computational block 106 is coupled to the speaker database and to the Front-End block to operate a cost function algorithm. A post-processing block 108 is coupled to the cost computational block to concatenate the results issued from the cost computational block. The post-processing block is coupled to an output block 110 to produce synthetic speech.
    The TTS system preferably used by the present invention is based on concatenative technology. It requires a speaker database built from the recordings of one speaker. However, without limitation of the invention, several speakers can record sentences to create several speaker databases. In application, for each TTS system the speaker database will be different, but the TTS engine and the Front-End engine will be the same.
    However, different speakers may pronounce a given word in different ways, even in a specific context. In the following two examples, the word "tomato" may be pronounced [tom ah toe] or [tom hey toe], and the French word "fenêtre" may be pronounced [f e n è t r e], [f e n è t r] or [f n è t r]. If the Front-End predicts the pronunciation [f e n è t r] while the recorded speaker has always pronounced [f n è t r], then it will be difficult to find the missing [e] in this context for this word in the speaker database. On the other hand, if the speaker has used both pronunciations, it could be useful to choose one or the other depending on the other constraints, which can be different from one sentence to another. The Front-End therefore provides multiple phonetic transcriptions for each word of the input text, and the TTS engine will choose the preferred one when searching the recorded speech segments, in order to achieve the best possible quality of the synthetic speech.
  • As already mentioned, the speaker database used in the TTS system of the invention is built in the usual way from a speaker recording a plurality of sentences. The sentences are processed to associate an appropriate phonetic transcription with each recorded word. Depending on the speaker's speaking style, the phonetic transcriptions may differ for each occurrence of the same word. Once the phonetic transcription of every recorded word is done, each audio file is divided into units (so-called speech samples or segments) according to these phonetic transcriptions. The speech segments are classified according to several parameters such as the phonetic context, the pitch, the duration or the energy. This classification constitutes the speaker database from which the speech segments will be extracted by the cost computational block 106 at runtime, as will be explained later, and then concatenated within the post-processing block 108 to finally produce synthetic speech within the output block 110.
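  • For concreteness, such a database could be modelled as records indexed by phone, each record carrying the classification parameters listed above. A minimal sketch, in which every field and function name is an assumption rather than the patent's own data layout:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Segment:
    phone: str            # elementary phonetic unit, e.g. "è"
    left_ctx: str         # phone preceding it in the recording
    right_ctx: str        # phone following it in the recording
    pitch_hz: float       # pitch of the speech sample
    duration_ms: float    # length of the speech sample
    energy: float         # signal energy of the speech sample
    audio_offset: int     # where the sample starts in its audio file

def build_speaker_db(segments):
    """Classify segments by phone so that, at runtime, all candidate
    samples for a given phonetic unit can be retrieved quickly."""
    db = defaultdict(list)
    for seg in segments:
        db[seg.phone].append(seg)
    return db
```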
  • Referring now to figure 2, the main steps of the overall process 200 to produce an improved synthetic speech as defined by the present invention are described.
  • The process starts at step 202 with the reception of an input text within the Front-End block. The input text may come from a user typing a text or from any application transmitting a user request.
  • On step 204, the input text is normalized in the usual way, well known to those skilled in the art.
  • On next step 206, several phonetic transcriptions are generated for each word of the normalized text. It is to be appreciated that the way the Front-End generates multiple phonetic forms is not critical, as long as all the alternate forms are correct for the given sentence. Thus a statistical or a rule-based Front-End may be used indifferently, or any Front-End based on other methods. The person skilled in the art will find complete information on statistical Front-End systems in « Optimisation d'arbres de décision pour la conversion graphèmes-phonèmes », H. Crépy, C. Amato-Beaujard, J.C. Marcadet and C. Waast-Richard, Proc. of XXIVèmes Journées d'Étude sur la Parole, Nancy, 2002, and more complete information on rule-based Front-End systems in « Self-learning techniques for Grapheme-to-Phoneme conversion », F. Yvon, Proc. of the 2nd Onomastica Research Colloquium, 1994.
    Whatever the Front-End system used, it has to disambiguate non-homophonic homographs by itself (e.g. "record" [r ey k o r d] and "record" [r e k o r d]) and it has to propose phonetic forms that are valid for the word usage in the sentence.
    To illustrate this with the previous example, the word "fenêtre" can be pronounced [f e n è t r e], [f e n è t r] or [f n è t r] depending on speaking style, so the chosen Front-End block may generate these three phonetic forms.
    By contrast, the French word "président" has two possible pronunciations depending on its grammatical class: [p r é z i d an] if it is a noun or [p r é z i d] if it is a verb. The choice of one or the other depends entirely on the sentence syntax. In this case the Front-End must not generate multiple phonetic transcriptions for the word "président".
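    The following toy sketch illustrates this behavior with a hand-written lexicon; real Front-Ends derive the forms with rules or statistical grapheme-to-phoneme models (see the papers cited above), and the lexicon, phone symbols and POS tags here are illustrative only:

```python
# Style-dependent variants: several forms may be proposed.
STYLE_VARIANTS = {
    "fenêtre": [["f", "e", "n", "è", "t", "r", "e"],
                ["f", "e", "n", "è", "t", "r"],
                ["f", "n", "è", "t", "r"]],
}

# Grammar-dependent forms: exactly one form per grammatical class.
GRAMMAR_BOUND = {
    ("président", "NOUN"): [["p", "r", "é", "z", "i", "d", "an"]],
    ("président", "VERB"): [["p", "r", "é", "z", "i", "d"]],
}

def phonetize(word, pos_tag):
    """Return every transcription valid for this word in this usage."""
    if (word, pos_tag) in GRAMMAR_BOUND:
        return GRAMMAR_BOUND[(word, pos_tag)]  # syntax decides: one form
    return STYLE_VARIANTS.get(word, [])        # style decides: many forms
```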
  • On step 208, the Front-End produces a prediction of the overall pitch contour of the input text (and so incidentally produces the pitch values), as well as the duration and the energy of the speech segments: the well-known prosody parameters. In doing so, the Front-End defines targeted features that will then be used by the search algorithm on next step 210.
  • Step 210 operates a cost function for each phonetic transcription provided by the Front-End. A speech segment extraction is made and, given a current segment, the search algorithm aims at finding the next best segments among those available, to be concatenated to the current one. This search takes into account the features of each segment and the targeted features provided by the Front-End. The search routine evaluates several paths in parallel, as illustrated in figure 3.
    For each unit selection, as indicated by a different letter in the example of figure 3, several segments are costed and selected given the previously selected candidates (if any). For each segment a concatenation cost is computed by the cost function, and the segments with the lowest costs are added to a grid of candidate segments. The cost function is based on several criteria which are tunable (e.g. they can be weighted differently). For instance, if phonetic duration is deemed very important, a high weight on this criterion will penalize the choice of segments whose duration differs greatly from the targeted duration.
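    A weighted cost of this kind could look like the sketch below, which scores a candidate `Segment` (as modelled earlier) against a targeted feature record; the `Target` structure, the weight names and the distance terms are all assumptions made for illustration:

```python
from dataclasses import dataclass

@dataclass
class Target:
    phone: str           # targeted phonetic unit
    pitch_hz: float      # pitch predicted by the Front-End on step 208
    duration_ms: float   # predicted duration
    energy: float        # predicted energy

# Tunable weights: raising "duration", for instance, penalizes segments
# whose duration differs greatly from the targeted one.
WEIGHTS = {"phonetic": 1.0, "pitch": 1.0, "duration": 1.0, "energy": 1.0}

def segment_cost(prev, seg, target, w=WEIGHTS):
    """Weighted cost mixing a join term (how well seg splices onto the
    previous segment) with target terms (how close seg is to the
    Front-End's predicted features)."""
    join = 0.0 if prev is None else abs(prev.pitch_hz - seg.pitch_hz)
    phonetic = 0.0 if seg.phone == target.phone else 1.0
    return (w["phonetic"] * phonetic
            + w["pitch"] * abs(seg.pitch_hz - target.pitch_hz)
            + w["duration"] * abs(seg.duration_ms - target.duration_ms)
            + w["energy"] * abs(seg.energy - target.energy)
            + join)
```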
  • Next, on step 212, the best/preferred path is selected, which in the preferred embodiment is the one that yields the overall lowest cost. The segments aligned to this path are then kept. Once the algorithm has found the best path among the several possibilities, all selected speech samples are concatenated on step 214 using standard signal processing techniques to finally produce synthetic speech on step 216. The best possible quality of the synthetic speech is achieved when the search algorithm successfully limits the amount of signal processing applied to the speech samples. If the phonetic transcriptions used to synthesize a sentence are the same as those actually used by the speaker during recordings, the dynamic programming search algorithm will likely find segments in similar contexts, ideally contiguous in the speaker database. When two segments are contiguous in the database, they can be concatenated smoothly, as almost no signal processing is involved in joining them. Avoiding or limiting the degradation introduced by signal processing leads to better signal quality of the synthesized speech. Providing several alternate candidate phonetic transcriptions to the search algorithm increases the chances of selecting the best-matching speaker segments, since those will exhibit lower concatenation costs.
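    The dynamic programming search of steps 210-212 can be sketched as a beam-pruned path search over the grid of candidate segments; this is a simplified stand-in (the beam pruning, in particular, is an assumption, not a detail given by the patent), reusing the illustrative `segment_cost` above:

```python
import math

def viterbi_search(targets, segment_db, segment_cost, beam=10):
    """Extend the cheapest partial paths through the grid of candidate
    segments, one targeted phone at a time, and keep the best path."""
    paths = [([], 0.0)]                    # (segments chosen so far, cost)
    for target in targets:
        candidates = segment_db.get(target.phone, [])
        if not candidates:
            return [], math.inf            # no sample for this phone
        extended = []
        for segs, cost in paths:
            prev = segs[-1] if segs else None
            for seg in candidates:
                extended.append((segs + [seg],
                                 cost + segment_cost(prev, seg, target)))
        extended.sort(key=lambda p: p[1])  # lowest-cost paths first
        paths = extended[:beam]            # prune to the beam width
    return paths[0]                        # best path and its total cost
```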
    To read more details on the concatenation and production of synthetic speech, the person skilled in the art may refer to «Current status of the IBM Trainable Speech Synthesis System», R. Donovan, A. Ittycheriah, M. Franz, B. Ramabhadran, E. Eide, M. Viswanathan, R. Bakis, W. Hamza, M. Picheny, P. Gleason, T. Rutherfoord, P. Cox, D. Green, E. Janke, S. Revelin, C. Waast, B. Zeller, C. Guenther, and S. Kunzmann, Proc. of the 4th ISCA Tutorial and Research Workshop on Speech Synthesis, Edinburgh, Scotland, 2001, and to «Recent improvements to the IBM Trainable Speech Synthesis System», E. Eide, A. Aaron, R. Bakis, P. Cohen, R. Donovan, W. Hamza, T. Mathes, J. Ordinas, M. Polkosky, M. Picheny, M. Smith, and M. Viswanathan, Proc. of the IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Hong Kong, 2003.
  • It is to be noted that two methods of selecting the most appropriate phonetic transcriptions may be used: a first-pass method or a one-pass selection method, now detailed.
  • The first-pass method consists of running the search algorithm in a first pass dedicated solely to the phonetic transcription selection. The principle is to favor the phonetic criterion in the cost function, e.g. by setting a zero (or extremely small) weight on the other criteria in order to emphasize the phonetic constraints. This method maximizes the chances of choosing a phonetic form identical or very close to the ones used by the speaker during recordings. For each phonetic form provided by the Front-End for a word, different paths are evaluated as shown on figure 4-a. The best paths of all the phonetic forms are compared, and the very best one is the phonetic transcription retained for the further speech segment selection (step 212). Once the phonetic transcription is chosen, the TTS engine continues in a second pass with the usual speech segment search, given the result of this first pass, as shown on figure 4-b.
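    In code, the first-pass method amounts to running the search twice with two weight settings. The sketch below reuses the illustrative `viterbi_search` and `segment_cost` helpers from above; the weight values are assumptions chosen to mirror the description:

```python
# Pass 1: phonetic constraints only (other criteria zero-weighted).
PHONETIC_ONLY = {"phonetic": 1.0, "pitch": 0.0, "duration": 0.0, "energy": 0.0}
# Pass 2: the usual, balanced cost function.
BALANCED = {"phonetic": 1.0, "pitch": 1.0, "duration": 1.0, "energy": 1.0}

def first_pass_synthesis(alternate_targets, segment_db):
    """alternate_targets: one target sequence per alternate transcription."""
    # Pass 1: retain the transcription whose best path is phonetically cheapest.
    scored = [(viterbi_search(t, segment_db,
                              lambda p, s, tg: segment_cost(p, s, tg,
                                                            PHONETIC_ONLY))[1],
               i)
              for i, t in enumerate(alternate_targets)]
    _, best = min(scored, key=lambda x: x[0])
    # Pass 2: usual speech segment search on the retained transcription.
    return viterbi_search(alternate_targets[best], segment_db,
                          lambda p, s, tg: segment_cost(p, s, tg, BALANCED))
```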
  • The second approach, the one-pass selection, selects the appropriate phonetic form amongst multiple phonetic transcriptions by introducing them into the usual search step. The principle is mainly the same as in the previous method, except that only one search pass is done and no parameter of the cost function is strongly favored. All parameters of the cost function are tuned to reach the best tradeoff, in the segment choice, between the phonetic forms and the other constraints. If a speaker has pronounced a word in different manners during recordings, the choice of the most suitable phonetic transcription may be helped by the other constraints, like the pitch, the duration or the type of the sentence. This is illustrated on figure 5. For instance, here are two French sentences with the same word 'fenêtre' pronounced differently:
    • (1) La fenêtre est ouverte.
      with the word 'fenêtre' pronounced [ f e n è t r ], and
    • (2) Ferme la fenêtre !
      with the word 'fenêtre' pronounced [ f n è t r ].
  • The first sentence is affirmative while the second one is exclamatory. These sentences differ in pitch contour, duration and energy. During synthesis, this information may help to select the appropriate phonetic form, because it will be easier for the search algorithm to find speech segments close to the predicted pitch, duration and energy in sentences of a matching type.
  • In this implementation, the phonetic transcription selection is done at the same time as the speech units selection. Then the segments are concatenated to produce the synthesized speech.
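  • A one-pass search can be sketched as a single beam search in which every alternate form of every word competes directly, so the prosodic constraints settle the phonetic choice during unit selection. As before, `segment_db`, `segment_cost` and the greedy inner loop are illustrative assumptions, not the patented engine:

```python
import math

def one_pass_synthesis(words_alternates, segment_db, segment_cost, beam=10):
    """words_alternates: for each word, a list of alternate target
    sequences. All alternates enter the same search; the cheaper form
    wins given the constraints accumulated so far (joins, pitch,
    duration, energy)."""
    paths = [([], 0.0)]
    for alternates in words_alternates:
        extended = []
        for segs, cost in paths:
            for targets in alternates:   # each phonetic form of the word
                new_segs, new_cost, feasible = list(segs), cost, True
                for target in targets:
                    candidates = segment_db.get(target.phone, [])
                    if not candidates:
                        feasible = False
                        break
                    prev = new_segs[-1] if new_segs else None
                    seg = min(candidates,
                              key=lambda s: segment_cost(prev, s, target))
                    new_cost += segment_cost(prev, seg, target)
                    new_segs.append(seg)
                if feasible:
                    extended.append((new_segs, new_cost))
        if not extended:
            return [], math.inf
        extended.sort(key=lambda p: p[1])
        paths = extended[:beam]          # keep the cheapest paths only
    return paths[0]
```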

Claims (10)

  1. A method, suitable for a text-to-speech system, for selecting preferred phonetic transcriptions of an input text comprising the steps of:
    creating a plurality of phonetic transcriptions for each word of the input text;
    computing a cost score for each phonetic transcription by operating a concatenation cost function on a plurality of predefined speech segments selected as candidates for synthesizing said transcription; and
    sorting the plurality of phonetic transcriptions according to the computed cost scores.
  2. The method of claim 1 further comprising the step of normalizing the input text before creating the plurality of phonetic transcriptions.
  3. The method of claim 1 or 2 further comprising the step of generating prosody parameters after the step of creating a plurality of phonetic transcriptions.
  4. The method of any one of claims 1 to 3 further comprising the step of selecting preferred speech segments after the step of sorting the plurality of phonetic transcriptions.
  5. The method of claim 4 further comprising the step of concatenating the preferred speech segments.
  6. The method of claim 5 further comprising the step of outputting synthetic speech after the concatenating step.
  7. The method of any one of claims 1 to 6 wherein the step of creating a plurality of phonetic transcriptions is a rule-based step.
  8. The method of any one of claims 1 to 6 wherein the step of creating a plurality of phonetic transcriptions is based on statistical computation.
  9. A system comprising means adapted for carrying out the steps of the method of any one of claims 1 to 8.
  10. A computer program comprising instructions adapted to carry out the steps of the method according to any one of claims 1 to 8 when said computer program is executed on a computer system.
EP05107389A 2004-08-11 2005-08-11 A text-to-speech system and method Not-in-force EP1638080B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP05107389A EP1638080B1 (en) 2004-08-11 2005-08-11 A text-to-speech system and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP04300531 2004-08-11
EP05107389A EP1638080B1 (en) 2004-08-11 2005-08-11 A text-to-speech system and method

Publications (3)

Publication Number Publication Date
EP1638080A2 EP1638080A2 (en) 2006-03-22
EP1638080A3 EP1638080A3 (en) 2006-07-26
EP1638080B1 true EP1638080B1 (en) 2007-10-03

Family

ID=35874715

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05107389A Not-in-force EP1638080B1 (en) 2004-08-11 2005-08-11 A text-to-speech system and method

Country Status (1)

Country Link
EP (1) EP1638080B1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW422967B (en) * 1998-04-29 2001-02-21 Matsushita Electric Ind Co Ltd Method and apparatus using decision trees to generate and score multiple pronunciations for a spelled word

Also Published As

Publication number Publication date
EP1638080A2 (en) 2006-03-22
EP1638080A3 (en) 2006-07-26

Similar Documents

Publication Publication Date Title
US7869999B2 Systems and methods for selecting from multiple phonetic transcriptions for text-to-speech synthesis
US11062694B2 (en) Text-to-speech processing with emphasized output audio
CA2351988C (en) Method and system for preselection of suitable units for concatenative speech
Tokuda et al. An HMM-based speech synthesis system applied to English
US20200410981A1 (en) Text-to-speech (tts) processing
EP2192575B1 (en) Speech recognition based on a multilingual acoustic model
US10497362B2 (en) System and method for outlier identification to remove poor alignments in speech synthesis
US11763797B2 (en) Text-to-speech (TTS) processing
JP2002304190A (en) Method for generating pronunciation change form and method for speech recognition
JPH0772840B2 (en) Speech model configuration method, speech recognition method, speech recognition device, and speech model training method
US10699695B1 (en) Text-to-speech (TTS) processing
AU2020205275B2 (en) System and method for outlier identification to remove poor alignments in speech synthesis
JP2017167526A (en) Multiple stream spectrum expression for synthesis of statistical parametric voice
JP4283133B2 (en) Voice recognition device
KR100259777B1 (en) Optimal synthesis unit selection method in text-to-speech system
EP1638080B1 (en) A text-to-speech system and method
EP1589524B1 (en) Method and device for speech synthesis
Ronanki et al. The CSTR entry to the Blizzard Challenge 2017
EP1640968A1 (en) Method and device for speech synthesis
Khaw et al. A fast adaptation technique for building dialectal malay speech synthesis acoustic model
KR20240060961A (en) Method for generating voice data, apparatus for generating voice data and computer-readable recording medium
Pobar et al. Development of Croatian unit selection and statistical parametric speech synthesis
Raghavendra et al. Blizzard 2008: Experiments on unit size for unit selection speech synthesis

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK YU

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK YU

17P Request for examination filed

Effective date: 20060927

17Q First examination report despatched

Effective date: 20061027

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 13/08 20060101AFI20070507BHEP

Ipc: G10L 13/06 20060101ALN20070507BHEP

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative's name: PETER M. KLETT

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 602005002706

Country of ref document: DE

Date of ref document: 20071115

Kind code of ref document: P

ET Fr: translation filed
NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080114

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080103

Ref country code: CH

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080103

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080303

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080203

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

26N No opposition filed

Effective date: 20080704

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080104

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080831

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080811

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20090903 AND 20090909

REG Reference to a national code

Ref country code: FR

Ref legal event code: TP

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20100610 AND 20100616

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080404

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080811

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071003

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080831

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20150804

Year of fee payment: 11

Ref country code: GB

Payment date: 20150805

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20150629

Year of fee payment: 11

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602005002706

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20160811

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20170428

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170301

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160831

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160811