EP1777697B1 - Method of speech synthesis without prosody modification - Google Patents

Method of speech synthesis without prosody modification

Info

Publication number
EP1777697B1
Authority
EP
European Patent Office
Prior art keywords
speech
context
training
input
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP07002565A
Other languages
German (de)
English (en)
Other versions
EP1777697A2 (fr)
EP1777697A3 (fr)
Inventor
Min Chu
Hu Peng
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Corp
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/850,527 (US6978239B2)
Application filed by Microsoft Corp filed Critical Microsoft Corp
Publication of EP1777697A2
Publication of EP1777697A3
Application granted
Publication of EP1777697B1
Anticipated expiration
Current legal status: Expired - Lifetime

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/06 Elementary speech units used in speech synthesisers; Concatenation rules
    • G10L13/07 Concatenation rules
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L13/04 Details of speech synthesis systems, e.g. synthesiser structure or memory management

Definitions

  • the present invention relates to speech synthesis. In particular, the present invention relates to prosody in speech synthesis.
  • Text-to-speech technology allows computerized systems to communicate with users through synthesized speech.
  • the quality of these systems is typically measured by how natural or human-like the synthesized speech sounds.
  • Very natural sounding speech can be produced by simply replaying a recording of an entire sentence or paragraph of speech.
  • however, the complexities of human languages and the limitations of computer storage make it impossible to store every conceivable sentence that may occur in a text. Instead, concatenative speech synthesis has been developed, which combines stored speech samples representing small speech units such as phonemes, diphones, triphones, or syllables to form a larger speech signal.
  • a stored speech sample has a pitch and duration that are set by the context in which the sample was spoken. For example, in the sentence "Joe went to the store." the speech units associated with the word "store" have a lower pitch than in the question "Joe went to the store?" Because of this, if stored samples are simply retrieved without reference to their pitch or duration, some of the samples will have the wrong pitch and/or duration for the sentence, resulting in unnatural sounding speech.
  • One technique for overcoming this is to identify the proper pitch and duration for each sample. Based on this prosody information, a particular sample may be selected and/or modified to match the target pitch and duration.
  • Identifying the proper pitch and duration is known as prosody prediction. Typically, it involves generating a model that describes the most likely pitch and duration for each speech unit given some text. The result of this prediction is a set of numerical targets for the pitch and duration of each speech segment.
  • targets can then be used to select and/or modify a stored speech segment.
  • the targets can be used to first select the speech segment that has the closest pitch and duration to the target pitch and duration. This segment can then be used directly or can be further modified to better match the target values.
  • one prior art technique for modifying pitch and duration is Time-Domain Pitch-Synchronous Overlap-and-Add (TD-PSOLA). To increase the pitch of a sample, this technique copies a segment of the complex waveform that is as long as the pitch period. This copied segment is then shifted by some portion of the pitch period and reinserted into the waveform. For example, to double the pitch, the copied segment would be shifted by one-half the pitch period, thereby inserting a new peak half-way between two existing peaks and cutting the pitch period in half.
  • to increase the duration of a sample, the prior art copies a section of the speech segment and inserts the copy into the complex waveform. The entire portion of the speech segment after the copied section is time-shifted by the length of the copied section so that the duration of the speech unit increases.
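As a rough illustration of the two prior art operations just described, the following sketch performs a single half-period pitch insertion and a single duration stretch on a raw waveform array. This is a toy model under stated assumptions, not the standard TD-PSOLA implementation: real TD-PSOLA extracts windowed pitch-synchronous frames and overlap-adds many of them, and every name below is hypothetical.

```python
import numpy as np

def raise_pitch_once(signal: np.ndarray, peak: int, period: int) -> np.ndarray:
    """Copy one pitch-period-long segment and overlap-add it shifted by half
    a period, inserting a new peak halfway between two existing peaks (the
    pitch-doubling example above). No tapering window is applied here."""
    out = signal.astype(float).copy()
    seg = out[peak:peak + period].copy()
    start = peak + period // 2
    n = min(len(seg), len(out) - start)   # stay inside the buffer
    out[start:start + n] += seg[:n]
    return out

def lengthen_once(signal: np.ndarray, start: int, length: int) -> np.ndarray:
    """Copy a section of the waveform and reinsert the copy, time-shifting
    everything after it by the copied length so the unit's duration grows."""
    end = start + length
    return np.concatenate([signal[:end], signal[start:end], signal[end:]])
```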
  • US 6,064,960 describes a method and an apparatus for duration modeling of phonemes in a speech synthesis system.
  • the phoneme duration model, which is used along with a phoneme pitch model, is produced by developing a non-exponential functional transformation form for use with a generalized additive model.
  • the received text is processed by specifying at least one of a number of contextual factors for the generalized additive model.
  • FIG. 1 illustrates an example of a suitable computing system environment 100 on which the invention may be implemented.
  • the computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100.
  • the invention is operational with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • the invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer storage media including memory storage devices.
  • an exemplary system for implementing the invention includes a general-purpose computing device in the form of a computer 110.
  • Components of computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120.
  • the system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
  • Computer 110 typically includes a variety of computer readable media.
  • Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media.
  • Computer readable media may comprise computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110.
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
  • the system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132.
  • a basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer 110, is typically stored in ROM 131.
  • RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120.
  • FIG. 1 illustrates operating system 134, application programs 135, other program modules 136, and program data 137.
  • the computer 110 may also include other removable/non-removable volatile/nonvolatile computer storage media.
  • FIG. 1 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152, and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media.
  • removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
  • the hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140, and magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150.
  • hard disk drive 141 is illustrated as storing operating system 144, application programs 145, other program modules 146, and program data 147. Note that these components can either be the same as or different from operating system 134, application programs 135, other program modules 136, and program data 137. Operating system 144, application programs 145, other program modules 146, and program data 147 are given different numbers here to illustrate that, at a minimum, they are different copies.
  • a user may enter commands and information into the computer 110 through input devices such as a keyboard 162, a microphone 163, and a pointing device 161, such as a mouse, trackball or touch pad.
  • Other input devices may include a joystick, game pad, satellite dish, scanner, or the like.
  • a monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190.
  • computers may also include other peripheral output devices such as speakers 197 and printer 196, which may be connected through an output peripheral interface 195.
  • the computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180.
  • the remote computer 180 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110.
  • the logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks.
  • Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
  • when used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet.
  • the modem 172 which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism.
  • program modules depicted relative to the computer 110, or portions thereof may be stored in the remote memory storage device.
  • FIG. 1 illustrates remote application programs 185 as residing on remote computer 180. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • FIG. 2 is a block diagram of a mobile device 200, which is an exemplary computing environment.
  • Mobile device 200 includes a microprocessor 202, memory 204, input/output (I/O) components 206, and a communication interface 208 for communicating with remote computers or other mobile devices.
  • the aforementioned components are coupled for communication with one another over a suitable bus 210.
  • Memory 204 is implemented as non-volatile electronic memory such as random access memory (RAM) with a battery back-up module (not shown) such that information stored in memory 204 is not lost when the general power to mobile device 200 is shut down.
  • a portion of memory 204 is preferably allocated as addressable memory for program execution, while another portion of memory 204 is preferably used for storage, such as to simulate storage on a disk drive.
  • Memory 204 includes an operating system 212, application programs 214 as well as an object store 216.
  • operating system 212 is preferably executed by processor 202 from memory 204.
  • Operating system 212, in one preferred embodiment, is a WINDOWS® CE brand operating system commercially available from Microsoft Corporation.
  • Operating system 212 is preferably designed for mobile devices, and implements database features that can be utilized by applications 214 through a set of exposed application programming interfaces and methods.
  • the objects in object store 216 are maintained by applications 214 and operating system 212, at least partially in response to calls to the exposed application programming interfaces and methods.
  • Communication interface 208 represents numerous devices and technologies that allow mobile device 200 to send and receive information.
  • the devices include wired and wireless modems, satellite receivers and broadcast tuners to name a few.
  • Mobile device 200 can also be directly connected to a computer to exchange data therewith.
  • communication interface 208 can be an infrared transceiver or a serial or parallel communication connection, all of which are capable of transmitting streaming information.
  • Input/output components 206 include a variety of input devices such as a touch-sensitive screen, buttons, rollers, and a microphone as well as a variety of output devices including an audio generator, a vibrating device, and a display.
  • the devices listed above are by way of example and need not all be present on mobile device 200.
  • other input/output devices may be attached to or found with mobile device 200 within the scope of the present invention.
  • the present invention provides a speech synthesizer that concatenates stored samples of speech units without modifying the prosody of the samples.
  • the present invention is able to achieve a high level of naturalness in synthesized speech with a carefully designed speech corpus by storing samples based on the prosodic and phonetic context in which they occur.
  • the present invention limits the training text to those sentences that will produce the most frequent sets of prosodic contexts for each speech unit.
  • the present invention also provides a multi-tier selection mechanism for selecting a set of samples that will produce the most natural sounding speech.
  • FIG. 3 is a block diagram of a speech synthesizer 300 that is capable of constructing synthesized speech 302 from an input text 304 under embodiments of the present invention.
  • before speech synthesizer 300 can be utilized to construct speech 302, it must be initialized with samples of speech units taken from a training text 306 that is read into speech synthesizer 300 as training speech 308.
  • speech synthesizers are constrained by a limited-size memory. Because of this, training text 306 must be limited in size to fit within the memory. However, if the training text is too small, there will not be enough samples of the training speech to allow for concatenative synthesis without prosody modifications.
  • One aspect of the present invention overcomes this problem by trying to identify a set of speech units in a very large text corpus that must be included in the training text to allow for concatenative synthesis without prosody modifications.
  • FIG. 4 provides a block diagram of components used to identify smaller training text 306 of FIG. 3 from a very large corpus 400.
  • very large corpus 400 is a corpus of five years' worth of the People's Daily, a Chinese newspaper, and contains about 97 million Chinese characters.
  • large corpus 400 is parsed by a parser/semantic identifier 402 into strings of individual speech units.
  • the speech units are tonal syllables.
  • other speech units such as phonemes, diphones, or triphones may be used within the scope of the present invention.
  • Parser/semantic identifier 402 also identifies high-level prosodic information about each sentence provided to the parser. This high-level prosodic information includes the predicted tonal levels for each speech unit as well as the grouping of speech units into prosodic words and phrases. In embodiments where tonal syllable speech units are used, parser/semantic identifier 402 also identifies the first and last phoneme in each speech unit.
  • the strings of speech units produced from the training text are provided to a context vector generator 404, which generates a Speech unit-Dependent Descriptive Contextual Variation Vector (SDDCVV, hereinafter referred to as a context vector).
  • the context vector describes several context variables that can affect the prosody of the speech unit. Under one embodiment, the context vector describes six variables or coordinates: position in phrase, position in word, left phonetic context, right phonetic context, left tonal context, and right tonal context. In particular:
  • the position-in-phrase coordinate and the position-in-word coordinate can each have one of four values
  • the left phonetic context can have one of eleven values
  • the right phonetic context can have one of twenty-six values
  • the left and right tonal contexts can each have one of two values.
  • there are 4 × 4 × 11 × 26 × 2 × 2 = 18,304 possible context vectors for each speech unit.
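For concreteness, the six-coordinate context vector (SDDCVV) could be encoded as below; the field names and integer codings are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextVector:
    """Hypothetical encoding of the six coordinates described above."""
    position_in_phrase: int  # one of 4 values
    position_in_word: int    # one of 4 values
    left_phonetic: int       # one of 11 categories
    right_phonetic: int      # one of 26 categories
    left_tone: int           # one of 2 tone categories
    right_tone: int          # one of 2 tone categories

# distinct context vectors per speech unit, as counted above
assert 4 * 4 * 11 * 26 * 2 * 2 == 18304
```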
  • the context vectors produced by generator 404 are grouped based on their speech unit. For each speech unit, a frequency-based sorter 406 identifies the most frequent context vectors for each speech unit. The most frequently occurring context vectors for each speech unit are then stored in a list of necessary context vectors 408. In one embodiment, the top context vectors, whose accumulated frequency of occurrence is not less than half of the total frequency of occurrence of all units, are stored in the list.
  • the sorting and pruning performed by sorter 406 is based on a discovery made by the present inventors.
  • the present inventors have found that certain context vectors occur repeatedly in the corpus. By making sure that these context vectors are found in the training corpus, the present invention increases the chances of having an exact context match for an input text without greatly increasing the size of the training corpus. For example, the present inventors have found that by ensuring that the top two percent of the context vectors are represented in the training corpus, an exact context match will be found for an input text speech unit over fifty percent of the time.
  • a text selection unit 410 selects sentences from very large corpus 400 to produce training text subset 306.
  • text selection unit 410 uses a greedy algorithm to select sentences from corpus 400. Under this greedy algorithm, selection unit 410 scans all sentences in the corpus and picks out one at a time to add to the selected group.
  • selection unit 410 determines how many context vectors in list 408 are found in each sentence. The sentence that contains the maximum number of needed context vectors is then added to training text 306. The context vectors that the sentence contains are removed from list 408 and the sentence is removed from the large text corpus 400. The scanning is repeated until all of the context vectors have been removed from list 408.
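The greedy loop just described is essentially a set-cover heuristic. A minimal sketch, assuming the corpus has already been reduced to a mapping from each sentence to the set of needed context vectors it contains (all names hypothetical):

```python
def select_training_text(sentence_vectors: dict, needed: set) -> list:
    """Greedy selection: repeatedly add the sentence that covers the most
    still-needed context vectors, remove those vectors from the needed list,
    and stop once every needed vector is covered or no sentence helps."""
    selected = []
    remaining = dict(sentence_vectors)   # sentence -> set of context vectors
    needed = set(needed)
    while needed and remaining:
        best = max(remaining, key=lambda s: len(remaining[s] & needed))
        covered = remaining.pop(best) & needed
        if not covered:
            break                        # no remaining sentence is useful
        selected.append(best)
        needed -= covered
    return selected
```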
  • after training text subset 306 has been formed, it is read by a person and digitized into a training speech corpus. Both the training text and training speech can be used to initialize speech synthesizer 300 of FIG. 3. This initialization begins by parsing the sentences of text 306 into individual speech units that are annotated with high-level prosodic information. In FIG. 3, this is accomplished by a parser/semantic identifier 310, which is similar to parser/semantic identifier 402 of FIG. 4. The parsed speech units and their high-level prosodic description are then provided to a context vector generator 312, which is similar to context vector generator 404 of FIG. 4.
  • the context vectors produced by context vector generator 312 are provided to a component storing unit 314 along with speech samples produced by a sampler 316 from training speech signal 308. Each sample provided by sampler 316 corresponds to a speech unit identified by parser 310. Component storing unit 314 indexes each speech sample by its context vector to form an indexed set of stored speech components 318.
  • the samples are indexed by a prosody-dependent decision tree (PDDT), which is formed automatically using a classification and regression tree (CART).
  • CART provides a mechanism for selecting questions that can be used to divide the stored speech components into small groups of similar speech samples. Typically, each question is used to divide a group of speech components into two smaller groups. With each question, the components in the smaller groups become more homogenous. The process for using CART to form the decision tree is shown in FIG. 5 .
  • a list of candidate questions is generated for the decision tree.
  • each question is directed toward some coordinate or combination of coordinates in the context vector.
  • an expected square error is determined for all of the training samples from sampler 316.
  • the expected square error gives a measure of the distances among a set of features of each sample in a group.
  • the features are prosodic features of average fundamental frequency (F_a), average duration (F_b), and range of the fundamental frequency (F_c) for a unit.
  • the expected square error for a node is defined as:

    ESE(t) = E(W_a E_a + W_b E_b + W_c E_c)        (EQ. 1)

    where ESE(t) is the expected square error for all samples X on node t in the decision tree; E_a, E_b, and E_c are the square errors for F_a, F_b, and F_c, respectively; W_a, W_b, and W_c are weights; and the operation of determining the expected value of the sum of square errors is indicated by the outer E().
  • each square error is calculated as:

    E_j = E[(F_j - R(F_j))^2],  j = a, b, c        (EQ. 2)

    where R(F_j) is a regression value calculated from samples X on node t.
  • the first question in the question list is selected at step 504.
  • the selected question is applied to the context vectors at step 506 to group the samples into candidate sub-nodes for the tree.
  • the expected square error of each sub-node is then determined at step 508 using equations 1 and 2 above.
  • at step 510, a reduction in expected square error created by generating the two sub-nodes is determined.
  • the reduction is calculated as:

    ΔWESE(t) = P(t) ESE(t) - P(l) ESE(l) - P(r) ESE(r)        (EQ. 3)

    where ΔWESE(t) is the reduction in expected square error; ESE(t) is the expected square error of node t, against which the question was applied; P(t) is the percentage of samples in node t; ESE(l) and ESE(r) are the expected square errors of the left and right sub-nodes formed by the question, respectively; and P(l) and P(r) are the percentages of samples in the left and right sub-nodes, respectively.
  • the reduction in expected square error provided by the current question is stored and the CART process determines whether the current question is the last question in the list at step 512. If there are more questions in the list, the next question is selected at step 514 and the process returns to step 506 to divide the current node into sub-nodes based on the new question. When the last question has been evaluated, the question that provides the greatest reduction in expected square error is selected and used to split the node.
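A compact sketch of the split scoring of EQ. 1-3, assuming each sample is an (F_a, F_b, F_c) triple and taking the node mean as the regression value R(F_j); the equal weights and all helper names are assumptions for illustration:

```python
import numpy as np

def expected_square_error(samples, weights=(1.0, 1.0, 1.0)):
    """EQ. 1-2 with R(F_j) approximated by the per-node mean of feature j."""
    x = np.asarray(samples, dtype=float)        # shape (n_samples, 3)
    sq_err = (x - x.mean(axis=0)) ** 2          # (F_j - R(F_j))^2 per sample
    return float(sq_err.mean(axis=0) @ np.asarray(weights))

def split_reduction(node, left, right, total_count, weights=(1.0, 1.0, 1.0)):
    """EQ. 3: reduction in weighted expected square error for one question,
    with P(.) the fraction of all training samples that fall in each node."""
    p = lambda group: len(group) / total_count
    return (p(node) * expected_square_error(node, weights)
            - p(left) * expected_square_error(left, weights)
            - p(right) * expected_square_error(right, weights))
```

The question with the largest `split_reduction` would be kept for the node, mirroring the loop over the question list described above.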
  • when the decision tree is in its final form, each leaf node will contain a number of samples for a speech unit. These samples have slightly different prosody from each other. For example, they may have different phonetic contexts or different tonal contexts from each other. By maintaining these minor differences within a leaf node, this embodiment of the invention introduces slender diversity in prosody, which is helpful in removing monotonous prosody.
  • if the current leaf nodes are to be further divided at step 516, a leaf node is selected at step 518 and the process returns to step 504 to find a question to associate with the selected node. If the decision tree is complete at step 516, the process of FIG. 5 ends at step 520.
  • the process of FIG. 5 results in a prosody-dependent decision tree 320 of FIG. 3 and a set of stored speech samples 318, indexed by decision tree 320.
  • decision tree 320 and speech samples 318 can be used under further aspects of the present invention to generate concatenative speech without requiring prosody modification.
  • the process for forming concatenative speech begins by parsing a sentence in input text 304 using parser/semantic identifier 310 and identifying high-level prosodic information for each speech unit produced by the parse. This prosodic information is then provided to context vector generator 312, which generates a context vector for each speech unit identified in the parse. The parsing and the production of the context vectors are performed in the same manner as was done during the training of prosody decision tree 320.
  • the context vectors are provided to a component locator 322, which uses the vectors to identify a set of samples for the sentence.
  • component locator 322 uses a multi-tier non-uniform unit selection algorithm to identify the samples from the context vectors.
  • FIGS. 6 and 7 provide a block diagram and a flow diagram for the multi-tier non-uniform selection algorithm.
  • each vector in the set of input context vectors is applied to prosody-dependent decision tree 320 to identify a leaf node array 600 that contains a leaf node for each context vector.
  • a set of distances is determined by a distance calculator 602 for each input context vector. In particular, a separate distance is calculated between the input context vector and each context vector found in its respective leaf node.
  • under one embodiment, the distance is calculated as:

    D_c = Σ_{i=1}^{I} W_ci D_i        (EQ. 4)

    where D_c is the context distance; D_i is the distance for coordinate i of the context vector; W_ci is the weight associated with coordinate i; and I is the number of coordinates in each context vector.
  • the N samples with the closest context vectors are retained while the remaining samples are pruned from node array 600 to form pruned leaf node array 604.
  • the number of samples, N, to leave in the pruned nodes is determined by balancing improvements in prosody with improved processing time. In general, more samples left in the pruned nodes means better prosody at the cost of longer processing time.
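A sketch of the per-node distance and pruning step. The patent does not spell out the per-coordinate distance D_i or the weights W_ci at this point, so the 0/1 mismatch distance and the data layout below are placeholder assumptions:

```python
def context_distance(input_vec, stored_vec, weights):
    """EQ. 4 with a 0/1 mismatch as the per-coordinate distance D_i."""
    return sum(w * (a != b) for w, a, b in zip(weights, input_vec, stored_vec))

def prune_node(samples, input_vec, weights, n):
    """Keep the N samples whose training context vectors are closest to the
    input vector; each sample is assumed to be a (context_vector, waveform)
    pair taken from the leaf node."""
    ranked = sorted(samples,
                    key=lambda s: context_distance(input_vec, s[0], weights))
    return ranked[:n]
```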
  • the pruned array is provided to a Viterbi decoder 606, which identifies a lowest cost path through the pruned array.
  • the lowest cost path is identified simply by selecting the sample with the closest context vector in each node.
  • under one embodiment, the cost of a path through the pruned array is calculated as:

    C_c = Σ_{j=1}^{J} (W_c D_cj + W_s C_sj)        (EQ. 5)

    where C_c is the concatenation cost for the entire sentence; W_c is a weight associated with the distance measure of the concatenation cost; D_cj is the distance calculated in EQ. 4 for the j-th speech unit in the sentence; W_s is a weight associated with a smoothness measure of the concatenation cost; C_sj is the smoothness cost for the j-th speech unit; and J is the number of speech units in the sentence.
  • the smoothness cost in Equation 5 is defined to provide a measure of the prosodic mismatch between sample j and the samples proposed as the neighbors to sample j by the Viterbi decoder.
  • the smoothness cost is determined based on whether a sample and its neighbors were found as neighbors in an utterance in the training corpus. If a sample occurred next to its neighbors in the training corpus, the smoothness cost is zero since the samples contain the proper prosody to be combined together. If a sample did not occur next to its neighbors in the training corpus, the smoothness cost is set to one.
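The search for the lowest cost path under EQ. 5 can be sketched as a standard dynamic program over the pruned leaf-node array. `neighbors_in_corpus` stands in for whatever lookup decides whether two samples were adjacent in a training utterance; it, the weights, and the data layout are illustrative assumptions:

```python
def viterbi_select(pruned_nodes, w_c, w_s, neighbors_in_corpus):
    """Pick one sample per speech unit, minimizing EQ. 5.
    pruned_nodes[j] is a list of (sample, context_distance) pairs for the
    j-th unit; the smoothness cost is 0 when consecutive samples were
    neighbors in the training corpus and 1 otherwise, as described above."""
    # paths[k] = (accumulated cost, chosen samples) ending at candidate k
    paths = [(w_c * d, [s]) for s, d in pruned_nodes[0]]
    for node in pruned_nodes[1:]:
        new_paths = []
        for s, d in node:
            cost, best = min(
                ((c + w_s * (0.0 if neighbors_in_corpus(p[-1], s) else 1.0), p)
                 for c, p in paths),
                key=lambda t: t[0])
            new_paths.append((cost + w_c * d, best + [s]))
        paths = new_paths
    return min(paths, key=lambda t: t[0])[1]
```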
  • the identified samples 608 are provided to speech constructor 303.
  • speech constructor 303 simply concatenates the speech units to form synthesized speech 302. Thus, the speech units are combined without having to change their prosody.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)
  • Document Processing Apparatus (AREA)

Claims (14)

  1. A speech synthesis method comprising:
    generating a training context vector for each unit of a set of training speech units in a training speech corpus, each training context vector indicating the prosodic context of a training speech unit contained in the training speech corpus, wherein each context vector comprises a left tonal coordinate indicating a tone category of a speech unit located to the left of the training speech unit;
    indexing a set of speech segments associated with a set of training speech units based on the context vectors for the training speech units;
    generating an input context vector for each unit of a set of input speech units in an input text (304), each input context vector indicating the prosodic context of an input speech unit in the input text;
    using the input context vectors to identify a speech segment for each input speech unit; and
    concatenating the identified speech segments to form a synthetic speech signal.
  2. The method of claim 1, wherein each context vector comprises a position-in-phrase coordinate indicating the position of the speech unit in a phrase.
  3. The method of claim 1, wherein each context vector comprises a position-in-word coordinate indicating the position of the speech unit in a word.
  4. The method of claim 1, wherein each context vector comprises a left phonetic coordinate indicating a category for the phoneme located to the left of the speech unit.
  5. The method of claim 1, wherein each context vector comprises a right phonetic coordinate indicating a category for the phoneme located to the right of the speech unit.
  6. The method of claim 1, wherein each context vector comprises a right tonal coordinate indicating a category for the tone of the speech unit located to the right of the speech unit.
  7. The method of claim 1, wherein indexing a set of speech segments comprises generating a decision tree (320) based on the training context vectors.
  8. The method of claim 7, wherein using the context vectors to identify a speech segment comprises searching the decision tree using the input context vector.
  9. The method of claim 8, wherein searching the decision tree comprises:
    identifying a leaf node in the tree for each input context vector, each leaf node comprising at least one candidate speech segment; and
    selecting a candidate speech segment for each leaf node wherein, if there are several candidate speech segments at the node, the selection is based on a cost function.
  10. The method of claim 9, wherein the cost function comprises a distance between the input context vector and a training context vector associated with a speech segment.
  11. The method of claim 10, wherein the cost function further comprises a smoothness cost based on a candidate speech segment of at least one neighboring speech unit.
  12. The method of claim 11, wherein the smoothness cost gives preference to the selection of a series of speech segments for a series of input context vectors if the series of speech segments appeared as a series in the training speech corpus.
  13. The method of claim 1, wherein selecting segments for concatenative speech synthesis comprises:
    parsing an input text into speech units;
    identifying context information for each speech unit based on its location in the input text and on at least one neighboring speech unit;
    identifying a set of candidate speech segments for each speech unit based on the context information; and
    identifying a sequence of speech segments from among the candidate speech segments based in part on a smoothness cost between the speech segments.
  14. A computer-readable medium containing computer-executable instructions designed to carry out the method of any one of the preceding claims when executed on a computer (110).
EP07002565A 2000-12-04 2001-12-03 Method of speech synthesis without prosody modification Expired - Lifetime EP1777697B1 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US25116700P 2000-12-04 2000-12-04
US09/850,527 US6978239B2 (en) 2000-12-04 2001-05-07 Method and apparatus for speech synthesis without prosody modification
EP01128765A EP1213705B1 (fr) 2000-12-04 2001-12-03 Method and apparatus for speech synthesis

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
EP01128765.3 Division 2001-12-03
EP01128765A Division EP1213705B1 (fr) Method and apparatus for speech synthesis

Publications (3)

Publication Number Publication Date
EP1777697A2 EP1777697A2 (fr) 2007-04-25
EP1777697A3 EP1777697A3 (fr) 2008-06-18
EP1777697B1 EP1777697B1 (fr) 2013-03-20

Family

ID=37831625

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07002565A Expired - Lifetime EP1777697B1 (fr) 2000-12-04 2001-12-03 Method of speech synthesis without prosody modification

Country Status (1)

Country Link
EP (1) EP1777697B1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020118643A1 (fr) * 2018-12-13 2020-06-18 Microsoft Technology Licensing, Llc Neural text-to-speech synthesis with multi-level text information

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2559766A (en) * 2017-02-17 2018-08-22 Pastel Dreams Method and system for defining text content for speech segmentation
GB2559767A (en) * 2017-02-17 2018-08-22 Pastel Dreams Method and system for personalised voice synthesis

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6064960A (en) 1997-12-18 2000-05-16 Apple Computer, Inc. Method and apparatus for improved duration modeling of phonemes
JP2000075878A (ja) * 1998-08-31 2000-03-14 Canon Inc Speech synthesis device and method, and storage medium


Also Published As

Publication number Publication date
EP1777697A2 (fr) 2007-04-25
EP1777697A3 (fr) 2008-06-18

Similar Documents

Publication Publication Date Title
EP1213705B1 (fr) Method and apparatus for speech synthesis
US7124083B2 (en) Method and system for preselection of suitable units for concatenative speech
US7024362B2 (en) Objective measure for estimating mean opinion score of synthesized speech
US7263488B2 (en) Method and apparatus for identifying prosodic word boundaries
EP1138038B1 (fr) Speech synthesis by concatenation of speech signals
US5905972A (en) Prosodic databases holding fundamental frequency templates for use in speech synthesis
US7386451B2 (en) Optimization of an objective measure for estimating mean opinion score of synthesized speech
Taylor Concept-to-speech synthesis by phonological structure matching
US6778962B1 (en) Speech synthesis with prosodic model data and accent type
US6173263B1 (en) Method and system for performing concatenative speech synthesis using half-phonemes
EP1221693B1 (fr) Prosody reference matching for text-to-speech systems
US9666179B2 (en) Speech synthesis apparatus and method utilizing acquisition of at least two speech unit waveforms acquired from a continuous memory region by one access
US7328157B1 (en) Domain adaptation for TTS systems
Chu et al. A concatenative Mandarin TTS system without prosody model and prosody modification.
EP1777697B1 (fr) Method of speech synthesis without prosody modification
Chen et al. A Mandarin Text-to-Speech System
Dong et al. A Unit Selection-based Speech Synthesis Approach for Mandarin Chinese.
Narupiyakul et al. A stochastic knowledge-based Thai text-to-speech system
EP1501075B1 (fr) Speech synthesis by concatenation of speech waveforms
Narupiyakul et al. Thai syllable analysis for rule-based text to speech system
JPH09198074A (ja) Speech synthesis device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070206

AC Divisional application: reference to earlier application

Ref document number: 1213705

Country of ref document: EP

Kind code of ref document: P

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 13/06 20060101ALI20080515BHEP

Ipc: G10L 13/08 20060101AFI20080515BHEP

17Q First examination report despatched

Effective date: 20080724

AKX Designation fees paid

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AC Divisional application: reference to earlier application

Ref document number: 1213705

Country of ref document: EP

Kind code of ref document: P

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 602496

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130415

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 60147799

Country of ref document: DE

Effective date: 20130516

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130701

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130320

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 602496

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130320

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130621

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130320

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20130320

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130320

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130320

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130722

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130320

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130320

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130320

26N No opposition filed

Effective date: 20140102

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130320

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 60147799

Country of ref document: DE

Effective date: 20140102

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131203

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130320

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20131231

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20131203

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20131231

REG Reference to a national code

Ref country code: DE

Ref legal event code: R082

Ref document number: 60147799

Country of ref document: DE

Representative's name: GRUENECKER, KINKELDEY, STOCKMAIR & SCHWANHAEUS, DE

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20150108 AND 20150114

REG Reference to a national code

Ref country code: DE

Ref legal event code: R082

Ref document number: 60147799

Country of ref document: DE

Representative's name: GRUENECKER PATENT- UND RECHTSANWAELTE PARTG MB, DE

Effective date: 20150126

Ref country code: DE

Ref legal event code: R081

Ref document number: 60147799

Country of ref document: DE

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, REDMOND, US

Free format text: FORMER OWNER: MICROSOFT CORP., REDMOND, WASH., US

Effective date: 20150126

Ref country code: DE

Ref legal event code: R081

Ref document number: 60147799

Country of ref document: DE

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, REDMOND, US

Free format text: FORMER OWNER: MICROSOFT CORP., REDMOND, WASH., US

Effective date: 20130320

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130320

REG Reference to a national code

Ref country code: FR

Ref legal event code: TP

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, US

Effective date: 20150724

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 15

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 16

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 17

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20171113

Year of fee payment: 17

Ref country code: DE

Payment date: 20171129

Year of fee payment: 17

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20171129

Year of fee payment: 17

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 60147799

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20181203

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181231

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190702

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181203