US20140142946A1 - System and method for voice transformation - Google Patents

System and method for voice transformation Download PDF

Info

Publication number
US20140142946A1
Authority
US
United States
Prior art keywords
voice
timbre
new
vectors
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/625,317
Other versions
US8744854B1 (en)
Inventor
Chengjun Julian Chen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Columbia University in the City of New York
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US13/625,317 (granted as US8744854B1)
Priority to US13/692,584 (published as US8719030B2)
Priority to US13/692,621 (published as US20140088968A1)
Priority to PCT/US2013/053549 (published as WO2014046789A1)
Publication of US20140142946A1
Application granted
Publication of US8744854B1
Assigned to THE TRUSTEES OF COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK reassignment THE TRUSTEES OF COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHEN, CHENGJUN JULIAN
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L 13/04 Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • G10L 13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G10L 15/00 Speech recognition
    • G10L 15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G10L 15/04 Segmentation; Word boundary detection

Abstract

The present invention is a method and system to convert a speech signal into a parametric representation in terms of timbre vectors, and to recover the speech signal therefrom. The speech signal is first segmented into non-overlapping frames using glottal closure instant information. Each frame is converted into an amplitude spectrum using a Fourier analyzer, and Laguerre functions are then used to generate a set of coefficients which constitute a timbre vector. A sequence of timbre vectors can be subjected to a variety of manipulations. The new timbre vectors are converted back into voice signals by first transforming them into amplitude spectra using Laguerre functions, then generating phase spectra from the amplitude spectra using Kramers-Kronig relations. A Fourier transformer converts the amplitude spectra and phase spectra into elementary waveforms, which are then superposed to become the output voice. The method and system can be used for voice transformation, speech synthesis, and automatic speech recognition.

Description

    FIELD OF THE INVENTION
  • The present invention generally relates to voice transformation, in particular to voice transformation using orthogonal functions, and its applications in speech synthesis and automatic speech recognition.
  • BACKGROUND OF THE INVENTION
  • Voice transformation involves parameterizing a speech signal into a mathematical format that can be extensively manipulated, such that the properties of the original speech, for example pitch, speed, relative length of phones, prosody, and speaker identity, can be changed while the result still sounds natural. A straightforward application of voice transformation is singing synthesis. If the new parametric representation is successfully demonstrated to work well in voice transformation, it can be used for speech synthesis and automatic speech recognition.
  • Speech synthesis, or text-to-speech (TTS), involves the use of a computer-based system to convert a written document into audible speech. A good TTS system should generate natural, or human-like, and highly intelligible speech. In the early years, rule-based TTS systems, or formant synthesizers, were used. These systems generate intelligible speech, but the speech sounds robotic and unnatural.
  • Currently, the great majority of commercial TTS systems are concatenative TTS systems using the unit-selection method. According to this approach, a very large body of speech is recorded and stored. During synthesis, the input text is first analyzed and the required prosodic features are predicted. Then, appropriate units are selected from a huge speech database and stitched together. There are always mismatches at the borders of consecutive segments from different origins, and there are always cases where required segments do not exist in the speech database. Therefore, modifications of the recorded speech segments are necessary. Currently, the most popular methods of speech modification are the time-domain pitch-synchronized overlap-add (TD-PSOLA) method, LPC (linear prediction coefficients), mel-cepstral coefficients, and sinusoidal representations. However, using those methods, the quality of the voice is severely degraded. To improve the quality of speech synthesis and to allow for the use of a small database, voice transformation is the key. (See Part D of Springer Handbook of Speech Processing, Springer Verlag 2008.)
  • Automatic speech recognition (ASR) is the inverse process of speech synthesis. The first step, acoustic processing, reduces the speech signal to a parametric representation. Then, typically using an HMM (Hidden Markov Model) together with a statistical language model, the most likely text is produced. The state-of-the-art parametric representations for speech are LPC (linear prediction coefficients) and mel-cepstral coefficients. Obviously, the accuracy of speech parameterization affects the overall accuracy. (See Part E of Springer Handbook of Speech Processing, Springer Verlag 2008.)
  • SUMMARY OF THE INVENTION
  • The present invention is directed to a novel mathematical representation of the human voice as a timbre vector, together with a method of parameterizing speech into timbre vectors and a method of recovering human voice from a series of timbre vectors with variations. According to an exemplary embodiment of the invention, a speech signal is first segmented into non-overlapping frames using glottal closure moment information. Using Fourier analysis, the speech signal in each frame is converted into an amplitude spectrum; then Laguerre functions (based on a set of orthogonal polynomials) are used to convert the amplitude spectrum into a unit vector characteristic of the instantaneous timbre. A timbre vector is formed from this unit vector along with a voicedness index, the frame duration, and an intensity parameter. Because of the accuracy of the system and method and the complete separation of prosody and timbre, a variety of voice transformation operations can be applied, and the output voice is natural. A straightforward application of voice transformation is singing synthesis.
  • One difference between the current invention and all previous methods is that the frames, or processing units, are non-overlapping and do not require a window function. All previous parameterization methods, including linear prediction coefficients, sinusoidal models, mel-cepstral coefficients, and time-domain pitch-synchronized overlap-add methods, rely on overlapping frames, requiring a window function (such as a Hamming window, Hann window, cosine window, triangular window, Gaussian window, etc.) and a shift time smaller than the frame duration, which creates the overlap.
  • An important application of the inventive parametric representation is speech synthesis. Using the parametric representation in terms of timbre vectors, the speech segments can be modified to meet the prosodic requirements, and an output speech of high quality can be regenerated. Furthermore, because of the complete separation of timbre and prosody data, the synthesized speech can have a different speaker identity (baby, child, male, female, giant, etc.), base pitch (up to three octaves), speed (up to 10 times), and various prosodic variations (calm, emotional, up to shouting). The timbre vector method disclosed in the present invention can be used to build high-quality speech synthesis systems using a compact speech database.
  • Another important application of the inventive parametric representation of the speech signal is to serve as the acoustic signal format for improving the accuracy of automatic speech recognition. The timbre vector method disclosed in the present invention can greatly improve the accuracy of automatic speech recognition.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram of a voice transformation system using timbre vectors according to an exemplary embodiment of the present invention.
  • FIG. 2 is an explanation of the basic concept of parameterization according to an exemplary embodiment of the present invention.
  • FIG. 3 is the process of segmenting the PCM data according to an exemplary embodiment of the present invention.
  • FIG. 4 is a plot of the Laguerre functions according to an exemplary embodiment of the present invention.
  • FIG. 5 is the data structure of a timbre vector according to an exemplary embodiment of the present invention.
  • FIG. 6 is the binomial interpolation of timbre vectors according to an exemplary embodiment of the present invention.
  • FIG. 7 is a block diagram of a speech synthesis system using timbre vectors according to an exemplary embodiment of the present invention.
  • FIG. 8 is a block diagram of an automatic speech recognition system using timbre vectors according to an exemplary embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Various exemplary embodiments of the present invention are implemented on a computer system including one or more processors and one or more memory units. In this regard, according to exemplary embodiments, steps of the various methods described herein are performed on one or more computer processors according to instructions encoded on a computer-readable medium.
  • FIG. 1 is a block diagram of the voice transformation system according to an exemplary embodiment of the present invention. The source is the voice from a speaker 101. Through a microphone 102, the voice is converted into an electrical signal and recorded in the computer as a PCM (Pulse Code Modulation) signal 103. The PCM signal 103 is then segmented by the segmenter 104 into frames 105, according to segment points 110. There are two methods to generate the segment points. The first is to use an electroglottograph (EGG) 106 to detect the glottal closure instants (GCI) 107 directly (see FIG. 2). The second is to use a glottal closure instants detection unit 108 to generate the GCI from the voice waveform. The glottal closure instants (GCI) 107 and the voice signal (PCM) 103 are sent to a processing unit 109 to generate a complete set of segment points 110. The details of this process are shown in FIG. 3.
  • The voice signal in each frame 105 proceeds through a Fourier analysis unit 111 to generate the amplitude spectrum 112. The amplitude spectrum 112 proceeds through an orthogonal transform unit 113 to generate timbre vectors 114. In exemplary embodiments, Laguerre functions are the most appropriate mathematical functions for converting the amplitude spectrum into a compact and convenient form (see FIG. 4). The data structure of a timbre vector is shown in FIG. 5.
  • After the PCM signal 103 is converted into timbre vectors 114, a number of voice manipulations can be made according to specifications 115 by the voice manipulator 116, so as to generate new timbre vectors 117; the voice can then be regenerated from the new timbre vectors 117. In detail, the steps are as follows: the Laguerre transform 118 is used to regenerate the amplitude spectrum 119; the phase generator 120 (based on the Kramers-Kronig relations) is used to generate the phase spectrum 121; an FFT (Fast Fourier Transform) 122 is used to generate an elementary acoustic wave 123 from the amplitude spectrum and the phase spectrum; those elementary acoustic waves 123 are then superposed according to the timing information 124 in the new timbre vectors, each one delayed by the frame duration 125 of the previous frame. The output wave in electric form then drives a loudspeaker 126 to produce the output voice 127.
  • FIG. 2 shows the process of speech generation, particularly the generation of voiced sections, and the properties of the PCM and EGG signals. Air flow 201 comes from the lungs to the opening between the two vocal cords, or glottis, 202. If the glottis is constantly open, there is a constant air flow 203, but no voice signal is generated. At the instant the glottis closes (a glottal closure, which is always very rapid due to the Bernoulli effect), the inertia of the moving air in the vocal tract 204 generates a d'Alembert wave front, which then excites an acoustic resonance. The action of the glottis is monitored by the signals from an electroglottograph (EGG) 205. When there is a glottal closure, the instrument generates a sharp peak in the derivative of the EGG signal, shown as 207 in FIG. 2. A microphone 206 is placed near the mouth to generate a signal, typically a Pulse Code Modulation (PCM) signal, shown as 209 in FIG. 2. If the glottis remains closed after a closure, as shown at 208, the acoustic excitation sustains, as shown at 210.
  • FIG. 3 shows the details of how processing unit 109 generates the segmentation points. The input data are the PCM signal 301-303 and the EGG signal 304, produced by the source speaker 101. When there are clear peaks in the EGG signal, such as 304, corresponding to PCM signal 301, those peaks are selected as the segmentation points 305. For some quasi-periodic segments of the voice 302, there are no clear EGG peaks. The segmentation points are then generated by comparing the waveform 302 with the neighboring ones 301; if the waveform 302 is still periodic, segmentation points 306 are generated at the same intervals as the segmentation points 305. If the signal is no longer periodic, such as 303, the PCM is segmented according to points 307 into frames with an equal interval, here 5 msec. Therefore, the entire PCM signal is segmented into frames.
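  • As an illustration only, the following Python sketch mirrors the three-branch segmentation logic of FIG. 3. It assumes the glottal closure instants are already available as sample indices (from the EGG or a GCI detector), and it stands in for the waveform comparison described above with a normalized-autocorrelation test; the function names and the 0.7 threshold are hypothetical, not taken from the patent.

```python
import numpy as np

def is_periodic(frame, period, threshold=0.7):
    """Hypothetical periodicity test: normalized autocorrelation of the
    frame at the candidate pitch period (a stand-in for the waveform
    comparison described in the text)."""
    frame = frame.astype(float)
    a, b = frame[:-period], frame[period:]
    denom = np.sqrt(np.dot(a, a) * np.dot(b, b)) + 1e-12
    return np.dot(a, b) / denom > threshold

def segment_points(pcm, gci, sr, frame_ms=5.0):
    """Sketch of FIG. 3: keep clear GCI peaks as segmentation points,
    extend at the last pitch period while the signal stays quasi-periodic,
    and fall back to fixed 5 ms frames where it does not."""
    step = int(sr * frame_ms / 1000.0)                  # 5 ms fallback frame
    points = sorted(gci)                                # branch 1: EGG peaks
    pos = points[-1] if points else 0
    period = points[-1] - points[-2] if len(points) > 1 else step
    while pos + 2 * period <= len(pcm):
        if is_periodic(pcm[pos:pos + 2 * period], period):
            pos += period                               # branch 2: same interval
        else:
            pos += step                                 # branch 3: equal intervals
        points.append(pos)
    return np.array(points)
```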
  • The values of the voice signal at two adjacent closure moments may not match. The following algorithm may be used to match the ends. Let the number of sampling points between two adjacent glottal closures be N, and the original voice signal be $x_0(n)$. The smoothed signal $x(n)$ in a small interval $0 < n < M$ is defined as

  • $$x(N-n) = x_0(N-n)\,\frac{n}{M} + x_0(-n)\,\frac{M-n}{M},$$

  • where M is about N/10; otherwise $x(n) = x_0(n)$. Direct inspection shows that the ends of the waveform are matched and smooth; therefore, no window functions are required. The waveform in a frame is processed by Fourier analysis to generate an amplitude spectrum. The amplitude spectrum is further processed by a Laguerre transform unit to generate timbre vectors, as follows.
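  • A minimal NumPy sketch of this end-matching step follows. It assumes the frame samples $x_0(0),\dots,x_0(N-1)$ and the few samples immediately preceding the frame (playing the role of $x_0(-n)$) are available as separate arrays, and takes M ≈ N/10 as suggested above.

```python
import numpy as np

def match_ends(frame, pre_frame, M=None):
    """Blend the last M samples of a frame toward the samples just before
    its start, per x(N-n) = x0(N-n)*(n/M) + x0(-n)*((M-n)/M) for 0 < n < M.
    `pre_frame[n]` plays the role of x0(-n). A sketch, not production code."""
    x = frame.astype(float).copy()
    N = len(frame)
    if M is None:
        M = max(2, N // 10)          # M is about N/10, per the text
    for n in range(1, M):
        x[N - n] = frame[N - n] * n / M + pre_frame[n] * (M - n) / M
    return x
```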
  • Laguerre functions are defined as

  • $$\Phi_n(x) = \sqrt{\frac{n!}{(n+k)!}}\; e^{-x/2}\, x^{k/2}\, L_n^{(k)}(x),$$

  • where k is an integer, typically k = 2 or k = 4; and the associated Laguerre polynomials are

  • $$L_n^{(k)}(x) = \frac{e^{x}\, x^{-k}}{n!}\, \frac{d^n}{dx^n}\!\left(e^{-x}\, x^{n+k}\right).$$
  • The amplitude spectrum $A(\omega)$ is expanded into Laguerre functions,

  • $$A(\omega) = \sum_{n=0}^{N} C_n\, \Phi_n(\kappa\omega),$$

  • where the coefficients are calculated by

  • $$C_n = \int_0^{\infty} \kappa\, A(\omega)\, \Phi_n(\kappa\omega)\, d\omega,$$

  • and κ is a scaling factor chosen to maximize accuracy. The norm of the vector C is the intensity parameter I,

  • $$I = \sqrt{\sum_{n=0}^{N} C_n^2},$$

  • and the normalized Laguerre coefficients are defined as

  • $$c_n = C_n / I.$$
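  • The projection can be sketched in Python with SciPy's generalized Laguerre polynomials. The square-root normalization follows the definition of $\Phi_n$ above; the default of 29 coefficients matches FIG. 4, but the scale factor κ used here is a placeholder, not the patent's tuned value.

```python
import numpy as np
from math import factorial
from scipy.integrate import trapezoid
from scipy.special import eval_genlaguerre

def laguerre_function(n, x, k=4):
    """Phi_n(x) = sqrt(n!/(n+k)!) * exp(-x/2) * x^(k/2) * L_n^(k)(x)."""
    norm = np.sqrt(factorial(n) / factorial(n + k))
    return norm * np.exp(-x / 2.0) * x ** (k / 2.0) * eval_genlaguerre(n, k, x)

def timbre_coefficients(amp, freqs, n_coeffs=29, k=4, kappa=1e-3):
    """Compute C_n = kappa * integral A(w) Phi_n(kappa*w) dw on a sampled
    spectrum, then the intensity I = ||C|| and the unit vector c = C / I.
    kappa = 1e-3 is an illustrative placeholder."""
    x = kappa * freqs
    C = np.array([kappa * trapezoid(amp * laguerre_function(n, x, k), freqs)
                  for n in range(n_coeffs)])
    I = np.linalg.norm(C)
    return C / I, I
```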
  • To recover the phase spectrum $\phi(\omega)$ from the amplitude spectrum $A(\omega)$, the Kramers-Kronig relations are used:

  • $$\phi(\omega) = -\frac{1}{\pi}\,\lim_{\varepsilon\to 0}\left[\int_{-\infty}^{\omega-\varepsilon}\frac{\ln A(\omega')}{\omega'-\omega}\,d\omega' + \int_{\omega+\varepsilon}^{\infty}\frac{\ln A(\omega')}{\omega'-\omega}\,d\omega'\right].$$
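  • A direct rectangle-rule discretization of this principal-value integral can be sketched as follows. The symmetric extension A(−ω) = A(ω), valid for the amplitude spectrum of a real signal, fills in the negative-frequency half; a production system would more likely use an FFT-based Hilbert transform, which yields the same minimum-phase response far faster.

```python
import numpy as np

def kk_phase(amp, domega):
    """Discrete principal-value evaluation of the Kramers-Kronig integral.
    `amp` samples A(w) on the uniform grid w = 0, domega, 2*domega, ...
    A plain O(n^2) sketch, not a production algorithm."""
    n = len(amp)
    log_a = np.log(np.maximum(amp, 1e-12))        # guard against log(0)
    w = np.arange(-n + 1, n) * domega             # symmetric frequency grid
    la = np.concatenate([log_a[:0:-1], log_a])    # ln A(-w) = ln A(w)
    phase = np.empty(n)
    for i in range(n):
        diff = w - i * domega
        mask = np.abs(diff) > 0.5 * domega        # excise the singular point (PV)
        phase[i] = -np.sum(la[mask] / diff[mask]) * domega / np.pi
    return phase
```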
  • The output wave for a frame, the acoustic exciton, can be calculated from the amplitude spectrum $A(\omega)$ and the phase spectrum $\phi(\omega)$:

  • $$x(t) = \int_0^{\infty} A(\omega)\,\cos\bigl(\omega t - \phi(\omega)\bigr)\,d\omega.$$
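  • On a sampled spectrum this integral reduces to a cosine sum; a minimal sketch, with `t` an array of sample times covering the frame and its decaying tail:

```python
import numpy as np

def elementary_wave(amp, phase, domega, t):
    """Rectangle-rule evaluation of x(t) = integral A(w) cos(w*t - phi(w)) dw
    over the sampled spectrum."""
    w = np.arange(len(amp)) * domega
    return (amp * np.cos(np.outer(t, w) - phase)).sum(axis=1) * domega
```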
  • FIG. 4 shows the Laguerre functions. After proper scaling, twenty-nine Laguerre functions are used on the frequency scale 401 of 0 to 11 kHz. The first Laguerre function 402 effectively probes the first formant. For higher-order Laguerre functions, such as the Laguerre function 403, the resolution in the low-frequency range is successively improved and extended to the high-frequency range 404. Because of this scaling, the expansion provides an accurate but concise representation of the spectrum.
  • FIG. 5 shows the data structure of a timbre vector including the voicedness index (V) 501, the frame duration (T) 502, the intensity parameter (I) 503, and the normalized Laguerre coefficients 504.
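  • For concreteness, the FIG. 5 record might be represented as follows; the field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TimbreVector:
    """Sketch of the FIG. 5 data structure."""
    voicedness: float    # V: voicedness index
    duration: float      # T: frame duration (the pitch period, if voiced)
    intensity: float     # I: norm of the Laguerre coefficient vector
    coeffs: np.ndarray   # c_n: normalized (unit-length) Laguerre coefficients
```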
  • There are many possible voice transformation manipulations, including, for example, the following:
  • Timbre interpolation. The unit vector of Laguerre coefficients varies slowly from frame to frame. It can be interpolated to a reduced or an extended number of frames for any section of voice, to produce natural-sounding speech with arbitrary temporal variations. For example, the speech can be made very fast but still recognizable by a blind person.
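  • A sketch of such interpolation, assuming the coefficients of a section are stacked into an (n_frames × n_coeffs) array. Interpolated vectors are renormalized, since a linear mix of unit vectors has norm slightly below one.

```python
import numpy as np

def interpolate_timbre(coeffs, n_out):
    """Resample a sequence of unit coefficient vectors to n_out frames by
    linear interpolation along the frame axis, then renormalize each row."""
    n_in = coeffs.shape[0]
    src = np.linspace(0.0, n_in - 1.0, n_out)     # fractional source indices
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, n_in - 1)
    frac = (src - lo)[:, None]
    out = coeffs[lo] * (1.0 - frac) + coeffs[hi] * frac
    return out / np.linalg.norm(out, axis=1, keepdims=True)
```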
  • Timbre fusing. By connecting two sets of timbre vectors of two different phonemes and smear-averaging over the juncture, a natural-sounding transition is generated. Phoneme assimilation may be produced automatically: by connecting a syllable ending in [g] to a syllable starting with [n], after fusing, the sound [n] is automatically assimilated into [ng].
  • FIG. 6 shows the principle of the timbre fusing operation. The original timbre vectors from the first phoneme 601 include timbre vectors A, B, and C. The original timbre vectors from the second phoneme 602 include timbre vectors D and E. The output timbre vectors 603 through 607 are weighted averages of the original timbre vectors. For example, output timbre vector D′ is generated from timbre vectors C, D, and E using the binomial coefficients 1, 2, and 1; output timbre vector C′ is generated from original timbre vectors A, B, C, D, and E using the binomial coefficients 1, 4, 6, 4, and 1. A very simple case is shown here; in general, the number of timbre vectors involved can be larger, 2^n + 1, for example 9, 17, 33, or 65 for n = 3, 4, 5, and 6.
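  • The following sketch applies a fixed-width binomial smear around the juncture between two phoneme segments. In FIG. 6 the window width varies with distance from the juncture, so this fixed-width variant, and its boundary handling, are simplifying assumptions.

```python
import numpy as np
from math import comb

def fuse_timbre(coeffs, juncture, n=2):
    """Replace each coefficient vector within n frames of the juncture by
    the binomial-weighted average of its 2n+1 neighbors (weights 1,4,6,4,1
    for n=2), then renormalize to unit length."""
    w = np.array([comb(2 * n, i) for i in range(2 * n + 1)], dtype=float)
    w /= w.sum()
    out = coeffs.astype(float).copy()
    for j in range(max(juncture - n, n), min(juncture + n + 1, len(coeffs) - n)):
        out[j] = (w[:, None] * coeffs[j - n:j + n + 1]).sum(axis=0)
        out[j] /= np.linalg.norm(out[j])
    return out
```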
  • Pitch modification. The state-of-the-art technology for pitch modification of a speech signal is the time-domain pitch-synchronized overlap-add (TD-PSOLA) method, which can change pitch only from about −30% to +50%; beyond that range, the output sounds unnatural. Here, pitch can easily be modified by changing the time of separation T, then using timbre interpolation to compensate for the change in speed. Natural-sounding speech can be produced with pitch modifications as large as three octaves.
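  • A sketch of this duration-based pitch shift, reusing the TimbreVector record from the FIG. 5 sketch above; the 0.5 voicedness threshold is an assumption. After shifting, interpolate_timbre (above) would be applied to restore the original total duration.

```python
def shift_pitch(timbre_vectors, semitones):
    """Raise (or lower) pitch by scaling each voiced frame duration T by
    2**(-semitones/12); shorter pitch periods mean higher pitch. Speed is
    then re-compensated separately with timbre interpolation."""
    factor = 2.0 ** (-semitones / 12.0)
    for tv in timbre_vectors:
        if tv.voicedness > 0.5:      # only voiced frames carry pitch
            tv.duration *= factor
    return timbre_vectors
```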
  • Intensity profiling. Because the intensity parameter I is a property of a frame, it can be changed to produce any stress pattern required by prosody input.
  • Change of speaker identity. First, by rescaling the amplitude spectrum on the frequency axis, the apparent head size can be changed. The voice of an average adult speaker can be changed to that of a baby, a child, a woman, a man, or a giant. Second, by using a filter to alter the spectral envelope, special voice effects can be created.
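  • The frequency-axis rescaling can be sketched as a simple resampling of the amplitude spectrum. The warp A′(ω) = A(ω/α) is one plausible reading of "rescaling on the frequency axis": α > 1 moves the formants up (smaller apparent head), α < 1 moves them down.

```python
import numpy as np

def rescale_head_size(amp, freqs, alpha):
    """Warp the amplitude spectrum so A_new(w) = A(w / alpha)."""
    return np.interp(freqs / alpha, freqs, amp, left=amp[0], right=0.0)
```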
  • Using those voice manipulation capabilities and timbre fusing (see FIG. 6), high-quality speech synthesizers with a compact database can be constructed using the parametric representation based on timbre vectors (see FIG. 7). The speech synthesis system has two major parts: the database building part 701 (the left-hand side of FIG. 7) and the synthesis part 721 (the right-hand side of FIG. 7).
  • In the database building unit 701, a source speaker 702 reads a prepared text. The voice is recorded by a microphone to become the PCM signal 703. The glottal closure signal is recorded by an electroglottograph (EGG) to become the EGG signal 704. The origin and properties of those signals are shown in FIG. 2. The EGG signal and the PCM signal are used by the processing unit 705 to generate a set of segment points 706. The details of the segmenting process, i.e., the function of the processing unit, are shown in FIG. 3. The PCM signal is segmented by the segmenter 707 into frames 708 using the segment points 706. Each frame is processed by a Fourier analysis unit 709 to generate the amplitude spectrum 710. The amplitude spectrum of each frame is then processed by a Laguerre transform unit 711 into a unit vector representing the instantaneous timbre of that frame, which forms the basis of the timbre vectors 712. The Laguerre functions are shown in FIG. 4. The structure of the timbre vector is shown in FIG. 5. The timbre vectors of various units of speech, such as, for example, phonemes, diphones, demisyllables, syllables, words, and even phrases, are then stored in the speech database 720.
  • In the synthesis unit 721, the input text 722, together with the synthesis parameters 723, is fed into the frontend 724. Detailed instructions 725 about the phonemes, intensity, and pitch values for generating the desired speech are generated and input to a processing unit 726. The processing unit 726 selects timbre vectors from the database 720 and converts the selected timbre vectors into a new series of timbre vectors 727 according to the instructions 725, using timbre fusing if necessary (see FIG. 6). Each timbre vector is converted into an amplitude spectrum 729 by the Laguerre transform unit 728. The phase spectrum 731 is generated from the amplitude spectrum 729 by the phase generator 730 using a Kramers-Kronig relations algorithm. The amplitude spectrum 729 and the phase spectrum 731 are sent to an FFT (Fast Fourier Transform) unit 732 to generate an elementary acoustic wave 733. Those elementary acoustic waves 733 are then superposed by the superposition unit 735, according to the timing information 734 provided by the new timbre vectors 727, to generate the final result, the output speech signal 736.
  • The parametric representation of the human voice in terms of timbre vectors can also be used as the basis of automatic speech recognition systems. To date, the most widely used acoustic feature set, or parametric representation of human speech, in automatic speech recognition is the mel-cepstrum. First, the speech signal is segmented into frames of fixed length, typically 20 msec, with a window, typically a Hann window or Hamming window, and a shift of 10 msec. Such parametric representations are crude and inaccurate, and frames that cross phoneme borders occur very often.
  • The parametric representation based on timbre vectors is more accurate. In particular, a well-behaved timbre distance δ between two frames can be defined as

  • $$\delta = \sqrt{\sum_{n=0}^{N}\left[c_n^{(1)} - c_n^{(2)}\right]^2},$$

  • where $c_n^{(1)}$ and $c_n^{(2)}$ are elements of the normalized Laguerre coefficients of the two timbre vectors (see FIG. 5). Experiments have shown that for two timbre vectors of the same phoneme (not a diphthong), the distance is less than 0.1. For timbre vectors of different vowels, the distance is 0.1 to 0.6. Furthermore, because of the presence of the voicedness index V (see FIG. 5), vowels and unvoiced consonants are well separated. Because of the intensity parameter I, silence is well separated from real sound. For the recognition of tone languages such as Mandarin, Cantonese, and Thai, pitch is an important parameter (see, for example, U.S. Pat. No. 5,751,905 and U.S. Pat. No. 6,510,410). The frame duration T provides a very accurate measure of pitch (see FIG. 5). Therefore, using the parametric representation based on timbre vectors, the accuracy of speech recognition can be greatly improved.
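  • In code, this distance is a plain Euclidean norm over the normalized coefficients of the two frames:

```python
import numpy as np

def timbre_distance(c1, c2):
    """Euclidean distance between two unit Laguerre coefficient vectors;
    per the text, < 0.1 within one (non-diphthong) phoneme and roughly
    0.1 to 0.6 between different vowels."""
    return float(np.sqrt(np.sum((c1 - c2) ** 2)))
```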
  • FIG. 8 shows a block diagram of an automatic speech recognition system based on timbre vectors. The first half of the procedure, converting the speech signal into timbre vectors, is similar to steps 102 through 114 of FIG. 1 for voice transformation. The voice from a speaker 801 is recorded in the computer as the PCM signal 803. The PCM signal 803 is then segmented by the segmenter 804 into frames 805, according to the segment points 810. There are two methods to generate the segment points. The first is to use an electroglottograph (EGG) 806 to detect the glottal closure instants (GCI) 807 directly (see FIG. 2). The second is to use the glottal closure instants detection unit 808 to generate the GCI from the voice waveform. The glottal closure instants (GCI) 807 and the voice signal (PCM) 803 are sent to a processing unit 809 to generate a complete set of segment points 810. The details of this process are shown in FIG. 3.
  • The voice signal in each frame 805 proceeds through a Fourier analysis unit 811 to generate amplitude spectrum 812. The amplitude spectrum 812 proceeds through a Laguerre transform 813 to generate timbre vectors 814.
  • The timbre vectors 814 are streamed into the acoustic decoder 815 and compared with the timbre vectors stored in the acoustic models 816. A possible phoneme sequence 817 is generated. The phoneme sequence is sent to the language decoder 818, assisted by the language model 819, to find the most probable output text 820. The language decoder 818 may be essentially the same as in other automatic speech recognition systems. Because the accuracy of the inventive parametric representation is much higher, the accuracy of the acoustic decoder 815 may be much higher as well.
  • When the speech recognition system is used in a quiet environment, the PCM signals generated through a microphone can be sufficient. In noisy environments, the addition of an electroglottograph 806 can substantially improve the accuracy.
  • In ordinary speech recognition systems, adaptation to a given speaker, by recording a good number (for example, 100) of spoken sentences from that speaker and processing them, can improve the accuracy. Because of the simplicity of the timbre-vector parametric representation, it is possible to use a single recorded sentence from a given speaker to improve the accuracy.
  • While this invention has been described in conjunction with the exemplary embodiments outlined above, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the exemplary embodiments of the invention, as set forth above, are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the invention.

Claims (20)

I claim:
1. A method of voice transformation using one or more processors comprising:
segmenting the voice-signal into non-overlapping frames, wherein for voiced sections the frames are pitch periods;
generating amplitude spectra of the said frames using Fourier analysis;
transforming the said amplitude spectra into timbre vectors using orthogonal functions;
manipulating the voice parameters of the said timbre vectors to generate a set of new timbre vectors according to the specifications of voice transformation;
inverse transforming the new timbre vectors into new amplitude spectra using orthogonal functions;
generating new phase spectra from the new amplitude spectra using Kramers-Kronig relations;
generating elementary acoustic waves from the new amplitude spectra and the new phase spectra using Fourier transform;
producing new voice waveform by superposing the said elementary acoustic waves according to the timing data given by the new timbre vectors.
2. The method of claim 1, wherein segmenting of the voice-signal is based on the glottal closure signals from an electroglottograph and by analyzing the sections of the voice-signal where glottal closure signals do not exist.
3. The method of claim 1, wherein segmenting of the voice-signal is based on analyzing the entirety of the voice-signal by software comprising the capability of pitch period detection.
4. The method of claim 1, wherein the orthogonal functions are Laguerre functions.
5. The method of claim 1, wherein the voice parameter manipulating comprises executing local duration alteration using timbre interpolation.
6. The method of claim 1, wherein the voice parameter manipulating comprises executing overall speed alteration using timbre interpolation.
7. The method of claim 1, wherein the voice parameter manipulating comprises executing pitch profile alteration by changing the frame duration parameters of the timbre vectors.
8. The method of claim 1, wherein the voice parameter manipulating comprises executing intensity profile alteration by changing the intensity parameters of the timbre vectors.
9. The method of claim 1, wherein the voice parameter manipulating comprises executing speaker identity alteration by changing the head size and the spectral profiles.
10. The method of claim 1, wherein the voice parameter manipulating comprises implementing melody information through the frame duration parameters in the timbre vectors for singing synthesis.
11. A system of voice transformation comprising:
one or more data processing apparatus; and
a computer-readable medium coupled to the one or more data processing apparatus having instructions stored thereon which, when executed by the one or more data processing apparatus, cause the one or more data processing apparatus to perform a method comprising:
segmenting the voice-signal into non-overlapping frames, wherein for voiced sections the frames are pitch periods;
generating amplitude spectra of the said frames using Fourier analysis;
transforming the said amplitude spectra into timbre vectors using orthogonal functions;
manipulating the voice parameters of the said timbre vectors to generate a set of new timbre vectors according to the specifications of voice transformation;
inverse transforming the new timbre vectors into new amplitude spectra using orthogonal functions;
generating new phase spectra from the new amplitude spectra using Kramers-Kronig relations;
generating elementary acoustic waves from the new amplitude spectra and the new phase spectra using Fourier transform;
producing new voice waveform by superposing the said elementary acoustic waves according to the timing data given by the new timbre vectors.
12. The system of claim 11, wherein segmenting of the voice-signal is based on the glottal closure signals from an electroglottograph and by analyzing the sections of the voice-signal where glottal closure signals do not exist.
13. The system of claim 11, wherein segmenting of the voice-signal is based on analyzing the entirety of the voice-signal by software comprising the capability of pitch period detection.
14. The system of claim 11, wherein the orthogonal functions are Laguerre functions.
15. The system of claim 11, wherein the voice parameter manipulating comprises executing local duration alteration using timbre interpolation.
16. The system of claim 11, wherein the voice parameter manipulating comprises executing overall speed alteration using timbre interpolation.
17. The system of claim 11, wherein the voice parameter manipulating comprises executing pitch profile alteration by changing the frame duration parameters of the timbre vectors.
18. The system of claim 11, wherein the voice parameter manipulating comprises executing intensity profile alteration by changing the intensity parameters of the timbre vectors.
19. The system of claim 11, wherein the voice parameter manipulating comprises executing speaker identity alteration by changing the head size and the spectral profiles.
20. The system of claim 11, wherein the voice parameter manipulating comprises implementing melody information through the frame duration parameters in the timbre vectors for singing synthesis.
US13/625,317 2012-09-24 2012-09-24 System and method for voice transformation Active US8744854B1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US13/625,317 US8744854B1 (en) 2012-09-24 2012-09-24 System and method for voice transformation
US13/692,584 US8719030B2 (en) 2012-09-24 2012-12-03 System and method for speech synthesis
US13/692,621 US20140088968A1 (en) 2012-09-24 2012-12-03 System and method for speech recognition using timbre vectors
PCT/US2013/053549 WO2014046789A1 (en) 2012-09-24 2013-08-05 System and method for voice transformation, speech synthesis, and speech recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/625,317 US8744854B1 (en) 2012-09-24 2012-09-24 System and method for voice transformation

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US13/692,621 Continuation US20140088968A1 (en) 2012-09-24 2012-12-03 System and method for speech recognition using timbre vectors
US13/692,584 Continuation US8719030B2 (en) 2012-09-24 2012-12-03 System and method for speech synthesis

Publications (2)

Publication Number Publication Date
US20140142946A1 (en) 2014-05-22
US8744854B1 US8744854B1 (en) 2014-06-03

Family

ID=50339719

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/625,317 Active US8744854B1 (en) 2012-09-24 2012-09-24 System and method for voice transformation
US13/692,584 Active US8719030B2 (en) 2012-09-24 2012-12-03 System and method for speech synthesis
US13/692,621 Abandoned US20140088968A1 (en) 2012-09-24 2012-12-03 System and method for speech recognition using timbre vectors

Family Applications After (2)

Application Number Title Priority Date Filing Date
US13/692,584 Active US8719030B2 (en) 2012-09-24 2012-12-03 System and method for speech synthesis
US13/692,621 Abandoned US20140088968A1 (en) 2012-09-24 2012-12-03 System and method for speech recognition using timbre vectors

Country Status (2)

Country Link
US (3) US8744854B1 (en)
WO (1) WO2014046789A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140130655A1 (en) * 2012-11-13 2014-05-15 Yamaha Corporation Delayed registration data readout in electronic music apparatus
US20140156280A1 (en) * 2012-11-30 2014-06-05 Kabushiki Kaisha Toshiba Speech processing system
WO2015183254A1 (en) * 2014-05-28 2015-12-03 Interactive Intelligence, Inc. Method for forming the excitation signal for a glottal pulse model based parametric speech synthesis system
CN106920547A (en) * 2017-02-21 2017-07-04 腾讯科技(上海)有限公司 Phonetics transfer method and device
US10014007B2 (en) 2014-05-28 2018-07-03 Interactive Intelligence, Inc. Method for forming the excitation signal for a glottal pulse model based parametric speech synthesis system
US10255903B2 (en) 2014-05-28 2019-04-09 Interactive Intelligence Group, Inc. Method for forming the excitation signal for a glottal pulse model based parametric speech synthesis system
CN111179905A (en) * 2020-01-10 2020-05-19 北京中科深智科技有限公司 Rapid dubbing generation method and device
US10978076B2 (en) * 2017-03-22 2021-04-13 Kabushiki Kaisha Toshiba Speaker retrieval device, speaker retrieval method, and computer program product
CN112820267A (en) * 2021-01-15 2021-05-18 科大讯飞股份有限公司 Waveform generation method, training method of related model, related equipment and device
CN113066472A (en) * 2019-12-13 2021-07-02 科大讯飞股份有限公司 Synthetic speech processing method and related device

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3020732A1 (en) * 2014-04-30 2015-11-06 Orange PERFECTED FRAME LOSS CORRECTION WITH VOICE INFORMATION
CN104021148B (en) * 2014-05-16 2017-05-03 小米科技有限责任公司 Method and device for adjusting sound effect
US9607610B2 (en) * 2014-07-03 2017-03-28 Google Inc. Devices and methods for noise modulation in a universal vocoder synthesizer
US9685169B2 (en) * 2015-04-15 2017-06-20 International Business Machines Corporation Coherent pitch and intensity modification of speech signals
JP6496030B2 (en) * 2015-09-16 2019-04-03 株式会社東芝 Audio processing apparatus, audio processing method, and audio processing program
US10861476B2 (en) * 2017-05-24 2020-12-08 Modulate, Inc. System and method for building a voice database
CN107240401B (en) * 2017-06-13 2020-05-15 厦门美图之家科技有限公司 Tone conversion method and computing device
JP7147211B2 (en) * 2018-03-22 2022-10-05 ヤマハ株式会社 Information processing method and information processing device
CN108830232B (en) * 2018-06-21 2021-06-15 浙江中点人工智能科技有限公司 Voice signal period segmentation method based on multi-scale nonlinear energy operator
CN111048062B (en) * 2018-10-10 2022-10-04 华为技术有限公司 Speech synthesis method and apparatus
CN110138654B (en) * 2019-06-06 2022-02-11 北京百度网讯科技有限公司 Method and apparatus for processing speech
WO2021030759A1 (en) 2019-08-14 2021-02-18 Modulate, Inc. Generation and detection of watermark for real-time voice conversion
CN110689885B (en) * 2019-09-18 2023-05-23 平安科技(深圳)有限公司 Machine-synthesized voice recognition method and device, storage medium and electronic equipment
CN110808026B (en) * 2019-11-04 2022-08-23 金华航大北斗应用技术有限公司 Electroglottography voice conversion method based on LSTM
CN111435591B (en) * 2020-01-17 2023-06-20 珠海市杰理科技股份有限公司 Voice synthesis method and system, audio processing chip and electronic equipment
CN111489734B (en) * 2020-04-03 2023-08-22 支付宝(杭州)信息技术有限公司 Model training method and device based on multiple speakers
CN114203147A (en) 2020-08-28 2022-03-18 微软技术许可有限责任公司 System and method for cross-speaker style transfer in text-to-speech and for training data generation
CN112562728A (en) * 2020-11-13 2021-03-26 百果园技术(新加坡)有限公司 Training method for a generative adversarial network, and audio style transfer method and device
CN112365882B (en) * 2020-11-30 2023-09-22 北京百度网讯科技有限公司 Speech synthesis method, model training method, device, equipment and storage medium
CN112562635B (en) * 2020-12-03 2024-04-09 云知声智能科技股份有限公司 Method, device and system for eliminating pulse signals generated at splice points in speech synthesis
CN112652318B (en) * 2020-12-21 2024-03-29 北京捷通华声科技股份有限公司 Timbre conversion method and device, and electronic equipment
KR102576606B1 (en) * 2021-03-26 2023-09-08 주식회사 엔씨소프트 Apparatus and method for timbre embedding model learning
RU2763124C1 (en) * 2021-07-06 2021-12-27 Валерий Олегович Лелейтнер Method for speaker-independent phoneme recognition in a speech signal
CN116246643B (en) * 2022-12-26 2023-07-28 深度好奇(杭州)科技有限公司 Method, system and equipment for tone normalization between a voice robot and human agents

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2636163B1 (en) 1988-09-02 1991-07-05 Hamon Christian Method and device for speech synthesis by waveform overlap-add
DE69231266T2 (en) 1991-08-09 2001-03-15 Koninkl Philips Electronics Nv Method and device for manipulating the duration of a physical audio signal and a storage medium containing such a physical audio signal
KR940002854B1 (en) 1991-11-06 1994-04-04 한국전기통신공사 Sound synthesizing system
US5751905A (en) 1995-03-15 1998-05-12 International Business Machines Corporation Statistical acoustic processing method and apparatus for speech recognition using a toned phoneme system
DE69612958T2 (en) 1995-11-22 2001-11-29 Koninkl Philips Electronics Nv Method and device for resynthesizing a voice signal
GB9600774D0 (en) * 1996-01-15 1996-03-20 British Telecomm Waveform synthesis
US5905972A (en) * 1996-09-30 1999-05-18 Microsoft Corporation Prosodic databases holding fundamental frequency templates for use in speech synthesis
KR100269255B1 (en) 1997-11-28 2000-10-16 정선종 Pitch correction method by variation of glottal closure signal in voiced signal
CA2296330C (en) * 1997-07-31 2009-07-21 British Telecommunications Public Limited Company Generation of voice messages
US6304846B1 (en) * 1997-10-22 2001-10-16 Texas Instruments Incorporated Singing voice synthesis
US6101470A (en) * 1998-05-26 2000-08-08 International Business Machines Corporation Methods for generating pitch and duration contours in a text to speech system
US20070233479A1 (en) * 2002-05-30 2007-10-04 Burnett Gregory C Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors
US6510410B1 (en) 2000-07-28 2003-01-21 International Business Machines Corporation Method and apparatus for recognizing tone languages using pitch information
US20020099541A1 (en) * 2000-11-21 2002-07-25 Burnett Gregory C. Method and apparatus for voiced speech excitation function determination and non-acoustic assisted feature extraction
WO2004068467A1 (en) * 2003-01-31 2004-08-12 Oticon A/S Sound system improving speech intelligibility
SG120121A1 (en) * 2003-09-26 2006-03-28 St Microelectronics Asia Pitch detection of speech signals
US20070011009A1 (en) 2005-07-08 2007-01-11 Nokia Corporation Supporting a concatenative text-to-speech synthesis
JP4129989B2 (en) * 2006-08-21 2008-08-06 インターナショナル・ビジネス・マシーンズ・コーポレーション A system to support text-to-speech synthesis
EP1970894A1 (en) * 2007-03-12 2008-09-17 France Télécom Method and device for modifying an audio signal
CN101281744B (en) * 2007-04-04 2011-07-06 纽昂斯通讯公司 Method and apparatus for analyzing and synthesizing voice
US8401849B2 (en) * 2008-12-18 2013-03-19 Lessac Technologies, Inc. Methods employing phase state analysis for use in speech synthesis and recognition
JP5433696B2 (en) * 2009-07-31 2014-03-05 株式会社東芝 Audio processing device
WO2011068417A1 (en) * 2009-12-01 2011-06-09 Jevon Joseph Longdell Method and apparatus for detection of ultrasound

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9111514B2 (en) * 2012-11-13 2015-08-18 Yamaha Corporation Delayed registration data readout in electronic music apparatus
US20140130655A1 (en) * 2012-11-13 2014-05-15 Yamaha Corporation Delayed registration data readout in electronic music apparatus
US9466285B2 (en) * 2012-11-30 2016-10-11 Kabushiki Kaisha Toshiba Speech processing system
US20140156280A1 (en) * 2012-11-30 2014-06-05 Kabushiki Kaisha Toshiba Speech processing system
US10014007B2 (en) 2014-05-28 2018-07-03 Interactive Intelligence, Inc. Method for forming the excitation signal for a glottal pulse model based parametric speech synthesis system
WO2015183254A1 (en) * 2014-05-28 2015-12-03 Interactive Intelligence, Inc. Method for forming the excitation signal for a glottal pulse model based parametric speech synthesis system
US10255903B2 (en) 2014-05-28 2019-04-09 Interactive Intelligence Group, Inc. Method for forming the excitation signal for a glottal pulse model based parametric speech synthesis system
US10621969B2 (en) 2014-05-28 2020-04-14 Genesys Telecommunications Laboratories, Inc. Method for forming the excitation signal for a glottal pulse model based parametric speech synthesis system
CN106920547A (en) * 2017-02-21 2017-07-04 腾讯科技(上海)有限公司 Voice conversion method and device
US10978076B2 (en) * 2017-03-22 2021-04-13 Kabushiki Kaisha Toshiba Speaker retrieval device, speaker retrieval method, and computer program product
CN113066472A (en) * 2019-12-13 2021-07-02 科大讯飞股份有限公司 Synthetic speech processing method and related device
CN111179905A (en) * 2020-01-10 2020-05-19 北京中科深智科技有限公司 Rapid dubbing generation method and device
CN112820267A (en) * 2021-01-15 2021-05-18 科大讯飞股份有限公司 Waveform generation method, training method for a related model, and related equipment and device

Also Published As

Publication number Publication date
US8744854B1 (en) 2014-06-03
US20140088958A1 (en) 2014-03-27
US8719030B2 (en) 2014-05-06
WO2014046789A1 (en) 2014-03-27
US20140088968A1 (en) 2014-03-27

Similar Documents

Publication Publication Date Title
US8744854B1 (en) System and method for voice transformation
Drugman et al. Glottal source processing: From analysis to applications
Toda et al. Statistical mapping between articulatory movements and acoustic spectrum using a Gaussian mixture model
US6804649B2 (en) Expressivity of voice synthesis by emphasizing source signal features
JP4355772B2 (en) Force conversion device, speech conversion device, speech synthesis device, speech conversion method, speech synthesis method, and program
JP3408477B2 (en) Semisyllable-coupled formant-based speech synthesizer with independent crossfading in filter parameters and source domain
Degottex et al. Mixed source model and its adapted vocal tract filter estimate for voice transformation and synthesis
Bellegarda et al. Statistical prosodic modeling: from corpus design to parameter estimation
Reddy et al. Excitation modelling using epoch features for statistical parametric speech synthesis
Lee et al. Spectral and prosodic transformations of hearing-impaired Mandarin speech
Mann An investigation of nonlinear speech synthesis and pitch modification techniques
Fahad et al. Synthesis of emotional speech by prosody modification of vowel segments of neutral speech
Wang et al. Emotional voice conversion for mandarin using tone nucleus model–small corpus and high efficiency
Waghmare et al. Analysis of pitch and duration in speech synthesis using PSOLA
Del Pozo Voice source and duration modelling for voice conversion and speech repair
Csapó et al. A novel irregular voice model for HMM-based speech synthesis.
Matsuda et al. Applying generation process model constraint to fundamental frequency contours generated by hidden-Markov-model-based speech synthesis
Reddy et al. Neutral to joyous happy emotion conversion
Lehana et al. Improving quality of speech synthesis in Indian Languages
Dutoit et al. Speech Synthesis
Anil et al. Pitch and duration modification for expressive speech synthesis in Marathi TTS system
Lavner et al. Voice morphing using 3D waveform interpolation surfaces and lossless tube area functions
Li Prosody analysis and modeling for Cantonese text-to-speech
Nguyen Studies on spectral modification in voice transformation
Ferencz et al. The new version of the ROMVOX text-to-speech synthesis system based on a hybrid time domain-LPC synthesis

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: THE TRUSTEES OF COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHEN, CHENGJUN JULIAN;REEL/FRAME:037522/0331

Effective date: 20160114

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551)

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 8