WO2013020329A1 - Parametric speech synthesis method and system - Google Patents
Parametric speech synthesis method and system
Info
- Publication number
- WO2013020329A1 (PCT/CN2011/081452)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- speech
- parameter
- parameters
- value
- phoneme
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/04—Details of speech synthesis systems, e.g. synthesiser structure or memory management
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/08—Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
- G10L2015/227—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of the speaker; Human-factor methodology
Definitions
- the present invention relates to the field of parametric speech synthesis technology, and more particularly to a parametric speech synthesis method and system for continuously synthesizing speech of arbitrary duration.
- Speech synthesis produces artificial speech by mechanical and electronic methods, which is an important technique for making human-computer interaction more natural.
- There are two common types of speech synthesis techniques: speech synthesis based on unit selection and waveform concatenation, and parametric speech synthesis based on acoustic statistical models. Because the parametric speech synthesis method makes relatively small demands on storage space, it is better suited to small electronic devices.
- The parametric speech synthesis method is divided into two stages: training and synthesis.
- In the training stage, the acoustic parameters of all speech in the corpus are first extracted, including static parameters such as the spectral envelope parameters and fundamental frequency parameters, and dynamic parameters such as the first-order and second-order difference parameters of the spectral envelope and fundamental frequency. Then, for each phoneme, the corresponding acoustic statistical model is trained according to its context annotation information, and a global variance model is trained over the entire corpus. Finally, the acoustic statistical models of all phonemes and the global variance model form a model library.
- In the synthesis stage, layered offline processing is used to synthesize speech, in five layers:
- The first layer: analyze the entire input text to obtain all phonemes with context information, forming a phoneme sequence.
- The second layer: extract the statistical model corresponding to each phoneme of the phoneme sequence from the trained model library, forming a model sequence.
- The third layer: use the maximum likelihood algorithm to predict, from the model sequence, the acoustic parameters corresponding to each frame of speech, forming a speech parameter sequence.
- The fourth layer: perform overall optimization of the speech parameter sequence using the global variance model.
- The fifth layer: input the fully optimized speech parameter sequence into the parametric speech synthesizer to generate the final synthesized speech.
- In the layered operation of the synthesis stage, the existing parametric speech synthesis method applies horizontal processing: the parameters of all statistical models are taken out, the smoothed parameters of all frames are predicted and generated with the maximum likelihood algorithm, the optimized parameters of all frames are obtained with the global variance model, and finally the speech of all frames is output from the parameter synthesizer. In other words, the relevant parameters of all frames must be kept at every layer, so the capacity of the random access memory (RAM) required for speech synthesis grows with the length of the speech to be synthesized.
- On many chips the RAM is smaller than 100K bytes, and the existing parametric speech synthesis method cannot continuously synthesize speech of arbitrary length on a chip with such small RAM.
- Furthermore, predicting the speech parameter sequence from the model sequence with the maximum likelihood algorithm must be implemented as a frame-by-frame forward recursion followed by a backward recursion.
- In the forward recursion, corresponding temporary parameters are generated for each frame of speech.
- The temporary parameters of all frames are then fed into the second, backward recursion to predict the desired parameter sequence.
- The longer the synthesized speech, the larger the number of speech frames.
- For every speech parameter that is predicted, a corresponding temporary parameter is generated.
- The temporary parameters of all frames must therefore be held in RAM before the second recursion can be completed, which again makes it impossible to continuously synthesize speech of arbitrary length on a chip with small RAM.
- An object of the present invention is to solve the problem that the RAM required by the original speech synthesis process increases in proportion to the length of the synthesized speech, which makes it impossible to continuously synthesize speech of arbitrary length on a chip with small RAM.
- To this end, the invention provides a parametric speech synthesis method including a training phase and a synthesis phase, wherein the synthesis phase specifically includes:
- For the current phoneme in the phoneme sequence of the input text, the corresponding statistical model is extracted from the statistical model library, and the model parameter of that statistical model corresponding to the current frame of the current phoneme is used as the coarse value of the currently predicted speech parameter;
- The coarse value is filtered using the coarse value and information from the speech frame at the previous moment to obtain a smoothed value of the currently predicted speech parameter, where the information of the speech frame at the previous moment is the smoothed value of the speech parameter predicted at the previous moment.
- In a preferred solution, the smoothed value of the currently predicted speech parameter is globally optimized according to the statistically obtained global mean and global standard deviation ratio of the speech parameter, using the following formula, to generate the required speech parameter:
- where y_t is the smoothed value of the speech parameter before optimization at time t, which is the initial optimized value;
- w is the weight value;
- r is the statistically obtained global standard deviation ratio of the predicted speech parameter;
- m is the statistically obtained global mean of the predicted speech parameter;
- the values of r and m are constants.
- Preferably, the solution further includes: constructing a voiced sub-band filter and an unvoiced sub-band filter using the sub-band voicing parameters; passing a quasi-periodic pulse sequence constructed from the pitch frequency parameter through the voiced sub-band filter to obtain the voiced component of the speech signal; passing a random sequence constructed from white noise through the unvoiced sub-band filter to obtain the unvoiced component of the speech signal; adding the voiced component and the unvoiced component to obtain a mixed excitation signal; and passing the mixed excitation signal through a filter constructed from the spectral envelope parameters to output a synthesized speech waveform.
- Preferably, the method further includes a training phase, in which:
- the acoustic parameters extracted from the corpus include only static parameters, or include both static parameters and dynamic parameters; and only the static model parameters are retained among the model parameters of the statistical model obtained after training;
- correspondingly, in the synthesis phase, the static model parameters of the statistical model obtained in the training phase that correspond to the current frame of the current phoneme are used as the coarse values of the currently predicted speech parameters.
- a parametric speech synthesis system comprising:
- a loop synthesis device configured to perform speech synthesis on each frame of each phoneme in the phoneme sequence of the input text in sequence during the synthesis phase;
- the cyclic synthesis device includes:
- a coarse search unit configured to extract, for the current phoneme in the phoneme sequence of the input text, the corresponding statistical model from the statistical model library, and to use the model parameter of that statistical model corresponding to the current frame of the current phoneme as the coarse value of the currently predicted speech parameter;
- a smoothing filtering unit configured to filter the coarse value by using the coarse value and information of a predetermined number of voice frames before the current time to obtain a smoothed value of the currently predicted voice parameter
- a global optimization unit configured to globally optimize a smoothed value of the currently predicted speech parameter according to a statistically obtained global mean value and a global standard deviation ratio of the speech parameter to generate a required speech parameter
- the parameter speech synthesis unit is configured to synthesize the generated speech parameters to obtain a frame of speech synthesized by the current frame of the current phoneme.
- the smoothing filtering unit includes a low-pass filter group, configured to filter the coarse value by using the coarse value and the information of the previous time speech frame to obtain a smoothed value of the currently predicted speech parameter.
- the information of the speech frame at the previous moment is the smoothed value of the predicted speech parameter at the previous moment.
- The global optimization unit includes a global parameter optimizer that uses the following formula to globally optimize the smoothed value of the currently predicted speech parameter, according to the statistically obtained global mean and global standard deviation ratio of the speech parameter, to generate the required speech parameter:
- where y_t is the smoothed value of the speech parameter before optimization at time t, which is the initial optimized value;
- w is the weight value, and the result is the required speech parameter obtained after global optimization;
- r is the statistically obtained global standard deviation ratio of the predicted speech parameter;
- m is the statistically obtained global mean of the predicted speech parameter;
- the values of r and m are constants.
- a filter construction module for constructing a voiced subband filter and an unvoiced subband filter using subband voiced parameters
- the voiced subband filter is configured to filter a quasi-periodic pulse sequence constructed by a pitch frequency parameter to obtain a voiced component of the voice signal;
- the unvoiced subband filter is configured to filter a random sequence constructed by white noise to obtain an unvoiced component of the voice signal
- An adder configured to add the voiced component and the unvoiced component to obtain a mixed excitation signal
- a synthesis filter configured to pass the mixed excitation signal through a filter constructed from the spectral envelope parameters and output one frame of synthesized speech waveform.
- Preferably, the system further includes a training device for the training phase, in which the acoustic parameters extracted from the corpus include only static parameters, or include both static and dynamic parameters; and only the static model parameters are retained among the model parameters of the statistical model obtained in training;
- the coarse search unit is then specifically configured to use, for the current phoneme, the static model parameters of the statistical model obtained in the training stage that correspond to the current frame of the current phoneme as the coarse values of the currently predicted speech parameters.
- The technical solution of the embodiments of the present invention provides a novel parametric speech synthesis scheme by using the information of the speech frames before the current frame, together with the pre-computed global mean and global standard deviation ratio of the speech parameters.
- The parametric speech synthesis method and system provided by the invention adopt a vertical mode of synthesis: each frame of speech is synthesized by going through four steps — taking the coarse value from the statistical model, filtering it to obtain the smoothed value, global optimization to obtain the optimized value, and parametric synthesis of that frame of speech — and the four steps are then repeated for the next frame. Consequently, only the fixed amount of parameters required by the current frame needs to be saved during parametric speech synthesis, so the RAM required for speech synthesis does not grow as the length of the synthesized speech grows, and the duration of the synthesized speech is no longer limited by the RAM.
- the acoustic parameters used in the present invention are static parameters, and only the static mean parameters of the respective models are saved in the model library, so that the size of the statistical model library can be effectively reduced.
- In addition, the present invention uses multi-sub-band voicing-based mixed excitation when synthesizing speech, so that the unvoiced and voiced components in each sub-band are mixed according to the degree of voicing; unvoiced and voiced sounds no longer have a sharp hard boundary in time, and obvious distortion of the sound quality of the synthesized speech is avoided.
- FIG. 1 is a schematic diagram of the stages of a prior-art speech synthesis method based on dynamic parameters and the maximum likelihood criterion;
- FIG. 2 is a flowchart of a parameter speech synthesis method according to an embodiment of the present invention.
- FIG. 3 is a schematic diagram showing the stages of a parametric speech synthesis method according to an embodiment of the present invention;
- FIG. 5 is a schematic diagram of filtering smoothing parameter prediction based on static parameters according to an embodiment of the present invention
- FIG. 6 is a schematic diagram of a hybrid excitation based synthesis filter in accordance with one embodiment of the present invention
- FIG. 8 is a block diagram showing a parametric speech synthesis system according to another embodiment of the present invention.
- FIG. 9 is a schematic diagram showing the logical structure of a parameter speech synthesis unit according to another embodiment of the present invention.
- FIG. 10 is a flowchart of a method for synthesizing a parameter speech according to still another embodiment of the present invention.
- FIG. 11 is a schematic structural diagram of a parameter speech synthesis system according to still another embodiment of the present invention.
- FIG. 2 shows a flow chart of a parametric speech synthesis method in accordance with one embodiment of the present invention.
- The present invention provides an implementation of a parametric speech synthesis method capable of continuously synthesizing speech of any duration, including the following steps:
- S210: Analyze the input text to obtain a phoneme sequence containing context information;
- S220: Take out one phoneme of the phoneme sequence in turn, search the statistical model library for the statistical model corresponding to each acoustic parameter of the phoneme, and take out, frame by frame, the parameters of each statistical model of the phoneme as the coarse values of the speech parameters to be synthesized;
- S230 Perform a parameter smoothing on the rough value of the to-be-synthesized speech parameter by using a filter group to obtain a smoothed speech parameter.
- S240 Perform global parameter optimization on the smoothed speech parameter by using a global parameter optimizer to obtain an optimized speech parameter.
- S250 Synthesize the optimized speech parameters by using a parameter speech synthesizer, and output a frame of synthesized speech;
- S260: Determine whether all frames of the current phoneme have been processed; if not, repeat the speech synthesis processing of steps S220 to S250 for the next frame of the phoneme, and continue in this way until all frames of all phonemes in the phoneme sequence have been processed. (A minimal sketch of this per-frame loop is given below.)
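To make the frame-by-frame, "vertical" flow of steps S220–S250 concrete, the following is a minimal, self-contained sketch. It is not the patent's implementation: the model is reduced to a list of per-frame coarse values for one scalar parameter, the synthesizer to a stub, and all constants (smoothing coefficient, weight, global mean, global standard deviation ratio) are made-up illustrative values; the optimization expression is one plausible reading of the global optimization described later, not a verbatim reproduction of the patent's formula. The point it illustrates is that only a fixed amount of state (the previous smoothed value) is carried between frames.

```python
import numpy as np

# Toy "model": coarse values per frame for one scalar parameter (the S220 output).
coarse_per_frame = [0.0, 1.0, 1.0, 1.0, 0.2, 0.2, 0.9]

A = 0.6                    # assumed smoothing coefficient of the low-pass filter (S230)
W, R, M = 0.9, 1.4, 0.5    # assumed fixed weight, global std-dev ratio, global mean (S240)

prev_smoothed = None       # the only state carried across frames: fixed-size RAM
synthesized = []

for x in coarse_per_frame:                         # one pass of S220-S250 per frame
    # S230: recursive smoothing that uses only the previous frame's smoothed value
    y = x if prev_smoothed is None else A * prev_smoothed + (1.0 - A) * x
    prev_smoothed = y
    # S240: per-frame global optimization with constant global statistics
    z = W * R * (y - M) + M
    # S250: stand-in for the parametric synthesizer producing one frame of speech
    synthesized.append(z)

print(np.round(synthesized, 3))
```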
- FIG. 3 is a schematic diagram showing the stages of the parametric speech synthesis method according to an embodiment of the present invention.
- As shown in FIG. 3, the implementation of parametric speech synthesis in the present invention also comprises two stages, training and synthesis. The training stage extracts the acoustic parameters of the speech from the speech data in the corpus and trains, from the extracted acoustic parameters, the statistical model corresponding to each phoneme under each set of context information, forming the statistical model library of phonemes required by the synthesis stage.
- Steps S210 to S260 belong to the synthesis stage.
- The synthesis stage comprises three parts: text analysis, parameter prediction, and speech synthesis.
- The parameter prediction part can be further subdivided into three parts: target model search, parameter generation, and parameter optimization.
- In the training stage, the main difference between the present invention and existing parametric speech synthesis technology is that the acoustic parameters extracted in the prior art must include dynamic parameters, whereas in the present invention the extracted acoustic parameters may all be static parameters, or may additionally include dynamic parameters that characterize the changes between neighbouring frames, such as first-order or second-order difference parameters, to improve the accuracy of the model after training.
- Specifically, the acoustic parameters extracted from the corpus in the present invention include at least three kinds of static parameters: spectral envelope parameters, pitch (fundamental frequency) parameters, and sub-band voicing parameters; other parameters, such as formant frequencies, may optionally be added.
- The spectral envelope parameters may be linear prediction coefficients (LPC) or parameters derived from them, such as line spectrum pair (LSP) parameters, or cepstrum-type parameters; they may also be the parameters of the first few formants (frequency, bandwidth, amplitude) or discrete Fourier transform coefficients.
- Variants of these spectral envelope parameters in the Mel domain can also be used to improve the sound quality of the synthesized speech.
- The fundamental frequency is represented as a logarithmic fundamental frequency, and the sub-band voicing is the proportion of voiced content in the sub-band.
- In addition, the acoustic parameters extracted from the corpus may also include dynamic parameters that characterize changes in the acoustic parameters between neighbouring frames, for example first-order or second-order difference parameters of the fundamental frequency between preceding and following frames.
- During training, each phoneme is automatically aligned with a large number of speech segments in the corpus, and the acoustic parameter models corresponding to the phoneme are then estimated from these speech segments.
- The accuracy of automatic alignment using both static and dynamic parameters is slightly higher than that using only static parameters, which makes the model parameters more accurate.
- However, since the present invention does not require dynamic parameters in the model during the synthesis phase, only static parameters are retained in the final trained model library.
- each acoustic parameter is modeled by a Hidden Markov Model (HMM).
- This modeling scheme already exists in the prior art, so it is only briefly explained in the following description.
- The HMM is a typical statistical signal processing method. Because of its stochastic nature, its ability to process input strings of unknown length, its ability to avoid explicit segmentation problems, and the availability of a large number of fast and effective training and recognition algorithms, it is widely used in many fields of signal processing.
- The HMM structure used here is a five-state, left-to-right model, and the observation probability distribution in each state is a single Gaussian density function. This function is uniquely determined by the mean and variance of the parameters.
- The mean consists of the means of the static parameters and the means of the dynamic parameters (first- and second-order differences).
- The variance consists of the variances of the static parameters and the variances of the dynamic parameters (first- and second-order differences).
- a model is trained for each acoustic parameter of each phoneme according to the context information.
- In order to share data among similar contexts, the related phonemes need to be clustered according to the context information of the phonemes, for example with a decision-tree-based clustering method.
- The trained models are then used to perform frame-to-state forced alignment on the speech in the training corpus, and the duration information generated during the alignment process (i.e. the number of frames corresponding to each state) is used to train state duration models for the phonemes, clustered by decision tree over the different context information. Finally, the statistical models corresponding to each acoustic parameter of each phoneme under the different context information form the statistical model library.
- It should be noted that the present invention saves only the static mean parameters of each model in the model library.
- The existing parametric speech synthesis method needs to retain the static mean parameters, the first-order difference mean parameters, the second-order difference mean parameters, and the variance parameters corresponding to all of these, so its statistical model library is large.
- Since the static mean is only one of the six parameter sets otherwise stored (three means and three variances), a statistical model library that stores only the static mean parameters of each model is only about 1/6 the size of the statistical model library formed in the prior art, which greatly reduces the storage space required for the statistical model library.
- The data that is removed is necessary in the existing parametric speech synthesis technology but is not required by the parametric speech synthesis solution provided by the present invention; therefore, reducing the amount of data does not affect the implementation of parametric speech synthesis in the present invention.
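For illustration only, the following sketch shows what an entry of such a model library might look like when only the per-state static means and a per-state mean duration are kept. The field names, the five-state layout, and the use of Python dataclasses are assumptions made for the example; they are not the patent's actual data format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StateStaticMeans:
    """Static mean parameters kept for one HMM state; the delta/delta-delta
    means and all variance parameters are not stored."""
    spectral_envelope: List[float]   # e.g. mean LSP or cepstral vector
    log_f0: float                    # mean logarithmic fundamental frequency
    band_voicing: List[float]        # mean voicing degree per sub-band

@dataclass
class PhonemeModel:
    """One context-dependent phoneme model as used at synthesis time."""
    context_label: str                  # context annotation of the phoneme
    states: List[StateStaticMeans]      # five states for a left-to-right HMM
    state_duration_means: List[float]   # mean duration (in frames) per state
```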
- the input text needs to be analyzed first to extract a phoneme sequence containing the context information (step S210) as a basis for parameter synthesis.
- the context information of the phoneme refers to the information of the phonemes adjacent to the current phoneme, and the context information may be the name of one or several phonemes before and after it, and may also include information of other language layers or phonological layers.
- In a specific application, the context information of a phoneme includes the current phoneme name, the two phonemes before and after it, the tone or stress of the syllable it belongs to, and optionally the part of speech of the word it belongs to.
- After the phoneme sequence is obtained, the phonemes in the sequence are taken out one by one, the statistical model corresponding to each acoustic parameter of the phoneme is searched for in the statistical model library, and the parameters of the statistical models of the phoneme are then taken out frame by frame as the coarse values of the speech parameters to be synthesized (step S220).
- Specifically, by inputting the context annotation information of the phoneme into the clustering decision trees, the statistical models corresponding to the spectral envelope parameter, the fundamental frequency parameter, the sub-band voicing parameter, and the state duration parameter can be found.
- the state duration parameter is not a static acoustic parameter extracted from the original corpus. It is a new parameter generated when the state is aligned with the frame during training.
- The mean values of the saved static parameters are taken out of each state of the model, namely the static mean parameter corresponding to each acoustic parameter.
- The state duration mean parameter is used directly to determine how many frames each state of the phoneme to be synthesized lasts, while the static means of the spectral envelope, fundamental frequency, and sub-band voicing are the coarse values of the speech parameters to be synthesized, as sketched below.
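As an illustration of this step, the sketch below expands the per-state static means into one coarse parameter vector per frame using the state-duration means. The rounding of mean durations to whole frames and all variable names are assumptions for the example, not details taken from the patent.

```python
import numpy as np

def coarse_values_for_phoneme(state_means, state_duration_means):
    """Expand per-state static mean vectors into one coarse vector per frame,
    repeating each state's mean for (roughly) its mean duration in frames."""
    frames = []
    for mean, duration in zip(state_means, state_duration_means):
        n_frames = max(1, int(round(duration)))   # assumed: round to whole frames
        frames.extend([np.asarray(mean, dtype=float)] * n_frames)
    return np.stack(frames)

# Toy usage: a 3-state model with 2-dimensional static mean vectors.
means = [np.array([1.0, 0.5]), np.array([2.0, 0.4]), np.array([1.5, 0.6])]
durations = [3.2, 5.0, 2.7]
coarse = coarse_values_for_phoneme(means, durations)
print(coarse.shape)   # (11, 2): 3 + 5 + 3 frames of coarse values
```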
- Next, the determined coarse speech parameter values are filtered on the basis of the filter bank to predict the speech parameters (step S230).
- Here, a specially designed set of filters is used to filter the spectral envelope, the fundamental frequency, and the sub-band voicing in order to predict better values of the synthesized speech parameters.
- the filtering method employed in the step S230 of the present invention is a smoothing filtering method based on static parameters.
- FIG. 5 is a schematic diagram of filtering smoothing parameter prediction based on static parameters according to the present invention.
- the present invention replaces the maximum likelihood parameter predictor in the existing parameter speech synthesis technology with the set of parameter prediction filters.
- This group of low-pass filters is used to respectively predict the spectral envelope parameters, the pitch frequency parameters, and the sub-band voicing parameters of the speech to be synthesized.
- In general, the filtering can be written as a convolution y_t = (h * x)_t, where x_t is the coarse value of a speech parameter obtained from the model at the t-th frame, y_t is the filtered smoothed value, the operator * denotes convolution, and h is the impulse response of the pre-designed filter.
- For different types of acoustic parameters, h can be designed with different forms because the parameters have different characteristics.
- For the spectral envelope parameters and the sub-band voicing parameters, the filter shown in equation (2) can be used.
- The fundamental frequency parameter can be predicted using the filter shown in equation (3).
- The filter coefficient is a pre-designed fixed value, and its selection can be determined experimentally according to the degree of change of the fundamental frequency parameter in actual speech over time.
- It is important that the parameters involved in predicting the speech parameters to be synthesized never extend to future parameters: the output frame at a given moment depends only on the input frames at that moment and before it, or on the output frame at the previous moment, and never on future input or output frames, so that the RAM required by the filter bank can be fixed in advance. That is, in the present invention, when the acoustic parameters of the speech are predicted using equations (2) and (3), the output parameters of the current frame depend only on the input of the current frame and the output parameters of the previous frame.
- Therefore, the whole parameter prediction process can be implemented with a fixed-size RAM buffer that does not grow as the duration of the speech to be synthesized increases, so speech parameters of arbitrary duration can be predicted continuously; this solves the problem in the prior art that the RAM required for maximum-likelihood parameter prediction grows in proportion to the synthesized speech duration.
- Specifically, when the filter bank is used to smooth the coarse value of the speech parameter to be synthesized at the current moment, the coarse value at that moment and the information of the speech frame at the previous moment are used for filtering to obtain the smoothed speech parameter, where the information of the speech frame at the previous moment is the smoothed value of the speech parameter predicted at the previous moment. A minimal sketch of this kind of filter follows.
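The following is a minimal sketch of the first-order recursive low-pass smoothing suggested by the description of equations (2) and (3), of the form y_t = a·y_{t-1} + (1 − a)·x_t. The concrete coefficient values and the initialisation with the first coarse value are illustrative assumptions; the text only states that the coefficient is a fixed, experimentally chosen value that differs between parameter types.

```python
import numpy as np

def smooth_stream(coarse, a):
    """First-order recursive low-pass filter y[t] = a*y[t-1] + (1-a)*x[t].
    Only the previous output is needed, so memory use is constant per stream."""
    y_prev = float(coarse[0])          # assumed initialisation with the first coarse value
    smoothed = np.empty(len(coarse))
    for t, x in enumerate(coarse):
        y_prev = a * y_prev + (1.0 - a) * x
        smoothed[t] = y_prev
    return smoothed

coarse = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
print(smooth_stream(coarse, a=0.5))    # e.g. a spectral-envelope / band-voicing stream
print(smooth_stream(coarse, a=0.8))    # e.g. the pitch stream with a different coefficient
```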
- the smoothed speech parameters can be optimized using the global parameter optimizer to determine the optimized speech parameters (step S240).
- the present invention adjusts the variation range of the synthesized speech parameters using the following formula (4) in the process of optimizing the speech parameters.
- where m is the mean value of the synthesized speech parameter, r is the ratio of the standard deviation of the training speech to that of the synthesized speech, and
- w is a fixed weight that controls the strength of the adjustment.
- When determining m and r, the existing parametric speech synthesis method must use the values of a given speech parameter in all frames to calculate the mean and variance, and then uses the global variance model to adjust the parameters of all frames so that the variance of the adjusted synthesized speech parameters is consistent with the global variance model, as in equation (5), in order to improve the sound quality. Here, the total duration of the speech to be synthesized is T frames, and σ is the standard deviation of the given speech parameter over all speech in the training corpus (provided by the global variance model); the standard deviation of the speech parameters currently being synthesized must be recalculated for every new piece of text.
- Since the calculation of m and σ requires the speech parameter values of all frames of the not-yet-adjusted synthesized speech, RAM is needed to hold the unoptimized parameters of all frames. The required RAM therefore increases with the duration of the speech to be synthesized, so a fixed-size RAM cannot meet the need to continuously synthesize speech of arbitrary length.
- Therefore, the present invention redesigns the global parameter optimizer and optimizes the speech parameters using the following formula (6).
- The values of the global statistics are determined as follows: a long stretch of speech, for example about one hour, is synthesized without using global parameter optimization, the mean and the standard deviation ratio of each acoustic parameter are then calculated with equation (5), and these values are stored as constants in the optimizer.
- The global parameter optimizer designed by the present invention therefore contains a global mean and a global standard deviation ratio: the global mean represents the mean of each acoustic parameter of the synthesized speech, and the global ratio represents the ratio between the standard deviation of the training speech parameters and that of the synthesized speech parameters.
- In this way, one frame of speech parameters can be optimized directly at each synthesis step, and the mean and standard deviation ratio no longer need to be recalculated from all synthesized speech frames, so the values of all frames of the speech parameters to be synthesized do not need to be saved.
- A fixed, small amount of RAM is sufficient, which solves the problem that the RAM required by the existing parametric speech synthesis method grows proportionally with the synthesized speech duration.
- Although the present invention uses the same m and r for every synthesized utterance, whereas the original method uses newly calculated m and r for each synthesis, the consistency of the speech synthesized from different texts is better in the present invention than in the original method.
- In addition, the computational complexity of the present invention is lower than that of the original method. A sketch of this per-frame optimization is given below.
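Because formula (6) itself is not reproduced legibly in this text, the sketch below uses an assumed reading of the description — rescaling the deviation of the smoothed value from the global mean by the weighted standard-deviation ratio — with made-up constant values. It illustrates the key property stated above: each frame is optimized on its own, using only constants, so no other frame needs to be kept in RAM.

```python
# Assumed constants, computed once offline from a long unoptimized synthesis run
# (about an hour of speech, per the description) and stored with the model data.
GLOBAL_MEAN = 0.47        # m: global mean of this speech parameter
GLOBAL_STD_RATIO = 1.4    # r: std-dev of training speech / std-dev of synthesized speech
WEIGHT = 0.9              # w: fixed weight controlling the strength of the adjustment

def optimize_frame(smoothed_value, m=GLOBAL_MEAN, r=GLOBAL_STD_RATIO, w=WEIGHT):
    """Per-frame global optimization: rescale the deviation of the smoothed
    value from the global mean; no statistics over other frames are needed."""
    return w * r * (smoothed_value - m) + m

print(optimize_frame(0.47))   # a value at the global mean stays at the global mean
print(optimize_frame(0.60))   # a value above the mean is pushed further from it
```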
- Finally, the optimized speech parameters can be passed to the parametric speech synthesizer to synthesize one frame of the speech waveform (step S250).
- FIG. 6 is a schematic diagram of a synthesis filter based on hybrid excitation according to an embodiment of the present invention
- FIG. 2 is a schematic diagram of a prior-art synthesis filter based on a binary unvoiced/voiced decision.
- the hybrid excitation based synthesis filter employed in the present invention takes the form of a source-filter; whereas the prior art filter excitation is a simple binary excitation.
- In the prior art, the technique used when synthesizing speech with the parameter synthesizer is parametric speech synthesis based on an unvoiced/voiced decision: a preset threshold is used to make a hard unvoiced/voiced judgment, and each frame of synthesized speech is determined to be either voiced or unvoiced. This causes unvoiced frames to appear suddenly inside stretches of synthesized voiced speech, which is clearly audible as distortion.
- After the unvoiced/voiced decision, excitation is generated separately: white noise is used as the excitation for unvoiced frames, a quasi-periodic pulse train is used as the excitation for voiced frames, and the excitation is finally passed through the synthesis filter to obtain the waveform of the synthesized speech. Inevitably, this excitation scheme produces a sharp hard boundary between the unvoiced and voiced parts of the synthesized speech, resulting in obvious distortion of the sound quality.
- In the present invention, multi-sub-band voicing-based mixed excitation is used and no unvoiced/voiced decision is made; instead, the unvoiced and voiced components in each sub-band are mixed according to the degree of voicing, so that unvoiced and voiced sounds no longer have a sharp hard boundary in time. This solves the problem of the original method, in which obvious distortion is caused by unvoiced frames appearing suddenly inside voiced speech.
- In the training stage, the degree of voicing of the current frame of a sub-band can be extracted from the speech in the corpus using the following formula:
- where s_t is the value of the t-th speech sample of the current frame of the sub-band,
- s_{t+τ} is the value of the speech sample separated from it by the time interval τ, and
- N is the number of samples in one frame.
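The formula itself is not legible in this text. For orientation only, the sketch below computes a common correlation-based voicing measure for one sub-band frame (the normalized autocorrelation at a candidate pitch lag). It matches the variables described above — frame samples, a lagged sample, and the frame length — but it is an assumed stand-in, not the patent's exact expression.

```python
import numpy as np

def band_voicing(frame, lag):
    """Normalized autocorrelation of one sub-band frame at the given lag.
    Values near 1 indicate strongly periodic (voiced) content; values near 0
    indicate noise-like (unvoiced) content."""
    s0, s1 = frame[:-lag], frame[lag:]
    denom = np.sqrt(np.dot(s0, s0) * np.dot(s1, s1)) + 1e-12
    return float(np.dot(s0, s1) / denom)

# Toy usage: a periodic frame scores high, a white-noise frame scores low.
n = np.arange(200)
voiced_frame = np.sin(2 * np.pi * n / 40)                    # period of 40 samples
noise_frame = np.random.default_rng(0).standard_normal(200)
print(band_voicing(voiced_frame, lag=40))                    # close to 1.0
print(band_voicing(noise_frame, lag=40))                     # close to 0.0
```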
- In the synthesis stage, a quasi-periodic pulse sequence is first constructed from the pitch frequency parameter among the speech parameters, and a random sequence is constructed from white noise;
- the voiced sub-band filter constructed from the sub-band voicing then obtains the voiced component of the signal from the constructed quasi-periodic pulse sequence, and the unvoiced sub-band filter constructed from the sub-band voicing obtains the unvoiced component of the signal from the random sequence;
- the voiced component and the unvoiced component are added to obtain the mixed excitation signal;
- finally, the mixed excitation signal is passed through the synthesis filter constructed from the spectral envelope parameters to output one frame of the synthesized speech waveform, as sketched below.
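The following is a compact, self-contained sketch of this mixed-excitation structure for a single frame. It is an illustration of the idea rather than the patent's implementation: the sub-band filters are reduced to per-band spectral weights derived from the voicing degrees, the spectral-envelope filter is a toy all-pole filter with assumed coefficients, and all numeric values (sampling rate, frame length, pitch, voicing degrees) are made up for the example.

```python
import numpy as np

def synth_frame(f0, band_voicing, lpc, frame_len=200, fs=16000):
    """One frame of mixed-excitation synthesis (illustrative only).

    f0: pitch frequency in Hz used to build the quasi-periodic pulse train
    band_voicing: voicing degree in [0, 1] for each of B equal-width sub-bands
    lpc: all-pole filter coefficients [1, a1, ..., ap] standing in for the
         spectral-envelope synthesis filter
    """
    # Quasi-periodic pulse train from the pitch period, and a white-noise sequence.
    period = max(1, int(round(fs / f0)))
    pulses = np.zeros(frame_len)
    pulses[::period] = 1.0
    noise = np.random.default_rng(0).standard_normal(frame_len) * 0.3

    # Mix the voiced and unvoiced sources per sub-band in the frequency domain:
    # pulses weighted by the voicing degree, noise by (1 - voicing degree).
    P, N = np.fft.rfft(pulses), np.fft.rfft(noise)
    bins, bands = len(P), len(band_voicing)
    gains = np.repeat(band_voicing, int(np.ceil(bins / bands)))[:bins]
    excitation = np.fft.irfft(P * gains + N * (1.0 - gains), frame_len)

    # Toy all-pole "spectral envelope" filter: s[t] = e[t] - sum_k a_k * s[t-k].
    out = np.zeros(frame_len)
    for t in range(frame_len):
        acc = excitation[t]
        for k in range(1, len(lpc)):
            if t - k >= 0:
                acc -= lpc[k] * out[t - k]
        out[t] = acc
    return out

frame = synth_frame(f0=160.0, band_voicing=[0.9, 0.7, 0.3, 0.1], lpc=[1.0, -0.9])
print(frame.shape)   # one frame of synthesized waveform samples
```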
- The embodiment in which the unvoiced/voiced decision is not made and multi-sub-band voicing-based mixed excitation is used is a preferred embodiment of the present invention.
- Because the present invention has the advantage of being able to continuously synthesize speech of any length, processing can cycle on to the next frame of speech after the output of one frame of the speech waveform has been completed. Since the optimized speech parameters of the next frame have not been generated in advance and stored in RAM, after the current frame has been processed it is necessary to return to step S220 to take the coarse values of the next frame's speech parameters from the model, and to repeat steps S220 to S250 to perform speech synthesis on the next frame of the phoneme and finally output its speech waveform. This loops until the parameters of all frames of all phoneme models have been processed and all the speech has been synthesized.
- FIG. 8 shows a block schematic diagram of a parametric speech synthesis system 800 in accordance with another embodiment of the present invention.
- the parametric speech synthesis system 800 includes an input text analysis unit 830, a coarse search unit 840, a smoothing filter unit 850, a global optimization unit 860, a parametric speech synthesis unit 870, and a loop determination unit 880.
- an acoustic parameter extraction unit and a statistical model training unit (not shown) for corpus training may also be included.
- the acoustic parameter extraction unit is configured to extract acoustic parameters of the speech in the training corpus;
- The statistical model training unit is configured to train the statistical model corresponding to each acoustic parameter of each phoneme from the acoustic parameters extracted by the acoustic parameter extraction unit and to save the statistical models in the statistical model library. The input text analysis unit 830 is configured to analyze the input text and to obtain, from that analysis, a phoneme sequence containing context information. The coarse search unit 840 is configured to take out the phonemes of the phoneme sequence obtained by the input text analysis unit 830 one by one, to search the statistical model library for the statistical models corresponding to each acoustic parameter of the phoneme, and to take out, frame by frame, the parameters of those statistical models as the coarse values of the speech parameters to be synthesized.
- The smoothing filtering unit 850 is configured to filter, with the filter bank, the coarse values of the speech parameters to be synthesized in order to obtain the smoothed speech parameters;
- the global optimization unit 860 is configured to perform global parameter optimization, with the global parameter optimizer, on each speech parameter smoothed by the smoothing filtering unit 850 to obtain the optimized speech parameters;
- the parametric speech synthesis unit 870 is configured to synthesize, with a parametric speech synthesizer, the speech parameters optimized by the global optimization unit 860 and to output the synthesized speech.
- The loop judging unit 880 is connected between the parametric speech synthesis unit 870 and the coarse search unit 840. After the output of one frame of the speech waveform has been completed, it determines whether there are still unprocessed frames in the phoneme; if so, for the next frame of the phoneme it again uses the coarse search unit, the smoothing filtering unit, the global optimization unit, and the parametric speech synthesis unit in turn, repeating the cycle of obtaining the statistical-model coarse values of the acoustic parameters, the filtered smoothed values, the global optimization, and the parametric speech synthesis, until all frames of all phonemes in the phoneme sequence have been processed and all the speech has been synthesized.
- In order to train the statistical model library required by the synthesis stage, the statistical model training unit further includes an acoustic parameter model training unit, a clustering unit, a forced alignment unit, a state duration model training unit, and a model statistics unit (not shown in the figures). Specifically:
- An acoustic parameter model training unit configured to train a model for each acoustic parameter of each phoneme according to context information of each phoneme;
- a clustering unit configured to cluster related phonemes according to context information of the phoneme
- a forced alignment unit for performing frame-to-state forced alignment of speech in the training corpus using the model
- a state duration model training unit configured to use the duration information generated by the forced alignment unit in the forced alignment process to train a state duration model after the phonemes are clustered in different context information
- the model statistic unit is configured to form a statistical model library for a statistical model corresponding to each acoustic parameter of each phoneme in different context information.
- Figure 9 is a diagram showing the logical structure of a parametric speech synthesis unit in accordance with a preferred embodiment of the present invention.
- the parametric speech synthesis unit 870 further includes a quasi-periodic pulse generator 871, a white noise generator 872, a voiced subband filter 873, an unvoiced subband filter 874, an adder 875, and a synthesis filter 876.
- the quasi-periodic pulse generator 871 is configured to construct a quasi-periodic pulse sequence according to the pitch frequency parameter in the speech parameter;
- the white noise generator 872 is configured to construct a random sequence by white noise;
- the voiced sub-band filter 873 is configured to obtain the voiced component of the signal from the constructed quasi-periodic pulse sequence according to the sub-band voicing;
- the unvoiced sub-band filter 874 is configured to obtain the unvoiced component of the signal from the random sequence according to the sub-band voicing; the voiced component and the unvoiced component are then added by the adder 875 to obtain the mixed excitation signal;
- finally, the mixed excitation signal is filtered by the synthesis filter 876 constructed from the spectral envelope parameters, and the corresponding frame of the synthesized speech waveform is output.
- In summary, the synthesis mode adopted by the present invention is vertical processing: the synthesis of each frame of speech goes through the steps of taking out the statistical-model coarse values, filtering them to obtain smoothed values, global optimization to obtain optimized values, and parametric synthesis of that frame of speech, after which the same steps are repeated for the next frame.
- The existing parametric speech synthesis method, in contrast, uses horizontal offline processing: the coarse parameters of all models are taken out, the smoothed parameters of all frames are generated with the maximum likelihood algorithm, the optimized parameters of all frames are obtained with the global variance model, and finally the parameter synthesizer outputs the speech of all frames, so each layer must keep the parameters of all frames.
- Compared with the existing parametric speech synthesis method, the vertical processing mode of the present invention only needs to keep the fixed amount of parameters required by the current frame, so it also removes the limit on the length of synthesized speech caused by the horizontal processing of the original method.
- In addition, by using only static parameters in the synthesis stage and no longer using dynamic parameters or variance information, the present invention reduces the size of the model library to about 1/6 of that of the original method.
- By using a specially designed filter bank instead of the maximum-likelihood parameter method to generate smooth parameters, and the new global parameter optimizer instead of the global variance model of the original method to optimize the speech parameters, combined with the vertical processing structure,
- speech parameters of arbitrary duration can be predicted continuously with a fixed-size RAM. This solves the problem that the original method cannot continuously predict speech parameters of arbitrary duration on a small-RAM chip, and helps extend the application of this speech synthesis method to chips with small memory.
- a parameter speech synthesis method provided by still another embodiment of the present invention, referring to FIG. 10, the method includes:
- Each frame of each phoneme in the phoneme sequence of the input text is processed in turn as follows:
- Step 101: For the current phoneme in the phoneme sequence of the input text, extract the corresponding statistical model from the statistical model library, and use the model parameter of that statistical model corresponding to the current frame of the current phoneme as the coarse value of the currently predicted speech parameter.
- Step 102: Filter the coarse value using the coarse value and information of a predetermined number of speech frames before the current moment to obtain a smoothed value of the currently predicted speech parameter.
- During the prediction, the parameters involved never extend to future parameters: the output frame at a given moment depends only on the input frames at that moment and before it, or on the output frame at the previous moment, and never on future input or output frames.
- Specifically, the coarse value and the information of the speech frame at the previous moment may be used to filter the coarse value to obtain the smoothed value of the currently predicted speech parameter, where the information of the speech frame at the previous moment is the smoothed value of the speech parameter predicted at the previous moment.
- When the predicted speech parameter is a spectral envelope parameter or a sub-band voicing parameter, the solution uses the coarse value and the smoothed value of the speech parameter predicted at the previous moment to filter the coarse value according to the following formula, obtaining the smoothed value of the currently predicted speech parameter:
- y_t = a · y_{t-1} + (1 - a) · x_t
- When the predicted speech parameter is a pitch frequency parameter, the solution likewise uses the coarse value and the smoothed value of the speech parameter predicted at the previous moment to filter the coarse value with a formula of the same form, obtaining the smoothed value of the currently predicted speech parameter. In these formulas, t is the frame index, x_t is the coarse value of the predicted speech parameter at frame t, y_t is the filtered smoothed value, and a is the filter coefficient, whose value differs between the two cases.
- The solution may specifically include the following processing: constructing the voiced sub-band filter and the unvoiced sub-band filter using the sub-band voicing parameters; passing the quasi-periodic pulse sequence constructed from the pitch frequency parameter through the voiced sub-band filter to obtain the voiced component of the speech signal; passing the random sequence constructed from white noise through the unvoiced sub-band filter to obtain the unvoiced component of the speech signal; adding the voiced component and the unvoiced component to obtain the mixed excitation signal; and passing the mixed excitation signal through the filter constructed from the spectral envelope parameters to output one frame of the synthesized speech waveform.
- Preferably, the scheme also includes a training phase before the above synthesis phase.
- In the training phase, the acoustic parameters extracted from the corpus include only static parameters, or include both static parameters and dynamic parameters; and only the static model parameters are retained among the model parameters of the statistical model obtained after training;
- correspondingly, step 101 of the synthesis stage may specifically include: for the current phoneme, using the static model parameters of the statistical model obtained in the training phase that correspond to the current frame of the current phoneme as the coarse values of the currently predicted speech parameters.
- a further embodiment of the present invention also provides a parametric speech synthesis system. Referring to FIG. 11, the system includes:
- the loop synthesizing device 110 includes:
- the coarse search unit 111 is configured to extract, for the current phoneme in the phoneme sequence of the input text, the corresponding statistical model from the statistical model library, and to use the model parameter of that statistical model corresponding to the current frame of the current phoneme as the coarse value of the currently predicted speech parameter;
- the smoothing filtering unit 112 is configured to filter the coarse value by using the coarse value and information of a predetermined number of voice frames before the current time to obtain a smoothed value of the currently predicted voice parameter;
- the global optimization unit 113 is configured to globally optimize the smoothed value of the currently predicted speech parameter according to the global mean value and the global standard deviation ratio of the voice parameter obtained by the statistics, to generate a required voice parameter;
- the parameter speech synthesis unit 114 is configured to synthesize the generated speech parameters to obtain a frame of speech synthesized by the current frame of the current phoneme.
- the smoothing filtering unit 112 includes a low-pass filter group, configured to filter the coarse value by using the coarse value and the information of the previous time speech frame to obtain a smoothed value of the currently predicted speech parameter.
- the information of the speech frame at the last moment is a smoothed value of the predicted speech parameter at the previous moment.
- Specifically, when the predicted speech parameter is a spectral envelope parameter or a sub-band voicing parameter, the low-pass filter bank uses the coarse value and the smoothed value of the speech parameter predicted at the previous moment, according to the formula y_t = a · y_{t-1} + (1 - a) · x_t, to filter the coarse value and obtain the smoothed value of the currently predicted speech parameter; when the predicted speech parameter is a pitch frequency parameter, the low-pass filter bank uses the coarse value and the smoothed value of the speech parameter predicted at the previous moment with a formula of the same form to filter the coarse value and obtain the smoothed value of the currently predicted speech parameter. In these formulas, t is the frame index, x_t is the coarse value of the predicted speech parameter at frame t, y_t is the filtered smoothed value, and a is the coefficient of the filter, whose value differs between the two cases.
- The global optimization unit 113 includes a global parameter optimizer that uses the statistically obtained global mean and global standard deviation ratio of the speech parameter to globally optimize, according to the following formula, the smoothed value of the currently predicted speech parameter and generate the required speech parameter:
- where y_t is the smoothed value of the speech parameter before optimization at time t, which is the initial optimized value,
- w is the weight value, the result is the required speech parameter obtained after global optimization, r is the statistically obtained global standard deviation ratio of the predicted speech parameter, m is the statistically obtained global mean of the predicted speech parameter, and the values of r and m are constants.
- the parameter speech synthesis unit 114 includes:
- a filter construction module for constructing a voiced subband filter and an unvoiced subband filter using subband voiced parameters
- the voiced subband filter is configured to filter a quasi-periodic pulse sequence constructed by a pitch frequency parameter to obtain a voiced component of the voice signal;
- the unvoiced subband filter is configured to filter a random sequence constructed by white noise to obtain an unvoiced component of the voice signal
- An adder configured to add the voiced component and the unvoiced component to obtain a mixed excitation signal
- a synthesis filter configured to pass the mixed excitation signal through a filter constructed from the spectral envelope parameters and output one frame of synthesized speech waveform.
- Preferably, the system further includes a training device for the training phase, in which the acoustic parameters extracted from the corpus include only static parameters, or include both static and dynamic parameters; and only the static model parameters are retained among the model parameters of the statistical model obtained in training;
- the coarse search unit 111 is then specifically configured to use, for the current phoneme, the static model parameters of the statistical model obtained in the training stage that correspond to the current frame of the current phoneme as the coarse values of the currently predicted speech parameters.
- For the related operations of the coarse search unit 111, the smoothing filtering unit 112, the global optimization unit 113, and the parametric speech synthesis unit 114 in this embodiment, reference may be made to the corresponding content on the coarse search unit 840, the smoothing filtering unit 850, the global optimization unit 860, and the parametric speech synthesis unit 870 in the foregoing embodiments.
- As described above, the technical solution of the embodiments of the present invention provides a novel parametric speech synthesis scheme by using the information of the speech frames before the current frame together with the statistically obtained global mean and global standard deviation ratio of the speech parameters.
- the vertical processing method is adopted in the synthesis stage, and each frame of speech is separately synthesized one by one, and only the parameters of the fixed capacity required by the current frame are saved in the synthesis process.
- The new vertical processing architecture of the solution makes it possible to synthesize speech of any duration with a fixed-size RAM, which significantly reduces the RAM capacity required for speech synthesis and thereby enables continuous synthesis of speech of arbitrary duration on a chip with small RAM.
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Machine Translation (AREA)
- Mobile Radio Communication Systems (AREA)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013527464A JP5685649B2 (ja) | 2011-08-10 | 2011-10-27 | パラメータ音声の合成方法及びシステム |
DK11864132.3T DK2579249T3 (en) | 2011-08-10 | 2011-10-27 | PARAMETER SPEECH SYNTHESIS PROCEDURE AND SYSTEM |
EP11864132.3A EP2579249B1 (en) | 2011-08-10 | 2011-10-27 | Parameter speech synthesis method and system |
KR1020127031341A KR101420557B1 (ko) | 2011-08-10 | 2011-10-27 | 파라미터 음성 합성 방법 및 시스템 |
US13/640,562 US8977551B2 (en) | 2011-08-10 | 2011-10-27 | Parametric speech synthesis method and system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011102290132A CN102270449A (zh) | 2011-08-10 | 2011-08-10 | 参数语音合成方法和系统 |
CN201110229013.2 | 2011-08-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013020329A1 true WO2013020329A1 (zh) | 2013-02-14 |
Family
ID=45052729
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2011/081452 WO2013020329A1 (zh) | 2011-08-10 | 2011-10-27 | 参数语音合成方法和系统 |
Country Status (7)
Country | Link |
---|---|
US (1) | US8977551B2 (zh) |
EP (1) | EP2579249B1 (zh) |
JP (1) | JP5685649B2 (zh) |
KR (1) | KR101420557B1 (zh) |
CN (2) | CN102270449A (zh) |
DK (1) | DK2579249T3 (zh) |
WO (1) | WO2013020329A1 (zh) |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103854643B (zh) * | 2012-11-29 | 2017-03-01 | 株式会社东芝 | 用于合成语音的方法和装置 |
CN103226946B (zh) * | 2013-03-26 | 2015-06-17 | 中国科学技术大学 | 一种基于受限玻尔兹曼机的语音合成方法 |
US9484015B2 (en) * | 2013-05-28 | 2016-11-01 | International Business Machines Corporation | Hybrid predictive model for enhancing prosodic expressiveness |
AU2015206631A1 (en) * | 2014-01-14 | 2016-06-30 | Interactive Intelligence Group, Inc. | System and method for synthesis of speech from provided text |
US9472182B2 (en) * | 2014-02-26 | 2016-10-18 | Microsoft Technology Licensing, Llc | Voice font speaker and prosody interpolation |
KR20160058470A (ko) * | 2014-11-17 | 2016-05-25 | 삼성전자주식회사 | 음성 합성 장치 및 그 제어 방법 |
JP5995226B2 (ja) * | 2014-11-27 | 2016-09-21 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | 音響モデルを改善する方法、並びに、音響モデルを改善する為のコンピュータ及びそのコンピュータ・プログラム |
JP6483578B2 (ja) * | 2015-09-14 | 2019-03-13 | 株式会社東芝 | 音声合成装置、音声合成方法およびプログラム |
CN107924678B (zh) * | 2015-09-16 | 2021-12-17 | 株式会社东芝 | 语音合成装置、语音合成方法及存储介质 |
EP3363015A4 (en) * | 2015-10-06 | 2019-06-12 | Interactive Intelligence Group, Inc. | METHOD FOR FORMING THE EXCITATION SIGNAL FOR A PARAMETRIC SPEECH SYNTHESIS SYSTEM BASED ON GLOTTAL PULSE MODEL |
CN105654939B (zh) * | 2016-01-04 | 2019-09-13 | 极限元(杭州)智能科技股份有限公司 | 一种基于音向量文本特征的语音合成方法 |
US10044710B2 (en) | 2016-02-22 | 2018-08-07 | Bpip Limited Liability Company | Device and method for validating a user using an intelligent voice print |
JP6852478B2 (ja) * | 2017-03-14 | 2021-03-31 | 株式会社リコー | 通信端末、通信プログラム及び通信方法 |
JP7209275B2 (ja) * | 2017-08-31 | 2023-01-20 | 国立研究開発法人情報通信研究機構 | オーディオデータ学習装置、オーディオデータ推論装置、およびプログラム |
CN107481715B (zh) * | 2017-09-29 | 2020-12-08 | 百度在线网络技术(北京)有限公司 | 用于生成信息的方法和装置 |
CN107945786B (zh) * | 2017-11-27 | 2021-05-25 | 北京百度网讯科技有限公司 | 语音合成方法和装置 |
US11264010B2 (en) | 2018-05-11 | 2022-03-01 | Google Llc | Clockwork hierarchical variational encoder |
EP3776531A1 (en) | 2018-05-11 | 2021-02-17 | Google LLC | Clockwork hierarchical variational encoder |
CN109036377A (zh) * | 2018-07-26 | 2018-12-18 | 中国银联股份有限公司 | 一种语音合成方法及装置 |
CN108899009B (zh) * | 2018-08-17 | 2020-07-03 | 百卓网络科技有限公司 | 一种基于音素的中文语音合成系统 |
CN109102796A (zh) * | 2018-08-31 | 2018-12-28 | 北京未来媒体科技股份有限公司 | 一种语音合成方法及装置 |
CN109285535A (zh) * | 2018-10-11 | 2019-01-29 | 四川长虹电器股份有限公司 | 基于前端设计的语音合成方法 |
CN109285537B (zh) * | 2018-11-23 | 2021-04-13 | 北京羽扇智信息科技有限公司 | 声学模型建立、语音合成方法、装置、设备及存储介质 |
US11302301B2 (en) * | 2020-03-03 | 2022-04-12 | Tencent America LLC | Learnable speed control for speech synthesis |
CN111862931B (zh) * | 2020-05-08 | 2024-09-24 | 北京嘀嘀无限科技发展有限公司 | 一种语音生成方法及装置 |
US11495200B2 (en) * | 2021-01-14 | 2022-11-08 | Agora Lab, Inc. | Real-time speech to singing conversion |
CN112802449B (zh) * | 2021-03-19 | 2021-07-02 | 广州酷狗计算机科技有限公司 | 音频合成方法、装置、计算机设备及存储介质 |
CN113160794B (zh) * | 2021-04-30 | 2022-12-27 | 京东科技控股股份有限公司 | 基于音色克隆的语音合成方法、装置及相关设备 |
CN115440205A (zh) * | 2021-06-04 | 2022-12-06 | 中国移动通信集团浙江有限公司 | 语音处理方法、装置、终端以及程序产品 |
CN113571064B (zh) * | 2021-07-07 | 2024-01-30 | 肇庆小鹏新能源投资有限公司 | 自然语言理解方法及装置、交通工具及介质 |
CN114822492B (zh) * | 2022-06-28 | 2022-10-28 | 北京达佳互联信息技术有限公司 | 语音合成方法及装置、电子设备、计算机可读存储介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101178896A (zh) * | 2007-12-06 | 2008-05-14 | 安徽科大讯飞信息科技股份有限公司 | 基于声学统计模型的单元挑选语音合成方法 |
US7478039B2 (en) * | 2000-05-31 | 2009-01-13 | At&T Corp. | Stochastic modeling of spectral adjustment for high quality pitch modification |
CN101369423A (zh) * | 2007-08-17 | 2009-02-18 | 株式会社东芝 | 语音合成方法和装置 |
US7996222B2 (en) * | 2006-09-29 | 2011-08-09 | Nokia Corporation | Prosody conversion |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03102399A (ja) * | 1989-09-18 | 1991-04-26 | Fujitsu Ltd | 規則音声合成装置 |
AU1941697A (en) * | 1996-03-25 | 1997-10-17 | Arcadia, Inc. | Sound source generator, voice synthesizer and voice synthesizing method |
GB0112749D0 (en) * | 2001-05-25 | 2001-07-18 | Rhetorical Systems Ltd | Speech synthesis |
US6912495B2 (en) * | 2001-11-20 | 2005-06-28 | Digital Voice Systems, Inc. | Speech model and analysis, synthesis, and quantization methods |
US20030135374A1 (en) * | 2002-01-16 | 2003-07-17 | Hardwick John C. | Speech synthesizer |
CN1262987C (zh) * | 2003-10-24 | 2006-07-05 | 无敌科技股份有限公司 | 母音间转音的平滑处理方法 |
WO2006032744A1 (fr) * | 2004-09-16 | 2006-03-30 | France Telecom | Procede et dispositif de selection d'unites acoustiques et procede et dispositif de synthese vocale |
US20060129399A1 (en) * | 2004-11-10 | 2006-06-15 | Voxonic, Inc. | Speech conversion system and method |
US20060229877A1 (en) * | 2005-04-06 | 2006-10-12 | Jilei Tian | Memory usage in a text-to-speech system |
JP4662139B2 (ja) * | 2005-07-04 | 2011-03-30 | ソニー株式会社 | データ出力装置、データ出力方法、およびプログラム |
CN1835075B (zh) * | 2006-04-07 | 2011-06-29 | 安徽中科大讯飞信息科技有限公司 | 一种结合自然样本挑选与声学参数建模的语音合成方法 |
US8321222B2 (en) * | 2007-08-14 | 2012-11-27 | Nuance Communications, Inc. | Synthesis by generation and concatenation of multi-form segments |
KR100932538B1 (ko) * | 2007-12-12 | 2009-12-17 | 한국전자통신연구원 | 음성 합성 방법 및 장치 |
EP2357646B1 (en) * | 2009-05-28 | 2013-08-07 | International Business Machines Corporation | Apparatus, method and program for generating a synthesised voice based on a speaker-adaptive technique. |
US20110071835A1 (en) * | 2009-09-22 | 2011-03-24 | Microsoft Corporation | Small footprint text-to-speech engine |
GB2478314B (en) * | 2010-03-02 | 2012-09-12 | Toshiba Res Europ Ltd | A speech processor, a speech processing method and a method of training a speech processor |
US20120143611A1 (en) * | 2010-12-07 | 2012-06-07 | Microsoft Corporation | Trajectory Tiling Approach for Text-to-Speech |
- 2011
- 2011-08-10 CN CN2011102290132A patent/CN102270449A/zh active Pending
- 2011-10-27 US US13/640,562 patent/US8977551B2/en active Active
- 2011-10-27 KR KR1020127031341A patent/KR101420557B1/ko active IP Right Grant
- 2011-10-27 WO PCT/CN2011/081452 patent/WO2013020329A1/zh active Application Filing
- 2011-10-27 CN CN201110331821XA patent/CN102385859B/zh not_active Expired - Fee Related
- 2011-10-27 JP JP2013527464A patent/JP5685649B2/ja not_active Expired - Fee Related
- 2011-10-27 EP EP11864132.3A patent/EP2579249B1/en active Active
- 2011-10-27 DK DK11864132.3T patent/DK2579249T3/en active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7478039B2 (en) * | 2000-05-31 | 2009-01-13 | At&T Corp. | Stochastic modeling of spectral adjustment for high quality pitch modification |
US7996222B2 (en) * | 2006-09-29 | 2011-08-09 | Nokia Corporation | Prosody conversion |
CN101369423A (zh) * | 2007-08-17 | 2009-02-18 | 株式会社东芝 | 语音合成方法和装置 |
CN101178896A (zh) * | 2007-12-06 | 2008-05-14 | 安徽科大讯飞信息科技股份有限公司 | 基于声学统计模型的单元挑选语音合成方法 |
Also Published As
Publication number | Publication date |
---|---|
KR101420557B1 (ko) | 2014-07-16 |
DK2579249T3 (en) | 2018-05-28 |
US8977551B2 (en) | 2015-03-10 |
EP2579249A1 (en) | 2013-04-10 |
US20130066631A1 (en) | 2013-03-14 |
JP2013539558A (ja) | 2013-10-24 |
EP2579249B1 (en) | 2018-03-28 |
CN102270449A (zh) | 2011-12-07 |
CN102385859A (zh) | 2012-03-21 |
JP5685649B2 (ja) | 2015-03-18 |
CN102385859B (zh) | 2012-12-19 |
EP2579249A4 (en) | 2015-04-01 |
KR20130042492A (ko) | 2013-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013020329A1 (zh) | 参数语音合成方法和系统 | |
CN109147758B (zh) | 一种说话人声音转换方法及装置 | |
CN112420026B (zh) | 优化关键词检索系统 | |
Ardaillon et al. | Fully-convolutional network for pitch estimation of speech signals | |
KR20150016225A (ko) | 타겟 운율 또는 리듬이 있는 노래, 랩 또는 다른 가청 표현으로의 스피치 자동 변환 | |
WO2014183411A1 (en) | Method, apparatus and speech synthesis system for classifying unvoiced and voiced sound | |
GB2603776A (en) | Methods and systems for modifying speech generated by a text-to-speech synthesiser | |
WO2015025788A1 (ja) | 定量的f0パターン生成装置及び方法、並びにf0パターン生成のためのモデル学習装置及び方法 | |
CN116994553A (zh) | 语音合成模型的训练方法、语音合成方法、装置及设备 | |
CN108369803A (zh) | 用于形成基于声门脉冲模型的参数语音合成系统的激励信号的方法 | |
JP6594251B2 (ja) | 音響モデル学習装置、音声合成装置、これらの方法及びプログラム | |
JP4945465B2 (ja) | 音声情報処理装置及びその方法 | |
CN116168678A (zh) | 语音合成方法、装置、计算机设备和存储介质 | |
CN111862931B (zh) | 一种语音生成方法及装置 | |
JP7088796B2 (ja) | 音声合成に用いる統計モデルを学習する学習装置及びプログラム | |
CN112951256A (zh) | 语音处理方法及装置 | |
CN112164387A (zh) | 音频合成方法、装置及电子设备和计算机可读存储介质 | |
CN111739547B (zh) | 语音匹配方法、装置、计算机设备和存储介质 | |
JP6234134B2 (ja) | 音声合成装置 | |
CN111696530B (zh) | 一种目标声学模型获取方法及装置 | |
US20140343934A1 (en) | Method, Apparatus, and Speech Synthesis System for Classifying Unvoiced and Voiced Sound | |
JP6587308B1 (ja) | 音声処理装置、および音声処理方法 | |
CN114005467A (zh) | 一种语音情感识别方法、装置、设备及存储介质 | |
Galajit et al. | ThaiSpoof: A Database for Spoof Detection in Thai Language | |
CN117765898A (zh) | 一种数据处理方法、装置、计算机设备及存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 13640562 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011864132 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2013527464 Country of ref document: JP Kind code of ref document: A Ref document number: 20127031341 Country of ref document: KR Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11864132 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |