GB2489473A - A voice conversion method and system - Google Patents

A voice conversion method and system

Info

Publication number
GB2489473A
GB2489473A (application GB201105314A)
Authority
GB
United Kingdom
Prior art keywords
voice
speech
input
training data
clusters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB201105314A
Other versions
GB201105314D0 (en)
GB2489473B (en)
Inventor
Byung Ha Chun
Mark John Francis Gales
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Europe Ltd
Original Assignee
Toshiba Research Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Research Europe Ltd filed Critical Toshiba Research Europe Ltd
Priority to GB201105314A priority Critical patent/GB2489473B/en
Publication of GB201105314D0 publication Critical patent/GB201105314D0/en
Priority to US13/217,628 priority patent/US8930183B2/en
Publication of GB2489473A publication Critical patent/GB2489473A/en
Application granted granted Critical
Publication of GB2489473B publication Critical patent/GB2489473B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L13/033 Voice editing, e.g. manipulating the voice of the synthesiser
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/003 Changing voice quality, e.g. pitch or formants
    • G10L21/007 Changing voice quality, e.g. pitch or formants, characterised by the process used
    • G10L21/013 Adapting to target pitch
    • G10L2021/0135 Voice conversion or morphing

Abstract

A method of converting speech from the characteristics of a first voice to the characteristics of a second voice, the method comprising receiving a speech input from a first voice, dividing said speech input into a plurality of frames; mapping the speech from the first voice to a second voice; and outputting the speech in the second voice, wherein mapping the speech from the first voice to the second voice comprises deriving Gaussian kernels demonstrating the similarity between speech features derived from the frames of the speech input from the first voice and stored frames of training data for said first voice, the training data corresponding to different text to that of the speech input, and wherein the mapping step uses a plurality of kernels derived for each frame of input speech with a plurality of stored frames of training data of the first voice. The mapping function is non-parametric, which means that only a few hyper-parameters need to be trained; this helps to circumvent issues with scaling.

Description

A Voice Conversion Method and System
Field
Embodiments of the present invention described herein generally relate to voice conversion.
Background
Voice Conversion (VC) is a technique for allowing the speaker characteristics of speech to be altered. Non-linguistic information, such as the voice characteristics, is modified while keeping the linguistic information unchanged. Voice conversion can be used for speaker conversion in which the voice of a certain speaker (source speaker) is converted to sound like that of another speaker (target speaker).
The standard approaches to VC employ a statistical feature mapping process. This mapping function is trained in advance using a small amount of training data consisting of utterance pairs of source and target voices. The resulting mapping function is then required to convert any sample of the source speech into that of the target without any linguistic information such as a phoneme transcription.
The normal approach to VC is to train a parametric model such as a Gaussian Mixture Model on the joint probability density of source and target spectra and derive the conditional probability density given source spectra to be converted.
Brief Description of the Drawings
The present invention will now be described with reference to the following non-limiting embodiments.
Figure 1 is a schematic of a voice conversion system in accordance with an embodiment of the present invention; Figure 2 is a plot of a number of samples drawn from a Gaussian process prior with a gamma-exponential kernel with ℓ = 2.0 and γ = 2.0; Figure 3 is a plot of a number of samples drawn from the distribution shown in equation 19; Figure 4 is a plot showing the mean and associated variance of the data of figure 3 at each point; Figure 5 is a flow diagram showing a method in accordance with the present invention; Figure 6 is a flow diagram continuing from figure 5 showing a method in accordance with an embodiment of the present invention; Figure 7 is a flow diagram showing the training stages of a method in accordance with an embodiment of the present invention; Figures 8(a) to 8(d) are schematics illustrating clustering which may be used in a method in accordance with the present invention; Figure 9(a) is a schematic showing a parametric approach for voice conversion and figure 9(b) is a schematic showing a method in accordance with an embodiment of the present invention; and Figure 10 shows plots of running spectra of converted speech for a static parametric based approach (figure 10a), a dynamic parametric based approach (figure 10b), a trajectory parametric based approach, which uses a parametric model including explicit dynamic feature constraints (figure 10c), a Gaussian Process based approach using static speech features in accordance with an embodiment of the present invention (figure 10d) and a Gaussian Process based approach using dynamic speech features in accordance with an embodiment of the present invention (figure 10e).
Detailed Description
In an embodiment, the present invention provides a method of converting speech from the characteristics of a first voice to the characteristics of a second voice, the method comprising: receiving a speech input from a first voice, dividing said speech input into a plurality of frames; mapping the speech from the first voice to a second voice; and outputting the speech in the second voice, wherein mapping the speech from the first voice to the second voice comprises deriving kernels demonstrating the similarity between speech features derived from the frames of the speech input from the first voice and stored frames of training data for said first voice, the training data corresponding to different text to that of the speech input, and wherein the mapping step uses a plurality of kernels derived for each frame of input speech with a plurality of stored frames of training data of the first voice.
The kernels can be derived for either static features on their own or static and dynamic features. Dynamic features take into account the preceding and following frames.
In one embodiment, the speech to be output is determined according to a Gaussian Process predictive distribution:

p(y_t | x_t, x*, y*, M) = N(μ(x_t), Σ(x_t)),

where y_t is the speech vector for frame t to be output, x_t is the speech vector of the input speech for frame t, x*, y* is {x_1*, y_1*}, ..., {x_N*, y_N*}, where x_t* is the t-th frame of training data for the first voice and y_t* is the t-th frame of training data for the second voice, M denotes the model, and μ(x_t) and Σ(x_t) are the mean and variance of the predictive distribution for given x_t.

Further:

μ(x_t) = m(x_t) + k_t^T [K* + σ²I]^{-1} (y* − μ*),
Σ(x_t) = k(x_t, x_t) + σ² − k_t^T [K* + σ²I]^{-1} k_t,

where

μ* = [m(x_1*) m(x_2*) ... m(x_N*)]^T,

K* is the N×N Gramian matrix with entries K*_{ij} = k(x_i*, x_j*),

k_t = [k(x_1*, x_t) k(x_2*, x_t) ... k(x_N*, x_t)]^T,

σ is a parameter to be trained, m(x) is a mean function and k(a, b) is a kernel function representing the similarity between a and b.
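The predictive mean and variance above can be sketched numerically. The following is a minimal sketch only, assuming a squared-exponential kernel and a zero mean function m(x) = 0; the variable names (x_train, y_train, sigma) and the toy sine data are illustrative stand-ins, not part of the patent.

```python
import numpy as np

def sq_exp_kernel(a, b):
    """k(a, b) = exp(-0.5 (a - b)^2) for scalar frames."""
    return np.exp(-0.5 * (a - b) ** 2)

def gp_predict(x_t, x_train, y_train, sigma=0.1):
    """Mean and variance of p(y_t | x_t, x*, y*) per the equations above."""
    K = sq_exp_kernel(x_train[:, None], x_train[None, :])   # Gramian K*
    k_t = sq_exp_kernel(x_train, x_t)                       # vector k_t
    A = K + sigma ** 2 * np.eye(len(x_train))               # K* + sigma^2 I
    mean = k_t @ np.linalg.solve(A, y_train)                # zero mean function
    var = sq_exp_kernel(x_t, x_t) + sigma ** 2 - k_t @ np.linalg.solve(A, k_t)
    return mean, var

x_train = np.linspace(-3, 3, 25)
y_train = np.sin(x_train)               # stands in for target-voice frames
mean, var = gp_predict(0.5, x_train, y_train)
```

Near densely sampled training points the predictive mean tracks the training targets closely and the predictive variance stays small, mirroring the behaviour shown in figures 3 and 4.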
The kernel function may be isotropic or non-stationary. The kernel may contain a hyper-parameter or be parameter free.
In an embodiment, the mean function is of the form: m(x) = ax + μ.
In a further embodiment, the speech features are represented by vectors in an acoustic space and said acoustic space is partitioned for the training data such that a cluster of training data represents each part of the partitioned acoustic space, wherein during mapping a frame of input speech is compared with the stored frames of training data for the first voice which have been assigned to the same cluster as the frame of input speech.
In an embodiment, two types of clusters are used, hard clusters and soft clusters. In the hard clusters the boundary between adjacent clusters is hard so that there is no overlap between clusters. The soft clusters extend slightly beyond the boundary of the hard clusters so that there is overlap between the soft clusters. During mapping, the hard clusters will be used for assignment of a vector representing input speech to a cluster.
However, the Gramians K* and/or k may be determined over the soft clusters.
The method may operate using pre-stored training data or it may gather the training data prior to use. The training data is used to train hyper-parameters. If the acoustic space has been partitioned, in an embodiment, the hyper-parameters are trained over soft clusters.
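The hard/soft cluster idea above can be illustrated with a toy one-dimensional sketch: hard clusters split the space at a boundary with no overlap, while soft clusters extend past the boundary by a margin so their members overlap. The boundary, margin and frame values below are illustrative only.

```python
import numpy as np

boundary, margin = 0.0, 0.5
frames = np.array([-2.0, -0.3, 0.2, 1.5])

# Hard assignment: each frame belongs to exactly one cluster
hard = [frames[frames < boundary], frames[frames >= boundary]]

# Soft clusters: overlapping subsets used when building the Gramians
soft = [frames[frames < boundary + margin], frames[frames >= boundary - margin]]
```

Frames near the boundary (here -0.3 and 0.2) appear in both soft clusters, so the per-cluster Gramians see the data just beyond their hard boundary.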
Systems and methods in accordance with embodiments of the present invention can be applied to many uses. For example, they may be used to convert a natural input voice or a synthetic voice input. The synthetic voice input may be speech which is from a speech to speech language converter, a satellite navigation system or the like.
In a further embodiment, systems in accordance with embodiments of the present invention can be used as part of an implant to allow a patient to regain their old voice after vocal surgery.
The above described embodiments apply a Gaussian process (GP) to Voice Conversion.
Gaussian processes are non-parametric Bayesian models that can be thought of as a distribution over functions. They provide advantages over the conventional parametric approaches, such as flexibility due to their non-parametric nature.
Further, such a Gaussian Process based approach is resistant to over-fitting.
As such an approach is non-parametric, it avoids the issue of interpreting the parameters used in a parametric approach. Also, being non-parametric means that there are only a few hyper-parameters that need to be trained, and these parameters maintain their meaning even when more data is introduced. These advantages help to circumvent issues with scaling.
In accordance with further embodiments, a system is provided for converting speech from the characteristics of a first voice to the characteristics of a second voice, the system comprising: a receiver for receiving a speech input from a first voice; a processor configured to: divide said speech input into a plurality of frames; and map the speech from the first voice to a second voice, the system further comprising an output to output the speech in the second voice, wherein to map the speech from the first voice to the second voice, the processor is further adapted to derive kernels demonstrating the similarity between speech features derived from the frames of the speech input from the first voice and stored frames of training data for said first voice, the training data corresponding to different text to that of the speech input, the processor using a plurality of kernels derived for each frame of input speech with a plurality of stored frames of training data of the first voice.
Methods and systems in accordance with embodiments can be implemented either in hardware or in software on a general purpose computer. Further embodiments can be implemented in a combination of hardware and software. Embodiments may also be implemented by a single processing apparatus or a distributed network of processing apparatuses.
Since methods and systems in accordance with embodiments can be implemented by software, systems and methods in accordance with embodiments may be implemented using computer code provided to a general purpose computer on any suitable carrier medium. The carrier medium can comprise any storage medium such as a floppy disk, a CD ROM, a magnetic device or a programmable memory device, or any transient medium such as any signal, e.g. an electrical, optical or microwave signal.
Figure 1 is a schematic of a system which may be used for voice conversion in accordance with an embodiment of the present invention.
The system 51 comprises a processor 53 which runs a voice conversion application 55. The system is also provided with memory 57 which communicates with the application as directed by the processor 53. There is also provided a voice input module 61 and a voice output module 63. Voice input module 61 receives a speech input from speech input 65. Speech input 65 may be a microphone, or the speech may be received from a storage medium, streamed online, etc. The voice input module 61 then communicates the input data to the processor 53 running application 55. Application 55 outputs data corresponding to the text of the speech input received via module 61, but in a voice different to that used to input the speech. The speech will be output in the voice of a target speaker which the user may select through application 55. This data is then output to voice output module 63, which converts the data into a form to be output by voice output 67. Voice output 67 may be a direct voice output such as a speaker, or may be the output of a speech file to be directed towards a storage medium, streamed over the Internet or directed towards a further program as required.
The above voice conversion system converts speech from one speaker (an input speaker) into speech from a different speaker (the target speaker). Ideally, the actual words output in the target voice should be identical to those spoken by the input speaker. The speech of the input speaker is matched to the speech of the output speaker using a mapping function. In embodiments of the present invention, the mapping operation is derived using Gaussian Processes. This is essentially a non-parametric approach to the mapping operation.
To explain how the mapping operation is derived using Gaussian Processes, it is first useful to understand how the mapping function is derived for a parametric Gaussian Mixture Model. Conditionals and marginals of Gaussian distributions are themselves Gaussian. Namely, if

p(x_1, x_2) = N([x_1; x_2]; [μ_1; μ_2], [Σ_11 Σ_12; Σ_21 Σ_22]),

then

p(x_1) = N(x_1; μ_1, Σ_11),
p(x_2) = N(x_2; μ_2, Σ_22),
p(x_1 | x_2) = N(x_1; μ_1 + Σ_12 Σ_22^{-1} (x_2 − μ_2), Σ_11 − Σ_12 Σ_22^{-1} Σ_21),
p(x_2 | x_1) = N(x_2; μ_2 + Σ_21 Σ_11^{-1} (x_1 − μ_1), Σ_22 − Σ_21 Σ_11^{-1} Σ_12).
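The conditioning rule above can be checked numerically in the two-variable scalar case; the joint mean and covariance below are arbitrary toy values chosen for illustration.

```python
import numpy as np

# Joint Gaussian over (x1, x2): mean mu and covariance S (toy values)
mu = np.array([1.0, 2.0])
S = np.array([[2.0, 0.8],
              [0.8, 1.0]])

x2 = 2.5  # observed value of the second variable

# p(x1 | x2) has mean mu1 + S12 S22^-1 (x2 - mu2)
cond_mean = mu[0] + S[0, 1] / S[1, 1] * (x2 - mu[1])
# ...and covariance S11 - S12 S22^-1 S21
cond_var = S[0, 0] - S[0, 1] / S[1, 1] * S[1, 0]
```

With these numbers the conditional mean is 1.4 and the conditional variance is 1.36; note the conditional variance is always smaller than the marginal variance S11 when the variables are correlated.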
Let x_t and y_t be the spectral features at frame t for the source and target voices, respectively.
(For notational simplicity, it is assumed that x_t and y_t are scalar values. Extending them to vectors is straightforward.) GMM-based voice conversion approaches typically model the joint probability density of the source and target spectral features by a GMM as

p(z_t | λ^(z)) = Σ_{m=1}^{M} w_m N(z_t; μ_m^(z), Σ_m^(z)),   (1)

where z_t is a joint vector [x_t, y_t]^T, m is the mixture component index, M is the total number of mixture components and w_m is the weight of the m-th mixture component. The mean vector and covariance matrix of the m-th component, μ_m^(z) and Σ_m^(z), are given as

μ_m^(z) = [μ_m^(x), μ_m^(y)]^T,   Σ_m^(z) = [Σ_m^(xx) Σ_m^(xy); Σ_m^(yx) Σ_m^(yy)].   (2)

A parameter set of the GMM is λ^(z), which consists of the weights, mean vectors and covariance matrices for the individual mixture components.
The parameter set λ^(z) is estimated from supervised training data, {x_1*, y_1*}, ..., {x_N*, y_N*}, which is expressed as x*, y* for the source and target, based on the maximum likelihood (ML) criterion as

λ^(z) = argmax_{λ^(z)} p(z* | λ^(z)),   (3)

where z* is the set of training joint vectors z* = {z_1*, ..., z_N*} and z_t* is the training joint vector at frame t, z_t* = [x_t*, y_t*]^T.
In order to derive the mapping function, the conditional probability density of y_t given x_t is derived from the estimated GMM as follows:

p(y_t | x_t, λ^(z)) = Σ_{m=1}^{M} P(m | x_t, λ^(z)) p(y_t | x_t, m, λ^(z)).   (4)
In the conventional approach, the conversion may be performed on the basis of the minimum mean-square error (MMSE) as follows:

ŷ_t = E[y_t | x_t]   (5)
    = ∫ p(y_t | x_t, λ^(z)) y_t dy_t   (6)
    = ∫ Σ_{m=1}^{M} P(m | x_t, λ^(z)) p(y_t | x_t, m, λ^(z)) y_t dy_t   (7)
    = Σ_{m=1}^{M} P(m | x_t, λ^(z)) E[y_t | x_t, m],   (8)

where

E[y_t | x_t, m] = μ_m^(y) + Σ_m^(yx) (Σ_m^(xx))^{-1} (x_t − μ_m^(x)).   (9)

In order to avoid each frame being independently mapped, it is possible to consider the dynamic features of the parameter trajectory. Here both the static and dynamic parameters are converted, yielding a set of Gaussian experts to estimate each dimension. Thus

z_t = [x_t, Δx_t, y_t, Δy_t]^T,   (10)
Δx_t = (x_{t+1} − x_{t−1}) / 2,   (11)

and similarly for Δy_t. Using this modified joint model, a GMM is trained with the following parameters for each component m:

μ_m^(z) = [μ_m^(x), μ_m^(Δx), μ_m^(y), μ_m^(Δy)]^T,   (12)

with a covariance matrix Σ_m^(z) built from the static blocks Σ_m^(xx), Σ_m^(xy), Σ_m^(yx), Σ_m^(yy) and the corresponding delta blocks Σ_m^(ΔxΔx), Σ_m^(ΔxΔy), Σ_m^(ΔyΔx), Σ_m^(ΔyΔy).   (13)

Note that, to limit the number of parameters in the covariance matrix of z_t, the static and delta parameters are assumed to be conditionally independent given the component. The same process as for the static parameters alone can be used to derive the model parameters. When applying voice conversion to a particular source sequence, this will yield two experts (assuming just delta parameters are added):

* static expert: p(y_t | x_t, m̂_t, λ^(z))
* dynamic expert: p(Δy_t | Δx_t, m̂_t, λ^(z)),

where

m̂_t = argmax_m P(m | x_t, Δx_t, λ^(z)).   (14)

As in standard Hidden Markov Model (HMM)-based speech synthesis, the sequence ŷ = {ŷ_1, ..., ŷ_T} that maximises the output probability given both experts is produced:

ŷ = argmax_y { ∏_{t=1}^{T} p(y_t | x_t, m̂_t, λ^(z)) p(Δy_t | Δx_t, m̂_t, λ^(z)) },   (15)

noting that

Δy_t = (y_{t+1} − y_{t−1}) / 2.   (16)

In a method and system according to an embodiment of the present invention, the mapping function is derived using non-parametric techniques such as Gaussian Processes. Gaussian processes (GPs) are flexible models that fit well within a probabilistic Bayesian modelling framework. A GP can be used as a prior probability distribution over functions in Bayesian inference. Given any set of N points in the desired domain of functions, one can take a multivariate Gaussian whose covariance matrix parameter is the Gramian matrix of the N points with some desired kernel, and sample from that Gaussian. Inference of continuous values with a GP prior is known as GP regression. Thus GPs are also useful as a powerful non-linear interpolation tool.
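Before moving to the GP formulation, the GMM-based MMSE mapping of equations (5) to (9) can be sketched in the scalar case: the converted frame is a posterior-weighted sum of per-component linear regressions. The two-component model below is hand-set for illustration only, not trained with equation (3).

```python
import numpy as np

weights = np.array([0.5, 0.5])        # w_m
mu_x = np.array([-1.0, 1.0])          # source means mu_m^(x)
mu_y = np.array([-2.0, 2.0])          # target means mu_m^(y)
var_xx = np.array([1.0, 1.0])         # Sigma_m^(xx) (scalar case)
cov_yx = np.array([0.8, 0.8])         # Sigma_m^(yx)

def convert(x_t):
    # P(m | x_t): posterior responsibility of each mixture component
    lik = weights * np.exp(-0.5 * (x_t - mu_x) ** 2 / var_xx) / np.sqrt(var_xx)
    post = lik / lik.sum()
    # E[y_t | x_t, m] = mu_y + Sigma^(yx) / Sigma^(xx) * (x_t - mu_x)
    per_component = mu_y + cov_yx / var_xx * (x_t - mu_x)
    return post @ per_component       # the weighted sum of equation (8)

y_hat = convert(1.0)
```

Because the toy model is symmetric about zero, converting the mirrored input gives the mirrored output, which is a quick sanity check on the posterior weighting.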
Gaussian processes are an extension of multivariate Gaussian distributions to infinite numbers of variables.
The underlying model for a number of prediction models is that (again considering a single dimension)

y_t = f(x_t; λ) + ε,   (17)

where ε is a Gaussian noise term and λ are the parameters that define the model.
A Gaussian Process Prior can be thought of to represent a distribution over functions.
Figure 2 shows a number of samples drawn from a Gaussian process prior with a Gamma-Exponential kernel with ℓ = 2.0 and γ = 2.0.
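Drawing such samples amounts to sampling a multivariate Gaussian whose covariance is the Gramian of a grid of input points. A minimal sketch, assuming a gamma-exponential kernel k(r) = exp(-(r/ℓ)^γ); the grid, random seed and sample count are illustrative stand-ins for those behind figure 2.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 50)
l, g = 2.0, 2.0                        # length scale and shape

r = np.abs(x[:, None] - x[None, :])    # pairwise distances of the grid
K = np.exp(-(r / l) ** g)              # Gramian of the grid points
# A small jitter keeps the Gramian numerically positive definite
samples = rng.multivariate_normal(np.zeros(len(x)),
                                  K + 1e-8 * np.eye(len(x)), size=3)
```

Each row of `samples` is one smooth random function evaluated on the grid, i.e. one of the curves of figure 2.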
The above Bayesian likelihood function (17) is used as before, with a Gaussian process prior for f(x; λ):

f(x; λ) ~ GP(m(x), k(x, x')),   (18)

where k(x, x') is a kernel function, which defines the "similarity" between x and x', and m(x) is the mean function. Many different types of kernel can be used. For example:

covLIN - Linear covariance function:
k(x_p, x_q) = x_p^T x_q   (K1)

covLINard - Linear covariance function with Automatic Relevance Determination, where P is a hyper-parameter to be trained:
k(x_p, x_q) = x_p^T P^{-1} x_q   (K2)

covLINone - Linear covariance function with a bias, where t² is a hyper-parameter to be trained:
k(x_p, x_q) = (x_p^T x_q + 1) / t²   (K3)

covMaterniso - Matérn covariance function with ν = 3/2 and isotropic distance measure r = sqrt((x_p − x_q)^T P^{-1} (x_p − x_q)):
k(x_p, x_q) = σ² (1 + √3 r) exp(−√3 r)   (K4)

covNNone - Neural network covariance function with a single parameter for the distance measure, where σ_f is a hyper-parameter to be trained:
k(x_p, x_q) = σ_f² arcsin( x_p^T P x_q / sqrt((1 + x_p^T P x_p)(1 + x_q^T P x_q)) )   (K5)

covPoly - Polynomial covariance function, where c is a hyper-parameter to be trained:
k(x_p, x_q) = σ² (c + x_p^T x_q)^d   (K6)

covPPiso - Piecewise polynomial covariance function with compact support:
k(x_p, x_q) = σ² (1 − r)_+^j f(r, j)

covRQard - Rational Quadratic covariance function with Automatic Relevance Determination, where α is a hyper-parameter to be trained:
k(x_p, x_q) = σ² [1 + (1/(2α)) (x_p − x_q)^T P^{-1} (x_p − x_q)]^{−α}   (K7)

covRQiso - Rational Quadratic covariance function with isotropic distance measure:
k(x_p, x_q) = σ² [1 + (1/(2α)) (x_p − x_q)^T P^{-1} (x_p − x_q)]^{−α}, with P = ℓ²I   (K8)

covSEard - Squared Exponential covariance function with Automatic Relevance Determination:
k(x_p, x_q) = σ_f² exp(−(1/2) (x_p − x_q)^T P^{-1} (x_p − x_q))   (K9)

covSEiso - Squared Exponential covariance function with isotropic distance measure:
k(x_p, x_q) = σ_f² exp(−(1/2) (x_p − x_q)^T P^{-1} (x_p − x_q)), with P = ℓ²I   (K10)

covSEisoU - Squared Exponential covariance function with isotropic distance measure and unit magnitude:
k(x_p, x_q) = exp(−(1/2) (x_p − x_q)^T P^{-1} (x_p − x_q))   (K11)

Using equations (17) and (18) above leads to a Gaussian process predictive distribution, which is illustrated in figures 3 and 4. Figure 3 shows a number of samples drawn from the resulting Gaussian process posterior, exposing the underlying sinc function through noisy observations. The posterior exhibits large variance where there is no local observed data. Figure 4 shows the confidence intervals on sampling from the posterior of the GP computed on samples from the same noisy sinc function. The distribution is represented as

p(y_t | x_t, x*, y*, M) = N(μ(x_t), Σ(x_t)),   (19)

where μ(x_t) and Σ(x_t) are the mean and variance of the predictive distribution for given x_t. These may be expressed as

μ(x_t) = m(x_t) + k_t^T [K* + σ²I]^{-1} (y* − μ*),   (20)
Σ(x_t) = k(x_t, x_t) + σ² − k_t^T [K* + σ²I]^{-1} k_t,   (21)

where μ* is the training mean vector and K* and k_t are Gramian matrices. They are given as

μ* = [m(x_1*) m(x_2*) ... m(x_N*)]^T,   (22)

K* is the N×N matrix with entries K*_{ij} = k(x_i*, x_j*),   (23)

k_t = [k(x_1*, x_t) k(x_2*, x_t) ... k(x_N*, x_t)]^T.   (24)

The above method computes a matrix inversion, which is O(N³); however, sparse methods and other reductions, such as using a Cholesky decomposition, may be used.
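The Cholesky route just mentioned can be sketched as follows: factor [K* + σ²I] once, then reuse the factor for every incoming frame instead of forming an explicit inverse. A sine target stands in for the noisy sinc of figures 3 and 4, and all names and values here are illustrative.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def sq_exp(a, b):
    """Squared-exponential Gramian between two sets of scalar frames."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

x_train = np.linspace(-3, 3, 30)
y_train = np.sin(x_train)
sigma = 0.1

A = sq_exp(x_train, x_train) + sigma ** 2 * np.eye(len(x_train))
factor = cho_factor(A)                 # O(N^3) factorisation, done once
alpha = cho_solve(factor, y_train)     # reused for every test frame

def predict_mean(x_t):
    k_t = np.exp(-0.5 * (x_train - x_t) ** 2)
    return k_t @ alpha                 # equation (20) with m(x) = 0

m = predict_mean(0.5)
```

The per-frame cost after factorisation is O(N) for the mean (and O(N²) if the variance of equation (21) is also needed), rather than a fresh O(N³) inversion each time.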
Using the above method it is possible to use GPs to derive a mapping function between source and target speakers.
From Eqs. (20) and (21) the means and covariance matrices for the prediction can be obtained. However if used directly this would again yield a frame-by-frame prediction.
To address this, the dynamic parameters can also be predicted. Thus, two GP experts can be produced:

* static expert: y_t ~ N(μ(x_t), Σ(x_t))
* dynamic expert: Δy_t ~ N(μ(Δx_t), Σ(Δx_t))

In an embodiment, GPs for each of the static and delta experts are trained independently, though this is not necessary.
If only the static expert is used, then in the same fashion as GMM-based VC the estimated trajectory is just frame by frame. Thus

ŷ_t = E[y_t | x_t, x*, y*, M]   (25)
    = ∫ p(y_t | x_t, x*, y*, M) y_t dy_t   (26)
    = μ(x_t).   (27)

In the same fashion as the standard GMM VC process, it is possible to use both experts:

ŷ = argmax_y { ∏_{t=1}^{T} N(y_t; μ(x_t), Σ(x_t)) N(Δy_t; μ(Δx_t), Σ(Δx_t)) }.   (28)

As the GP predictive distributions are Gaussian, a standard speech parameter generation algorithm can be used to generate the smooth trajectories of target static features from the GP experts.
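The parameter generation step can be sketched as a weighted least-squares problem: stack an identity block (static expert) and a delta block (dynamic expert), then solve the normal equations for the whole trajectory at once. The means, variances and frame count below are illustrative, not taken from a trained system.

```python
import numpy as np

T = 5
mu_static = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # per-frame static means
mu_delta = np.full(T, 1.0)                        # per-frame delta means
var_static, var_delta = 0.1, 0.1

# Delta matrix: Dy_t = (y_{t+1} - y_{t-1}) / 2, interior frames only
D = np.zeros((T, T))
for t in range(1, T - 1):
    D[t, t - 1], D[t, t + 1] = -0.5, 0.5

W = np.vstack([np.eye(T), D])                     # stacked experts
mu = np.concatenate([mu_static, mu_delta])
prec = np.concatenate([np.full(T, 1 / var_static), np.full(T, 1 / var_delta)])

# Normal equations W^T P W y = W^T P mu of the weighted least squares
A = W.T @ (prec[:, None] * W)
b = W.T @ (prec * mu)
y = np.linalg.solve(A, b)
```

Here the static means already lie on a line of slope 1, so they agree with the delta expert and the solved trajectory reproduces them; when the two experts disagree, the solution trades the static fit against trajectory smoothness according to the variances.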
A Gaussian Process is completely described by its covariance and mean functions.
These, when coupled with a likelihood function, are everything that is needed to perform inference. The covariance function of a Gaussian Process can be thought of as a measure that describes the local covariance of a smooth function. Thus a data point with a high covariance function value with another is likely to deviate from its mean in the same direction as the other point. Not all functions are covariance functions, as they need to form a positive definite Gram matrix.
There are two kinds of kernel: stationary and non-stationary. A stationary covariance function is a function of x_p − x_q; thus it is invariant to translations in the input space. Non-stationary kernels take into account translation and rotation. Isotropic kernels are therefore atemporal when looking at time series, as they will yield the same value wherever they are evaluated if their input vectors are the same distance apart. This contrasts with non-stationary kernels, which will give different values. An example of an isotropic kernel is the squared exponential

k(x_p, x_q) = exp(−(1/2)(x_p − x_q)²),   (29)

which is a function of the distance between its input vectors. An example of a non-stationary kernel is the linear kernel
k(x_p, x_q) = x_p x_q.   (30)

Both types can be of use in voice conversion. Firstly, under stationarity assumptions isotropic kernels can capture the local behaviour of a spectrum well. Non-stationary kernels handle time series better when there is little correlation. The kernels described above are parameter free. It is also possible to have covariance functions that have hyper-parameters that can be trained. One example is a linear covariance function with automatic relevance determination (ARD), where

k(x_p, x_q) = x_p (P^{-1}) x_q,   (31)

and P is a free parameter that needs to be trained. (For a complete list of the forms of covariance function examined in this work, see Appendix A.) A combination of kernels can also be used to describe speech signals. There are also a few choices for the mean function of a Gaussian Process: a zero mean, m(x) = 0; a constant mean, m(x) = μ; a linear mean, m(x) = ax; or their combination, m(x) = ax + μ. In this embodiment, the combination of constant and linear mean, m(x) = ax + μ, was used for all systems.
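The stationarity distinction above can be demonstrated directly: a minimal sketch, assuming scalar inputs, of the squared-exponential (stationary) and linear (non-stationary) kernels, with a translation check. The test values are arbitrary.

```python
import numpy as np

def k_se(xp, xq):
    """Squared exponential: depends only on the distance |xp - xq|."""
    return np.exp(-0.5 * (xp - xq) ** 2)

def k_lin(xp, xq):
    """Linear: depends on the inputs themselves, not just their distance."""
    return xp * xq

# Translating both inputs leaves k_se unchanged but changes k_lin
a, b, shift = 1.0, 2.5, 10.0
se_before, se_after = k_se(a, b), k_se(a + shift, b + shift)
lin_before, lin_after = k_lin(a, b), k_lin(a + shift, b + shift)
```

This is exactly the property described in the text: the stationary kernel is invariant to translations of the input space, while the non-stationary one is not.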
Covariance and mean functions have parameters, and selecting good values for these parameters has an impact on the performance of the predictor. These hyper-parameters can be set a priori, but it makes sense to set them to the values that best describe the data, i.e. to minimise the negative log marginal likelihood of the data. In an embodiment, the hyper-parameters are optimised using Polack-Ribiere conjugate gradients to compute the search directions, and a line search using quadratic and cubic polynomial approximations with the Wolfe-Powell stopping criteria, together with the slope-ratio method for guessing initial step sizes.
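The objective itself can be sketched as follows, assuming a squared-exponential kernel with a length scale and a noise level as the hyper-parameters. The patent's conjugate-gradient optimiser with a Wolfe-Powell line search is substituted here by a generic scipy minimiser for brevity, and the data is synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(0, 5, 40)
y = np.sin(x) + 0.1 * rng.standard_normal(40)

def neg_log_marginal_likelihood(theta):
    length, noise = np.exp(theta)      # log-parameters keep both positive
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / length ** 2)
    A = K + noise ** 2 * np.eye(len(x))
    _, logdet = np.linalg.slogdet(A)
    # The constant (N/2) log(2 pi) is dropped; it does not affect the argmin
    return 0.5 * (y @ np.linalg.solve(A, y) + logdet)

res = minimize(neg_log_marginal_likelihood, x0=np.log([1.0, 1.0]))
length_opt, noise_opt = np.exp(res.x)
```

Starting from unit length scale and unit noise, the optimiser should drive the noise estimate down toward the true observation noise while keeping a length scale that matches the variation of the data.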
The size of the Gramian matrix K*, which is equal to the number of samples in the training data, can be tens of thousands in VC. Computing the inverse of the Gramian matrix requires O(N³) operations. In an embodiment, the input space is first divided into sub-spaces, and then a GP is trained for each sub-space. This reduces the number of samples that are trained for each GP. This circumvents the issue of slow matrix inversion and also allows a more accurate training procedure that improves the accuracy of the mapping on a per-cluster level. The Linde-Buzo-Gray (LBG) algorithm with the Euclidean distance in mel-cepstral coefficients is used to split the data into its sub-spaces.
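The partitioning idea can be sketched in one dimension; a simple k-means loop stands in for the LBG algorithm here, and the synthetic two-cluster data is illustrative. Each resulting subset would train its own GP on a much smaller Gramian.

```python
import numpy as np

rng = np.random.default_rng(2)
frames = np.concatenate([rng.normal(-3, 0.5, 100), rng.normal(3, 0.5, 100)])

def kmeans_1d(data, iters=20):
    centres = np.array([data.min(), data.max()])   # deterministic init
    for _ in range(iters):
        # Assign each frame to its nearest centre (Euclidean distance)
        assign = np.argmin(np.abs(data[:, None] - centres[None, :]), axis=1)
        centres = np.array([data[assign == j].mean() for j in range(2)])
    return centres, assign

centres, assign = kmeans_1d(frames)
clusters = [frames[assign == j] for j in range(2)]
```

Each cluster's Gramian is built only from its own members, so the per-cluster inversion cost drops from O(N³) to O(N_c³) for cluster size N_c.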
A voice conversion method in accordance with an embodiment of the present invention will now be described with reference to figure 5.
Figure 5 is a schematic of a flow diagram showing a method in accordance with an embodiment of the present invention using the Gaussian Processes which have just been described. Speech is input in step S101. The input speech is digitised and split into frames of equal lengths. The speech signals are then subjected to a spectral analysis to determine various features which are plotted in an "acoustic space".
The front end unit also removes signals which are not believed to be speech signals and other irrelevant information. Popular front end units comprise apparatus which use filter bank (FBANK) parameters, Mel-frequency Cepstral Coefficients (MFCC) and Perceptual Linear Predictive (PLP) parameters. The output of the front end unit is in the form of an input vector in n-dimensional acoustic space.
The speech features are extracted in step S105. In some systems, it may be possible to select between multiple target voices. If this is the case, a target voice will be selected in step S106. The training data, which will be described with reference to figure 7, is then retrieved in step S107.
Next, kernels are derived which define the similarity between two speech vectors. In step S109, kernels are derived which show the similarity between different speech vectors in the training data. In order to reduce the computing complexity, in an embodiment, the training data will be partitioned as described with reference to figures 7 and 8. The following explanation does not use clustering; an example using clustering will then be described.
Next, in step S111, kernels are derived, looking this time at the similarity between speech features derived from the training data and the actual input speech.
The method then continues at step S113 of figure 6. Here, the first Gramian matrix is derived using equation 23 from the kernel functions obtained in step S109. The Gramian matrix K* can be derived during operation or may be computed offline, since it is derived purely from training data.
The training mean vector μ* is then derived in step S115 using equation 22; in this embodiment this is the mean taken over all training samples.
A second Gramian matrix, k_t, is derived in step S117 using equation 24; this uses the kernel functions obtained in step S111, which look at the similarity between training data and input speech.
Then, using the results of steps S113, S115 and S117, the mean value at each frame is computed for the target speech using equation 25.
The variance value is then computed for each frame of the converted speech, using the results derived in steps S113, S115 and S117. The converted speech is the most likely approximation to the target speech. The covariance function has hyper-parameter σ.
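The per-frame mean and variance computation of equations 22 to 26 can be sketched as follows: the Gramian K*, the vector kt and the training mean μ* are built from a kernel and a mean function, and the predictive mean and variance follow from the standard GP regression formulae. The toy training data, kernel and σ² value are illustrative.

```python
import numpy as np

def gp_predict(x_train, y_train, x_t, kernel, mean_fn, sigma2=0.01):
    """GP predictive mean/variance for one input frame x_t.

    mu(x_t)    = m(x_t) + kt^T (K* + sigma^2 I)^-1 (y* - mu*)
    Sigma(x_t) = k(x_t, x_t) + sigma^2 - kt^T (K* + sigma^2 I)^-1 kt
    """
    K = np.array([[kernel(a, b) for b in x_train] for a in x_train])  # K*
    k_t = np.array([kernel(a, x_t) for a in x_train])                 # kt
    mu_star = np.array([mean_fn(a) for a in x_train])                 # mu*
    # Solve both linear systems against the regularised Gramian at once
    A = np.linalg.solve(K + sigma2 * np.eye(len(x_train)),
                        np.column_stack([y_train - mu_star, k_t]))
    mean = mean_fn(x_t) + k_t @ A[:, 0]
    var = kernel(x_t, x_t) + sigma2 - k_t @ A[:, 1]
    return mean, var

kernel = lambda a, b: float(np.dot(a, b)) + 1.0   # linear + constant kernel
mean_fn = lambda a: 0.0
x_train = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
y_train = np.array([0.0, 1.0, 2.0])               # toy identity mapping
mean, var = gp_predict(x_train, y_train, np.array([1.5]), kernel, mean_fn)
```

For this toy identity mapping the predictive mean at 1.5 lands close to 1.5, with a small positive variance.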
The hyper-parameter σ can be optimized as previously described using techniques such as Polack-Ribiere conjugate gradients to compute the search directions, a line search using quadratic and cubic polynomial approximations, and the Wolfe-Powell stopping criteria together with the slope ratio method for guessing initial step sizes.
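Hyper-parameter training can be illustrated with a much simpler stand-in than the conjugate-gradient search described above: a coarse grid search over the noise variance σ² that minimizes the negative log marginal likelihood of a zero-mean GP. The toy Gramian and targets are illustrative, and a gradient-based optimiser would replace the grid in practice.

```python
import numpy as np

def neg_log_marginal_likelihood(sigma2, K, y):
    """-log p(y* | x*, sigma^2) for a zero-mean GP with Gramian K."""
    C = K + sigma2 * np.eye(len(y))
    sign, logdet = np.linalg.slogdet(C)
    return 0.5 * (y @ np.linalg.solve(C, y) + logdet
                  + len(y) * np.log(2.0 * np.pi))

# Toy training data: a noisy line modelled with a linear + constant kernel.
x = np.linspace(0.0, 1.0, 20)
K = np.outer(x, x) + 1.0
rng = np.random.RandomState(1)
y = 2.0 * x + 0.1 * rng.randn(20)     # true noise std is 0.1

grid = np.logspace(-4, 1, 60)
best = min(grid, key=lambda s2: neg_log_marginal_likelihood(s2, K, y))
```

The selected σ² is small, reflecting the low noise level in the toy data; a conjugate-gradient search would refine the same objective instead of scanning a grid.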
Using the results of step S119 and step S121, the most probable static feature y (target speech) is generated from the means and variances by solving equation 28. The target speech is then output in step S125.
Figure 7 shows a flow diagram of how the training data is handled. The training data can be pre-programmed into the system so that all manipulations using purely the training data can be computed offline, or training data can be gathered before voice conversion takes place. For example, a user could be asked to read known text just prior to voice conversion taking place. When the training data is received in step S201, it is processed: it is digitised and split into frames of equal length. The speech signals are then subjected to a spectral analysis to determine various parameters which are plotted in an "acoustic space" or feature space. In this embodiment, static, delta and delta-delta features are extracted in step S203, although in some embodiments only static features will be extracted.
Signals which are believed not to be speech signals and other irrelevant information are removed.
In this embodiment, the speech features are clustered in step S205 as shown in figure 8a. The acoustic space is then partitioned on the basis of these clusters. Clustering produces smaller Gramians in equations 23 and 24, which allows them to be more easily manipulated. Also, by partitioning the input space, the hyper-parameters can be trained over the smaller amount of data for each cluster as opposed to over the whole acoustic space.
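The partitioning step can be sketched with plain k-means; the experiments below use the LBG algorithm, which refines 1 → 2 → 4 → … centroids by splitting, but the resulting hard partition of the acoustic space is of the same kind. The data and cluster count here are illustrative.

```python
import numpy as np

def kmeans_partition(X, k=4, iters=50, seed=0):
    """Hard-partition the acoustic space into k clusters with k-means.

    An LBG-style variant would reach k centroids by repeated binary
    splitting; plain k-means is shown for brevity.
    """
    rng = np.random.RandomState(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Distance of every feature vector to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = X[assign == j].mean(axis=0)
    return centroids, assign

# Four well-separated synthetic "acoustic" clusters
rng = np.random.RandomState(2)
X = np.vstack([rng.randn(50, 2) + c
               for c in ([0, 0], [10, 0], [0, 10], [10, 10])])
centroids, assign = kmeans_partition(X, k=4)
```

Each cluster then gets its own hyper-parameters and its own (much smaller) Gramian built only from the training frames assigned to it.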
The hyper-parameters are trained for each cluster in step S207 as shown in figure 8b. The mean vector μ* and the trained hyper-parameters are obtained for each cluster in step S209 and stored as shown in figure 8c. The Gramian matrix K* is also stored.
The procedure is then repeated for each cluster.
In an embodiment where clustering has been performed, in use, an input speech vector which is extracted from the speech which is to be converted is assigned to a cluster.
The assignment takes place by seeing in which cluster in acoustic space the input vector lies. The vectors μ(xt) and Σ(xt) are then determined using the data stored for that cluster.
In a further embodiment, soft clusters are used for training the hyper-parameters. Here, the volume of the cluster which is used to train the hyper-parameters for a part of acoustic space is taken over a region of acoustic space which is larger than the said part. This allows the clusters to overlap at their edges and mitigates discontinuities at cluster boundaries. However, in this embodiment, although the clusters extend over a volume larger than the part of acoustic space defined when acoustic space is partitioned in step S205, assignment of a speech vector to be converted will be on the basis of the partitions derived in step S205.
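Hard assignment combined with soft (overlapping) training clusters might be sketched as follows; the distance-margin rule used here to enlarge the clusters is an assumption made for illustration, not a mechanism disclosed above.

```python
import numpy as np

def hard_assign(x, centroids):
    """Assign an input frame to the nearest hard-partition centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

def soft_cluster_members(X, centroids, j, margin=1.5):
    """Training members of cluster j, enlarged so clusters overlap.

    A point belongs to the soft cluster if its distance to centroid j
    is within `margin` times its distance to the nearest centroid;
    margin=1.5 is an illustrative value.
    """
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return X[d[:, j] <= margin * d.min(axis=1)]

centroids = np.array([[0.0, 0.0], [4.0, 0.0]])
X = np.array([[0.5, 0.0], [1.9, 0.0], [2.1, 0.0], [3.5, 0.0]])
```

Points near the hard boundary (such as [2.1, 0.0] here) contribute to the training data of both neighbouring soft clusters, while conversion-time assignment still follows the hard partition.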
Voice conversion systems which incorporate a method in accordance with the above described embodiment are, in general, more resistant to over-fitting and over-smoothing.
It also provides an accurate prediction of the formant structure. Over-smoothing exhibits itself when there is not enough flexibility in the modelling of the relationship between the target speaker and input speaker to capture certain structure in the spectral features of the target speaker. The most detrimental manifestation of this is the over-smoothing of the target spectra. When parametric methods are used to model the relationship between the target speaker and input speaker, it is possible to add more parameters.
However, adding more mixture components allows for more flexibility in the set of mean parameters and can tackle these problems of over-smoothing, but soon encounters over-fitting of the data and quality is lost, especially in an objective measure like mel-cepstral distortion. Also, parametric models have more limited ability as more data is introduced, as they lose flexibility and the meaning of the parameters can become difficult to interpret.
The above described embodiment applies a Gaussian process (GP) to Voice Conversion. Gaussian processes are non-parametric Bayesian models that can be thought of as a distribution over functions. They provide advantages over the conventional parametric approaches, such as flexibility due to their non-parametric nature.
Further, such a Gaussian Process based approach is resistant to over-fitting.
As such an approach is non-parametric it tackles the issue of the meaning of parameters used in a parametric approach. Also, being non-parametric means that there are only a few hyper-parameters that need to be trained and these parameters maintain their meaning even when more data is introduced. These advantages help to circumvent issues with scaling.
Figures 9a and 9b show schematically how the above Gaussian Process based approach differs from parametric approaches. Here, following the previous notation, it is desired to convert speech vectors xt from the first voice to speech vectors yt of the second voice.
In the previous parametric based approaches, a set of model parameters λ is derived based on speech vectors of the first voice x1*,...,xN* and the second voice y1*,...,yN*.
The parameters are derived by looking at the correspondence between the speech vectors of the training data for the first voice and the corresponding speech vectors of the training data for the second voice. Once the parameters are derived, they are used to derive the mapping function from the input vector of the first voice xt to the second voice yt. In this stage, only the derived parameters λ are used, as shown in figure 9a.
However, in embodiments according to the present invention, model parameters are not derived and the mapping function is derived by looking at the distribution across all training vectors either across the whole acoustic space or within a cluster if the acoustic space has been partitioned.
To evaluate the performance of the Gaussian Process based approach, a speaker conversion experiment was conducted. Fifty sentences uttered by female speakers, CLB and SLT, from the CMU ARCTIC database were used for training (source: CLB, target: SLT). Fifty sentences, which were not included in the training data, were used for evaluation. Speech signals were sampled at a rate of 16 kHz and windowed with 5 ms of shift, and then 40th-order mel-cepstral coefficients were obtained by using a mel-cepstral analysis technique. The log F0 values for each utterance were also extracted.
The feature vectors of source and target speech consisted of 41 mel-cepstral coefficients including the zeroth coefficients. The DTW algorithm was used to obtain time alignments between source and target feature vector sequences. According to the DTW results, joint feature vectors were composed for training joint probability density between source and target features. The total number of training samples was 34,664.
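The DTW alignment used to compose the joint source/target vectors can be sketched as follows; the toy one-dimensional sequences are illustrative.

```python
import numpy as np

def dtw_path(src, tgt):
    """Classic DTW between two feature sequences; returns the aligned
    (source index, target index) pairs used to build joint vectors."""
    n, m = len(src), len(tgt)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(src[i - 1] - tgt[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # Backtrack from the end to recover the optimal warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

src = np.array([[0.0], [1.0], [2.0]])
tgt = np.array([[0.0], [0.0], [1.0], [2.0]])
path = dtw_path(src, tgt)
# Joint vectors: source frame stacked on its aligned target frame
joint = [np.concatenate([src[i], tgt[j]]) for i, j in path]
```

Each entry of `joint` is one training sample for the joint source/target density.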
Five systems were compared in this experiment:
* GMMs without dynamic features, as shown in figure 10a;
* GMMs with dynamic features, as shown in figure 10b;
* trajectory GMMs, as shown in figure 10c;
* GPs without dynamic features, as shown in figure 10d;
* GPs with dynamic features, as shown in figure 10e.
They were trained from the composed joint feature vectors. The dynamic features (delta and delta-delta features) were calculated as Δxt = 0.5(xt+1 − xt−1) and Δ²xt = xt−1 − 2xt + xt+1.
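The delta and delta-delta computation can be sketched as follows, using the standard definitions Δxt = 0.5(xt+1 − xt−1) and Δ²xt = xt−1 − 2xt + xt+1; replicating the edge frames for padding is an assumption made for illustration.

```python
import numpy as np

def add_dynamic_features(static):
    """Append delta and delta-delta features to a (T, D) static matrix,
    replicating the first and last frames to handle the sequence edges."""
    padded = np.vstack([static[:1], static, static[-1:]])
    delta = 0.5 * (padded[2:] - padded[:-2])            # 0.5 (x_{t+1} - x_{t-1})
    delta2 = padded[:-2] - 2.0 * padded[1:-1] + padded[2:]  # x_{t-1} - 2 x_t + x_{t+1}
    return np.hstack([static, delta, delta2])

static = np.array([[0.0], [1.0], [4.0]])
feats = add_dynamic_features(static)   # columns: static, delta, delta-delta
```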
For GP-based VC, we split the input space (mel-cepstral coefficients from the source speaker) into 32 regions using the LBG algorithm, then trained a GP for each cluster for each dimension. According to the results of a preliminary experiment, we chose a combination of constant and linear functions for the mean function of GP-based VC.
The log F0 values in this experiment were converted by using a simple linear conversion. The speech waveform was re-synthesized from the converted mel-cepstral coefficients and log F0 values through the mel log spectrum approximation (MLSA) filter with pulse-train or white-noise excitation.
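The simple linear log-F0 conversion can be sketched as a mean/variance transformation from the source speaker's log-F0 statistics to the target's; the speaker statistics below are illustrative values, not taken from the experiment.

```python
import numpy as np

def convert_log_f0(src_logf0, src_stats, tgt_stats):
    """Linear (mean/variance) log-F0 conversion:
    y = mu_tgt + (sigma_tgt / sigma_src) * (x - mu_src)."""
    mu_s, sd_s = src_stats
    mu_t, sd_t = tgt_stats
    return mu_t + (sd_t / sd_s) * (np.asarray(src_logf0) - mu_s)

# Illustrative (mean, std) of log F0 for source and target speakers
converted = convert_log_f0([5.0, 5.2], (5.0, 0.2), (5.5, 0.3))
```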
The accuracy of the method in accordance with an embodiment was measured for various kernel functions. The mel-cepstral distortion between the target and converted mel-cepstral coefficients in the evaluation set was used as an objective evaluation measure.
First, the choice of kernel function (covariance function), the effect of optimizing hyper-parameters, and the effect of dynamic features were evaluated. Tables 1 and 2 show the mel-cepstral distortions between target speech and converted speech by the proposed GP-based mapping with various kernel functions, with and without using dynamic features, respectively.
It can be seen from Table 1 that optimizing the hyper-parameters slightly reduced the distortions and that the non-stationary linear kernels appeared to outperform the isotropic ones.
This is believed to be due to the consistency between the evaluation measure and the kernel function. The mel-cepstral distortion is actually the total Euclidean distance between two mel-cepstral coefficient vectors in dB scale. The linear kernel uses the distance metric in input space (mel-cepstral coefficients), thus the evaluation measure (mel-cepstral distortion) and similarity measure (kernel function) were consistent. Table 2 indicates that the use of dynamic features degraded the mapping quality.
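The mel-cepstral distortion measure can be sketched as follows, using the common 10√2/ln 10 scaling of the Euclidean distance between mel-cepstral vectors with the 0th coefficient excluded; the exact convention used in the experiments is not stated above, so this scaling is an assumption.

```python
import numpy as np

def mel_cepstral_distortion(c1, c2):
    """Mel-cepstral distortion in dB between two mel-cepstral vectors:
    the Euclidean distance scaled to the dB domain, excluding the 0th
    (energy) coefficient."""
    diff = np.asarray(c1)[1:] - np.asarray(c2)[1:]
    return (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2))

c_tgt = np.array([1.0, 0.5, -0.2])   # illustrative target coefficients
c_cnv = np.array([1.0, 0.4, -0.1])   # illustrative converted coefficients
mcd = mel_cepstral_distortion(c_tgt, c_cnv)
```

In the evaluation this quantity is averaged over all frames of the evaluation set.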
Next, the GP-based conversion in accordance with an embodiment of the invention is compared with the conventional approaches. Table 3 shows the mel-cepstral distortions for conversion approaches using GMMs with and without dynamic features, trajectory GMMs, and the proposed GP-based approaches. It can be seen from the table that the proposed GP-based approaches achieved significant improvements over the conventional parametric approaches.
It can be seen from the results of figure 10 that the GMM output is excessively smoothed compared to the GP approach without dynamic features. It is known that the statistical modeling process often removes details of spectral structure. The GP-based approach does not suffer from this problem and maintains the fine structure of the speech spectra.
Table 1: Mel-cepstral distortions between target speech and converted speech by GP models (without dynamic features) using various kernel functions with and without optimizing hyper-parameters.

Covariance      Distortion [dB]
Functions       w/o optimization   w/ optimization
covLIN          3.97               3.96
covLINard       3.97               3.95
covLINone       4.94               4.94
covMaterniso    4.98               4.96
covNNone        4.95               4.96
covPoly         4.97               4.95
covPPiso        4.99               4.96
covRQard        4.97               4.96
covRQiso        4.97               4.96
covSEard        4.96               4.95
covSEiso        4.96               4.95
covSEisoU       4.96               4.95

Table 2: Mel-cepstral distortions between target speech and converted speech by GP models using various kernel functions with and without dynamic features. Note that hyper-parameters were optimized.
Covariance      Distortion [dB]
Functions       w/o dyn. feats.   w/ dyn. feats.
covLIN          3.96              4.15
covLINard       3.95              4.15
covLINone       4.94              5.92
covMaterniso    4.96              5.99
covNNone        4.96              5.95
covPoly         4.95              5.80
covPPiso        4.96              6.00
covRQard        4.96              5.98
covRQiso        4.96              5.98
covSEard        4.95              5.98
covSEiso        4.95              5.98
covSEisoU       4.95              5.98

Table 3: Mel-cepstral distortions between target speech and converted speech by GMM, trajectory GMM, and GP-based approaches. Note that the kernel function for GP-based approaches was covLINard and its hyper-parameters were optimized.
# of     GMM        GMM       Traj.    GP         GP
Mixs.    w/o dyn.   w/ dyn.   GMM      w/o dyn.   w/ dyn.
2        5.97       5.95      5.90
4        5.75       5.82      5.81
8        5.66       5.69      5.63
16       5.56       5.59      5.52
32       5.49       5.53      5.45     3.95       4.15
64       5.43       5.45      5.38
128      5.40       5.38      5.33
256      5.39       5.35      5.35
512      5.41       5.33      5.42
1024     5.50       5.34      5.64

(The GP-based approaches do not use mixture components; a single distortion value applies to each.)

The experimental results shown here indicated that the GP with the simple linear kernel function achieved the lowest mel-cepstral distortion among many kernel functions. It is believed that this is due to the consistency between the evaluation measure and the kernel function. The mel-cepstral distortion used here is actually the total Euclidean distance between two mel-cepstral coefficient vectors. The linear kernel uses the distance metric in input space (mel-cepstral coefficients), thus the evaluation measure (mel-cepstral distortion) and similarity measure (kernel function) were consistent.
However, it is known that the mel-cepstral distortion is not highly correlated to human perception.
Therefore, in a further embodiment, the kernel function is replaced by a distance metric more correlated to human perception.
One possible metric is the log-spectral distortion (LSD), where the distance between two power spectra P(ω) and P̂(ω) is computed as

D_LS = sqrt( (1/2π) ∫_{−π}^{π} [10 log10 P(ω) − 10 log10 P̂(ω)]² dω )    (32)

where these two spectra can be computed from the mel-cepstral coefficients using a recursive formula. An alternative is the Itakura-Saito distance, which measures the perceived difference between two spectra. It was proposed by Fumitada Itakura and Shuzo Saito in the 1970s and is defined as

D_IS(P(ω), P̂(ω)) = (1/2π) ∫_{−π}^{π} [ P(ω)/P̂(ω) − log( P(ω)/P̂(ω) ) − 1 ] dω.    (33)

The current implementation operates on scalar inputs, but could be extended to vector inputs.
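Discrete approximations of the log-spectral distortion of equation (32) and the Itakura-Saito distance of equation (33) might be sketched as follows, evaluating the integrals as averages over sampled frequency bins (an illustrative discretisation).

```python
import numpy as np

def log_spectral_distortion(P1, P2, eps=1e-12):
    """Discrete approximation of equation (32): RMS difference of the
    two log power spectra in dB, averaged over frequency bins."""
    d = (10.0 * np.log10(np.asarray(P1) + eps)
         - 10.0 * np.log10(np.asarray(P2) + eps))
    return np.sqrt(np.mean(d ** 2))

def itakura_saito(P1, P2):
    """Discrete approximation of equation (33): mean of
    P/P_hat - log(P/P_hat) - 1 over frequency bins."""
    r = np.asarray(P1) / np.asarray(P2)
    return np.mean(r - np.log(r) - 1.0)

P = np.array([1.0, 2.0, 4.0])   # illustrative power spectrum samples
```

Both distances are zero for identical spectra and grow as the spectra diverge; unlike the mel-cepstral distortion, the Itakura-Saito distance is asymmetric.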
In a further embodiment, linear combinations of isotropic and non-stationary kernels are used, for example combinations of those listed as K1 to K10 above.
In the above embodiments, Gaussian Process based voice conversion is applied to convert the speaker characteristics of natural speech. However, it can also be used to convert synthesised speech, for example the output of an in-car Sat Nav system or a speech-to-speech translation system.
In a further embodiment, the input speech is not produced by vocal excitations. For example, the input speech could be body-conducted speech, esophageal speech etc. This type of system could be of benefit where a user had received a laryngectomy and was relying on non-larynx based speech. The system could modify the non-larynx based speech to reproduce the original speech of the user before the laryngectomy, thus allowing the user to regain a voice which is close to their original voice.
Voice conversion has many uses, for example modifying a source voice to a selected voice in systems such as in-car navigation systems, uses in games software and also for medical applications to allow a speaker who has undergone surgery or otherwise has their voice compromised to regain their original voice.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel systems and methods described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the systems and methods described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims (16)

  1. A method of converting speech from the characteristics of a first voice to the characteristics of a second voice, the method comprising: receiving a speech input from a first voice, dividing said speech input into a plurality of frames; mapping the speech from the first voice to a second voice; and outputting the speech in the second voice, wherein mapping the speech from the first voice to the second voice comprises deriving kernels demonstrating the similarity between speech features derived from the frames of the speech input from the first voice and stored frames of training data for said first voice, the training data corresponding to different text to that of the speech input, and wherein the mapping step uses a plurality of kernels derived for each frame of input speech with a plurality of stored frames of training data of the first voice.
  2. A method according to claim 1, wherein kernels are derived for both static and dynamic speech features.
  3. A method according to claim 1, wherein the speech to be output is determined according to a Gaussian Process predictive distribution: p(yt | xt, x*, y*, M) = N(μ(xt), Σ(xt)), where yt is the speech vector for frame t to be output, xt is the speech vector for the input speech for frame t, (x*, y*) is {x1*, y1*}, ..., {xN*, yN*}, where xt* is the t-th frame of training data for the first voice and yt* is the t-th frame of training data for the second voice, M denotes the model, and μ(xt) and Σ(xt) are the mean and variance of the predictive distribution for given xt.
  4. A method according to claim 3, wherein

μ(xt) = m(xt) + kt^T [K* + σ²I]^(-1) (y* − μ*),
Σ(xt) = k(xt, xt) + σ² − kt^T [K* + σ²I]^(-1) kt,

where

μ* = [m(x1*) m(x2*) ... m(xN*)]^T,

K* = [ k(x1*, x1*)  k(x1*, x2*)  ...  k(x1*, xN*)
       k(x2*, x1*)  k(x2*, x2*)  ...  k(x2*, xN*)
       ...
       k(xN*, x1*)  k(xN*, x2*)  ...  k(xN*, xN*) ],

kt = [k(x1*, xt) k(x2*, xt) ... k(xN*, xt)]^T,

and σ is a parameter to be trained, m(xt) is a mean function and k(xt, xt') is a kernel function representing the similarity between xt and xt'.
  5. A method according to claim 4, wherein the kernel function is isotropic.
  6. A method according to claim 4, wherein the kernel function is parameter free.
  7. A method according to claim 4, wherein the mean function is of the form: m(xt) = a xt + b.
  8. A method according to claim 1, wherein the speech features are represented by vectors in an acoustic space and said acoustic space is partitioned for the training data such that a cluster of training data represents each part of the partitioned acoustic space, wherein during mapping, a frame of input speech is compared with the stored frames of training data for the first voice which have been assigned to the same cluster as the frame of input speech.
  9. A method according to claim 8, wherein two types of clusters are used, hard clusters and soft clusters, wherein in said hard clusters the boundary between adjacent clusters is hard so that there is no overlap between clusters and said soft clusters extend beyond the boundary of the hard clusters so that there is overlap between adjacent soft clusters, said frame of input speech being assigned to a cluster on the basis of the hard clusters.
  10. A method according to claim 9, wherein the frame of input speech which has been assigned to a cluster on the basis of hard clusters is then compared with data from the extended soft cluster.
  11. A method according to claim 3, further comprising receiving training data for a first voice and a second voice.
  12. A method according to claim 11, further comprising training hyper-parameters from the training data.
  13. A method according to claim 1, wherein the first voice is a synthetic voice.
  14. A method according to claim 1, wherein the first voice comprises non-larynx excitations.
  15. A carrier medium carrying computer readable instructions for controlling the computer to carry out the method of claim 1.
  16. A system for converting speech from the characteristics of a first voice to the characteristics of a second voice, the system comprising: a receiver for receiving a speech input from a first voice; a processor configured to: divide said speech input into a plurality of frames; and map the speech from the first voice to a second voice, the system further comprising an output to output the speech in the second voice, wherein to map the speech from the first voice to the second voice, the processor is further adapted to derive kernels demonstrating the similarity between speech features derived from the frames of the speech input from the first voice and stored frames of training data for said first voice, the training data corresponding to different text to that of the speech input, the processor using a plurality of kernels derived for each frame of input speech with a plurality of stored frames of training data of the first voice.
GB201105314A 2011-03-29 2011-03-29 A voice conversion method and system Expired - Fee Related GB2489473B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB201105314A GB2489473B (en) 2011-03-29 2011-03-29 A voice conversion method and system
US13/217,628 US8930183B2 (en) 2011-03-29 2011-08-25 Voice conversion method and system


Publications (3)

Publication Number Publication Date
GB201105314D0 GB201105314D0 (en) 2011-05-11
GB2489473A true GB2489473A (en) 2012-10-03
GB2489473B GB2489473B (en) 2013-09-18

Family

ID=44067599

Family Applications (1)

Application Number Title Priority Date Filing Date
GB201105314A Expired - Fee Related GB2489473B (en) 2011-03-29 2011-03-29 A voice conversion method and system

Country Status (2)

Country Link
US (1) US8930183B2 (en)
GB (1) GB2489473B (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5961950B2 (en) * 2010-09-15 2016-08-03 ヤマハ株式会社 Audio processing device
CN103413548B (en) * 2013-08-16 2016-02-03 中国科学技术大学 A kind of sound converting method of the joint spectrum modeling based on limited Boltzmann machine
US10133538B2 (en) * 2015-03-27 2018-11-20 Sri International Semi-supervised speaker diarization
CN105206280A (en) * 2015-09-14 2015-12-30 联想(北京)有限公司 Information processing method and electronic equipment
KR101779584B1 (en) * 2016-04-29 2017-09-18 경희대학교 산학협력단 Method for recovering original signal in direct sequence code division multiple access based on complexity reduction
US10176819B2 (en) * 2016-07-11 2019-01-08 The Chinese University Of Hong Kong Phonetic posteriorgrams for many-to-one voice conversion
US10453476B1 (en) * 2016-07-21 2019-10-22 Oben, Inc. Split-model architecture for DNN-based small corpus voice conversion
CN106897511A (en) * 2017-02-17 2017-06-27 江苏科技大学 Annulus tie Microstrip Antenna Forecasting Methodology
KR20200027475A (en) * 2017-05-24 2020-03-12 모듈레이트, 인크 System and method for speech-to-speech conversion
CN108198566B (en) * 2018-01-24 2021-07-20 咪咕文化科技有限公司 Information processing method and device, electronic device and storage medium
CN110164445B (en) * 2018-02-13 2023-06-16 阿里巴巴集团控股有限公司 Speech recognition method, device, equipment and computer storage medium
CN109256142B (en) * 2018-09-27 2022-12-02 河海大学常州校区 Modeling method and device for processing scattered data based on extended kernel type grid method in voice conversion
US11024291B2 (en) 2018-11-21 2021-06-01 Sri International Real-time class recognition for an audio stream
CN113678200A (en) * 2019-02-21 2021-11-19 谷歌有限责任公司 End-to-end voice conversion
US11183201B2 (en) * 2019-06-10 2021-11-23 John Alexander Angland System and method for transferring a voice from one body of recordings to other recordings
US11410667B2 (en) 2019-06-28 2022-08-09 Ford Global Technologies, Llc Hierarchical encoder for speech conversion system
US11538485B2 (en) 2019-08-14 2022-12-27 Modulate, Inc. Generation and detection of watermark for real-time voice conversion
CN113053356A (en) * 2019-12-27 2021-06-29 科大讯飞股份有限公司 Voice waveform generation method, device, server and storage medium
CN111213205B (en) * 2019-12-30 2023-09-08 深圳市优必选科技股份有限公司 Stream-type voice conversion method, device, computer equipment and storage medium
EP4270255A3 (en) * 2019-12-30 2023-12-06 TMRW Foundation IP SARL Cross-lingual voice conversion system and method
WO2021134520A1 (en) * 2019-12-31 2021-07-08 深圳市优必选科技股份有限公司 Voice conversion method, voice conversion training method, intelligent device and storage medium
CN111402923B (en) * 2020-03-27 2023-11-03 中南大学 Emotion voice conversion method based on wavenet
CN111599368B (en) * 2020-05-18 2022-10-18 杭州电子科技大学 Adaptive instance normalized voice conversion method based on histogram matching
US11523200B2 (en) 2021-03-22 2022-12-06 Kyndryl, Inc. Respirator acoustic amelioration
US11854572B2 (en) 2021-05-18 2023-12-26 International Business Machines Corporation Mitigating voice frequency loss
CN113362805B (en) * 2021-06-18 2022-06-21 四川启睿克科技有限公司 Chinese and English speech synthesis method and device with controllable tone and accent

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5704006A (en) * 1994-09-13 1997-12-30 Sony Corporation Method for processing speech signal using sub-converting functions and a weighting function to produce synthesized speech
US6374216B1 (en) * 1999-09-27 2002-04-16 International Business Machines Corporation Penalized maximum likelihood estimation methods, the baum welch algorithm and diagonal balancing of symmetric matrices for the training of acoustic models in speech recognition
US20080201150A1 (en) * 2007-02-20 2008-08-21 Kabushiki Kaisha Toshiba Voice conversion apparatus and speech synthesis apparatus
US20080262838A1 (en) * 2007-04-17 2008-10-23 Nokia Corporation Method, apparatus and computer program product for providing voice conversion using temporal dynamic features
US20090089063A1 (en) * 2007-09-29 2009-04-02 Fan Ping Meng Voice conversion method and system
CN101751921A (en) * 2009-12-16 2010-06-23 南京邮电大学 Real-time voice conversion method under conditions of minimal amount of training data

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030135374A1 (en) * 2002-01-16 2003-07-17 Hardwick John C. Speech synthesizer
JP4263412B2 (en) * 2002-01-29 2009-05-13 富士通株式会社 Speech code conversion method
JP4178319B2 (en) * 2002-09-13 2008-11-12 インターナショナル・ビジネス・マシーンズ・コーポレーション Phase alignment in speech processing
US7634399B2 (en) * 2003-01-30 2009-12-15 Digital Voice Systems, Inc. Voice transcoder
US7412377B2 (en) * 2003-12-19 2008-08-12 International Business Machines Corporation Voice model for speech processing based on ordered average ranks of spectral features
US7505950B2 (en) * 2006-04-26 2009-03-17 Nokia Corporation Soft alignment based on a probability of time alignment
US20080082320A1 (en) * 2006-09-29 2008-04-03 Nokia Corporation Apparatus, method and computer program product for advanced voice conversion
US20080111887A1 (en) * 2006-11-13 2008-05-15 Pixel Instruments, Corp. Method, system, and program product for measuring audio video synchronization independent of speaker characteristics
US8060565B1 (en) * 2007-01-31 2011-11-15 Avaya Inc. Voice and text session converter
US8131550B2 (en) * 2007-10-04 2012-03-06 Nokia Corporation Method, apparatus and computer program product for providing improved voice conversion
JP5038995B2 (en) * 2008-08-25 2012-10-03 株式会社東芝 Voice quality conversion apparatus and method, speech synthesis apparatus and method
CN102227770A (en) * 2009-07-06 2011-10-26 松下电器产业株式会社 Voice tone converting device, voice pitch converting device, and voice tone converting method
GB2478314B (en) * 2010-03-02 2012-09-12 Toshiba Res Europ Ltd A speech processor, a speech processing method and a method of training a speech processor
US8892436B2 (en) * 2010-10-19 2014-11-18 Samsung Electronics Co., Ltd. Front-end processor for speech recognition, and speech recognizing apparatus and method using the same


Also Published As

Publication number Publication date
GB201105314D0 (en) 2011-05-11
US20120253794A1 (en) 2012-10-04
GB2489473B (en) 2013-09-18
US8930183B2 (en) 2015-01-06


Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20230329