WO2007063827A1 - Voice Quality Conversion System - Google Patents
Voice Quality Conversion System
- Publication number
- WO2007063827A1 PCT/JP2006/323667 JP2006323667W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- voice
- speaker
- conversion
- target
- conversion function
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/003—Changing voice quality, e.g. pitch or formants
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/033—Voice editing, e.g. manipulating the voice of the synthesiser
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/003—Changing voice quality, e.g. pitch or formants
- G10L21/007—Changing voice quality, e.g. pitch or formants characterised by the process used
- G10L21/013—Adapting to target pitch
- G10L2021/0135—Voice conversion or morphing
Definitions
- the present invention relates to a voice quality conversion learning system, a voice quality conversion system, a voice quality conversion client-server system, and a program for converting a voice of an original speaker into a voice of a target speaker.
- (For example, see Patent Document 1 and Non-Patent Document 1.)
- FIG. 22 shows the flow of basic voice quality conversion processing.
- The voice quality conversion process consists of a learning step and a conversion step.
- In the learning step, speech of the original speaker and of the target speaker (the conversion target) is recorded, stored as training speech data, and a conversion function is learned from that data.
- In the conversion step, arbitrary speech uttered by the original speaker is converted into the target speaker's speech using the conversion function generated in the learning step.
- Patent Document 1 Japanese Patent Application Laid-Open No. 2002-215198
- Non-Patent Document 1: Alexander Kain and Michael W. Macon, "SPECTRAL VOICE CONVERSION FOR TEXT-TO-SPEECH SYNTHESIS"
- When the target speaker is, for example, an anime character, a celebrity, or a person who has passed away, asking that speaker to utter the voice set required for voice quality conversion may be costly, impractical, or impossible.
- The present invention has been made to solve the conventional problems described above.
- Its object is to provide a voice quality conversion learning system, a voice quality conversion system, a voice quality conversion client-server system, and a program capable of performing voice quality conversion with a small learning burden.
- The invention according to claim 1 is a voice quality conversion system for converting the voice of an original speaker into the voice of a target speaker, characterized by comprising voice quality conversion means for converting the original speaker's voice into the target speaker's voice via conversion to the voice of an intermediate speaker.
- the voice quality conversion system converts the voice of the original speaker into the voice of the target speaker via conversion to the voice of the intermediate speaker.
- According to this configuration, if a conversion function for converting each original speaker's voice into the intermediate speaker's voice and a conversion function for converting the intermediate speaker's voice into each target speaker's voice are prepared, every original speaker's voice can be converted into every target speaker's voice. The number of conversion functions required is therefore smaller than when each original speaker's voice is converted directly into each target speaker's voice, so voice quality conversion can be performed using conversion functions generated with a small learning burden.
- The invention according to claim 2 is a voice quality conversion learning system for learning functions for converting the voice of each of one or more original speakers into the voice of each of one or more target speakers.
- The voice quality conversion learning system comprises intermediate conversion function generation means for generating an intermediate conversion function for converting the speech of each of the one or more original speakers into the speech of a single intermediate speaker, and target conversion function generation means for generating a target conversion function for converting that intermediate speaker's speech into the speech of each of the one or more target speakers.
- Since conversion goes through one intermediate speaker rather than directly from each original speaker to each target speaker, the number of conversion functions to be generated is reduced, so learning for voice quality conversion can be performed with a small burden.
- Using the intermediate conversion functions and target conversion functions generated with that small burden, the original speaker's voice can be converted to the target speaker's voice.
- The invention according to claim 3 is the voice quality conversion learning system according to claim 2, wherein the target conversion function generation means generates, as the target conversion function, a function for converting the speech obtained by converting the original speaker's speech with the intermediate conversion function into the target speaker's speech.
- At the time of actual voice quality conversion, the original speaker's voice is converted by the intermediate conversion function and the converted voice is then converted by the target conversion function. Learning the target conversion function from that converted speech, rather than from recorded speech of an actual intermediate speaker, therefore yields higher voice quality accuracy at conversion time.
- The invention according to claim 4 is the voice quality conversion learning system according to claim 2 or 3, wherein the voice of the intermediate speaker used for the learning is output from a voice synthesizer that can output arbitrary utterance content with a predetermined voice quality.
- According to this configuration, since the intermediate speaker's voice used for learning is output from a voice synthesizer, speech with the same content as that of the original speaker or the target speaker can easily be produced, which increases convenience because the utterance content of the original speaker and target speaker during learning is not restricted.
- The invention according to claim 5 is the voice quality conversion learning system according to any one of claims 2 to 4, wherein the voice of the original speaker used for the learning is output from a voice synthesizer that can output arbitrary utterance content with a predetermined voice quality.
- By using synthesizer output as the original speaker's voice for learning, speech with the same content as the target speaker's recordings can easily be produced.
- The target speaker's utterance content during learning is therefore not restricted, which increases convenience. For example, when the voice of an actor recorded in a movie is used as the target speaker's voice, learning can be performed easily even if only limited utterance content is available.
- The invention according to claim 6 is the voice quality conversion learning system according to any one of claims 2 to 5, further comprising conversion function synthesis means for generating a function for converting the original speaker's voice into the target speaker's voice by synthesizing the intermediate conversion function generated by the intermediate conversion function generation means with the target conversion function generated by the target conversion function generation means.
- According to this configuration, the calculation time required to convert the original speaker's voice into the target speaker's voice is shorter than when the intermediate conversion function and the target conversion function are applied separately. The memory size used during voice quality conversion processing can also be reduced.
- The invention according to claim 7 is a voice quality conversion system characterized by comprising voice quality conversion means for converting the voice of an original speaker into the voice of a target speaker using a function generated by the voice quality conversion learning system according to any one of claims 2 to 6.
- Using functions generated with a small learning burden, this voice quality conversion system can convert the speech of each of one or more original speakers into the speech of each of one or more target speakers.
- The invention according to claim 8 is the voice quality conversion system according to claim 7, wherein the voice quality conversion means generates the intermediate speaker's voice from the original speaker's voice using the intermediate conversion function, and then generates the target speaker's voice from that intermediate voice using the target conversion function.
- This voice quality conversion system can convert the speech of each original speaker into the speech of each target speaker using fewer conversion functions than conventional methods.
- The invention according to claim 9 is the voice quality conversion system according to claim 7, wherein the voice quality conversion means converts the original speaker's voice into the target speaker's voice using a function obtained by synthesizing the intermediate conversion function and the target conversion function.
- Because this voice quality conversion system uses the synthesized function, the calculation time required to convert the original speaker's voice into the target speaker's voice is shorter than when the intermediate conversion function and the target conversion function are applied separately. The memory size used during voice quality conversion processing can also be reduced.
- The invention according to claim 10 is the voice quality conversion system according to any one of claims 7 to 9, wherein the voice quality conversion means converts a spectral sequence that serves as a feature of the speech.
- According to this configuration, voice quality conversion can easily be performed by converting the code data transmitted from an existing speech encoder to a speech decoder.
- The invention according to claim 11 is a voice quality conversion client-server system in which a client computer and a server computer are connected via a network and the voice of each of one or more users is converted into the voice of each of one or more target speakers.
- The client computer includes user voice acquisition means for acquiring the user's voice, and user voice transmission means for transmitting the acquired voice to the server computer.
- The client computer also includes intermediate conversion function receiving means for receiving from the server computer an intermediate conversion function for converting the user's voice into the voice of a single intermediate speaker common to all of the one or more users, and target conversion function receiving means for receiving from the server computer a target conversion function for converting the intermediate speaker's voice into the target speaker's voice.
- The server computer includes user voice receiving means for receiving the user's voice from the client computer, intermediate speaker voice storage means for storing the intermediate speaker's voice in advance, intermediate conversion function generation means for generating an intermediate conversion function for converting the user's voice into the intermediate speaker's voice, target speaker voice storage means for storing the target speaker's voice in advance, and target conversion function generation means for generating a target conversion function for converting the intermediate speaker's voice into the target speaker's voice.
- The server computer further includes intermediate conversion function transmitting means for transmitting the intermediate conversion function to the client computer, and target conversion function transmitting means for transmitting the target conversion function to the client computer.
- The client computer further includes intermediate voice quality conversion means for generating the intermediate speaker's voice from the user's voice using the intermediate conversion function, and target voice quality conversion means for generating the target speaker's voice from the intermediate speaker's voice using the target conversion function.
- The invention is thus a voice quality conversion client-server system characterized by the above configuration.
- According to this configuration, the server computer generates the intermediate conversion function and the target conversion function for the user, and the client computer receives the intermediate conversion function and the target conversion function from the server computer.
- The client computer can thereby convert the user's voice into the target speaker's voice.
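To make this claim-11 message flow concrete, here is a minimal Python sketch of the exchange. The class and function names are hypothetical stand-ins invented for illustration (the patent specifies means, not an API), and the learning and conversion internals are placeholders:

```python
# Hypothetical sketch of the claim-11 flow; train_conversion_function and
# apply_conversion are placeholder stand-ins, not the patent's actual method.
def train_conversion_function(source_voice, target_voice):
    return ("map", source_voice, target_voice)   # a real system would fit a GMM here

def apply_conversion(func, speech):
    return ("converted", func, speech)           # a real system would warp features here

class Server:
    def __init__(self, intermediate_voice, target_voices):
        self.intermediate_voice = intermediate_voice            # stored in advance
        self.G = {name: train_conversion_function(intermediate_voice, voice)
                  for name, voice in target_voices.items()}     # target functions

    def make_intermediate_function(self, user_voice):           # F: user -> intermediate
        return train_conversion_function(user_voice, self.intermediate_voice)

class Client:
    def __init__(self, server, user_voice, target_name):
        self.F = server.make_intermediate_function(user_voice)  # received from server
        self.G = server.G[target_name]                          # received from server

    def convert(self, speech):                                  # user -> intermediate -> target
        return apply_conversion(self.G, apply_conversion(self.F, speech))

client = Client(Server("intermediate-set", {"tag1": "tag1-set"}), "user-set", "tag1")
result = client.convert("hello")
```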
- The invention according to claim 12 is a program for causing one or more computers to execute an intermediate conversion function generation step of generating each intermediate conversion function for converting the speech of each of one or more original speakers into the speech of a single intermediate speaker, and a target conversion function generation step of generating each target conversion function for converting that intermediate speaker's voice into the voice of each of one or more target speakers.
- By executing this program on one or more computers, the intermediate conversion functions and target conversion functions used for voice quality conversion can be generated.
- The invention according to claim 13 is a program for causing a computer to execute a conversion function acquisition step of acquiring an intermediate conversion function for converting the original speaker's voice into the intermediate speaker's voice and a target conversion function for converting the intermediate speaker's voice into the target speaker's voice, an intermediate voice quality conversion step of generating the intermediate speaker's voice from the original speaker's voice using the acquired intermediate conversion function, and a target voice quality conversion step of generating the target speaker's voice, using the target conversion function, from the intermediate voice generated in the intermediate voice quality conversion step.
- By executing this program, the computer can convert the original speaker's voice into the target speaker's voice via conversion to the intermediate speaker's voice.
- The voice quality conversion learning system of the present invention generates an intermediate conversion function for converting the speech of each of one or more original speakers into the speech of a single intermediate speaker, and a target conversion function for converting that intermediate speaker's voice into the voice of each of one or more target speakers.
- The voice quality conversion system can then convert the original speaker's voice into the target speaker's voice using the functions generated by the voice quality conversion learning system.
- FIG. 1 is a diagram showing a configuration of a voice quality learning / conversion system according to an embodiment of the present invention.
- FIG. 2 is a diagram showing the functional configuration of the server according to the embodiment.
- FIG. 4 is a graph showing an example of w1(f), w2(f), and w'(f) according to the embodiment.
- FIG. 5 is a diagram showing the functional configuration of the mobile terminal according to the embodiment.
- FIG. 6 is a diagram for explaining the number of conversion functions required for voice quality conversion from each original speaker to each target speaker according to the embodiment.
- FIG. 7 is a flowchart showing the flow of learning and storage processing of the conversion function Gy(i) in the server according to the embodiment.
- FIG. 8 is a flowchart showing a procedure for obtaining the conversion function F for the original speaker X in the mobile terminal according to the embodiment.
- FIG. 11 is a flowchart for explaining the second pattern of the conversion function generation process and the voice quality conversion process when the conversion function learning method according to the embodiment is the post-conversion feature conversion method.
- FIG. 14 is a flowchart for explaining the first pattern of the conversion function generation process and the voice quality conversion process when the conversion function learning method according to the embodiment is the pre-conversion feature conversion method.
- FIG. 17 is a graph for comparing cepstrum distortion between the method according to the embodiment and the conventional method.
- FIG. 18 is a flowchart showing a generation procedure of the conversion function F in the mobile terminal when the mobile terminal according to the modification includes an intermediate conversion function generation unit.
- FIG. 19 is a diagram showing an example of a processing pattern in which, according to the modification, voice input to the transmitting-side mobile phone is voice-quality converted on the transmitting-side mobile phone before being output from the receiving-side mobile phone.
- FIG. 20 is a diagram showing an example of a processing pattern in which, according to the modification, voice input to the transmitting-side mobile phone is voice-quality converted on the receiving-side mobile phone before being output from it.
- FIG. 21 is a diagram showing an example of a processing pattern when voice quality conversion is performed by a server according to a modified example.
- FIG. 22 is a diagram showing a conventional basic voice quality conversion process.
- FIG. 23 is a diagram for explaining an example of the number of conversion functions required for converting the voice of the former speaker into the voice of the target speaker in the past.
- FIG. 1 shows the configuration of a voice quality conversion client-server system 1 according to an embodiment of the present invention.
- The voice quality conversion client-server system 1 includes a server 10 (corresponding to the "voice quality conversion learning system") and a plurality of mobile terminals 20 (corresponding to the "voice quality conversion system").
- the server 10 learns and generates a conversion function for converting the voice of the user holding the mobile terminal 20 into the voice of the target speaker.
- the mobile terminal 20 acquires a conversion function from the server 10 and converts the user's voice into the target speaker's voice based on the conversion function.
- speech represents a waveform or a parameter series extracted from the waveform by some method.
- the server 10 includes an intermediate conversion function generation unit 101 and a target conversion function generation unit 102. These functions are realized when the CPU mounted on the server 10 executes processing according to the program stored in the storage device.
- The intermediate conversion function generation unit 101 performs learning based on the original speaker's voice and the intermediate speaker's voice, thereby generating a conversion function F (corresponding to the "intermediate conversion function") for converting the original speaker's voice into the intermediate speaker's voice.
- The original speaker's voice and the intermediate speaker's voice are prepared in advance by having the original speaker and the intermediate speaker each utter and record the same set of approximately 50 sentences (one set of voice content).
- As the learning method, for example, a feature conversion method based on a Gaussian mixture model (GMM) can be used; any other known method may also be used.
- the target conversion function generation unit 102 generates a conversion function G (corresponding to "target conversion function") for converting the voice of the intermediate speaker into the voice of the target speaker.
- The first learning method learns the correspondence between the features of the original speaker's recorded voice after conversion by the conversion function F and the features of the target speaker's recorded voice.
- This first learning method is called the "post-conversion feature conversion method".
- At actual conversion time, the original speaker's voice is converted by the conversion function F and the converted voice is then converted by the conversion function G to generate the target speaker's voice; this method therefore allows learning that reflects the processing procedure used at conversion time.
- The second learning method does not take the actual voice quality conversion procedure into account: it learns the correspondence between the features of the intermediate speaker's recorded voice and the features of the target speaker's recorded voice. This second learning method is called the "pre-conversion feature conversion method".
- the format of the conversion functions F and G is not limited to a mathematical expression, and may be expressed in the form of a conversion table.
- The conversion function synthesis unit 103 synthesizes the conversion function F generated by the intermediate conversion function generation unit 101 with the conversion function G generated by the target conversion function generation unit 102, thereby generating a function for converting the original speaker's voice into the target speaker's voice.
- FIG. 3(a) shows the procedure for converting the voice of the original speaker x into the voice of the target speaker y using the conversion function F(x) and the conversion function Gy(i).
- FIG. 3(b) shows the procedure for converting the voice of the original speaker x into the voice of the target speaker y using the conversion function Hy(x) generated by synthesizing F(x) and Gy(i).
- When the conversion function Hy(x) is used, the calculation time required to convert the original speaker x's voice into the target speaker y's voice is approximately halved compared with applying the conversion function F(x) and the conversion function Gy(i) in sequence.
- In addition, since the intermediate speaker's features are never generated, the memory size used during voice quality conversion processing can be reduced.
- a function for converting the voice of the original speaker into the voice of the target speaker can be generated by synthesizing the conversion function F and the conversion function G.
- As an example, suppose the feature is a spectral parameter and the conversion function for spectral parameters is expressed as a linear function, with f denoting frequency.
- The conversion from the pre-conversion spectrum s(f) to the post-conversion spectrum s'(f) is then expressed as s'(f) = s(w(f)), where w(·) is a function representing the frequency warping.
- Let w1(·) be the frequency warping from the original speaker to the intermediate speaker, w2(·) the frequency warping from the intermediate speaker to the target speaker, and s(f), s'(f), s''(f) the spectra of the original, intermediate, and target speakers, respectively. Then s'(f) = s(w1(f)) and s''(f) = s'(w2(f)) = s(w1(w2(f))), so the synthesized conversion corresponds to the composite warping w'(f) = w1(w2(f)) shown in FIG. 4.
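As a rough illustration of why the synthesized function halves the work, the sketch below composes two invented linear warping functions into one; the specific warps are assumptions for demonstration only, not taken from the patent:

```python
# Illustrative composition of two frequency warps into one; the linear warps
# below are invented for demonstration and are NOT taken from the patent.
import numpy as np

def w1(f):                      # assumed warp: original -> intermediate
    return 0.9 * f

def w2(f):                      # assumed warp: intermediate -> target
    return 1.2 * f

def w_composed(f):              # w'(f) = w1(w2(f)): a single lookup per bin
    return w1(w2(f))

def s(f):                       # toy source spectrum s(f)
    return np.exp(-f / 2000.0)

freqs = np.linspace(0.0, 8000.0, 5)
two_step = s(w1(w2(freqs)))     # s''(f) = s'(w2(f)) = s(w1(w2(f)))
one_step = s(w_composed(freqs)) # same result with one composed function
assert np.allclose(two_step, one_step)
```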
- the mobile terminal 20 corresponds to, for example, a mobile phone. In addition to the mobile phone, a personal computer to which a microphone is connected may be used.
- FIG. 5 shows a functional configuration of the mobile terminal 20. This functional configuration is realized by executing processing according to a program stored in the nonvolatile memory by the CPU mounted on the mobile terminal 20.
- the mobile terminal 20 includes a voice quality conversion unit 21.
- The voice quality conversion unit 21 converts voice quality by converting the spectral sequence. Alternatively, voice quality conversion may be performed by converting both the spectral sequence and the sound source signal.
- As the spectral sequence, cepstrum coefficients or LSP (Line Spectral Pair) coefficients, for example, can be used.
- Voice quality conversion unit 21 includes intermediate voice quality conversion unit 211 and target voice quality conversion unit 212.
- the intermediate voice quality conversion unit 211 converts the voice of the original speaker into the voice of the intermediate speaker using the conversion function F.
- the target voice quality conversion unit 212 uses the conversion function G to convert the voice of the intermediate speaker converted by the intermediate voice quality conversion unit 211 into the voice of the target speaker.
- the conversion functions F and G are created by the server 10 and downloaded to the mobile terminal 20.
- FIG. 6 is a diagram for explaining the number of conversion functions required for voice quality conversion from each original speaker to each target speaker when there are original speakers A, B, ..., Y, Z, one intermediate speaker i, and target speakers 1, 2, ..., 9, 10.
- In this case, 26 conversion functions F, namely F(A), F(B), ..., F(Y), F(Z), and 10 conversion functions G are required, for a total of 36.
- By contrast, if each original speaker's voice were converted directly into each target speaker's voice, 26 x 10 = 260 conversion functions would be required. Thus, in the present embodiment, the number of conversion functions can be significantly reduced.
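The arithmetic behind this reduction can be restated in a few lines of code (a trivial illustration using the speaker counts from the example above):

```python
# Function counts for the FIG. 6 example: 26 original speakers, 10 target
# speakers, one intermediate speaker.
M, N = 26, 10
direct = M * N             # one function per (original, target) pair: 260
via_intermediate = M + N   # 26 functions F plus 10 functions G: 36
print(direct, via_intermediate)
```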
- The original speaker x and the intermediate speaker i are each either a person or a TTS (Text-to-Speech) system, prepared on the side of the vendor that owns the server 10.
- TTS is a known device that converts arbitrary text (characters) into corresponding speech and outputs it with a predetermined voice quality.
- FIG. 7(a) shows the processing procedure when learning the conversion function G by the post-conversion feature conversion method.
- First, the intermediate conversion function generation unit 101 performs learning based on the voice of the original speaker x, obtained in advance and stored in the storage device, and the voice of the intermediate speaker i (corresponding to the "intermediate speaker voice storage means"), and generates the conversion function F(x). It then outputs the speech x' obtained by converting the original speaker x's voice with F(x) (step S101).
- Next, the target conversion function generation unit 102 performs learning based on the converted speech x' and the voice of the target speaker y, obtained in advance and stored in the storage device (corresponding to the "target speaker voice storage means"), generates the conversion function Gy(i) (step S102), and stores the generated conversion function Gy(i) in the storage device of the server 10 (step S103).
- FIG. 7 (b) shows a processing procedure when learning the conversion function G by the pre-conversion feature value conversion method.
- the target conversion function generator 102 performs learning based on the voice of the intermediate speaker i and the voice of the target speaker y, and generates a conversion function Gy (i) (step S201). Then, the generated conversion function Gy (i) is stored in the storage device included in the server 10 (step S202).
- FIG. 8 (a) shows a procedure when a human voice is used as the voice of the intermediate speaker i.
- First, the mobile terminal 20 captures the voice of the original speaker x with a microphone (corresponding to the "user voice acquisition means") and transmits the voice to the server 10 (corresponding to the "user voice transmission means") (step S301).
- The server 10 receives the original speaker x's voice (corresponding to the "user voice receiving means"), and the intermediate conversion function generation unit 101 performs learning based on the original speaker x's voice and the intermediate speaker i's voice to generate the conversion function F(x) (step S302).
- the server 10 transmits the generated conversion function F (x) to the mobile terminal 20 (corresponding to “intermediate conversion function transmission means”) (step S303).
- FIG. 8 (b) shows the processing procedure when the voice output from the TTS is used as the voice of the intermediate speaker i.
- First, the mobile terminal 20 captures the voice of the original speaker x with the microphone and transmits the voice to the server 10 (step S401).
- The content of the original speaker x's voice received by the server 10 is converted into text by a speech recognition device or manually (step S402), and the text is input to the TTS (step S403).
- TTS generates and outputs the voice of intermediate speaker i (TTS) based on the input text (step S404).
- Intermediate conversion function generation section 101 performs learning based on the voice of original speaker X and the voice of intermediate speaker i, and generates conversion function F (x) (step S405).
- The server 10 transmits the generated conversion function F(x) to the mobile terminal 20 (step S406).
- The mobile terminal 20 stores the received conversion function F(x) in nonvolatile memory.
- Thereafter, the original speaker x downloads the desired conversion function G from the server 10 to the mobile terminal 20 (corresponding to the "target conversion function transmission means").
- As a result, the original speaker x's voice can be converted into the voice of any desired target speaker.
- Conventionally, the original speaker x had to utter speech matching the content of each target speaker's voice set and obtain a separate conversion function for each target speaker.
- In the present embodiment, the original speaker x only needs to obtain one conversion function F(x) by uttering a single voice set, so the burden on the original speaker is reduced.
- It is assumed that the nonvolatile memory of the mobile terminal 20 stores the conversion function F(A) for converting the original speaker A's voice into the intermediate speaker's voice and the conversion function Gy(i) for converting the intermediate speaker's voice into the target speaker y's voice, both downloaded from the server 10.
- First, the intermediate voice quality conversion unit 211 converts the original speaker A's voice into the intermediate speaker's voice using the conversion function F(A) (step S501).
- Next, the target voice quality conversion unit 212 converts the intermediate speaker's voice into the target speaker y's voice using the conversion function Gy(i) (step S502), and outputs the target speaker y's voice (step S503).
- The output voice is transmitted, for example, through a communication network to the other party's mobile terminal and played from that terminal's loudspeaker. It may also be played from a loudspeaker on the mobile terminal 20 itself so that speaker A can confirm the converted voice.
- The following describes the case where the conversion function learning method is the post-conversion feature conversion method.
- FIG. 10 shows the learning process and conversion process when the intermediate speaker's speech recorded for learning is a single set (setA).
- In the learning process, the intermediate conversion function generation unit 101 performs learning based on the voice setA of the original speaker Src.1 and the voice setA of the intermediate speaker In., and generates the conversion function F(Src.1(A)) (step S1101).
- Similarly, the intermediate conversion function generation unit 101 performs learning based on the voice setA of the original speaker Src.2 and the voice setA of the intermediate speaker In., and generates the conversion function F(Src.2(A)) (step S1102).
- Next, the target conversion function generation unit 102 converts the voice setA of the original speaker Src.1 with the conversion function F(Src.1(A)) generated in step S1101, and generates the converted Tr.setA (step S1103). The target conversion function generation unit 102 then performs learning based on the converted Tr.setA and the voice setA of the target speaker Tag.1, and generates the conversion function G1(Tr.(A)) (step S1104).
- Similarly, the target conversion function generation unit 102 performs learning based on the converted Tr.setA and the voice setA of the target speaker Tag.2, and generates the conversion function G2(Tr.(A)) (step S1105).
- In the conversion process, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.1 into the voice of the intermediate speaker In. using the conversion function F(Src.1(A)) generated in the learning process (step S1107).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(Tr.(A)) or the conversion function G2(Tr.(A)) (step S1108).
- Likewise, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.2 into the voice of the intermediate speaker In. using the conversion function F(Src.2(A)) (step S1109).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(Tr.(A)) or the conversion function G2(Tr.(A)) (step S1110).
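A compact sketch of this FIG. 10 pattern follows; train() and convert() are placeholder stand-ins for the GMM learning and mapping described earlier, and the utterance lists are dummies:

```python
# Sketch of the FIG. 10 learning/conversion order (post-conversion feature method).
def train(src_features, tgt_features):
    return ("model", tuple(src_features), tuple(tgt_features))  # placeholder model

def convert(model, features):
    return [("converted", model, f) for f in features]          # placeholder mapping

src1_setA  = ["src1-utt"]    # original speaker Src.1, voice setA
inter_setA = ["inter-utt"]   # intermediate speaker In., voice setA (parallel)
tag1_setA  = ["tag1-utt"]    # target speaker Tag.1, voice setA (parallel)

F_src1 = train(src1_setA, inter_setA)   # step S1101: learn F(Src.1(A))
tr_setA = convert(F_src1, src1_setA)    # step S1103: converted Tr.setA
G1 = train(tr_setA, tag1_setA)          # step S1104: G1 learned on *converted* speech

# Conversion (steps S1107-S1108): arbitrary speech -> intermediate -> Tag.1
output = convert(G1, convert(F_src1, ["any-utt"]))
```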
- FIG. 11 shows the learning process and conversion process when the intermediate speaker's voice consists of multiple sets (setA, setB) uttered by TTS or by a person.
- In the learning process, the intermediate conversion function generation unit 101 performs learning based on the voice setA of the original speaker Src.1 and the voice setA of the intermediate speaker In., and generates the conversion function F(Src.1(A)) (step S1201).
- Similarly, the intermediate conversion function generation unit 101 performs learning based on the voice setB of the original speaker Src.2 and the voice setB of the intermediate speaker In., and generates the conversion function F(Src.2(B)) (step S1202).
- Next, the target conversion function generation unit 102 converts the voice setA of the original speaker Src.1 with the conversion function F(Src.1(A)) generated in step S1201, and generates the converted Tr.setA (step S1203). It then performs learning based on the converted Tr.setA and the voice setA of the target speaker Tag.1, and generates the conversion function G1(Tr.(A)) (step S1204).
- Similarly, the target conversion function generation unit 102 converts the voice setB of the original speaker Src.2 with the conversion function F(Src.2(B)) generated in step S1202, and generates the converted Tr.setB (step S1205). It then performs learning based on the converted Tr.setB and the voice setB of the target speaker Tag.2, and generates the conversion function G2(Tr.(B)) (step S1206).
- In the conversion process, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.1 into the voice of the intermediate speaker In. using the conversion function F(Src.1(A)) (step S1207).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(Tr.(A)) or the conversion function G2(Tr.(B)) (step S1208).
- Likewise, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.2 into the voice of the intermediate speaker In. using the conversion function F(Src.2(B)) (step S1209).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(Tr.(A)) or the conversion function G2(Tr.(B)) (step S1210).
- In this pattern, the utterance content of the original speaker and that of the target speaker must be the same within each pair (setA with setA, setB with setB).
- When the intermediate speaker is a TTS system, however, the intermediate speaker's utterances can be generated to match whatever content the original speaker and target speaker have spoken, which increases convenience during learning; moreover, a TTS intermediate speaker can produce speech semipermanently.
- Next, the learning process and conversion process are described for the case where the original speaker's voices used for learning include multiple sets (setA, setB, setC) uttered by TTS or by a person, and the intermediate speaker's voice is a single set (setA).
- In the learning process, the intermediate conversion function generation unit 101 generates, based on the voice setA of the original speaker (TTS) and the voice setA of the intermediate speaker In., a conversion function F(TTS(A)) for converting the original speaker's voice into the intermediate speaker In.'s voice (step S1301).
- Next, the target conversion function generation unit 102 converts the voice setB of the original speaker with the generated conversion function F(TTS(A)), and generates the converted Tr.setB (step S1302).
- It then performs learning based on the converted Tr.setB and the voice setB of the target speaker Tag.1, and generates a conversion function G1(Tr.(B)) for converting the intermediate speaker In.'s voice into the target speaker Tag.1's voice (step S1303).
- Similarly, the target conversion function generation unit 102 converts the voice setC of the original speaker with the generated conversion function F(TTS(A)), and generates the converted Tr.setC (step S1304).
- It then performs learning based on the converted Tr.setC and the voice setC of the target speaker Tag.2, and generates a conversion function G2(Tr.(C)) for converting the intermediate speaker In.'s voice into the target speaker Tag.2's voice (step S1305).
- Next, the intermediate conversion function generation unit 101 generates, based on the voice setA of the original speaker Src.1 and the voice setA of the intermediate speaker In., a conversion function F(Src.1(A)) for converting the original speaker Src.1's voice into the intermediate speaker In.'s voice (step S1306).
- Similarly, the intermediate conversion function generation unit 101 generates, based on the voice setA of the original speaker Src.2 and the voice setA of the intermediate speaker In., a conversion function F(Src.2(A)) for converting the original speaker Src.2's voice into the intermediate speaker In.'s voice (step S1307).
- In the conversion process, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.1 into the voice of the intermediate speaker In. using the conversion function F(Src.1(A)) (step S1308).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(Tr.(B)) or the conversion function G2(Tr.(C)) (step S1309).
- Likewise, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.2 into the voice of the intermediate speaker In. using the conversion function F(Src.2(A)) (step S1310).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(Tr.(B)) or the conversion function G2(Tr.(C)) (step S1311).
- In this pattern, the utterance content of the intermediate speaker and that of the target speaker can be non-parallel.
- This is because the content uttered by the TTS acting as the original speaker can be changed flexibly to match the utterance content of the target speaker.
- However, since the intermediate speaker In.'s speech is only one set (setA), when obtaining the conversion functions F for the original speakers Src.1 and Src.2 who hold the mobile terminals, the content spoken by Src.1 and Src.2 must be setA, the same content spoken by the intermediate speaker In.
- Next, the learning process and conversion process are described for the case where the original speaker's voices used for learning include multiple sets (setA, setB) uttered by TTS or by a person, and the intermediate speaker's voice consists of multiple sets (setA, setC, setD) uttered by TTS or by a person.
- In the learning process, the intermediate conversion function generation unit 101 performs learning based on the voice setA of the original speaker (TTS) and the voice setA of the intermediate speaker In., and generates a conversion function F(TTS(A)) for converting the original speaker's voice into the intermediate speaker In.'s voice (step S1401).
- Next, the target conversion function generation unit 102 generates the converted Tr.setA by converting the original speaker's voice setA with the conversion function F(TTS(A)) generated in step S1401 (step S1402).
- It then performs learning based on the converted Tr.setA and the voice setA of the target speaker Tag.1, and generates a conversion function G1(Tr.(A)) for converting the intermediate speaker's voice into the target speaker Tag.1's voice (step S1403).
- Similarly, the target conversion function generation unit 102 generates the converted Tr.setB by converting the original speaker's voice setB with the conversion function F(TTS(A)) (step S1404).
- It then performs learning based on the converted Tr.setB and the voice setB of the target speaker Tag.2, and generates a conversion function G2(Tr.(B)) for converting the intermediate speaker's voice into the target speaker Tag.2's voice (step S1405).
- Next, the intermediate conversion function generation unit 101 performs learning based on the voice setC of the original speaker Src.1 and the voice setC of the intermediate speaker In., and generates a conversion function F(Src.1(C)) for converting the original speaker Src.1's voice into the intermediate speaker In.'s voice (step S1406).
- Similarly, the intermediate conversion function generation unit 101 performs learning based on the voice setD of the original speaker Src.2 and the voice setD of the intermediate speaker In., and generates a conversion function F(Src.2(D)) for converting the original speaker Src.2's voice into the intermediate speaker In.'s voice (step S1407).
- In the conversion process, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.1 into the voice of the intermediate speaker In. using the conversion function F(Src.1(C)) (step S1408).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(Tr.(A)) or the conversion function G2(Tr.(B)) (step S1409).
- Likewise, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.2 into the voice of the intermediate speaker In. using the conversion function F(Src.2(D)) (step S1410).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(Tr.(A)) or the conversion function G2(Tr.(B)) (step S1411).
- In this pattern, the utterance contents of the original speaker and the target speaker, and of the intermediate speaker and the target speaker, can form non-parallel corpora at learning time.
- Since the TTS intermediate speaker can output any utterance content, the content that the original speakers Src.1 and Src.2 must utter to obtain their conversion functions F need not be fixed in advance.
- Likewise, when the original speaker is a TTS system, the target speaker's utterance content need not be fixed in advance.
- The following describes the case where the conversion function learning method is the pre-conversion feature conversion method.
- Unlike the post-conversion feature conversion method, in which the conversion function G is generated in consideration of the actual voice quality conversion processing procedure, in this method the conversion function F and the conversion function G are learned independently.
- This reduces the learning workload, but the accuracy of the converted voice quality is slightly lower.
- FIG. 14 shows the learning process and conversion process when the intermediate speaker's speech used for learning is a single set (setA).
- In the learning process, the intermediate conversion function generation unit 101 performs learning based on the voice setA of the original speaker Src.1 and the voice setA of the intermediate speaker In., and generates the conversion function F(Src.1(A)) (step S1501). Similarly, it performs learning based on the voice setA of the original speaker Src.2 and the voice setA of the intermediate speaker In., and generates the conversion function F(Src.2(A)) (step S1502).
- Next, the target conversion function generation unit 102 performs learning based on the voice setA of the intermediate speaker In. and the voice setA of the target speaker Tag.1, and generates the conversion function G1(In.(A)) (step S1503). Similarly, it performs learning based on the voice setA of the intermediate speaker In. and the voice setA of the target speaker Tag.2, and generates the conversion function G2(In.(A)) (step S1504).
- In the conversion process, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.1 into the voice of the intermediate speaker In. using the conversion function F(Src.1(A)) (step S1505).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(In.(A)) or the conversion function G2(In.(A)) (step S1506).
- Likewise, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.2 into the voice of the intermediate speaker In. using the conversion function F(Src.2(A)) (step S1507).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(In.(A)) or the conversion function G2(In.(A)) (step S1508).
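For contrast with the FIG. 10 sketch, in the pre-conversion variant G is trained directly on the intermediate speaker's recorded features; again, train() is a placeholder stand-in:

```python
# In the pre-conversion feature conversion method, G is learned from the
# intermediate speaker's *recorded* voice, not from speech converted by F.
def train(src_features, tgt_features):
    return ("model", tuple(src_features), tuple(tgt_features))  # placeholder model

inter_setA, tag1_setA = ["inter-utt"], ["tag1-utt"]   # parallel recordings
G1 = train(inter_setA, tag1_setA)   # steps S1503-S1504: no converted Tr.setA is needed
```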
- FIG. 15 shows the learning process and conversion process when the intermediate speaker's voice consists of multiple sets (setA, setB, setC, setD) uttered by TTS or by a person.
- In the learning process, the intermediate conversion function generation unit 101 performs learning based on the voice setA of the original speaker Src.1 and the voice setA of the intermediate speaker In., and generates the conversion function F(Src.1(A)) (step S1601). Similarly, it performs learning based on the voice setB of the original speaker Src.2 and the voice setB of the intermediate speaker In., and generates the conversion function F(Src.2(B)) (step S1602).
- Next, the target conversion function generation unit 102 performs learning based on the voice setC of the intermediate speaker In. and the voice setC of the target speaker Tag.1, and generates the conversion function G1(In.(C)) (step S1603). Similarly, it performs learning based on the voice setD of the intermediate speaker In. and the voice setD of the target speaker Tag.2, and generates the conversion function G2(In.(D)) (step S1604).
- In the conversion process, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.1 into the voice of the intermediate speaker In. using the conversion function F(Src.1(A)) (step S1605).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(In.(C)) or the conversion function G2(In.(D)) (step S1606).
- Likewise, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.2 into the voice of the intermediate speaker In. using the conversion function F(Src.2(B)) (step S1607).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(In.(C)) or the conversion function G2(In.(D)) (step S1608).
- When the intermediate speaker is a TTS system, the intermediate speaker can be made to produce speech of a predetermined voice quality semipermanently.
- Voice content matching the utterances of the original speaker and the target speaker can be output from the TTS, so neither person's utterance content is restricted.
- This enhances convenience, and conversion functions can be generated easily.
- In addition, the utterance contents of the original speaker and the target speaker can form a non-parallel corpus.
- FIG. 16 shows the learning process and conversion process when the target speakers' voices consist of multiple sets (here, setA and setB) uttered by TTS or by a person, and the intermediate speaker's voice consists of multiple sets (here, setA, setC, and setD) uttered by TTS or by a person.
- In the learning process, the target conversion function generation unit 102 performs learning based on the voice setA of the intermediate speaker In. and the voice setA of the target speaker Tag.1, and generates the conversion function G1(In.(A)) (step S1701).
- Similarly, it performs learning based on the voice setB of the intermediate speaker In. and the voice setB of the target speaker Tag.2, and generates the conversion function G2(In.(B)) (step S1702).
- Next, the intermediate conversion function generation unit 101 performs learning based on the voice setC of the original speaker Src.1 and the voice setC of the intermediate speaker In., and generates the conversion function F(Src.1(C)) (step S1703).
- Similarly, it performs learning based on the voice setD of the original speaker Src.2 and the voice setD of the intermediate speaker In., and generates the conversion function F(Src.2(D)) (step S1704).
- In the conversion process, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.1 into the voice of the intermediate speaker In. using the conversion function F(Src.1(C)) (step S1705).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(In.(A)) or the conversion function G2(In.(B)) (step S1706).
- Likewise, the intermediate voice quality conversion unit 211 converts arbitrary speech of the original speaker Src.2 into the voice of the intermediate speaker In. using the conversion function F(Src.2(D)) (step S1707).
- The target voice quality conversion unit 212 then converts the intermediate speaker In.'s voice into the voice of the target speaker Tag.1 or the target speaker Tag.2 using the conversion function G1(In.(A)) or the conversion function G2(In.(B)) (step S1708).
- In this pattern, the intermediate speaker's utterance content can be changed flexibly to match the utterance content of the original speaker and the target speaker, so conversion function learning can be performed flexibly.
- In addition, the utterance contents of the original speaker and the target speaker during learning can form a non-parallel corpus.
- The following describes the feature conversion method based on a Gaussian mixture model (GMM) (see, for example, A. Kain and M. W. Macon, "Spectral voice conversion for text-to-speech synthesis," Proc. ICASSP, pp. 285-288, Seattle, USA, May 1998).
- Let $p$ be the dimensionality of the feature vector and let $^\top$ denote transposition. The probability density of a speech feature $x$ is modeled by the GMM as $p(x) = \sum_{i=1}^{m} \alpha_i\, N(x; \mu_i, \Sigma_i)$, where $N(x; \mu_i, \Sigma_i)$ is the normal distribution of class $i$ with mean vector $\mu_i$ and covariance matrix $\Sigma_i$, and $\alpha_i$ is the weight of class $i$.
- The conversion function mapping a source feature $x$ to a target feature $y$ is expressed as $F(x) = \sum_{i=1}^{m} h_i(x)\left[\mu_i^{(y)} + \Sigma_i^{(yx)}\left(\Sigma_i^{(xx)}\right)^{-1}\left(x - \mu_i^{(x)}\right)\right]$, where $\mu_i^{(x)}$ and $\mu_i^{(y)}$ are the mean vectors of $x$ and $y$ in class $i$, $\Sigma_i^{(xx)}$ is the covariance matrix of $x$ in class $i$, and $\Sigma_i^{(yx)}$ is the cross-covariance matrix of $y$ and $x$ in class $i$.
- The posterior weight is $h_i(x) = \dfrac{\alpha_i\, N(x; \mu_i^{(x)}, \Sigma_i^{(xx)})}{\sum_{j=1}^{m} \alpha_j\, N(x; \mu_j^{(x)}, \Sigma_j^{(xx)})}$.
- The conversion parameters $(\alpha_i, \mu_i^{(x)}, \mu_i^{(y)}, \Sigma_i^{(xx)}, \Sigma_i^{(yx)})$ can be estimated with the well-known EM algorithm.
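The mapping above can be written compactly in code. This is a minimal numpy/scipy sketch under the assumption that the joint-GMM parameters have already been estimated; the parameter names are illustrative, not the patent's:

```python
import numpy as np
from scipy.stats import multivariate_normal

def convert_frame(x, alphas, mu_x, mu_y, sigma_xx, sigma_yx):
    """Map one source feature vector x (shape (p,)) to the target space.
    alphas: (m,) weights; mu_x, mu_y: (m, p) means;
    sigma_xx, sigma_yx: (m, p, p) covariance / cross-covariance matrices."""
    m = len(alphas)
    # Posterior weight h_i(x) of each mixture class given x.
    lik = np.array([alphas[i] * multivariate_normal.pdf(x, mean=mu_x[i], cov=sigma_xx[i])
                    for i in range(m)])
    h = lik / lik.sum()
    # Weighted sum of the per-class linear regressions.
    y = np.zeros_like(x)
    for i in range(m):
        y = y + h[i] * (mu_y[i] + sigma_yx[i] @ np.linalg.solve(sigma_xx[i], x - mu_x[i]))
    return y
```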
- For learning data, subsets of the ATR phonetically balanced sentences are used (see, for example, M. Abe, Y. Sagisaka, T. Umeda, and H. Kuwabara, "Japanese Speech Database User's Manual (Continuous Speech Data)," ATR Technical Report, TR-I-0166, 1990).
- A subset of 50 sentences not included in the learning data is used as evaluation data.
- For spectral analysis, STRAIGHT analysis is used (see, for example, H. Kawahara et al., "Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: possible role of a repetitive structure in sounds," Speech Communication, Vol. 27, No. 3-4, pp. 187-207, 1999).
- The sampling frequency is 16 kHz and the frame shift is 5 ms.
- As the spectral features, the 1st through 41st cepstral coefficients converted from the STRAIGHT spectrum are used.
- The number of GMM mixture components is 64.
- Cepstral distortion is used as the evaluation measure of conversion accuracy. The distortion is computed between the cepstrum converted from the original speaker's voice and the cepstrum of the target speaker's voice.
- The cepstral distortion is given by equation (1); smaller values indicate better conversion: $\mathrm{CD}\,[\mathrm{dB}] = \frac{10}{\ln 10}\sqrt{2\sum_{i=1}^{P}\left(c_i^{(x)} - c_i^{(y)}\right)^2} \quad (1)$
- Here $c_i^{(x)}$ is the $i$-th cepstral coefficient of the target speaker's voice, $c_i^{(y)}$ is the $i$-th cepstral coefficient of the converted voice, and $P$ is the order of the cepstral coefficients.
- FIG. 17 shows a graph of the experimental results.
- The vertical axis of the graph is cepstral distortion; each value is the average over all frames of the per-frame distortion obtained with equation (1).
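As a small illustration, the per-frame distortion of equation (1) and its frame average can be computed as follows; the array shapes are assumptions for the sketch:

```python
# Illustrative computation of the average cepstral distortion in equation (1).
import numpy as np

def cepstral_distortion(target_cep, converted_cep):
    """target_cep, converted_cep: (frames, P) arrays of cepstral coefficients
    (0th coefficient excluded). Returns the mean per-frame distortion in dB."""
    diff = target_cep - converted_cep                    # (frames, P)
    per_frame = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return per_frame.mean()
```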
- (a) represents the distortion between the cepstrum of the original speaker (A, B) and the cepstrum of the target speaker T.
- (b) corresponds to the conventional method: it represents the distortion between the cepstrum of the target speaker T and the cepstrum converted from the original speaker (A, B) using a conversion function learned directly between the original speaker (A, B) and the target speaker T.
- (c) and (d) apply the method of the present application. Specifically, in (c), let F(A) be the intermediate conversion function from original speaker A to intermediate speaker I, and let G(A) be the target conversion function learned between the speech generated from original speaker A using F(A) and the speech of target speaker T.
- Likewise, let F(B) be the intermediate conversion function from original speaker B to intermediate speaker I, and let G(B) be the target conversion function learned between the speech generated from original speaker B using F(B) and the speech of target speaker T.
- (c) then represents the distortion between the cepstrum of target speaker T and the cepstrum obtained by first converting original speaker A's cepstrum to that of intermediate speaker I using F(A) and then to target speaker T using G(A) (original speaker A → target speaker T).
- (d) represents the case where, in the setup of (c), a target conversion function G other than one's own is used: it denotes the distortion between the cepstrum of target speaker T and the cepstrum converted from original speaker A to intermediate speaker I using F(A) and then to target speaker T using G(B) (original speaker A → target speaker T).
- Since the conventional method (b) and the method of the present application (c) show approximately the same cepstral distortion, conversion through an intermediate speaker maintains the same level of quality as the conventional method. Furthermore, since the cepstral distortion is almost the same for the conventional method (b) and the method of the present application (d), when converting through an intermediate speaker, a single target conversion function G from the intermediate speaker to each target speaker can be shared across conversions originating from any original speaker while still maintaining quality comparable to the conventional method.
- As described above, the server 10 generates the conversion functions F for converting the voice of each of one or more original speakers into the voice of one intermediate speaker,
- and the conversion functions G for converting the voice of that one intermediate speaker into the voice of each of one or more target speakers.
- Consequently, if a conversion function for converting each original speaker's voice into the intermediate speaker's voice and a conversion function for converting the intermediate speaker's voice into each target speaker's voice are prepared, the voice of every original speaker can be converted into the voice of every target speaker. In other words, voice quality conversion can be performed with fewer conversion functions than in the prior art, in which a conversion function converting each original speaker's voice into each target speaker's voice must be prepared for every pair. Therefore, conversion functions can be generated with a small learning burden, and voice quality conversion can be performed using them.
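- As an illustrative count (the numbers are not from the patent): with $m$ original speakers and $n$ target speakers, direct pairwise learning requires $m \times n$ conversion functions, whereas routing through one intermediate speaker requires only $m + n$; for $m = 100$ and $n = 10$, that is 1000 functions versus 110.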
- Further, a user who performs voice quality conversion of his or her own voice using the mobile terminal 20 need only create and store one conversion function F for converting his or her voice into the intermediate speaker's voice.
- By then downloading from the server 10 the conversion function G for converting the intermediate speaker's voice into the voice of the desired target speaker, the user's voice can easily be converted into that target speaker's voice.
- Also, the target conversion function generation unit 102 can generate, as the target conversion function, a function for converting speech that has already been converted by the conversion function F into the target speaker's voice. Compared with generating a conversion function for converting directly collected intermediate-speaker speech into the target speaker's speech, this yields a conversion function tailored to the actual voice quality conversion process, so the voice quality accuracy during actual conversion can be improved.
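- A minimal sketch of this arrangement (illustrative names, not the patent's implementation): the target function G is trained on source features that have already passed through F, with dtw_align() standing in for a hypothetical time-alignment helper and train_gmm_mapping() taken from the sketch above.

```python
# Sketch: learn the target conversion function G from F-converted speech.
# F is a callable over feature sequences (e.g., lambda X: convert(gmm_F, X, p));
# dtw_align() is a hypothetical alignment helper, not from the patent.
def learn_target_function(F, source_X, target_Y):
    X_inter = F(source_X)                             # source features after intermediate conversion
    X_inter, target_Y = dtw_align(X_inter, target_Y)  # time-align with the target speaker's features
    return train_gmm_mapping(X_inter, target_Y)       # fit G on the aligned pair
```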
- Also, by setting the voice of the intermediate speaker to the voice output from a TTS, the TTS can be made to utter any content in the same voice. For this reason, there are no restrictions on the utterance content of the original speaker and the target speaker during learning, and the conversion function can be learned easily, without the trouble of collecting specific speech content from the original speaker and the target speaker.
- Also, by using a TTS as the original speaker, any utterance content matching the recorded content of the target speaker can be uttered by the TTS.
- The conversion function G can therefore be learned easily without being restricted by the target speaker's utterance content.
- In the above embodiment, it has been described that the server 10 includes the intermediate conversion function generation unit 101 and the target conversion function generation unit 102, and that the mobile terminal 20 includes the intermediate voice quality conversion unit 211 and the target voice quality conversion unit 212. However, the device configuration of the voice quality conversion client-server system 1 and the devices constituting it are not limited to this; the intermediate conversion function generation unit 101, the target conversion function generation unit 102, the intermediate voice quality conversion unit 211, and the target voice quality conversion unit 212 may be arranged in any manner.
- For example, one apparatus may include all of the functions of the intermediate conversion function generation unit 101, the target conversion function generation unit 102, the intermediate voice quality conversion unit 211, and the target voice quality conversion unit 212.
- Alternatively, the mobile terminal 20 may include the intermediate conversion function generation unit 101, and the server 10 may include the target conversion function generation unit 102. In this case, a program for learning and generating the conversion function F needs to be stored in the nonvolatile memory of the mobile terminal 20.
- Fig. 18(a) shows the processing procedure when the utterance content of the original speaker X is fixed.
- In this case, the speech of the intermediate speaker i uttering that fixed content is stored in advance in the nonvolatile memory of the mobile terminal 20.
- Learning is performed based on the voice of the original speaker X collected by the microphone of the mobile terminal 20 and the stored voice of the intermediate speaker i (step S601), and the conversion function F(X) is acquired (step S602).
- Fig. 18(b) shows the processing procedure when the utterance content of the original speaker X is free.
- In this case, the mobile terminal 20 is equipped with a speech recognition device that converts speech into text and a TTS that converts text into speech.
- First, the speech recognition device performs speech recognition on the voice of the original speaker X collected by the microphone of the mobile terminal 20, converts the utterance content of the original speaker X into text (step S701), and inputs the text into the TTS. The TTS generates the speech of the intermediate speaker i(TTS) from the text (step S702).
- The intermediate conversion function generation unit 101 then performs learning based on the voice of the intermediate speaker i(TTS) and the voice of the original speaker X (step S703), and acquires the conversion function F(X) (step S704).
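- A rough sketch of this procedure (asr_transcribe(), tts_synthesize() and extract_cepstra() are hypothetical stand-ins for the terminal's recognizer, TTS and feature extractor; dtw_align() and train_gmm_mapping() are as in the earlier sketches; none of these names come from the patent):

```python
# Sketch of the Fig. 18(b) flow; all helper names are illustrative stand-ins.
def learn_conversion_function_free_content(user_audio, sample_rate=16000):
    text = asr_transcribe(user_audio)             # step S701: recognize the user's speech as text
    tts_audio = tts_synthesize(text)              # step S702: re-utter the text in the voice of intermediate speaker i(TTS)
    X = extract_cepstra(user_audio, sample_rate)  # source-speaker features
    Y = extract_cepstra(tts_audio, sample_rate)   # intermediate-speaker features
    X, Y = dtw_align(X, Y)                        # align the two feature sequences in time
    return train_gmm_mapping(X, Y)                # steps S703-S704: learn F(X)
```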
- In the above embodiment, the voice quality conversion unit 21 has been described as including the intermediate voice quality conversion unit 211, which converts the original speaker's voice into the intermediate speaker's voice using the conversion function F, and the target voice quality conversion unit 212, which converts the intermediate speaker's voice into the target speaker's voice using the conversion function G. This is only an example; the voice quality conversion unit 21 may instead have a function that directly converts the original speaker's voice into the target speaker's voice using a single function synthesized from the conversion function F and the conversion function G, as sketched below.
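- A minimal sketch of that variant, assuming F and G are callables over feature sequences (for instance, partial applications of convert() from the earlier sketch):

```python
# Sketch: collapse the two-stage conversion into one callable; the synthesized
# function maps source-speaker features directly to target-speaker features.
def synthesize(F, G):
    return lambda x: G(F(x))

# usage (names illustrative): direct = synthesize(F_src1, G_tag1); y = direct(x)
```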
- Information about the sender's (speech inputting person's) conversion function, such as an index for accurately identifying the sender's conversion function or the cluster of conversion functions to which the sender belongs, may also be transmitted.
- In the above embodiment, a TTS is used as the speech synthesizer; however, the synthesizer is not limited to a TTS, and any device that converts input utterance content into speech of a predetermined voice quality and outputs it may be used.
- In the above embodiment, two-stage voice quality conversion through conversion to the voice of a single intermediate speaker has been described. However, the invention is not limited to this; multi-stage voice quality conversion through conversion to the voices of a plurality of intermediate speakers may also be used.
- The present invention can be used for voice quality conversion services that convert many users' voices into the voices of various target speakers with less conversion learning and fewer conversion functions.
Landscapes
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Multimedia (AREA)
- Quality & Reliability (AREA)
- Signal Processing (AREA)
- Telephonic Communication Services (AREA)
- Electrically Operated Instructional Devices (AREA)
- Circuit For Audible Band Transducer (AREA)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/085,922 US8099282B2 (en) | 2005-12-02 | 2006-11-28 | Voice conversion system |
EP06833471A EP2017832A4 (en) | 2005-12-02 | 2006-11-28 | VOICE QUALITY CONVERSION SYSTEM |
JP2007547942A JP4928465B2 (ja) | 2005-12-02 | 2006-11-28 | Voice quality conversion system |
CN2006800453611A CN101351841B (zh) | 2005-12-02 | 2006-11-28 | Voice quality conversion system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2005-349754 | 2005-12-02 | ||
JP2005349754 | 2005-12-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007063827A1 true WO2007063827A1 (ja) | 2007-06-07 |
Family
ID=38092160
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2006/323667 WO2007063827A1 (ja) | Voice quality conversion system | 2005-12-02 | 2006-11-28 |
Country Status (6)
Country | Link |
---|---|
US (1) | US8099282B2 (ja) |
EP (1) | EP2017832A4 (ja) |
JP (1) | JP4928465B2 (ja) |
KR (1) | KR101015522B1 (ja) |
CN (1) | CN101351841B (ja) |
WO (1) | WO2007063827A1 (ja) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8131550B2 (en) * | 2007-10-04 | 2012-03-06 | Nokia Corporation | Method, apparatus and computer program product for providing improved voice conversion |
ES2796493T3 (es) * | 2008-03-20 | 2020-11-27 | Fraunhofer Ges Forschung | Apparatus and method for converting an audio signal into a parameterized representation, apparatus and method for modifying a parameterized representation, apparatus and method for synthesizing a parameterized representation of an audio signal |
US9058818B2 (en) * | 2009-10-22 | 2015-06-16 | Broadcom Corporation | User attribute derivation and update for network/peer assisted speech coding |
US9798653B1 (en) * | 2010-05-05 | 2017-10-24 | Nuance Communications, Inc. | Methods, apparatus and data structure for cross-language speech adaptation |
JP5961950B2 (ja) * | 2010-09-15 | 2016-08-03 | ヤマハ株式会社 | 音声処理装置 |
CN103856390B (zh) * | 2012-12-04 | 2017-05-17 | Tencent Technology (Shenzhen) Co., Ltd. | Instant messaging method and system, communication information processing method, and terminal |
US9613620B2 (en) | 2014-07-03 | 2017-04-04 | Google Inc. | Methods and systems for voice conversion |
US10614826B2 (en) * | 2017-05-24 | 2020-04-07 | Modulate, Inc. | System and method for voice-to-voice conversion |
US20190362737A1 (en) * | 2018-05-25 | 2019-11-28 | i2x GmbH | Modifying voice data of a conversation to achieve a desired outcome |
CN109377986B (zh) * | 2018-11-29 | 2022-02-01 | Sichuan Changhong Electric Co., Ltd. | Non-parallel corpus speech personalized conversion method |
CN110085254A (zh) | 2019-04-22 | 2019-08-02 | Nanjing University of Posts and Telecommunications | Many-to-many voice conversion method based on beta-VAE and i-vector |
CN110071938B (zh) * | 2019-05-05 | 2021-12-03 | Guangzhou Huya Information Technology Co., Ltd. | Avatar interaction method and apparatus, electronic device, and readable storage medium |
US11854562B2 (en) * | 2019-05-14 | 2023-12-26 | International Business Machines Corporation | High-quality non-parallel many-to-many voice conversion |
WO2021030759A1 (en) | 2019-08-14 | 2021-02-18 | Modulate, Inc. | Generation and detection of watermark for real-time voice conversion |
KR20230130608A (ko) | 2020-10-08 | 2023-09-12 | Modulate, Inc. | Multi-stage adaptive system for content moderation |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1993018505A1 (en) * | 1992-03-02 | 1993-09-16 | The Walt Disney Company | Voice transformation system |
FI96247C (fi) * | 1993-02-12 | 1996-05-27 | Nokia Telecommunications Oy | Menetelmä puheen muuntamiseksi |
JP3354363B2 (ja) | 1995-11-28 | 2002-12-09 | Sanyo Electric Co., Ltd. | Voice conversion device |
US6336092B1 (en) * | 1997-04-28 | 2002-01-01 | Ivl Technologies Ltd | Targeted vocal transformation |
JPH1185194A (ja) | 1997-09-04 | 1999-03-30 | Atr Onsei Honyaku Tsushin Kenkyusho:Kk | Voice quality conversion speech synthesis device |
TW430778B (en) * | 1998-06-15 | 2001-04-21 | Yamaha Corp | Voice converter with extraction and modification of attribute data |
IL140082A0 (en) * | 2000-12-04 | 2002-02-10 | Sisbit Trade And Dev Ltd | Improved speech transformation system and apparatus |
CN1369834B (zh) * | 2001-01-24 | 2010-04-28 | Matsushita Electric Industrial Co., Ltd. | Voice conversion device |
CN1156819C (zh) * | 2001-04-06 | 2004-07-07 | International Business Machines Corporation | Method for generating personalized speech from text |
JP2003157100A (ja) * | 2001-11-22 | 2003-05-30 | Nippon Telegr & Teleph Corp <Ntt> | Voice communication method and apparatus, and voice communication program |
US7275032B2 (en) * | 2003-04-25 | 2007-09-25 | Bvoice Corporation | Telephone call handling center where operators utilize synthesized voices generated or modified to exhibit or omit prescribed speech characteristics |
FR2868587A1 (fr) * | 2004-03-31 | 2005-10-07 | France Telecom | Method and system for fast conversion of a voice signal |
US8666746B2 (en) * | 2004-05-13 | 2014-03-04 | At&T Intellectual Property Ii, L.P. | System and method for generating customized text-to-speech voices |
EP1846918B1 (fr) * | 2005-01-31 | 2009-02-25 | France Télécom | Method for estimating a voice conversion function |
US20080161057A1 (en) * | 2005-04-15 | 2008-07-03 | Nokia Corporation | Voice conversion in ring tones and other features for a communication device |
2006
- 2006-11-28 US US12/085,922 patent/US8099282B2/en not_active Expired - Fee Related
- 2006-11-28 EP EP06833471A patent/EP2017832A4/en not_active Withdrawn
- 2006-11-28 CN CN2006800453611A patent/CN101351841B/zh not_active Expired - Fee Related
- 2006-11-28 JP JP2007547942A patent/JP4928465B2/ja not_active Expired - Fee Related
- 2006-11-28 WO PCT/JP2006/323667 patent/WO2007063827A1/ja active Application Filing
- 2006-11-28 KR KR1020087012959A patent/KR101015522B1/ko not_active IP Right Cessation
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07104792A (ja) * | 1993-10-01 | 1995-04-21 | Nippon Telegr & Teleph Corp <Ntt> | 声質変換方法 |
JP2002182683A (ja) * | 2000-12-15 | 2002-06-26 | Sharp Corp | 話者特徴推定装置および話者特徴推定方法、クラスタモデル作成装置、音声認識装置、音声合成装置、並びに、プログラム記録媒体 |
JP2002215198A (ja) | 2001-01-16 | 2002-07-31 | Sharp Corp | 声質変換装置および声質変換方法およびプログラム記憶媒体 |
JP2002244689A (ja) * | 2001-02-22 | 2002-08-30 | Rikogaku Shinkokai | 平均声の合成方法及び平均声からの任意話者音声の合成方法 |
JP2005266349A (ja) * | 2004-03-18 | 2005-09-29 | Nec Corp | 声質変換装置および声質変換方法ならびに声質変換プログラム |
Non-Patent Citations (4)
Title |
---|
A. KAIN; M. W. MACON: "Spectral voice conversion for text-to-speech synthesis", PROC. ICASSP, May 1998 (1998-05-01), pages 285 - 288 |
H. KAWAHARA ET AL.: "Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: possible role of a repetitive structure in sounds", SPEECH COMMUNICATION, vol. 27, no. 3-4, 1999, pages 187 - 207 |
M. ABE; Y. SAGISAKA; T. UMEDA; H. KUWABARA: "Japanese speech database for research: user's manual (speed-reading speech data)", ATR TECHNICAL REPORT, TR-I-0166, 1990 |
See also references of EP2017832A4 |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008058696A (ja) * | 2006-08-31 | 2008-03-13 | Nara Institute Of Science & Technology | 声質変換モデル生成装置及び声質変換システム |
US20090094031A1 (en) * | 2007-10-04 | 2009-04-09 | Nokia Corporation | Method, Apparatus and Computer Program Product for Providing Text Independent Voice Conversion |
US20140249815A1 (en) * | 2007-10-04 | 2014-09-04 | Core Wireless Licensing, S.a.r.l. | Method, apparatus and computer program product for providing text independent voice conversion |
JP2010049196A (ja) * | 2008-08-25 | 2010-03-04 | Toshiba Corp | 声質変換装置及び方法、音声合成装置及び方法 |
JP2017003622A (ja) * | 2015-06-04 | 2017-01-05 | 国立大学法人神戸大学 | 声質変換方法および声質変換装置 |
JP2019109306A (ja) * | 2017-12-15 | 2019-07-04 | 日本電信電話株式会社 | 音声変換装置、音声変換方法及びプログラム |
JP2020056996A (ja) * | 2018-08-16 | 2020-04-09 | 國立臺灣科技大學 | 音色選択可能なボイス再生システム、その再生方法、およびコンピュータ読み取り可能な記録媒体 |
Also Published As
Publication number | Publication date |
---|---|
KR20080070725A (ko) | 2008-07-30 |
KR101015522B1 (ko) | 2011-02-16 |
EP2017832A4 (en) | 2009-10-21 |
US8099282B2 (en) | 2012-01-17 |
EP2017832A1 (en) | 2009-01-21 |
JPWO2007063827A1 (ja) | 2009-05-07 |
JP4928465B2 (ja) | 2012-05-09 |
US20100198600A1 (en) | 2010-08-05 |
CN101351841A (zh) | 2009-01-21 |
CN101351841B (zh) | 2011-11-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2007063827A1 (ja) | Voice quality conversion system | |
CN111899719B (zh) | Method, apparatus, device and medium for generating audio | |
US8898055B2 (en) | Voice quality conversion device and voice quality conversion method for converting voice quality of an input speech using target vocal tract information and received vocal tract information corresponding to the input speech | |
US9430467B2 (en) | Mobile speech-to-speech interpretation system | |
US10186252B1 (en) | Text to speech synthesis using deep neural network with constant unit length spectrogram | |
EP2126900B1 (en) | Method and system for creating entries in a speech recognition lexicon | |
TW394925B (en) | A vocoder-based voice recognizer | |
US6119086A (en) | Speech coding via speech recognition and synthesis based on pre-enrolled phonetic tokens | |
US20070213987A1 (en) | Codebook-less speech conversion method and system | |
JP2000504849A (ja) | Speech encoding, reconstruction and recognition using acoustics and electromagnetic waves | |
JPH10260692A (ja) | Method of speech recognition-synthesis encoding/decoding and speech encoding/decoding system | |
CN113470622B (zh) | Conversion method and apparatus capable of converting arbitrary speech into multiple voices | |
US20070129946A1 (en) | High quality speech reconstruction for a dialog method and system | |
JP7339151B2 (ja) | Speech synthesis device, speech synthesis program, and speech synthesis method | |
WO1997007498A1 (fr) | Speech signal processing unit | |
EP2541544A1 (en) | Voice sample tagging | |
JP2020013008A (ja) | Speech processing device, speech processing program, and speech processing method | |
JP3914612B2 (ja) | Communication system | |
JP2003122395A (ja) | Speech recognition system, terminal, program, and speech recognition method | |
JP3465334B2 (ja) | Spoken dialogue device and spoken dialogue method | |
JP2023014765A (ja) | Speech synthesis device, speech synthesis program and speech synthesis method, and voice conversion device, voice conversion program and voice conversion method | |
WO2014203329A1 (ja) | Voice response device and response voice generation method | |
Zaim | Two channel adaptive speech enhancement | |
JP2002287791A (ja) | Speech-recognition-based intelligent dialogue device using an expert system, and method therefor | |
JP2002099298A (ja) | Speech recognition system | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 200680045361.1; Country of ref document: CN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| ENP | Entry into the national phase | Ref document number: 2007547942; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 2006833471; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 1020087012959; Country of ref document: KR |
| WWE | Wipo information: entry into national phase | Ref document number: 12085922; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |