WO2022237665A1 - Speech synthesis method and apparatus, electronic device, and storage medium - Google Patents

Speech synthesis method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022237665A1
Authority
WO
WIPO (PCT)
Prior art keywords
text
identification information
emotion
target
synthesized
Prior art date
Application number
PCT/CN2022/091348
Other languages
English (en)
French (fr)
Inventor
吴鹏飞
潘俊杰
马泽君
Original Assignee
北京有竹居网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京有竹居网络技术有限公司
Publication of WO2022237665A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Definitions

  • Embodiments of the present disclosure relate to the field of computer technology, for example, to a speech synthesis method, device, electronic equipment, and storage medium.
  • In the field of speech synthesis, and particularly in the generation of audiobooks, emotion transfer is a technique of great practical value.
  • If emotion transfer can be realized between different authorized recordings of the same speaker, the speaker only needs to record part of the emotional speech to enable synthesis of that speaker's speech with different emotions; if emotion transfer can be realized between different authorized speakers, the emotion in the speech of a speaker with strong emotional performance ability can be transferred to a speaker with weaker emotional performance ability, so that speech with different emotions can be synthesized for the latter.
  • In that case, an audiobook in which an authorized speaker reads the corresponding sentences of a novel with emotions matching the novel's scenes can be generated directly from that speaker's existing authorized speech.
  • Embodiments of the present disclosure provide a speech synthesis method, device, electronic device and storage medium, so as to realize speech synthesis of different speakers with different emotions.
  • In a first aspect, an embodiment of the present disclosure provides a speech synthesis method, including: acquiring a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion; and performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, the target speech having the speech features of the target character and the emotional features of the target emotion.
  • the embodiment of the present disclosure also provides a speech synthesis device, including:
  • An acquisition module configured to acquire the text to be synthesized, the character identification information of the target person, and the emotion identification information of the target emotion;
  • a synthesis module configured to perform speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, the target speech having the speech features of the target character and the emotional features of the target emotion.
  • an embodiment of the present disclosure also provides an electronic device, including:
  • a memory configured to store at least one program,
  • wherein, when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the speech synthesis method according to the embodiments of the present disclosure.
  • the embodiments of the present disclosure further provide a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the speech synthesis method as described in the embodiments of the present disclosure is implemented.
  • FIG. 1 is a schematic flowchart of a speech synthesis method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of another speech synthesis method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of a speech synthesis model provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a model structure during training of a speech synthesis model provided by an embodiment of the present disclosure
  • FIG. 5 is a structural block diagram of a speech synthesis device provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the term “comprise” and its variations are open-ended, i.e., “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • FIG. 1 is a schematic flowchart of a speech synthesis method provided by an embodiment of the present disclosure.
  • the method can be executed by a speech synthesis device, wherein the device can be implemented by software and/or hardware, and can be configured in an electronic device, typically, a mobile phone or a tablet computer.
  • The speech synthesis method provided by the embodiments of the present disclosure is suitable for scenarios of synthesizing speech of different authorized characters with different emotions. As shown in FIG. 1, the speech synthesis method provided by this embodiment may include:
  • S101: Acquire the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion.
  • The text to be synthesized can be understood as the text whose corresponding speech is to be synthesized, and it is obtained after authorization by the user.
  • The target character may be the authorized speaker of the text to be synthesized, that is, the person whose voice characteristics the synthesized, authorized speech is intended to have.
  • the target emotion may be the emotion adopted by the authorized target person when speaking the text to be synthesized (or one or more sentences in the text to be synthesized), such as happy, neutral, sad or angry.
  • The character identification information of the target character can be information used to uniquely identify the speaker of the text to be synthesized, such as the speaker's name, character ID, or character code.
  • The emotion identification information of the target emotion can be information used to uniquely identify the emotion adopted by the speaker of the text to be synthesized when reading the text to be synthesized, such as the emotion's name, emotion ID, or emotion code.
  • the person identification information of the target person and the emotion identification information of the target emotion can be input by the user when the text to be synthesized needs to be synthesized, or can be preset by the publisher of the text to be synthesized or the provider of the target voice.
  • In an exemplary scenario, when an authorized user wants to synthesize a piece of speech, the user can input the text to be synthesized corresponding to the speech, and select or input the character identification information of the authorized speaker of the speech and the emotion identification information of the emotion that the authorized speech should carry; correspondingly, the electronic device can obtain the text input by the user as the text to be synthesized, obtain the character identification information selected or input by the user as the character identification information of the target character, and obtain the emotion identification information selected or input by the user as the emotion identification information of the target emotion.
  • In another scenario, when a user reads a text to be synthesized (such as an article) and wants to listen to the speech of that text, the user can input or select the character identification information of the authorized speaker of the text to be synthesized and the emotion identification information of the emotion carried in the authorized speech; correspondingly, the electronic device can obtain the character identification information selected or input by the user as the character identification information of the target character, and obtain the emotion identification information selected or input by the user as the emotion identification information of the target emotion.
  • In yet another scenario, a novel provider can preset the emotion that each sentence in the novel it provides to users should carry; thus, when a user wants to read the novel by listening to speech, the user can set the authorized speaker corresponding to each character in the novel; correspondingly, the electronic device can obtain the authorized speaker corresponding to each character in the novel as set by the user, and can successively use the character identification information of the authorized speaker corresponding to each sentence in the novel text as the character identification information of the target character of that sentence, and use the emotion identification information of the emotion corresponding to that sentence as the emotion identification information of the target emotion, so as to synthesize the speech corresponding to each sentence in the novel.
  • Alternatively, when an audiobook developer wants to generate an audiobook of a certain novel, the developer can set the speaker of each sentence in the novel and the emotion each sentence should carry; when the trigger operation of generating the audiobook or the trigger operation of listening to the audiobook of the novel is received, the character identification information of the speaker corresponding to each sentence in the novel text can be used in turn as the character identification information of the target character of that sentence, and the emotion identification information of the emotion corresponding to that sentence can be used as the emotion identification information of the target emotion, so as to synthesize the speech corresponding to each sentence in the novel.
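The per-sentence configuration described in these scenarios can be thought of as a mapping from novel characters to authorized speaker IDs plus an emotion label attached to each sentence. The sketch below is a minimal Python illustration of that idea only; the field names, IDs, and schema are hypothetical and not part of the disclosed method.

```python
from dataclasses import dataclass

@dataclass
class SentenceConfig:
    """One sentence of the novel with its preset labels (hypothetical schema)."""
    text: str          # the sentence to be synthesized
    character: str     # novel character who speaks the sentence
    emotion_id: str    # emotion preset by the novel provider, e.g. "happy"

# Mapping set by the user: novel character -> authorized speaker (person) ID.
speaker_for_character = {
    "narrator": "speaker_001",
    "Alice": "speaker_007",
}

sentences = [
    SentenceConfig("It was a quiet morning.", "narrator", "neutral"),
    SentenceConfig("\"We finally made it!\"", "Alice", "happy"),
]

# For each sentence, look up the target person ID and target emotion ID that
# would be passed to the speech synthesis step.
for s in sentences:
    person_id = speaker_for_character[s.character]
    print(s.text, person_id, s.emotion_id)
```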
  • S102 Perform speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, where the target speech has speech features of the target character and emotional features of the target emotion.
  • The target speech can be the speech obtained by performing speech synthesis on the text to be synthesized (or one or more sentences in the text to be synthesized); the target speech has the speech features of the target character and the emotional features of the target emotion, that is, the target speech can be regarded as speech in which the target character authorized by the user reads the text to be synthesized (or one or more sentences in the text to be synthesized) with the target emotion authorized by the user.
  • After the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion are obtained, speech synthesis can be performed on the text to be synthesized according to the character identification information of the target character and the emotion identification information of the target emotion.
  • For example, the text feature information of the text to be synthesized is first determined, the speech feature information of the target character (such as a speech feature vector) is determined according to the character identification information of the target character, and the emotional feature information of the target emotion (such as an emotional feature vector) is determined according to the emotion identification information of the target emotion.
  • Then, based on the text feature information, the speech feature information, and the emotional feature information, the target speech of the text to be synthesized is generated, that is, the target speech spoken by the target character with the target emotion is synthesized.
  • The determined speech feature information of the target character can be speech feature information in speech whose use has been authorized by the target character, and the target emotion carried in the generated target speech read by the target character can be an emotion the target character has authorized to be carried.
  • In one implementation, performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain the target speech includes: determining, through a pre-trained speech synthesis model, the speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information; and inputting the speech spectrum sequence into a vocoder to perform speech synthesis on the speech spectrum sequence to obtain the target speech.
  • the speech spectrum sequence of the text to be synthesized can be the spectrum sequence of the target speech to be synthesized, which can be the Mel spectrum sequence of the target speech, so as to ensure that the target speech synthesized based on the spectrum sequence can be more in line with human hearing habits.
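  • For reference (general signal-processing background rather than part of this disclosure), a commonly used mel-scale mapping converts a frequency f in hertz to mels as m = 2595 * log10(1 + f / 700), which compresses higher frequencies in a way that approximates human pitch perception.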
  • the speech spectrum sequence of the text to be synthesized can be generated by using the pre-trained speech synthesis model, and the speech spectrum sequence can be converted into target speech by a vocoder.
  • For example, the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion can be input into the pre-trained speech synthesis model; the text feature information of the text to be synthesized, the speech feature information of the target character, and the emotional feature information of the target emotion are determined through the pre-trained speech synthesis model; the speech spectrum sequence of the text to be synthesized is generated according to the text feature information, the speech feature information, and the emotional feature information; and the speech spectrum sequence is input into the vocoder, and the target speech of the text to be synthesized is generated by the vocoder.
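As a concrete illustration of this two-stage pipeline (acoustic model producing a spectrum sequence, followed by a vocoder producing the waveform), the following PyTorch-style sketch shows the data flow only. The `acoustic_model` and `vocoder` interfaces are hypothetical placeholders, not the disclosed implementation.

```python
import torch

def synthesize(phoneme_ids, speaker_id, emotion_id, acoustic_model, vocoder):
    """Two-stage synthesis: acoustic model -> mel spectrogram -> vocoder -> waveform.

    phoneme_ids: LongTensor [T_text] of phoneme indices for the text to be synthesized.
    speaker_id / emotion_id: integer identification information of the target
    character and the target emotion.
    acoustic_model / vocoder: pre-trained modules (hypothetical interfaces).
    """
    with torch.no_grad():
        # Stage 1: predict the speech (mel) spectrum sequence of the text.
        mel = acoustic_model(phoneme_ids.unsqueeze(0),
                             torch.tensor([speaker_id]),
                             torch.tensor([emotion_id]))   # [1, T_frames, n_mels]
        # Stage 2: convert the spectrum sequence into the target speech waveform.
        waveform = vocoder(mel)                             # [1, n_samples]
    return waveform.squeeze(0)
```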
  • In one implementation, the text to be synthesized is a novel text, and acquiring the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion includes: determining, according to the arrangement order of at least one sentence to be synthesized in the text to be synthesized, each sentence to be synthesized as the current sentence in turn, and acquiring the current character identification information and the current emotion identification information of the current sentence to be synthesized.
  • Performing speech synthesis on the text to be synthesized to obtain the target speech includes: performing speech synthesis on the current sentence to be synthesized based on the current character identification information and the current emotion identification information to obtain the target speech of the current sentence.
  • The current sentence to be synthesized can be the sentence in the text to be synthesized that needs to be synthesized at the current moment; correspondingly, the current character identification information can be the character identification information of the target character of the current sentence to be synthesized, that is, the character identification information of the authorized speaker of the current sentence to be synthesized; the current emotion identification information can be the emotion identification information of the target emotion corresponding to the current sentence to be synthesized, that is, the emotion identification information of the emotion that the current sentence to be synthesized should carry.
  • Since a novel contains sentences such as dialogues of multiple characters and narration, the authorized speakers and/or emotions corresponding to different sentences may differ. Therefore, when the text to be synthesized is a novel text, the target character and target emotion corresponding to each sentence in the text to be synthesized can be determined sentence by sentence, and speech synthesis can then be performed.
  • For example, when performing speech synthesis on a novel text, the first sentence of the novel text is first determined as the current sentence, the current character identification information and current emotion identification information of the current sentence are obtained, and speech synthesis is performed on the current sentence to be synthesized based on the current character identification information and the current emotion identification information to obtain the target speech of the current sentence; the next sentence to be synthesized that follows and is adjacent to the current sentence to be synthesized in the novel text is then determined as the current sentence to be synthesized, and the operation of obtaining the current character identification information and current emotion identification information of the current sentence is performed again, until there is no next sentence to be synthesized. In this way, speech synthesis of the novel text can be realized, and an audiobook of the novel text can be obtained.
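The sentence-by-sentence procedure for a novel text can be sketched as a simple loop over the sentences in reading order. This is only an illustration of the control flow; the `synthesize` callable is a hypothetical stand-in for the synthesis step described above.

```python
def synthesize_novel(sentences, synthesize):
    """Synthesize a novel sentence by sentence.

    sentences: list of (text, person_id, emotion_id) tuples in reading order,
               i.e. each current sentence with its current character and
               current emotion identification information.
    synthesize: callable(text, person_id, emotion_id) -> waveform (hypothetical).
    """
    audiobook = []
    for text, person_id, emotion_id in sentences:
        # Target speech of the current sentence, with the voice features of the
        # target character and the emotional features of the target emotion.
        audiobook.append(synthesize(text, person_id, emotion_id))
    return audiobook  # concatenation into one audio stream is left out here
```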
  • The speech synthesis method provided by this embodiment acquires the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion, and performs speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech having the speech features of the target character and the emotional features of the target emotion.
  • By adopting this technical solution, the synthesis of speech of different characters with different emotions can be realized with authorization, so that an audiobook in which a speaker reads the corresponding sentences of a novel with emotions matching the novel's scenes can be generated from any authorized speech of that speaker, without requiring the authorized speaker to perform with those emotions, which can provide more optional audiobook speakers and meet people's different needs when listening to audiobooks.
  • FIG. 2 is a schematic flowchart of another speech synthesis method provided by an embodiment of the present disclosure.
  • the solution in this embodiment can be combined with one or more example solutions in the above-mentioned embodiments.
  • For example, determining, through the pre-trained speech synthesis model, the speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information includes: determining the text phoneme sequence of the text to be synthesized; and inputting the text phoneme sequence, the character identification information and the emotion identification information into the pre-trained speech synthesis model, and obtaining the speech spectrum sequence output by the speech synthesis model.
  • the method further includes: playing the target voice.
  • the speech synthesis method provided in this embodiment may include:
  • A phoneme is the smallest speech unit obtained by division according to the natural attributes of speech; correspondingly, the text phoneme sequence of the text to be synthesized may be the sequence of the smallest speech units of the text to be synthesized.
  • phoneme extraction may be performed on the text to be synthesized to obtain a text phoneme sequence of the text to be synthesized.
  • the functional module for extracting the text phoneme sequence of the text to be synthesized can be set independently of the speech synthesis model, and when synthesizing the speech of the text to be synthesized, the functional module first extracts the text phoneme sequence of the text to be synthesized , and input the text phoneme sequence of the text to be synthesized extracted by the function module into the speech synthesis model for speech synthesis, so as to reduce the complexity of the speech synthesis model.
  • this embodiment can also embed the functional module for extracting the text phoneme sequence of the text to be synthesized into the speech synthesis model, and when synthesizing the speech of the text to be synthesized, directly input the text to be synthesized into the speech synthesis In the model, the text phoneme sequence of the text to be synthesized is obtained by the speech synthesis model.
  • the speech synthesis model can be set to determine the speech frequency spectrum sequence of the text to be synthesized according to the text phoneme sequence of the text to be synthesized, the person identification information of the authorized target person and the emotion identification information of the authorized target emotion , that is, the input of the speech synthesis model is the text phoneme sequence of the text to be synthesized, the character identification information of the target person and the emotion identification information of the target emotion, and the output is the speech spectrum sequence of the text to be synthesized.
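Phoneme extraction (grapheme-to-phoneme conversion) can be performed by a front-end module before or inside the synthesis model, as described above. The toy sketch below only illustrates the idea with a hypothetical lexicon lookup; a real front end would handle polyphones, prosody, and out-of-vocabulary words.

```python
# Hypothetical miniature lexicon: word -> phoneme list.
LEXICON = {
    "speech": ["S", "P", "IY", "CH"],
    "synthesis": ["S", "IH", "N", "TH", "AH", "S", "IH", "S"],
}

def text_to_phonemes(text):
    """Very small grapheme-to-phoneme front end (illustration only)."""
    phonemes = []
    for word in text.lower().split():
        word = word.strip(".,!?\"'")
        # Fall back to spelling out unknown words letter by letter.
        phonemes.extend(LEXICON.get(word, list(word)))
    return phonemes

print(text_to_phonemes("Speech synthesis"))  # ['S', 'P', 'IY', 'CH', 'S', 'IH', ...]
```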
  • In one implementation, the speech synthesis model includes a text encoder, a high-dimensional mapping module, an emotion flag layer, an attention module, and a decoder; the output end of the text encoder and the output end of the emotion flag layer are respectively connected to the input end of the attention module, and the output end of the high-dimensional mapping module and the output end of the attention module are respectively connected to the input end of the decoder.
  • As shown in FIG. 3, the speech synthesis model may include a text encoder 30, a high-dimensional mapping module 31, an emotion flag layer 32, an attention module 33, and a decoder 34.
  • The output end of the text encoder 30 can be connected with the input end of the attention module 33, and the text encoder 30 is configured to determine the text feature information of the text to be synthesized according to the text phoneme sequence of the text to be synthesized, for example, to determine the text feature vector of the text to be synthesized, and to input the text feature information into the attention module 33.
  • The output end of the high-dimensional mapping module 31 can be connected with the input end of the attention module 33, and the high-dimensional mapping module 31 is configured to determine the speech feature vector of the target character according to the character identification information of the authorized target character, for example, to map the character identification information of the authorized target character to the speech feature vector of the target character, and to input the speech feature vector into the attention module 33 or the decoder 34 (FIG. 3 takes the case in which the speech feature vector is input into the decoder 34 as an example).
  • The output end of the emotion flag layer 32 can be connected with the input end of the attention module 33, and the emotion flag layer 32 is configured to determine the emotional feature vector of the target emotion according to the emotion identification information of the authorized target emotion.
  • The output end of the attention module 33 can be connected with the input end of the decoder 34, and the attention module 33 is configured to generate, together with the decoder 34, the speech spectrum sequence of the text to be synthesized according to the text feature vector input by the text encoder 30, the speech feature vector input by the high-dimensional mapping module 31, and the emotional feature vector input by the emotion flag layer 32.
  • Inputting the text phoneme sequence, the character identification information and the emotion identification information into the pre-trained speech synthesis model and obtaining the speech spectrum sequence output by the speech synthesis model may include: encoding the text phoneme sequence by using the text encoder to obtain the text feature vector of the text to be synthesized; performing high-dimensional mapping on the character identification information by using the high-dimensional mapping module to obtain the character feature vector of the text to be synthesized; determining, by using the emotion flag layer, the emotional feature vector corresponding to the emotion identification information as the emotional feature vector of the text to be synthesized; and inputting the text feature vector and the emotional feature vector into the attention module, and inputting the intermediate vector output by the attention module and the character feature vector into the decoder to obtain the audio spectrum sequence of the text to be synthesized.
  • the intermediate vector can be understood as a vector output by the attention module after processing the received text identification information, character identification information and emotion identification information.
  • In an exemplary implementation, the text phoneme sequence can first be input into the text encoder of the speech synthesis model, and the text feature vector of the text to be synthesized is determined by the text encoder; the character identification information is input into the high-dimensional mapping module of the speech synthesis model, and the speech feature vector of the target character is determined by the high-dimensional mapping module; and the emotion identification information is input into the emotion flag layer of the speech synthesis model, and the emotional feature vector of the target emotion is determined by the emotion flag layer.
  • The text feature vector, the speech feature vector, and the emotional feature vector are then input into the attention module of the speech synthesis model to obtain the intermediate vector output by the attention module, and the intermediate vector is input into the decoder of the speech synthesis model to obtain the audio spectrum sequence output by the decoder as the speech spectrum sequence of the text to be synthesized.
  • The text feature vector output by the text encoder and the emotional feature vector output by the emotion flag layer can be directly input into the attention module; alternatively, the text feature vector output by the text encoder and the emotional feature vector output by the emotion flag layer can be combined into one vector, for example by concatenating or adding them, and the combined vector is input into the attention module, as shown in FIG. 3.
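To make the wiring of these modules concrete, the following PyTorch sketch mirrors the structure described above: a text encoder over phonemes, an embedding table standing in for the high-dimensional mapping module (character ID to voice vector), another embedding table standing in for the emotion flag layer, and text and emotion vectors added before attention. The layer sizes and the use of simple embedding tables are assumptions for illustration, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class SimpleSynthesisFrontEnd(nn.Module):
    """Illustrative front end: phoneme encoder, speaker mapping, emotion flags."""
    def __init__(self, n_phonemes=100, n_speakers=10, n_emotions=4, dim=256):
        super().__init__()
        self.phoneme_embedding = nn.Embedding(n_phonemes, dim)
        # Text encoder over the phoneme sequence.
        self.text_encoder = nn.GRU(dim, dim, batch_first=True)
        # "High-dimensional mapping module": character ID -> voice feature vector.
        self.speaker_map = nn.Embedding(n_speakers, dim)
        # "Emotion flag layer": emotion ID -> emotional feature vector.
        self.emotion_flags = nn.Embedding(n_emotions, dim)

    def forward(self, phoneme_ids, speaker_id, emotion_id):
        text_vec, _ = self.text_encoder(self.phoneme_embedding(phoneme_ids))  # [B, T, dim]
        speaker_vec = self.speaker_map(speaker_id)    # [B, dim]
        emotion_vec = self.emotion_flags(emotion_id)  # [B, dim]
        # Add the emotional feature vector to each text feature vector before
        # attention; the speaker vector is passed on separately to the decoder.
        attn_input = text_vec + emotion_vec.unsqueeze(1)
        return attn_input, speaker_vec

# Example shapes: a batch of one phoneme sequence of length 12.
model = SimpleSynthesisFrontEnd()
attn_in, spk = model(torch.randint(0, 100, (1, 12)),
                     torch.tensor([3]), torch.tensor([1]))
```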
  • The decoder can synthesize the target speech frame by frame. After obtaining the audio spectrum sequence corresponding to the speech frame at the current moment, in addition to outputting this audio spectrum sequence, the decoder can also input the audio spectrum sequence corresponding to the speech frame at the current moment into the attention module, as the input of the attention module when determining the intermediate vector corresponding to the next speech frame at the next moment; correspondingly, the attention module can determine the intermediate vector based on the text feature vector and the emotional feature vector at the current moment and the audio spectrum sequence of the previous speech frame output by the decoder at the previous moment.
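The frame-by-frame behaviour just described can be sketched as an autoregressive loop in which the spectrum frame produced at the current step is fed back when computing the next step's intermediate vector. The `attention_step` and `decoder_step` callables below are hypothetical stand-ins for the attention module and decoder, not the disclosed components.

```python
import torch

def autoregressive_decode(attn_input, speaker_vec, attention_step, decoder_step,
                          n_mels=80, max_frames=200):
    """Generate the spectrum sequence frame by frame, feeding each output frame
    back into the attention computation for the next frame (illustration only)."""
    prev_frame = torch.zeros(1, n_mels)        # "go" frame for the first step
    frames = []
    for _ in range(max_frames):
        # Intermediate vector for this step, conditioned on the previous frame.
        context = attention_step(attn_input, prev_frame)
        frame, stop = decoder_step(context, speaker_vec)  # frame: [1, n_mels]
        frames.append(frame)
        prev_frame = frame
        if stop:                               # decoder signals end of speech
            break
    return torch.cat(frames, dim=0)            # [T_frames, n_mels]
```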
  • The speech synthesis model can generate speech in which different authorized characters read the synthesized text with different emotions; that is, when the target characters and/or target emotions selected or set by the user (or provider) differ, the speech synthesis model adopted in this embodiment can generate different target speeches.
  • During training, the model structure of the speech synthesis model can be as shown in FIG. 4: an attention layer 35 is connected at the input end of the emotion flag layer 32, an emotion classifier 36 is connected at the output end of the emotion flag layer 32, a reference encoder 37 is connected at the input end of the attention layer 35, and a character classifier 38 is connected at the output end of the reference encoder 37.
  • The training process of the speech synthesis model can be as follows, where each speech sample used for training contains at least one piece of emotional speech with a certain emotion.
  • The back-propagation algorithm can include three optimization loss functions: the reconstruction error (such as the minimum mean square error) of the output audio spectrum sequence compared with the original audio spectrum sequence, the cross-entropy loss between the emotion identification information output by the emotion classifier and the real emotion, and the error between the character identification information output by the character classifier and the real character identification information corresponding to the speech.
  • Step c is repeated iteratively until the model converges, for example, until the value of the above optimization loss function is less than or equal to a preset error threshold, or until the number of iterations reaches a preset threshold.
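The three optimization losses listed above can be combined into a single training objective, as in the rough sketch below. The mean-squared-error and cross-entropy choices follow the description; the loss weights, tensor shapes, and the use of cross-entropy for the character classifier error are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def total_loss(pred_mel, target_mel, emotion_logits, emotion_labels,
               speaker_logits, speaker_labels, w_emo=1.0, w_spk=1.0):
    """Combined training loss (illustrative weights).

    pred_mel / target_mel: predicted and reference spectrum sequences [B, T, n_mels].
    emotion_logits / emotion_labels: emotion classifier output [B, n_emotions], labels [B].
    speaker_logits / speaker_labels: character classifier output [B, n_speakers], labels [B].
    """
    # Reconstruction error of the output spectrum vs. the original spectrum.
    recon = F.mse_loss(pred_mel, target_mel)
    # Cross-entropy between the predicted emotion and the real emotion.
    emo = F.cross_entropy(emotion_logits, emotion_labels)
    # Error between the predicted character identity and the real character identity.
    spk = F.cross_entropy(speaker_logits, speaker_labels)
    return recon + w_emo * emo + w_spk * spk
```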
  • S204 Input the speech spectrum sequence into a vocoder to perform speech synthesis on the speech spectrum sequence to obtain a target speech, the target speech having the speech characteristics of the target character and the emotional characteristics of the target emotion .
  • For example, the speech spectrum sequence can be input into the vocoder, and the speech spectrum sequence is converted into the target speech by the vocoder.
  • The vocoder can be any available vocoder, and can be a vocoder pre-trained to match the speech synthesis model, so as to improve the synthesis effect of the target speech.
  • the vocoder connected to the speech synthesis model can also be trained, so that the vocoder can synthesize target speech with better effect.
  • the target voice may also be played if the user has authorized the playback, so that the user can listen to it.
  • the target voice that the user has authorized to play can be played, such as synthesizing and playing the target voice at the user terminal; it is also possible to store the target voice after the vocoder synthesizes the target voice , and when receiving a playback request for the target voice, play the target voice again, such as synthesizing and storing the target voice of the text to be synthesized at the server end, and receiving the target voice sent by a certain user terminal When playing a request, send the target voice to the user terminal, so as to play the target voice through the user terminal.
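The server-side pattern of synthesizing once, storing the result, and replaying it on later playback requests can be sketched with a simple cache keyed by text, speaker, and emotion. Everything here (the cache layout and the `synthesize` callable) is a hypothetical illustration rather than the disclosed implementation.

```python
class TargetSpeechStore:
    """Synthesize-once, replay-later storage for target speech (illustration)."""
    def __init__(self, synthesize):
        self.synthesize = synthesize   # callable(text, person_id, emotion_id) -> waveform
        self._cache = {}

    def get(self, text, person_id, emotion_id):
        key = (text, person_id, emotion_id)
        if key not in self._cache:
            # Synthesize and store the target speech the first time it is requested.
            self._cache[key] = self.synthesize(text, person_id, emotion_id)
        # On a playback request, return the stored target speech.
        return self._cache[key]
```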
  • The speech synthesis method described above obtains the text phoneme sequence of the text to be synthesized, generates the speech spectrum sequence of the text to be synthesized through the speech synthesis model according to the text phoneme sequence of the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion, synthesizes the speech spectrum sequence into the target speech through the vocoder, and plays the target speech. This can improve the speech synthesis effect while synthesizing speech of different characters with different emotions based on the user's authorization, thereby improving the user's experience of listening to the audiobook.
  • Fig. 5 is a structural block diagram of a speech synthesis device provided by an embodiment of the present disclosure.
  • the device can be implemented by software and/or hardware, and can be configured in an electronic device, typically, a mobile phone or a tablet computer, and can perform speech synthesis on text by executing a speech synthesis method.
  • the speech synthesis device provided in this embodiment may include: an acquisition module 501 and a synthesis module 502, wherein,
  • the obtaining module 501 is configured to obtain the text to be synthesized, the character identification information of the target person, and the emotion identification information of the target emotion;
  • The synthesis module 502 is configured to perform speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, the target speech having the speech features of the target character and the emotional features of the target emotion.
  • The speech synthesis device obtains the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion through the acquisition module, and performs speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information through the synthesis module to obtain a target speech having the speech features of the target character and the emotional features of the target emotion.
  • By adopting the above technical solution, the synthesis of speech of different characters with different emotions can be realized with authorization, so that, after authorization, an audiobook in which a speaker reads the corresponding sentences of a novel with emotions matching the novel's scenes can be generated from any speech of that speaker, without requiring the authorized speaker to perform with those emotions, which can provide more optional audiobook speakers and meet people's different needs when listening to audiobooks.
  • the synthesis module 502 may include: a spectrum determination unit configured to determine the speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information through a pre-trained speech synthesis model;
  • the speech synthesis unit is configured to input the speech spectrum sequence into the vocoder, so as to perform speech synthesis on the speech spectrum sequence to obtain target speech.
  • The spectrum determination unit may include: a phoneme acquisition subunit configured to determine the text phoneme sequence of the text to be synthesized; and a spectrum determination subunit configured to input the text phoneme sequence, the character identification information and the emotion identification information into the pre-trained speech synthesis model, and to obtain the speech spectrum sequence output by the speech synthesis model.
  • The speech synthesis model may include a text encoder, a high-dimensional mapping module, an emotion flag layer, an attention module, and a decoder; the output end of the text encoder and the output end of the emotion flag layer are respectively connected to the input end of the attention module, and the output end of the high-dimensional mapping module and the output end of the attention module are respectively connected to the input end of the decoder.
  • the spectrum determination subunit may be configured to: use the text encoder to encode the text phoneme sequence to obtain the text feature vector of the text to be synthesized; use the high-dimensional mapping module to The character identification information is subjected to high-dimensional mapping to obtain the character feature vector of the text to be synthesized; the emotional feature vector corresponding to the emotional identification information is determined by using the emotional flag layer as the emotional feature of the text to be synthesized Vector; the text feature vector and the emotional feature vector are input into the attention module, and the intermediate vector output by the attention module and the character feature vector are input into the decoder to obtain The audio spectrum sequence of the text to be synthesized.
  • the speech synthesis device may further include: a speech playing module, configured to play the target speech after the target speech is obtained.
  • the text to be synthesized may be a novel text
  • the acquisition module 501 may be configured to: determine each sentence to be synthesized in sequence according to the arrangement order of at least one sentence to be synthesized in the text to be synthesized as The current sentence, and obtain the current character identification information and current emotion identification information of the current sentence to be synthesized;
  • the synthesis module 502 can be configured to: based on the current character identification information and the current emotion identification information, the Speech synthesis is performed on the synthesized sentence to obtain the target speech of the current sentence.
  • the speech synthesis device provided in the embodiments of the present disclosure can execute the speech synthesis method provided in any embodiment of the present disclosure, and has corresponding functional modules and beneficial effects for executing the speech synthesis method.
  • For technical details not described in detail in this embodiment, reference may be made to the speech synthesis method provided in any embodiment of the present disclosure.
  • Referring to FIG. 6, it shows a schematic structural diagram of an electronic device (such as a terminal device) 600 suitable for implementing an embodiment of the present disclosure.
  • the terminal equipment in the embodiment of the present disclosure may include but not limited to such as mobile phone, notebook computer, digital broadcast receiver, PDA (personal digital assistant), PAD (tablet computer), PMP (portable multimedia player), vehicle terminal (such as mobile terminals such as car navigation terminals) and fixed terminals such as digital TVs, desktop computers and the like.
  • the electronic device shown in FIG. 6 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • An electronic device 600 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 601, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 606 into a random access memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored.
  • the processing device 601, ROM 602, and RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604 .
  • The following devices can be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 606 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609.
  • the communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While FIG. 6 shows electronic device 600 having various means, it should be understood that implementing or possessing all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 609, or from storage means 606, or from ROM 602.
  • When the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable signal medium may send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to: wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • The client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication (for example, a communication network) in any form or medium.
  • Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device: acquires the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion ; performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, the target speech having the speech characteristics of the target person and the emotional characteristics of the target emotion.
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages, such as Java, Smalltalk, and C++, as well as conventional procedural programming languages, such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, connected through the Internet using an Internet service provider).
  • Each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented by a dedicated hardware-based system that performs the specified functions or operations , or may be implemented by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. Wherein, the name of the module does not constitute a limitation of the unit itself under certain circumstances.
  • For example, and without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • Example 1 provides a speech synthesis method, including: acquiring a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion; and performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, the target speech having the speech features of the target character and the emotional features of the target emotion.
  • Example 2 is based on the method described in Example 1, where performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain the target speech includes: determining, through a pre-trained speech synthesis model, the speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information; and inputting the speech spectrum sequence into a vocoder to perform speech synthesis on the speech spectrum sequence to obtain the target speech.
  • Example 3 is based on the method described in Example 2, where determining, through the pre-trained speech synthesis model, the speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information includes: determining the text phoneme sequence of the text to be synthesized; and inputting the text phoneme sequence, the character identification information and the emotion identification information into the pre-trained speech synthesis model, and obtaining the speech spectrum sequence output by the speech synthesis model.
  • Example 4 is based on the method described in Example 3, where the speech synthesis model includes a text encoder, a high-dimensional mapping module, an emotion flag layer, an attention module, and a decoder; the output end of the text encoder and the output end of the emotion flag layer are respectively connected to the input end of the attention module, and the output end of the high-dimensional mapping module and the output end of the attention module are respectively connected to the input end of the decoder.
  • Example 5 is based on the method described in Example 4, where inputting the text phoneme sequence, the character identification information and the emotion identification information into the pre-trained speech synthesis model and obtaining the speech spectrum sequence output by the speech synthesis model includes: encoding the text phoneme sequence by using the text encoder to obtain the text feature vector of the text to be synthesized; performing high-dimensional mapping on the character identification information by using the high-dimensional mapping module to obtain the character feature vector of the text to be synthesized; determining, by using the emotion flag layer, the emotional feature vector corresponding to the emotion identification information as the emotional feature vector of the text to be synthesized; and inputting the text feature vector and the emotional feature vector into the attention module, and inputting the intermediate vector output by the attention module and the character feature vector into the decoder to obtain the audio spectrum sequence of the text to be synthesized.
  • Example 6 is based on the method described in any one of Examples 1-5, and further includes, after the target speech is obtained: playing the target speech.
  • Example 7 is based on the method described in any one of Examples 1-5, where the text to be synthesized is a novel text, and acquiring the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion includes: determining, according to the arrangement order of at least one sentence to be synthesized in the text to be synthesized, each sentence to be synthesized as the current sentence in turn, and acquiring the current character identification information and the current emotion identification information of the current sentence to be synthesized.
  • Example 8 provides a speech synthesis device, including:
  • An acquisition module configured to acquire the text to be synthesized, the character identification information of the target person, and the emotion identification information of the target emotion;
  • a synthesis module configured to perform speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target voice, the target voice having the voice characteristics of the target character and the emotion of the target emotion feature.
  • the synthesis module includes:
  • the spectrum determination unit is configured to determine the speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information through the pre-trained speech synthesis model;
  • the speech synthesis unit is configured to input the speech spectrum sequence into the vocoder, so as to perform speech synthesis on the speech spectrum sequence to obtain target speech.
  • the spectrum determination unit includes:
  • a phoneme acquisition subunit configured to determine the text phoneme sequence of the text to be synthesized
  • the frequency spectrum determination subunit is configured to input the text phoneme sequence, the character identification information and the emotion identification information into the pre-trained speech synthesis model, and obtain the speech spectrum sequence output by the speech synthesis model.
  • The speech synthesis model includes a text encoder, a high-dimensional mapping module, an emotion flag layer, an attention module, and a decoder; the output end of the text encoder and the output end of the emotion flag layer are respectively connected to the input end of the attention module, and the output end of the high-dimensional mapping module and the output end of the attention module are respectively connected to the input end of the decoder.
  • The spectrum determination subunit is configured to: encode the text phoneme sequence by using the text encoder to obtain the text feature vector of the text to be synthesized; perform high-dimensional mapping on the character identification information by using the high-dimensional mapping module to obtain the character feature vector of the text to be synthesized; determine, by using the emotion flag layer, the emotional feature vector corresponding to the emotion identification information as the emotional feature vector of the text to be synthesized; and input the text feature vector and the emotional feature vector into the attention module, and input the intermediate vector output by the attention module and the character feature vector into the decoder to obtain the audio spectrum sequence of the text to be synthesized.
  • the speech synthesis device further includes:
  • the voice playing module is configured to play the target voice after the target voice is obtained.
  • the text to be synthesized is a novel text
  • The acquisition module is configured to: determine, according to the arrangement order of at least one sentence to be synthesized in the text to be synthesized, each sentence to be synthesized as the current sentence in turn, and acquire the current character identification information and the current emotion identification information of the current sentence to be synthesized;
  • the synthesis module is configured to: perform speech synthesis on the current sentence to be synthesized based on the current character identification information and the current emotion identification information, so as to obtain the target speech of the current sentence.
  • Example 9 provides an electronic device, comprising:
  • one or more processors; and
  • a memory configured to store one or more programs,
  • wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the speech synthesis method according to any one of Examples 1-7.
  • Example 10 provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the speech synthesis method according to any one of Examples 1-7.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Machine Translation (AREA)

Abstract

A speech synthesis method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion (S101); and performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, the target speech having the speech features of the target character and the emotional features of the target emotion (S102).

Description

Speech synthesis method and apparatus, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 202110523097.4, filed with the Chinese Patent Office on May 13, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and relate, for example, to a speech synthesis method and apparatus, an electronic device, and a storage medium.
Background
In the field of speech synthesis, and particularly in the generation of audiobooks, emotion transfer is a technique of great practical value. When generating an audiobook, if emotion transfer can be realized between different authorized recordings of the same speaker, the speaker only needs to record part of the emotional speech to enable synthesis of that speaker's speech with different emotions; if emotion transfer can be realized between different authorized speakers, the emotion in the speech of a speaker with strong emotional performance ability can be transferred to a speaker with weaker emotional performance ability, so that speech with different emotions can be synthesized for the latter, and an audiobook in which a given authorized speaker reads the corresponding sentences of a novel with emotions matching the novel's scenes can be generated directly from that speaker's existing speech.
However, the related art can only synthesize speech of the same authorized speaker with different emotions, that is, realize emotion transfer within a single speaker, and cannot synthesize speech with different emotions across different authorized speakers. When generating an audiobook, the authorized speaker must first record speech carrying the emotion that matches the novel's scene before an audiobook in which that speaker reads the corresponding sentences with that emotion can be generated; when the authorized speaker cannot perform with the emotion matching the scene, or speech of the speaker carrying that emotion cannot be obtained, such an audiobook cannot be generated, so the speakers available to users for audiobooks are limited and cannot meet people's needs.
Summary
Embodiments of the present disclosure provide a speech synthesis method and apparatus, an electronic device, and a storage medium, so as to realize the synthesis of speech of different speakers with different emotions.
In a first aspect, an embodiment of the present disclosure provides a speech synthesis method, including:
acquiring a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion; and
performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, the target speech having the speech features of the target character and the emotional features of the target emotion.
In a second aspect, an embodiment of the present disclosure further provides a speech synthesis apparatus, including:
an acquisition module configured to acquire a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion; and
a synthesis module configured to perform speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, the target speech having the speech features of the target character and the emotional features of the target emotion.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
at least one processor; and
a memory configured to store at least one program,
wherein, when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the speech synthesis method according to the embodiments of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the speech synthesis method according to the embodiments of the present disclosure.
Brief Description of the Drawings
Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
FIG. 1 is a schematic flowchart of a speech synthesis method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of another speech synthesis method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a speech synthesis model provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the model structure of a speech synthesis model during training provided by an embodiment of the present disclosure;
FIG. 5 is a structural block diagram of a speech synthesis apparatus provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit some of the illustrated steps. The scope of the present disclosure is not limited in this respect.
As used herein, the term "comprise" and its variations are open-ended, that is, "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules or units, and are not used to limit the order or interdependence of the functions performed by these apparatuses, modules or units.
It should be noted that the modifiers "a/an" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless otherwise clearly indicated in the context, they should be understood as "one or more".
The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
FIG. 1 is a schematic flowchart of a speech synthesis method according to an embodiment of the present disclosure. The method may be performed by a speech synthesis apparatus, where the apparatus may be implemented by software and/or hardware and may be configured in an electronic device, typically a mobile phone or a tablet computer. The speech synthesis method provided by the embodiments of the present disclosure is applicable to scenarios of synthesizing speech of different authorized characters with different emotions. As shown in FIG. 1, the speech synthesis method provided by this embodiment may include:
S101: Acquire a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion.
The text to be synthesized can be understood as the text whose corresponding speech is to be synthesized, and it is acquired after authorization by the user. The target character may be the authorized narrator of the text to be synthesized, that is, the character whose authorized voice characteristics the synthesized speech is to carry. The target emotion may be the emotion adopted by the authorized target character when reading the text to be synthesized (or one or more sentences in the text to be synthesized), such as happy, neutral, sad, or angry. Correspondingly, the character identification information of the target character may be information used to uniquely identify the narrator of the text to be synthesized, such as the narrator's character name, character ID, or character code; the emotion identification information of the target emotion may be information used to uniquely identify the emotion adopted by the narrator when reading the text to be synthesized, such as the emotion name, emotion ID, or emotion code of that emotion. The character identification information of the target character and the emotion identification information of the target emotion may be input by the user when speech synthesis of the text to be synthesized is required, or may be preset by the publisher of the text to be synthesized or by the provider of the target speech.
In an exemplary scenario, when an authorized user wants to synthesize a piece of speech, the user may input the text to be synthesized corresponding to the speech, and select or input the character identification information of the authorized narrator of the speech and the emotion identification information of the authorized emotion that the speech should carry; correspondingly, the electronic device may acquire the text input by the user as the text to be synthesized, acquire the character identification information selected or input by the user as the character identification information of the target character, and acquire the emotion identification information selected or input by the user as the emotion identification information of the target emotion.
In another exemplary scenario, when reading a certain text to be synthesized (such as an article), if the user wants to listen to the speech of the text, the user may input or select the character identification information of the authorized narrator of the text and the emotion identification information of the emotion to be carried in the authorized speech; correspondingly, the electronic device may acquire the character identification information selected or input by the user as the character identification information of the target character, and acquire the emotion identification information selected or input by the user as the emotion identification information of the target emotion.
In yet another exemplary scenario, a novel provider may preset the emotion that each sentence of a novel it provides to users should carry; thus, when a user wants to read the novel by listening to speech, the user may set the authorized narrator corresponding to each character in the novel. Correspondingly, the electronic device may acquire the authorized narrator corresponding to each character in the novel as set by the user, and may successively take the character identification information of the authorized narrator corresponding to each sentence in the novel text as the character identification information of the target character of that sentence, and take the emotion identification information of the emotion corresponding to that sentence as the emotion identification information of the target emotion, so as to synthesize the speech corresponding to each sentence in the novel. Alternatively, when an audiobook developer wants to generate an audiobook of a certain novel, the developer may set the narrator of each sentence in the novel and the emotion that each sentence should carry; thus, upon receiving a trigger operation of the audiobook developer to generate the audiobook, or upon receiving a trigger operation of a user to listen to the audiobook of the novel, the electronic device may successively take the character identification information of the narrator corresponding to each sentence in the novel text as the character identification information of the target character of that sentence, and take the emotion identification information of the emotion corresponding to that sentence as the emotion identification information of the target emotion, so as to synthesize the speech corresponding to each sentence in the novel.
S102: Perform speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, where the target speech has the voice characteristics of the target character and the emotional characteristics of the target emotion.
The target speech may be the speech obtained by performing speech synthesis on the text to be synthesized (or one or more sentences in the text to be synthesized), and it has the voice characteristics of the target character and the emotional characteristics of the target emotion; that is, the target speech may be the speech obtained when the target character authorized by the user reads the text to be synthesized (or one or more sentences thereof) with the target emotion whose use the user has authorized.
In this embodiment, after the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion are acquired, speech synthesis may be performed on the text to be synthesized according to the character identification information of the target character and the emotion identification information of the target emotion. For example, the text feature information of the text to be synthesized is first determined, the voice feature information of the target character (such as a voice feature vector) is determined according to the character identification information of the target character, and the emotion feature information of the target emotion (such as an emotion feature vector) is determined according to the emotion identification information of the target emotion; then, the target speech of the text to be synthesized is generated based on the text feature information, the voice feature information, and the emotion feature information, that is, the target speech in which the target character reads with the target emotion is synthesized. Here, the determined voice feature information of the target character may be the voice feature information in the speech whose use the target character has authorized, and the target emotion carried in the generated target speech read by the target character may be an emotion the target character has authorized to be carried.
In an implementation, performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain the target speech includes: determining, by a pre-trained speech synthesis model, a speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information; and inputting the speech spectrum sequence into a vocoder to perform speech synthesis on the speech spectrum sequence to obtain the target speech.
The speech spectrum sequence of the text to be synthesized may be the spectrum sequence of the target speech to be synthesized, and may be a mel spectrum sequence of the target speech, so as to ensure that the target speech synthesized based on this spectrum sequence better matches human auditory habits.
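For reference, and as background not recited in the description itself, the mel scale underlying such a spectrum sequence is commonly defined by mapping a frequency f in hertz to a pitch value m in mels as

```latex
m = 2595 \,\log_{10}\!\left(1 + \frac{f}{700}\right)
```

A mel spectrum sequence is then a sequence of frame-wise energy vectors taken over filter banks spaced evenly on this scale, which is why it tracks human auditory perception more closely than a linear-frequency spectrogram.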
In the above implementation, the speech spectrum sequence of the text to be synthesized may be generated by the pre-trained speech synthesis model, and the speech spectrum sequence may be converted into the target speech by the vocoder. For example, the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion may be input into the pre-trained speech synthesis model; the model determines the text feature information of the text to be synthesized, the voice feature information of the target character, and the emotion feature information of the target emotion, and generates the speech spectrum sequence of the text to be synthesized according to the text feature information, the voice feature information, and the emotion feature information; the speech spectrum sequence is then input into the vocoder, and the vocoder generates the target speech of the text to be synthesized.
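A minimal sketch of this two-stage flow is given below. It assumes PyTorch-style modules; the names `acoustic_model`, `vocoder`, and `g2p` are placeholders for the pre-trained speech synthesis model, the vocoder, and a phoneme front end, and are not APIs defined by the disclosure:

```python
import torch

def synthesize(text, character_id, emotion_id, acoustic_model, vocoder, g2p):
    """Two-stage synthesis: the acoustic model predicts a mel spectrum sequence,
    and the vocoder converts that spectrum sequence into a waveform."""
    phoneme_ids = torch.tensor(g2p(text)).unsqueeze(0)    # (1, T_text) phoneme IDs
    speaker = torch.tensor([character_id])                # character identification information
    emotion = torch.tensor([emotion_id])                  # emotion identification information

    with torch.no_grad():
        mel = acoustic_model(phoneme_ids, speaker, emotion)  # (1, T_frames, n_mels)
        waveform = vocoder(mel)                              # (1, n_samples) target speech

    return waveform.squeeze(0)
```

In practice the acoustic model and vocoder would be loaded from pre-trained checkpoints, and the same call can be repeated per sentence when an audiobook is assembled.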
In an implementation, the text to be synthesized is a novel text, and acquiring the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion includes: determining each sentence to be synthesized in turn as the current sentence according to the arrangement order of at least one sentence to be synthesized in the text to be synthesized, and acquiring current character identification information and current emotion identification information of the current sentence to be synthesized; and performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain the target speech includes: performing speech synthesis on the current sentence to be synthesized based on the current character identification information and the current emotion identification information to obtain the target speech of the current sentence.
The current sentence to be synthesized may be the sentence in the text to be synthesized that needs to be synthesized at the current moment; correspondingly, the current character identification information may be the character identification information of the target character of the current sentence to be synthesized, that is, the character identification information of the authorized narrator of the current sentence, and the current emotion identification information may be the emotion identification information of the target emotion corresponding to the current sentence to be synthesized, that is, the emotion identification information of the emotion that the current sentence should carry.
In this embodiment, since a novel contains dialogue of multiple characters, narration, and other sentences, the authorized narrators and/or emotions corresponding to different sentences may differ. Therefore, when the text to be synthesized is a novel text, the target character and the target emotion corresponding to each sentence in the text to be synthesized may be determined sentence by sentence, and speech synthesis may be performed accordingly.
For example, when performing speech synthesis on the novel text, the first sentence of the novel text may first be determined as the current sentence, the current character identification information and current emotion identification information of the current sentence are acquired, speech synthesis is performed on the current sentence to be synthesized according to the current character identification information and current emotion identification information to obtain the target speech of the current sentence, the next sentence that follows and is adjacent to the current sentence in the novel text is determined as the current sentence to be synthesized, and the operation of acquiring the current character identification information and current emotion identification information of the current sentence is performed again, until there is no next sentence to be synthesized. In this way, speech synthesis of the novel text can be realized, and an audiobook of the novel text is obtained.
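The sentence-by-sentence procedure just described might be organized as in the following sketch, which reuses the hypothetical `synthesize` helper from above; the per-sentence annotation format (`text`, `character_id`, `emotion_id`) is an assumption about how the narrator and emotion settings would be stored, not a structure specified by the disclosure:

```python
def synthesize_novel(sentences, acoustic_model, vocoder, g2p):
    """Produce one audio clip per annotated sentence of a novel, in order.

    `sentences` is assumed to be an ordered list of dicts such as
    {"text": "...", "character_id": 3, "emotion_id": 1}, where the narrator
    and emotion for each sentence were chosen by the user or the provider.
    """
    clips = []
    for sentence in sentences:                      # walk the sentences in their arrangement order
        clip = synthesize(
            sentence["text"],
            sentence["character_id"],               # current character identification information
            sentence["emotion_id"],                 # current emotion identification information
            acoustic_model, vocoder, g2p,
        )
        clips.append(clip)
    return clips                                    # concatenate or stream these as the audiobook
```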
According to the speech synthesis method provided by this embodiment, the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion are acquired, and speech synthesis is performed on the text to be synthesized based on the character identification information and the emotion identification information, so as to obtain a target speech having the voice characteristics of the target character and the emotional characteristics of the target emotion. By adopting the above technical solution, this embodiment can, with authorization, realize the synthesis of speech of different characters with different emotions. Thus, after authorization, an audiobook in which a narrator reads the corresponding sentences of a novel with emotions matching the scenes of the novel can be generated from any speech of that narrator, without the authorized narrator having to perform with that emotion again, which provides more audiobook narrators for users to choose from and satisfies people's different needs when listening to audiobooks.
FIG. 2 is a schematic flowchart of another speech synthesis method according to an embodiment of the present disclosure. The solution in this embodiment may be combined with one or more of the example solutions in the above embodiments. For example, determining, by the pre-trained speech synthesis model, the speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information includes: determining a text phoneme sequence of the text to be synthesized; and inputting the text phoneme sequence, the character identification information, and the emotion identification information into the pre-trained speech synthesis model, and acquiring the speech spectrum sequence output by the speech synthesis model.
For example, after the target speech is obtained, the method further includes: playing the target speech.
Correspondingly, as shown in FIG. 2, the speech synthesis method provided by this embodiment may include:
S201: Acquire a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion.
S202: Determine a text phoneme sequence of the text to be synthesized.
A phoneme is the smallest speech unit obtained by dividing speech according to its natural attributes; correspondingly, the text phoneme sequence of the text to be synthesized may be the sequence of the smallest speech units of the text to be synthesized.
For example, after the text to be synthesized is acquired, phoneme extraction may be performed on the text to be synthesized to obtain the text phoneme sequence of the text to be synthesized.
In this embodiment, the functional module for extracting the text phoneme sequence of the text to be synthesized may be provided independently of the speech synthesis model. When synthesizing the speech of the text to be synthesized, this functional module first extracts the text phoneme sequence of the text to be synthesized, and the extracted text phoneme sequence is then input into the speech synthesis model for speech synthesis, so as to reduce the complexity of the speech synthesis model.
It can be understood that, in this embodiment, the functional module for extracting the text phoneme sequence may also be embedded in the speech synthesis model; in that case, when synthesizing the speech of the text to be synthesized, the text to be synthesized is input directly into the speech synthesis model, and the speech synthesis model acquires the text phoneme sequence of the text to be synthesized.
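As an illustration of such a standalone phoneme-extraction module, the following toy grapheme-to-phoneme converter maps words to phoneme IDs through a small lookup table. The lexicon, the ID assignment, and the `text_to_phonemes` name are all assumptions made for this example; a production front end would use a full pronunciation lexicon or a trained G2P model together with language-specific text normalization:

```python
# Toy pronunciation lexicon; real systems cover the whole vocabulary and
# fall back to a trained grapheme-to-phoneme model for unknown words.
LEXICON = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}
# Assign each distinct phoneme a stable integer ID (0 is reserved for padding).
PHONEME_IDS = {p: i for i, p in enumerate(sorted({p for ps in LEXICON.values() for p in ps}), start=1)}

def text_to_phonemes(text):
    """Convert text into the phoneme ID sequence consumed by the synthesis model."""
    ids = []
    for word in text.lower().split():
        for phoneme in LEXICON.get(word, []):   # unknown words are silently skipped in this toy version
            ids.append(PHONEME_IDS[phoneme])
    return ids

print(text_to_phonemes("hello world"))  # [4, 1, 5, 6, 7, 3, 5, 2] with this toy lexicon
```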
S203: Input the text phoneme sequence, the character identification information, and the emotion identification information into a pre-trained speech synthesis model, and acquire the speech spectrum sequence output by the speech synthesis model.
In this embodiment, the speech synthesis model may be configured to determine the speech spectrum sequence of the text to be synthesized according to the text phoneme sequence of the text to be synthesized, the character identification information of the authorized target character, and the emotion identification information of the authorized target emotion; that is, the inputs of the speech synthesis model are the text phoneme sequence of the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion, and the output is the speech spectrum sequence of the text to be synthesized.
In an implementation, the speech synthesis model includes a text encoder, a high-dimensional mapping module, an emotion token layer, an attention module, and a decoder, where an output end of the text encoder and an output end of the emotion token layer are respectively connected to an input end of the attention module, and an output end of the high-dimensional mapping module and an output end of the attention module are respectively connected to an input end of the decoder.
In the above implementation, as shown in FIG. 3, the speech synthesis model may include a text encoder 30, a high-dimensional mapping module 31, an emotion token layer 32, an attention module 33, and a decoder 34. The output end of the text encoder 30 may be connected to the input end of the attention module 33 and is configured to determine the text feature information of the text to be synthesized, such as a text feature vector, according to the text phoneme sequence of the text to be synthesized, and to input the text feature information into the attention module 33. The output end of the high-dimensional mapping module 31 may be connected to the input end of the attention module 33 and is configured to determine a voice feature vector of the target character according to the character identification information of the authorized target character, for example, by mapping the character identification information of the authorized target character into the voice feature vector of the target character, and to input the voice feature vector into the attention module 33 or the decoder 34 (FIG. 3 takes the case of inputting the voice feature vector into the decoder 34 as an example). The output end of the emotion token layer 32 may be connected to the input end of the attention module 33 and is configured to determine an emotion feature vector of the target emotion according to the emotion identification information of the authorized target emotion. The output end of the attention module 33 may be connected to the input end of the decoder 34, and the attention module is configured to generate, together with the decoder, the audio spectrum sequence of the text to be synthesized according to the text feature vector input by the text encoder 30, the voice feature vector input by the high-dimensional mapping module 31, and the emotion feature vector input by the emotion token layer 32.
In the above implementation, inputting the text phoneme sequence, the character identification information, and the emotion identification information into the pre-trained speech synthesis model and acquiring the speech spectrum sequence output by the speech synthesis model may include: encoding the text phoneme sequence with the text encoder to obtain the text feature vector of the text to be synthesized; performing high-dimensional mapping on the character identification information with the high-dimensional mapping module to obtain the character feature vector of the text to be synthesized; determining, with the emotion token layer, the emotion feature vector corresponding to the emotion identification information as the emotion feature vector of the text to be synthesized; and inputting the text feature vector and the emotion feature vector into the attention module, and inputting the intermediate vector output by the attention module and the character feature vector into the decoder to obtain the audio spectrum sequence of the text to be synthesized. The intermediate vector can be understood as the vector output by the attention module after it processes the received text, character, and emotion information.
For example, after the text phoneme sequence of the text to be synthesized, the character identification information of the authorized target character, and the emotion identification information of the authorized target emotion are acquired, the text phoneme sequence may first be input into the text encoder of the speech synthesis model, and the text encoder determines the text feature vector of the text to be synthesized; the character identification information is input into the high-dimensional mapping module of the speech synthesis model, and the high-dimensional mapping module determines the voice feature vector of the target character; and the emotion identification information is input into the emotion token layer of the speech synthesis model, and the emotion token layer determines the emotion feature vector of the target emotion. Then, the text feature vector, the voice feature vector, and the emotion feature vector are input into the attention module of the speech synthesis model to obtain the intermediate vector output by the attention module. Finally, the intermediate vector is input into the decoder of the speech synthesis model, and the audio spectrum sequence output by the decoder is acquired as the audio spectrum sequence of the text to be synthesized.
It can be understood that, in the above implementation, the text feature vector output by the text encoder and the emotion feature vector output by the emotion token layer may be input into the attention module directly; alternatively, the text feature vector output by the text encoder and the emotion feature vector output by the emotion token layer may be combined into one vector, for example, by concatenation or addition, and the combined vector is input into the attention module, as shown in FIG. 3. In addition, the decoder may synthesize the target speech frame by frame; after obtaining the audio spectrum sequence corresponding to the speech frame at the current moment, in addition to outputting it, the decoder may also feed the audio spectrum sequence of the current speech frame back into the attention module as an input used by the attention module when determining the intermediate vector corresponding to the next speech frame at the next moment. Correspondingly, the attention module may determine the intermediate vector according to the text feature vector and voice feature vector at the current moment, the output of the emotion token layer, and the audio spectrum sequence of the previous speech frame output by the decoder at the previous moment.
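One way to wire up the described text encoder, high-dimensional mapping module, emotion token layer, attention module, and frame-by-frame decoder is sketched below. The layer sizes, the GRU-based decoder, the additive combination of text and emotion features, and feeding the character embedding into the decoder are simplifying choices made for this sketch; it only mirrors the rough shape of FIG. 3 and is not the architecture actually specified by the disclosure:

```python
import torch
import torch.nn as nn

class SpeechSynthesisModel(nn.Module):
    """Sketch: phonemes + character ID + emotion ID -> mel spectrum sequence."""

    def __init__(self, n_phonemes, n_characters, n_emotions, d_model=256, n_mels=80):
        super().__init__()
        self.phoneme_embedding = nn.Embedding(n_phonemes, d_model)
        self.text_encoder = nn.GRU(d_model, d_model, batch_first=True)      # text encoder
        self.character_embedding = nn.Embedding(n_characters, d_model)      # high-dimensional mapping module
        self.emotion_tokens = nn.Embedding(n_emotions, d_model)             # emotion token layer
        self.attention = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.decoder_cell = nn.GRUCell(2 * d_model + n_mels, d_model)       # autoregressive decoder
        self.mel_proj = nn.Linear(d_model, n_mels)

    def forward(self, phonemes, character_id, emotion_id, n_frames):
        # A real model would predict when to stop; here the frame count is given explicitly.
        text_feat, _ = self.text_encoder(self.phoneme_embedding(phonemes))  # (B, T_text, d)
        memory = text_feat + self.emotion_tokens(emotion_id).unsqueeze(1)   # combine text and emotion features
        character = self.character_embedding(character_id)                  # (B, d) voice feature vector

        batch = phonemes.size(0)
        state = memory.new_zeros(batch, self.decoder_cell.hidden_size)
        prev_frame = memory.new_zeros(batch, self.mel_proj.out_features)
        frames = []
        for _ in range(n_frames):                                           # frame-by-frame decoding
            context, _ = self.attention(state.unsqueeze(1), memory, memory) # intermediate vector
            decoder_in = torch.cat([context.squeeze(1), character, prev_frame], dim=-1)
            state = self.decoder_cell(decoder_in, state)
            prev_frame = self.mel_proj(state)                               # spectrum of the current frame,
            frames.append(prev_frame)                                       # also fed back at the next step
        return torch.stack(frames, dim=1)                                   # (B, n_frames, n_mels)
```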
In this embodiment, the speech synthesis model is able to generate speech in which different authorized characters read the text to be synthesized with different emotions; that is, when the target character and/or target emotion selected or set by the user (or provider) differ, the speech synthesis model used in this embodiment can generate different target speech. The model structure of the speech synthesis model during training is shown in FIG. 4: an attention layer 35 may be connected to the input end of the emotion token layer 32, an emotion classifier 36 may be connected to the output end of the emotion token layer 32, a reference encoder 37 may be connected to the input end of the attention layer 35, and a character classifier 38 may be connected to the output end of the reference encoder 37. In this case, the training process of the speech synthesis model may be as follows:
a. Acquire speech whose use has been authorized by at least one speaker as speech samples, and acquire the text phoneme sequence corresponding to each authorized speech sample, where each speech sample contains at least one emotional speech carrying a certain emotion.
b. Extract the audio spectrum sequence of each authorized speech sample.
c. Input the text phoneme sequence of each authorized speech sample into the text encoder, input the original audio spectrum sequence of the authorized speech sample into the reference encoder, and input the character identification information of the speaker corresponding to the speech sample into the high-dimensional mapping module; acquire the character identification information output by the character classifier, the emotion identification information output by the emotion classifier, and the audio spectrum sequence output by the decoder, so as to train the speech synthesis model by means of adversarial training and optimize the speech synthesis model through a back-propagation algorithm.
The back-propagation algorithm may involve three optimization loss functions: the reconstruction error (such as the minimum mean square error) of the output audio spectrum sequence relative to the original audio spectrum sequence, the cross-entropy loss between the emotion identification information output by the emotion classifier and the true emotion in the audio, and the error between the character identification information output by the character classifier and the true character identification information corresponding to the audio (a worked-out form of this objective and a sketch of one training step are given after step d below).
d. Repeat step c iteratively until the model converges, for example, until the values of the above optimization loss functions are less than or equal to a preset error threshold, or until the number of iterations reaches a preset threshold.
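Written out, and assuming the three terms are simply summed with weights lambda_e and lambda_s (the weighting and the exact handling of the character term under adversarial training are not spelled out in the description and are assumptions here), the overall objective could take the form

```latex
\mathcal{L} \;=\; \underbrace{\big\lVert \hat{S} - S \big\rVert_2^{2}}_{\text{spectrum reconstruction}}
\;+\; \lambda_{e}\,\underbrace{\mathrm{CE}\big(\hat{y}_{e},\, y_{e}\big)}_{\text{emotion classification}}
\;+\; \lambda_{s}\,\underbrace{\mathrm{CE}\big(\hat{y}_{s},\, y_{s}\big)}_{\text{character classification}}
```

where S and its estimate are the original and output audio spectrum sequences, y_e and its estimate are the true and predicted emotion labels, and y_s and its estimate are the true and predicted character labels. A single training step under these losses might then be sketched as follows. Gradient reversal on the character classifier's input is one common way to realize the adversarial part and is only an assumption of this sketch; `model`, `ref_encoder`, `emotion_clf`, and `char_clf` are hypothetical handles to the modules named above, and `model` here denotes the training-time variant of FIG. 4 that takes the reference-encoder style in place of an emotion ID:

```python
import torch.nn.functional as F

def training_step(batch, model, ref_encoder, emotion_clf, char_clf, optimizer,
                  lambda_e=1.0, lambda_s=0.1):
    """One optimization step with reconstruction, emotion, and adversarial character losses."""
    phonemes, character_id, mel_target, emotion_label = batch

    # At training time the reference encoder reads the original spectrum sequence and
    # provides the style input distilled by the emotion token layer (FIG. 4).
    style = ref_encoder(mel_target)
    mel_pred = model(phonemes, character_id, style, n_frames=mel_target.size(1))

    recon_loss = F.mse_loss(mel_pred, mel_target)                        # spectrum reconstruction error
    emotion_loss = F.cross_entropy(emotion_clf(style), emotion_label)    # emotion classifier cross-entropy

    # Adversarial character term: the forward value of `reversed_style` equals `style`,
    # but the gradient flowing back into `style` is negated, pushing the reference
    # encoder to discard speaker identity while the classifier still learns to detect it.
    reversed_style = 2 * style.detach() - style
    character_loss = F.cross_entropy(char_clf(reversed_style), character_id)

    loss = recon_loss + lambda_e * emotion_loss + lambda_s * character_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```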
S204: Input the speech spectrum sequence into a vocoder to perform speech synthesis on the speech spectrum sequence to obtain a target speech, where the target speech has the voice characteristics of the target character and the emotional characteristics of the target emotion.
In this embodiment, after the speech spectrum sequence of the text to be synthesized output by the speech synthesis model is acquired, the speech spectrum sequence may be input into the vocoder, and the vocoder converts the speech spectrum sequence into the target speech. The vocoder may be any vocoder, and may be a pre-trained vocoder matched with the speech synthesis model, so as to improve the synthesis effect of the target speech; correspondingly, in that case, when training the speech synthesis model, the vocoder connected to the speech synthesis model may also be trained, so that the vocoder can synthesize the target speech with a better effect.
S205: Play the target speech.
In this embodiment, after the target speech is synthesized, the target speech may also be played, provided the user has authorized playback, so that the user can listen to it. For example, the target speech whose playback the user has authorized may be played right after the vocoder synthesizes it, such as synthesizing and playing the target speech on the user terminal; alternatively, after the vocoder synthesizes the target speech, the target speech may be stored and then played when a playback request for the target speech is received, such as synthesizing and storing the target speech of the text to be synthesized on the server side and, upon receiving a playback request for the target speech sent by a user terminal, sending the target speech to that user terminal so that it is played through the user terminal.
According to the speech synthesis method provided by this embodiment, the text phoneme sequence of the text to be synthesized is acquired, the speech synthesis model generates the speech spectrum sequence of the text to be synthesized according to the text phoneme sequence, the character identification information of the target character, and the emotion identification information of the target emotion, the vocoder synthesizes the speech spectrum sequence into the target speech, and the target speech is played. On the premise of realizing, based on the user's authorization, the synthesis of speech of different characters with different emotions, the speech synthesis effect can be improved, thereby improving the user's experience of listening to audiobooks.
FIG. 5 is a structural block diagram of a speech synthesis apparatus according to an embodiment of the present disclosure. The apparatus may be implemented by software and/or hardware, may be configured in an electronic device, typically a mobile phone or a tablet computer, and can perform speech synthesis on text by executing the speech synthesis method. As shown in FIG. 5, the speech synthesis apparatus provided by this embodiment may include an acquisition module 501 and a synthesis module 502, where:
the acquisition module 501 is configured to acquire a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion; and
the synthesis module 502 is configured to perform speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, where the target speech has the voice characteristics of the target character and the emotional characteristics of the target emotion.
In the speech synthesis apparatus provided by this embodiment, the acquisition module acquires the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion, and the synthesis module performs speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain the target speech having the voice characteristics of the target character and the emotional characteristics of the target emotion. By adopting the above technical solution, this embodiment can, with authorization, realize the synthesis of speech of different characters with different emotions. Thus, after authorization, an audiobook in which a narrator reads the corresponding sentences of a novel with emotions matching the scenes of the novel can be generated from any speech of that narrator, without the authorized narrator having to perform with that emotion again, which provides more audiobook narrators for users to choose from and satisfies people's different needs when listening to audiobooks.
In the above solution, the synthesis module 502 may include: a spectrum determination unit, configured to determine, by a pre-trained speech synthesis model, the speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information; and a speech synthesis unit, configured to input the speech spectrum sequence into a vocoder to perform speech synthesis on the speech spectrum sequence to obtain the target speech.
In the above solution, the spectrum determination unit may include: a phoneme acquisition subunit, configured to determine the text phoneme sequence of the text to be synthesized; and a spectrum determination subunit, configured to input the text phoneme sequence, the character identification information, and the emotion identification information into the pre-trained speech synthesis model, and acquire the speech spectrum sequence output by the speech synthesis model.
In the above solution, the speech synthesis model may include a text encoder, a high-dimensional mapping module, an emotion token layer, an attention module, and a decoder, where the output end of the text encoder and the output end of the emotion token layer are respectively connected to the input end of the attention module, and the output end of the high-dimensional mapping module and the output end of the attention module are respectively connected to the input end of the decoder.
In the above solution, the spectrum determination subunit may be configured to: encode the text phoneme sequence with the text encoder to obtain the text feature vector of the text to be synthesized; perform high-dimensional mapping on the character identification information with the high-dimensional mapping module to obtain the character feature vector of the text to be synthesized; determine, with the emotion token layer, the emotion feature vector corresponding to the emotion identification information as the emotion feature vector of the text to be synthesized; and input the text feature vector and the emotion feature vector into the attention module, and input the intermediate vector output by the attention module and the character feature vector into the decoder to obtain the audio spectrum sequence of the text to be synthesized.
For example, the speech synthesis apparatus provided by this embodiment may further include: a speech playback module, configured to play the target speech after the target speech is obtained.
In the above solution, the text to be synthesized may be a novel text, and the acquisition module 501 may be configured to: determine each sentence to be synthesized in turn as the current sentence according to the arrangement order of at least one sentence to be synthesized in the text to be synthesized, and acquire the current character identification information and current emotion identification information of the current sentence to be synthesized; and the synthesis module 502 may be configured to: perform speech synthesis on the current sentence to be synthesized based on the current character identification information and the current emotion identification information to obtain the target speech of the current sentence.
The speech synthesis apparatus provided by the embodiments of the present disclosure can execute the speech synthesis method provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to executing the speech synthesis method. For technical details not described in detail in this embodiment, reference may be made to the speech synthesis method provided by any embodiment of the present disclosure.
Referring now to FIG. 6, it shows a schematic structural diagram of an electronic device (for example, a terminal device) 600 suitable for implementing the embodiments of the present disclosure. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (such as in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 6 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing apparatus (such as a central processing unit or a graphics processing unit) 601, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 606 into a random access memory (RAM) 603. Various programs and data required for the operation of the electronic device 600 are also stored in the RAM 603. The processing apparatus 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following apparatuses may be connected to the I/O interface 605: an input apparatus 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 607 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage apparatus 606 including, for example, a magnetic tape and a hard disk; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data. Although FIG. 6 shows the electronic device 600 with various apparatuses, it should be understood that it is not required to implement or provide all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
According to the embodiments of the present disclosure, the process described above with reference to the flowcharts may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product including a computer program carried on a non-transitory computer-readable medium, where the computer program contains program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 609, installed from the storage apparatus 606, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, the computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and it may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: a wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of the communication network include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be contained in the above electronic device, or may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion; and perform speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, where the target speech has the voice characteristics of the target character and the emotional characteristics of the target emotion.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, where the programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on a user's computer, partly on a user's computer, as a stand-alone software package, partly on a user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architecture, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the description of the embodiments of the present disclosure may be implemented by software or by hardware, where the name of a module does not constitute a limitation on the unit itself in certain cases.
The functions described above herein may be performed at least partially by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, Example 1 provides a speech synthesis method, including:
acquiring a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion;
performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, where the target speech has the voice characteristics of the target character and the emotional characteristics of the target emotion.
According to one or more embodiments of the present disclosure, Example 2 is the method according to Example 1, where performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain the target speech includes:
determining, by a pre-trained speech synthesis model, a speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information;
inputting the speech spectrum sequence into a vocoder to perform speech synthesis on the speech spectrum sequence to obtain the target speech.
According to one or more embodiments of the present disclosure, Example 3 is the method according to Example 2, where determining, by the pre-trained speech synthesis model, the speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information includes:
determining a text phoneme sequence of the text to be synthesized;
inputting the text phoneme sequence, the character identification information, and the emotion identification information into the pre-trained speech synthesis model, and acquiring the speech spectrum sequence output by the speech synthesis model.
According to one or more embodiments of the present disclosure, Example 4 is the method according to Example 3, where the speech synthesis model includes a text encoder, a high-dimensional mapping module, an emotion token layer, an attention module, and a decoder, where an output end of the text encoder and an output end of the emotion token layer are respectively connected to an input end of the attention module, and an output end of the high-dimensional mapping module and an output end of the attention module are respectively connected to an input end of the decoder.
According to one or more embodiments of the present disclosure, Example 5 is the method according to Example 4, where inputting the text phoneme sequence, the character identification information, and the emotion identification information into the pre-trained speech synthesis model and acquiring the speech spectrum sequence output by the speech synthesis model includes:
encoding the text phoneme sequence with the text encoder to obtain a text feature vector of the text to be synthesized;
performing high-dimensional mapping on the character identification information with the high-dimensional mapping module to obtain a character feature vector of the text to be synthesized;
determining, with the emotion token layer, an emotion feature vector corresponding to the emotion identification information as an emotion feature vector of the text to be synthesized;
inputting the text feature vector and the emotion feature vector into the attention module, and inputting an intermediate vector output by the attention module and the character feature vector into the decoder to obtain an audio spectrum sequence of the text to be synthesized.
According to one or more embodiments of the present disclosure, Example 6 is the method according to any one of Examples 1-5, further including, after the target speech is obtained:
playing the target speech.
According to one or more embodiments of the present disclosure, Example 7 is the method according to any one of Examples 1-5, where the text to be synthesized is a novel text, and acquiring the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion includes:
determining each sentence to be synthesized in turn as the current sentence according to an arrangement order of at least one sentence to be synthesized in the text to be synthesized, and acquiring current character identification information and current emotion identification information of the current sentence to be synthesized;
where performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain the target speech includes:
performing speech synthesis on the current sentence to be synthesized based on the current character identification information and the current emotion identification information to obtain the target speech of the current sentence.
According to one or more embodiments of the present disclosure, Example 8 provides a speech synthesis apparatus, including:
an acquisition module, configured to acquire a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion;
a synthesis module, configured to perform speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, where the target speech has the voice characteristics of the target character and the emotional characteristics of the target emotion.
According to one or more embodiments of the present disclosure, the synthesis module includes:
a spectrum determination unit, configured to determine, by a pre-trained speech synthesis model, a speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information;
a speech synthesis unit, configured to input the speech spectrum sequence into a vocoder to perform speech synthesis on the speech spectrum sequence to obtain the target speech.
According to one or more embodiments of the present disclosure, the spectrum determination unit includes:
a phoneme acquisition subunit, configured to determine a text phoneme sequence of the text to be synthesized;
a spectrum determination subunit, configured to input the text phoneme sequence, the character identification information, and the emotion identification information into the pre-trained speech synthesis model, and acquire the speech spectrum sequence output by the speech synthesis model.
According to one or more embodiments of the present disclosure, the speech synthesis model includes a text encoder, a high-dimensional mapping module, an emotion token layer, an attention module, and a decoder, where an output end of the text encoder and an output end of the emotion token layer are respectively connected to an input end of the attention module, and an output end of the high-dimensional mapping module and an output end of the attention module are respectively connected to an input end of the decoder.
According to one or more embodiments of the present disclosure, the spectrum determination subunit is configured to:
encode the text phoneme sequence with the text encoder to obtain a text feature vector of the text to be synthesized;
perform high-dimensional mapping on the character identification information with the high-dimensional mapping module to obtain a character feature vector of the text to be synthesized;
determine, with the emotion token layer, an emotion feature vector corresponding to the emotion identification information as an emotion feature vector of the text to be synthesized;
input the text feature vector and the emotion feature vector into the attention module, and input an intermediate vector output by the attention module and the character feature vector into the decoder to obtain an audio spectrum sequence of the text to be synthesized.
According to one or more embodiments of the present disclosure, the speech synthesis apparatus further includes:
a speech playback module, configured to play the target speech after the target speech is obtained.
According to one or more embodiments of the present disclosure, the text to be synthesized is a novel text, and the acquisition module is configured to:
determine each sentence to be synthesized in turn as the current sentence according to an arrangement order of at least one sentence to be synthesized in the text to be synthesized, and acquire current character identification information and current emotion identification information of the current sentence to be synthesized;
the synthesis module is configured to: perform speech synthesis on the current sentence to be synthesized based on the current character identification information and the current emotion identification information to obtain the target speech of the current sentence.
According to one or more embodiments of the present disclosure, Example 9 provides an electronic device, including:
one or more processors;
a memory configured to store one or more programs,
where, when the one or more programs are executed by the one or more processors, the one or more processors implement the speech synthesis method according to any one of Examples 1-7.
According to one or more embodiments of the present disclosure, Example 10 provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the speech synthesis method according to any one of Examples 1-7.
The above description is merely an illustration of the example embodiments of the present disclosure and of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved herein is not limited to technical solutions formed by the specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
In addition, although various operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the discussion above, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments individually or in any suitable sub-combination.

Claims (10)

  1. A speech synthesis method, comprising:
    acquiring a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion;
    performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, wherein the target speech has voice characteristics of the target character and emotional characteristics of the target emotion.
  2. The method according to claim 1, wherein performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain the target speech comprises:
    determining, by a pre-trained speech synthesis model, a speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information;
    inputting the speech spectrum sequence into a vocoder to perform speech synthesis on the speech spectrum sequence to obtain the target speech.
  3. The method according to claim 2, wherein determining, by the pre-trained speech synthesis model, the speech spectrum sequence of the text to be synthesized based on the character identification information and the emotion identification information comprises:
    determining a text phoneme sequence of the text to be synthesized;
    inputting the text phoneme sequence, the character identification information, and the emotion identification information into the pre-trained speech synthesis model, and acquiring the speech spectrum sequence output by the speech synthesis model.
  4. The method according to claim 3, wherein the speech synthesis model comprises a text encoder, a high-dimensional mapping module, an emotion token layer, an attention module, and a decoder, wherein an output end of the text encoder and an output end of the emotion token layer are respectively connected to an input end of the attention module, and an output end of the high-dimensional mapping module and an output end of the attention module are respectively connected to an input end of the decoder.
  5. The method according to claim 4, wherein inputting the text phoneme sequence, the character identification information, and the emotion identification information into the pre-trained speech synthesis model, and acquiring the speech spectrum sequence output by the speech synthesis model comprises:
    encoding the text phoneme sequence with the text encoder to obtain a text feature vector of the text to be synthesized;
    performing high-dimensional mapping on the character identification information with the high-dimensional mapping module to obtain a character feature vector of the text to be synthesized;
    determining, with the emotion token layer, an emotion feature vector corresponding to the emotion identification information as an emotion feature vector of the text to be synthesized;
    inputting the text feature vector and the emotion feature vector into the attention module, and inputting an intermediate vector output by the attention module and the character feature vector into the decoder to obtain an audio spectrum sequence of the text to be synthesized.
  6. The method according to any one of claims 1-5, further comprising, after obtaining the target speech:
    playing the target speech.
  7. The method according to any one of claims 1-5, wherein the text to be synthesized is a novel text, the novel text comprises at least one sentence to be synthesized, and acquiring the text to be synthesized, the character identification information of the target character, and the emotion identification information of the target emotion comprises:
    determining each sentence to be synthesized in turn as a current sentence according to an arrangement order of the at least one sentence to be synthesized, and acquiring current character identification information and current emotion identification information of the current sentence to be synthesized;
    wherein performing speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain the target speech comprises:
    performing speech synthesis on the current sentence to be synthesized based on the current character identification information and the current emotion identification information to obtain a target speech of the current sentence.
  8. A speech synthesis apparatus, comprising:
    an acquisition module, configured to acquire a text to be synthesized, character identification information of a target character, and emotion identification information of a target emotion;
    a synthesis module, configured to perform speech synthesis on the text to be synthesized based on the character identification information and the emotion identification information to obtain a target speech, wherein the target speech has voice characteristics of the target character and emotional characteristics of the target emotion.
  9. An electronic device, comprising:
    at least one processor;
    a memory configured to store at least one program,
    wherein, when the at least one program is executed by the at least one processor, the at least one processor implements the speech synthesis method according to any one of claims 1-7.
  10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the speech synthesis method according to any one of claims 1-7.
PCT/CN2022/091348 2021-05-13 2022-05-07 Speech synthesis method and apparatus, electronic device, and storage medium WO2022237665A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110523097.4A CN113257218B (zh) 2021-05-13 2021-05-13 Speech synthesis method and apparatus, electronic device, and storage medium
CN202110523097.4 2021-05-13

Publications (1)

Publication Number Publication Date
WO2022237665A1 true WO2022237665A1 (zh) 2022-11-17

Family

ID=77183290

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091348 WO2022237665A1 (zh) 2021-05-13 2022-05-07 Speech synthesis method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN113257218B (zh)
WO (1) WO2022237665A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113257218B (zh) 2021-05-13 2024-01-30 Speech synthesis method and apparatus, electronic device, and storage medium
CN114937104A (zh) * 2022-06-24 2022-08-23 Virtual object face information generation method and apparatus, and electronic device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160329043A1 (en) * 2014-01-21 2016-11-10 Lg Electronics Inc. Emotional-speech synthesizing device, method of operating the same and mobile terminal including the same
WO2020190054A1 (ko) * 2019-03-19 2020-09-24 Humelo Inc. Speech synthesis device and method therefor
US20200035215A1 (en) * 2019-08-22 2020-01-30 Lg Electronics Inc. Speech synthesis method and apparatus based on emotion information
CN111667811A (zh) * 2020-06-15 2020-09-15 Beijing Baidu Netcom Science And Technology Co., Ltd. Speech synthesis method, apparatus, device, and medium
CN112289299A (zh) * 2020-10-21 2021-01-29 Beijing Dami Technology Co., Ltd. Training method and apparatus for speech synthesis model, storage medium, and electronic device
CN112349273A (zh) * 2020-11-05 2021-02-09 Ctrip Computer Technology (Shanghai) Co., Ltd. Speaker-based speech synthesis method, model training method, and related device
CN113257218A (zh) * 2021-05-13 2021-08-13 Beijing Youzhuju Network Technology Co., Ltd. Speech synthesis method and apparatus, electronic device, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115547296A (zh) * 2022-11-29 2022-12-30 Speech synthesis method and apparatus, electronic device, and storage medium
CN115547296B (zh) * 2022-11-29 2023-03-10 Speech synthesis method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN113257218A (zh) 2021-08-13
CN113257218B (zh) 2024-01-30

Similar Documents

Publication Publication Date Title
CN111583900B (zh) Song synthesis method and apparatus, readable medium, and electronic device
WO2022237665A1 (zh) Speech synthesis method and apparatus, electronic device, and storage medium
WO2022033327A1 (zh) Video generation method, generation model training method, apparatus, medium, and device
CN111899719A (zh) Method, apparatus, device, and medium for generating audio
CN111369967B (zh) Virtual character-based speech synthesis method, apparatus, medium, and device
CN112786006B (zh) Speech synthesis method, synthesis model training method, apparatus, medium, and device
CN111899720B (zh) Method, apparatus, device, and medium for generating audio
WO2022143058A1 (zh) Speech recognition method and apparatus, storage medium, and electronic device
CN111402842B (zh) Method, apparatus, device, and medium for generating audio
CN111798821B (zh) Voice conversion method and apparatus, readable storage medium, and electronic device
CN111368559A (zh) Speech translation method and apparatus, electronic device, and storage medium
CN113205793B (zh) Audio generation method and apparatus, storage medium, and electronic device
CN111369971A (zh) Speech synthesis method and apparatus, storage medium, and electronic device
CN113139391B (zh) Translation model training method, apparatus, device, and storage medium
WO2021259300A1 (zh) Sound effect adding method and apparatus, storage medium, and electronic device
WO2022037388A1 (zh) Speech generation method, apparatus, device, and computer-readable medium
WO2022042418A1 (zh) Music synthesis method, apparatus, device, and computer-readable medium
WO2022156413A1 (zh) Speech style transfer method and apparatus, readable medium, and electronic device
CN111369968B (zh) Speech synthesis method and apparatus, readable medium, and electronic device
CN116863935B (zh) Speech recognition method and apparatus, electronic device, and computer-readable medium
CN112908292A (zh) Text speech synthesis method and apparatus, electronic device, and storage medium
WO2023082931A1 (zh) Method, device, and storage medium for punctuation restoration in speech recognition
CN110379406A (zh) Voice comment conversion method, system, medium, and electronic device
CN114429658A (zh) Method for acquiring face key point information, and method and apparatus for generating face animation
CN112785667A (zh) Video generation method and apparatus, medium, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22806631

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22806631

Country of ref document: EP

Kind code of ref document: A1