CN111462728A - Method, apparatus, electronic device and computer readable medium for generating speech - Google Patents

Method, apparatus, electronic device and computer readable medium for generating speech

Info

Publication number
CN111462728A
Authority
CN
China
Prior art keywords
sample
voice
target
target speaker
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010242995.8A
Other languages
Chinese (zh)
Inventor
汤本来
顾宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010242995.8A priority Critical patent/CN111462728A/en
Publication of CN111462728A publication Critical patent/CN111462728A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L 13/04 Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • G10L 13/047 Architecture of speech synthesisers
    • G10L 15/00 Speech recognition
    • G10L 15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G10L 15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 Training
    • G10L 15/08 Speech classification or search
    • G10L 15/16 Speech classification or search using artificial neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Machine Translation (AREA)

Abstract

Embodiments of the present disclosure disclose methods, apparatuses, electronic devices and computer-readable media for generating speech. One embodiment of the method comprises: acquiring user speech and target speaker speech; extracting a text feature vector from the user speech; obtaining target speaker information from the target speaker speech; and generating speech in the target language based on the target speaker information and the text feature vector. This embodiment enables customized speech generation in the voice of any target speaker and improves the user experience.

Description

Method, apparatus, electronic device and computer readable medium for generating speech
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method, an apparatus, an electronic device, and a computer-readable medium for generating speech.
Background
Research on speech generation technology has long been an important part of speech and language research, and early results were reported both domestically and abroad. For reasons such as computational complexity, memory capacity and real-time performance, however, most of that early work remained at the laboratory stage. Even so, speech generation techniques have a wide range of applications.
In the related art, the generated voices are often of a single fixed type, and speech cannot be generated in the voice of an arbitrary speaker.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a method, apparatus, electronic device and computer readable medium for generating speech to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method for generating speech, the method comprising: acquiring user speech and target speaker speech; extracting a text feature vector from the user speech; obtaining target speaker information according to the target speaker speech; and generating speech in the target language based on the target speaker information and the text feature vector.
In a second aspect, some embodiments of the present disclosure provide an apparatus for generating speech, the apparatus comprising: an acquisition unit configured to acquire a user voice and a target speaker voice; an extraction unit configured to extract a text feature vector in the user speech; a first generating unit configured to obtain target speaker information according to the target speaker voice; and the second generating unit is configured to generate the voice of the target language based on the target speaker information and the text feature vector.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
One of the above-described various embodiments of the present disclosure has the following advantageous effects: user speech and target speaker speech are first acquired; a text feature vector is then extracted from the user speech, and target speaker information is obtained from the target speaker speech; finally, speech in the target language is generated based on the target speaker information and the text feature vector. In this way, the user speech and the target speaker speech are used to generate speech in the target language, customized speech generation in the voice of any target speaker is realized, and the user experience is improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of one application scenario of a method for generating speech according to some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of a method for generating speech according to the present disclosure;
FIG. 3 is a schematic block diagram of some embodiments of a speech generating apparatus according to the present disclosure;
FIG. 4 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 is a schematic diagram of one application scenario of a method for generating speech according to some embodiments of the present disclosure.
As shown in FIG. 1, first, the server 101 can obtain a user speech 102 and a target speaker speech 103. The server 101 may then extract the text feature vectors 104 from the user speech 102, and obtain the targeted speaker information 105 from the targeted speaker speech 103. The server 101 may then generate speech 106 in the target language using the text feature vectors 104 and the target speaker information 105.
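To make the data flow in this scenario concrete, the following is a minimal Python sketch of the pipeline described above. The function names and the placeholder return values are illustrative assumptions only; they stand in for the extraction model, the generative model, and the conversion model plus vocoder that are detailed in the description of fig. 2 below.

```python
import numpy as np

# Illustrative stand-ins for the three components of the pipeline in FIG. 1.
# All names and return values are placeholders, not part of the disclosed embodiments.
def extract_text_features(user_speech: np.ndarray) -> list:
    return ["ni", "hao"]                 # placeholder pinyin/phoneme sequence (step 202)

def encode_speaker(target_speaker_speech: np.ndarray) -> np.ndarray:
    return np.zeros(256)                 # placeholder target speaker information (step 203)

def synthesize(speaker_info: np.ndarray, text_features: list) -> np.ndarray:
    return np.zeros(16000)               # placeholder 1 s waveform at 16 kHz (step 204)

def generate_speech(user_speech: np.ndarray, target_speaker_speech: np.ndarray) -> np.ndarray:
    text_features = extract_text_features(user_speech)       # what was said
    speaker_info = encode_speaker(target_speaker_speech)     # who should say it
    return synthesize(speaker_info, text_features)           # speech in the target language
```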
It is understood that the method for generating speech may be executed by the server 101 or by a terminal device; the execution body may also be a device formed by integrating the server 101 and a terminal device through a network, or the method may be executed by various software programs. The terminal device may be any of various electronic devices with information processing capability, including but not limited to smart phones, tablet computers, e-book readers, laptop computers, desktop computers, and the like. When the execution body is software, it may be installed in the electronic devices listed above; it may be implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No specific limitation is imposed here.
It should be understood that the number of servers in fig. 1 is merely illustrative. There may be any number of servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of a method for generating speech according to the present disclosure is shown. The method for generating speech comprises the following steps:
Step 201, acquiring the user speech and the target speaker speech.
In some embodiments, the execution body of the method for generating speech (e.g., the server shown in fig. 1) may obtain the user speech and the target speaker speech, through a wired or wireless connection, from a terminal with which the user browses the web. Here, the target speaker generally refers to the speaker in whose voice the user wants the speech to be generated. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (Ultra Wideband) connection, and other wireless connections now known or developed in the future.
Step 202, extracting the text feature vector from the user speech.
In some embodiments, based on the user speech obtained in step 201, the execution body (e.g., the server shown in fig. 1) may extract the text feature vector from the user speech in various ways. For example, the execution body may store in advance a number of correspondences between user speech and the corresponding text feature vectors; when extracting a text feature vector, it determines the same or similar speech among the pre-stored user speech and takes the corresponding text feature vector. Here, the text feature vector generally refers to a pinyin sequence or a phoneme sequence corresponding to the content of the user speech. As an example, when the text corresponding to the user speech is "你好", the text feature vector may be the pinyin sequence "nihao"; when the text corresponding to the user speech is "hello", the text feature vector may be the phoneme sequence of "hello".
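As an illustration of the pinyin-sequence form of a text feature vector, the snippet below uses the third-party pypinyin library to produce such a sequence from recognized text. The choice of library is an assumption for illustration; the disclosure does not prescribe any particular tool.

```python
from pypinyin import lazy_pinyin  # assumed third-party library, used only for illustration

recognized_text = "你好"                       # text content of the user speech
text_feature_vector = lazy_pinyin(recognized_text)
print(text_feature_vector)                     # ['ni', 'hao'], i.e. the pinyin sequence "nihao"
```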
In some optional implementations of some embodiments, the execution subject may extract an acoustic feature in the user speech. Here, the extracting of the acoustic feature may be extracting the acoustic feature of the user speech by an autocorrelation function method, a cepstrum method, or the like. Here, the acoustic features generally refer to features including the content of speech spoken by a speaker, pitch, intensity, duration, timbre, and the like.
Then, the acoustic features are analyzed through an extraction model to obtain the text feature vector. The extraction model has been trained with a first training sample set, and the first training sample set comprises sample acoustic features and sample text feature vectors.
Here, the above extraction model is generally used to characterize the correspondence between the acoustic features and the text feature vectors. As an example, the above extraction model may be a correspondence table including acoustic features and text feature vectors. The correspondence table may be a correspondence table that is prepared in advance by a technician based on statistics of a large number of sample acoustic features and sample text feature vectors and stores correspondence between a plurality of sample acoustic features and sample text feature vectors.
And then, sequentially comparing the acoustic features with a plurality of sample acoustic features in the corresponding relation table, and if a certain sample acoustic feature in the corresponding relation table is the same as or similar to the acoustic features, taking a sample text feature vector corresponding to the sample acoustic feature in the corresponding relation table as a text feature vector.
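A minimal sketch of this correspondence-table lookup follows, assuming the acoustic features are fixed-length numeric vectors compared by Euclidean distance against a similarity threshold; both the distance measure and the threshold are assumptions, since the disclosure only requires the features to be the same or similar.

```python
import numpy as np

def lookup_text_features(acoustic_features, table, threshold=1.0):
    """table: list of (sample_acoustic_features, sample_text_feature_vector) pairs."""
    best_vector, best_dist = None, float("inf")
    for sample_features, sample_text_vector in table:
        dist = np.linalg.norm(acoustic_features - sample_features)
        if dist < best_dist:
            best_vector, best_dist = sample_text_vector, dist
    # Accept the closest entry only if it is the same as or similar to the query.
    return best_vector if best_dist <= threshold else None
```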
In some optional implementations of some embodiments, the extraction model is trained with the sample acoustic features as input and the sample text feature vectors as desired output.
As an example, the extraction model may be obtained by performing the following training steps based on a set of training samples. Performing the following training steps based on the set of training samples: inputting the acoustic characteristics of the samples in the training samples into an initial machine learning model to obtain text characteristic vectors; comparing the text feature vectors with corresponding sample text feature vectors; determining the prediction accuracy of the initial machine learning model according to the comparison result; determining whether the prediction accuracy is greater than a preset accuracy threshold; in response to determining that the accuracy is greater than the preset accuracy threshold, taking the initial machine learning model as a trained extraction model; adjusting parameters of the initial machine learning model in response to determining that the accuracy is not greater than the preset accuracy threshold.
It is understood that after the above training, the extraction model can be used to characterize the correspondence between the acoustic features and the text feature vectors. The above-mentioned extraction model may be a Deep Neural Network (DNN).
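A minimal PyTorch sketch of such a DNN extraction model and of the training steps described above follows. The feature dimension, number of phoneme classes, network size, optimizer and accuracy threshold are all assumed values for illustration.

```python
import torch
import torch.nn as nn

# Assumed dimensions: 80-dimensional acoustic frames, 60 phoneme classes.
extraction_model = nn.Sequential(
    nn.Linear(80, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 60),
)
optimizer = torch.optim.Adam(extraction_model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
accuracy_threshold = 0.95  # the preset accuracy threshold from the training steps above

def train_step(sample_acoustic_features, sample_phoneme_ids):
    """One pass of the training steps: predict, compare, and adjust parameters.

    sample_acoustic_features: (batch, 80) float tensor
    sample_phoneme_ids:       (batch,) long tensor of sample text feature labels
    Returns True when the prediction accuracy reaches the preset threshold.
    """
    logits = extraction_model(sample_acoustic_features)
    loss = loss_fn(logits, sample_phoneme_ids)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # adjust the parameters of the initial model
    accuracy = (logits.argmax(dim=-1) == sample_phoneme_ids).float().mean().item()
    return accuracy >= accuracy_threshold
```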
Step 203, obtaining the target speaker information according to the target speaker speech.
In some embodiments, the execution subject may generate the target speaker information by performing acoustic feature extraction on the voice of the target speaker, or the like. Here, the above-mentioned target speaker information is generally used to characterize acoustic feature information of the speaker of the piece of speech. The target speaker information may be, for example, timbre information of the target speaker or acoustic feature information such as pitch information of the target speaker.
In some optional implementations of some embodiments, the executing entity may analyze the target speaker voice through a generative model to obtain the target speaker information, where the generative model has been trained through a second training sample set, and the second training sample set includes the sample target speaker voice and the sample target speaker information.
Here, the generated model is generally used to represent the correspondence between the voice of the target speaker and the information of the target speaker. As an example, the generative model may be a correspondence table including the target speaker's speech and the target speaker's information. The correspondence table may be a correspondence table that is pre-formulated by a technician based on statistics of a large number of sample target speaker voices and sample target speaker information, and stores correspondence of a plurality of sample target speaker voices and sample target speaker information.
And then, comparing the target speaker voice with a plurality of sample target speaker voices in the corresponding relation table in sequence, and taking the sample target speaker information corresponding to the sample target speaker voice in the corresponding relation table as the target speaker information if the voice of one sample target speaker in the corresponding relation table is the same as or similar to the voice of the target speaker.
In some alternative implementations of some embodiments, the generative model is trained with the sample targeted speaker's speech as a desired input and the sample targeted speaker's information as a desired output.
As an example, the generative model may result from performing the following training steps based on a set of training samples. Performing the following training steps based on the set of training samples: inputting the voice of a sample target speaker in the training sample into an initial machine learning model to obtain the information of the target speaker; comparing the target speaker information with corresponding sample target speaker information; determining the prediction accuracy of the initial machine learning model according to the comparison result; determining whether the prediction accuracy is greater than a preset accuracy threshold; in response to determining that the accuracy is greater than the preset accuracy threshold, taking the initial machine learning model as a trained generation model; adjusting parameters of the initial machine learning model in response to determining that the accuracy is not greater than the preset accuracy threshold.
It is to be appreciated that after the above training, the generative model can be used to characterize the correspondence between the targeted speaker's speech and the targeted speaker's information. The generative model mentioned above may be a Deep Neural Network (DNN).
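A minimal sketch of one possible form of the generative model follows, under the assumption that the target speaker information is a fixed-length embedding obtained by averaging a recurrent encoding of the target speaker's acoustic frames over time; the disclosure describes the information more generally as acoustic feature information such as timbre or pitch.

```python
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    """Maps target speaker speech (acoustic frames) to target speaker information."""
    def __init__(self, feat_dim: int = 80, embed_dim: int = 256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, embed_dim, batch_first=True)

    def forward(self, speaker_frames: torch.Tensor) -> torch.Tensor:
        # speaker_frames: (batch, time, feat_dim)
        outputs, _ = self.rnn(speaker_frames)
        return outputs.mean(dim=1)  # time-averaged speaker embedding

encoder = SpeakerEncoder()
target_speaker_info = encoder(torch.randn(1, 200, 80))  # e.g. 200 frames of speech
```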
Step 204, generating speech in the target language based on the target speaker information and the text feature vector.
In some embodiments, the execution body may generate speech in the target language based on the target speaker information and the text feature vector. For example, the execution body may store a plurality of target speaker voices in advance and synthesize the speech in the target language by cutting and splicing those voices according to the text feature vector. Here, the target language generally refers to the language of the speech that the user needs to generate. As an example, the target language is typically the same as the language of the user speech.
In some optional implementations of some embodiments, the executing subject may analyze the target speaker information and the text feature vector through a conversion model to obtain the target acoustic feature. The conversion model is trained through a third sample training set, and the third training sample set comprises sample target speaker information, sample text feature vectors and sample target acoustic features.
And then, converting the target acoustic features into the voice of the target language. As an example, the execution body may convert the target acoustic feature into a voice of a target language using a vocoder. Here, the vocoder (vocoder) generally refers to a speech analysis and synthesis system of a speech signal.
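As one concrete, assumed realization of the vocoder step just mentioned, the sketch below inverts a mel spectrogram with Griffin-Lim via librosa; a neural vocoder could be substituted without changing the surrounding steps.

```python
import numpy as np
import librosa

def vocode(target_mel: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Convert target acoustic features (a mel spectrogram, n_mels x frames) to a waveform."""
    # Griffin-Lim based inversion; the sample rate is an assumed value.
    return librosa.feature.inverse.mel_to_audio(target_mel, sr=sr)
```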
Here, the above-mentioned conversion model is generally used to characterize the correspondence between the "target speaker information and text feature vector" and the target acoustic feature. As an example, the above conversion model may be a correspondence table including "target speaker information and text feature vector" and target acoustic features. The correspondence table may be a correspondence table that is prepared in advance by a technician based on statistics of a large amount of "sample target speaker information and sample text feature vectors" and sample target acoustic features, and stores a plurality of correspondence relationships between "sample target speaker information and sample text feature vectors" and sample target acoustic features.
And then, sequentially comparing the target speaker information and text characteristic vector with a plurality of sample target speaker information and sample text characteristic vectors in a corresponding relation table, and if a certain sample target speaker information and sample text characteristic vector in the corresponding relation table is the same as or similar to the target speaker information and text characteristic vector, taking the sample target acoustic characteristic corresponding to the sample target speaker information and sample text characteristic vector in the corresponding relation table as the target acoustic characteristic.
In some alternative implementations of some embodiments, the conversion model is trained with the sample target speaker information and sample text feature vectors as expected inputs and the sample target acoustic features as expected outputs.
As an example, the transformation model may be obtained by performing the following training steps based on a set of training samples. Performing the following training steps based on the set of training samples: inputting 'sample target speaker information and sample text feature vectors' in a training sample into an initial machine learning model to obtain target acoustic features; comparing the target acoustic features with corresponding sample target acoustic features; determining the prediction accuracy of the initial machine learning model according to the comparison result; determining whether the prediction accuracy is greater than a preset accuracy threshold; in response to determining that the accuracy is greater than the preset accuracy threshold, taking the initial machine learning model as a trained conversion model; adjusting parameters of the initial machine learning model in response to determining that the accuracy is not greater than the preset accuracy threshold.
It is understood that after the above training, the conversion model can be used to characterize the correspondence between the "target speaker information and text feature vector" and the target acoustic features. The above-mentioned conversion model may be a Deep Neural Network (DNN).
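Finally, a minimal PyTorch sketch of a conversion model that maps target speaker information and a text feature vector to target acoustic features (here, mel-spectrogram frames). The dimensions, the one-output-frame-per-input-symbol simplification and the layer choices are assumptions made to stay consistent with the earlier sketches.

```python
import torch
import torch.nn as nn

class ConversionModel(nn.Module):
    """Target speaker information + text feature vector -> target acoustic features."""
    def __init__(self, n_phonemes: int = 60, speaker_dim: int = 256, n_mels: int = 80):
        super().__init__()
        self.phoneme_embedding = nn.Embedding(n_phonemes, 128)
        self.decoder = nn.GRU(128 + speaker_dim, 256, batch_first=True)
        self.to_mel = nn.Linear(256, n_mels)

    def forward(self, phoneme_ids: torch.Tensor, speaker_info: torch.Tensor) -> torch.Tensor:
        # phoneme_ids: (batch, time) symbol indices; speaker_info: (batch, speaker_dim)
        phones = self.phoneme_embedding(phoneme_ids)
        speaker = speaker_info.unsqueeze(1).expand(-1, phones.size(1), -1)
        hidden, _ = self.decoder(torch.cat([phones, speaker], dim=-1))
        return self.to_mel(hidden)  # one mel frame per input symbol (a simplification)

model = ConversionModel()
mel = model(torch.randint(0, 60, (1, 20)), torch.randn(1, 256))  # shape (1, 20, 80)
```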
One of the above-described various embodiments of the present disclosure has the following advantageous effects: user speech and target speaker speech are first acquired; a text feature vector is then extracted from the user speech, and target speaker information is obtained from the target speaker speech; finally, speech in the target language is generated based on the target speaker information and the text feature vector. In this way, the user speech and the target speaker speech are used to generate speech in the target language, customized speech generation in the voice of any target speaker is realized, and the user experience is improved.
With further reference to fig. 3, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a device for generating speech, which correspond to those of the method embodiments shown in fig. 2, and which may be applied in particular in various electronic devices.
As shown in fig. 3, the speech generating apparatus 300 of some embodiments includes: an acquisition unit 301, an extraction unit 302, a first generation unit 303, and a second generation unit 304. The obtaining unit 301 is configured to obtain a user voice and a target speaker voice; the extracting unit 302 is configured to extract a text feature vector in the user speech; the first generating unit 303 is configured to obtain the information of the target speaker according to the voice of the target speaker; and the second generating unit 304 is configured to generate speech in the target language based on the target speaker information and the text feature vectors.
In some optional implementations of some embodiments, the extracting unit 302 is further configured to: extracting acoustic features in the user voice; and analyzing the acoustic features through an extraction model to obtain a text feature vector, wherein the extraction model is trained through a first sample training sample set, and the first training sample set comprises sample acoustic features and sample text feature vectors.
In some optional implementations of some embodiments, the extraction model is trained with the sample acoustic features as input and the sample text feature vectors as desired output.
In some optional implementations of some embodiments, the first generating unit 303 is further configured to: and analyzing the voice of the target speaker by the generating model to obtain the information of the target speaker, wherein the generating model is trained by a second sample training set, and the second training sample set comprises the voice of the sample target speaker and the information of the sample target speaker.
In some alternative implementations of some embodiments, the generative model is trained with the sample targeted speaker's speech as a desired input and the sample targeted speaker's information as a desired output.
In some optional implementations of some embodiments, the second generating unit 304 is further configured to: analyzing the target speaker information and the text feature vector through a conversion model to obtain target acoustic features, wherein the conversion model is trained through a third sample training set, and the third training sample set comprises sample target speaker information, a sample text feature vector and sample target acoustic features; and converting the target acoustic features into the voice of the target language.
In some alternative implementations of some embodiments, the conversion model is trained with the sample target speaker information and sample text feature vectors as desired inputs and the sample target acoustic features as desired outputs.
It will be understood that the units described in the apparatus 300 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 300 and the units included therein, and are not described herein again.
One of the above-described various embodiments of the present disclosure has the following advantageous effects: user speech and target speaker speech are first acquired; a text feature vector is then extracted from the user speech, and target speaker information is obtained from the target speaker speech; finally, speech in the target language is generated based on the target speaker information and the text feature vector. In this way, the user speech and the target speaker speech are used to generate speech in the target language, customized speech generation in the voice of any target speaker is realized, and the user experience is improved.
Referring now to fig. 4, a schematic diagram of an electronic device (e.g., the server of fig. 1) 400 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, electronic device 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic apparatus 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; and communication devices 409. The communication devices 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 4 illustrates the electronic device 400 with various means, it is to be understood that not all of the illustrated means are required to be implemented or provided.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 409, or from the storage device 408, or from the ROM 402. The computer program, when executed by the processing apparatus 401, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet) and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring the voice of a user and the voice of a target speaker; extracting a text feature vector in the user voice; obtaining the information of the target speaker according to the voice of the target speaker; and generating the voice of the target language based on the target speaker information and the text characteristic vector.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, an extraction unit, a first generation unit, and a second generation unit. Where the names of these units do not in some cases constitute a limitation on the unit itself, for example, the capturing unit may also be described as a "unit that captures the user's speech and the target speaker's speech".
For example, without limitation, exemplary types of hardware logic that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
In accordance with one or more embodiments of the present disclosure, there is provided a method for generating speech, including: acquiring the voice of a user and the voice of a target speaker; extracting a text feature vector in the user voice; obtaining the information of the target speaker according to the voice of the target speaker; and generating the voice of the target language based on the target speaker information and the text characteristic vector.
According to one or more embodiments of the present disclosure, the extracting text feature vectors in the user speech includes: extracting acoustic features in the user voice; and analyzing the acoustic features through an extraction model to obtain a text feature vector, wherein the extraction model is trained through a first sample training sample set, and the first training sample set comprises sample acoustic features and sample text feature vectors.
According to one or more embodiments of the present disclosure, the extraction model is trained with the sample acoustic features as input and the sample text feature vectors as desired output.
According to one or more embodiments of the present disclosure, the obtaining target speaker information according to the target speaker voice includes: and analyzing the voice of the target speaker by the generating model to obtain the information of the target speaker, wherein the generating model is trained by a second sample training set, and the second training sample set comprises the voice of the sample target speaker and the information of the sample target speaker.
In accordance with one or more embodiments of the present disclosure, the generative model is trained with the sample targeted speaker's speech as a desired input and the sample targeted speaker's information as a desired output.
According to one or more embodiments of the present disclosure, the generating speech of the target language based on the target speaker information and the text feature vector includes: analyzing the target speaker information and the text feature vector through a conversion model to obtain target acoustic features, wherein the conversion model is trained through a third sample training set, and the third training sample set comprises sample target speaker information, a sample text feature vector and sample target acoustic features; and converting the target acoustic features into the voice of the target language.
According to one or more embodiments of the present disclosure, the conversion model is trained with the sample target speaker information and sample text feature vectors as expected inputs and the sample target acoustic features as expected outputs.
In accordance with one or more embodiments of the present disclosure, there is provided an apparatus for generating speech, including: an acquisition unit configured to acquire a user voice and a target speaker voice; an extraction unit configured to extract a text feature vector in the user speech; a first generating unit configured to obtain target speaker information according to the target speaker voice; and the second generating unit is configured to generate the voice of the target language based on the target speaker information and the text feature vector.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage device, having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the above embodiments.
According to one or more embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the method as described in any of the above embodiments.
The foregoing description is only of preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combinations of the above-mentioned features, and also covers other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A method for generating speech, comprising:
acquiring the voice of a user and the voice of a target speaker;
extracting a text feature vector in the user voice;
obtaining the information of the target speaker according to the voice of the target speaker;
and generating the voice of the target language based on the target speaker information and the text characteristic vector.
2. The method of claim 1, wherein the extracting text feature vectors in user speech comprises:
extracting acoustic features in the user voice;
and analyzing the acoustic features through an extraction model to obtain a text feature vector, wherein the extraction model has been trained with a first training sample set, and the first training sample set comprises sample acoustic features and sample text feature vectors.
3. The method of claim 2, wherein the extraction model is trained with the sample acoustic features as an input and the sample text feature vectors as a desired output.
4. The method of claim 1, wherein the obtaining targeted speaker information based on the targeted speaker speech comprises:
and analyzing the voice of the target speaker by the generating model to obtain the information of the target speaker, wherein the generating model is trained by a second sample training set, and the second training sample set comprises the voice of the sample target speaker and the information of the sample target speaker.
5. The method of claim 4, wherein the generative model is trained with the sample targeted speaker's speech as an expected input and the sample targeted speaker's information as an expected output.
6. The method of claim 1, wherein generating speech in a target language based on the target speaker information and text feature vectors comprises:
analyzing the target speaker information and the text feature vector through a conversion model to obtain target acoustic features, wherein the conversion model is trained through a third sample training set, and the third training sample set comprises sample target speaker information, a sample text feature vector and sample target acoustic features;
converting the target acoustic features into speech in a target language.
7. The method of claim 6, wherein the conversion model is trained with the sample target speaker information and sample text feature vectors as desired inputs and the sample target acoustic features as desired outputs.
8. An apparatus for generating speech, comprising:
an acquisition unit configured to acquire a user voice and a target speaker voice;
an extraction unit configured to extract a text feature vector in the user speech;
a first generating unit configured to obtain target speaker information according to the target speaker voice;
a second generating unit configured to generate speech of a target language based on the target speaker information and the text feature vector.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN202010242995.8A 2020-03-31 2020-03-31 Method, apparatus, electronic device and computer readable medium for generating speech Pending CN111462728A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010242995.8A CN111462728A (en) 2020-03-31 2020-03-31 Method, apparatus, electronic device and computer readable medium for generating speech

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010242995.8A CN111462728A (en) 2020-03-31 2020-03-31 Method, apparatus, electronic device and computer readable medium for generating speech

Publications (1)

Publication Number Publication Date
CN111462728A true CN111462728A (en) 2020-07-28

Family

ID=71680924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010242995.8A Pending CN111462728A (en) 2020-03-31 2020-03-31 Method, apparatus, electronic device and computer readable medium for generating speech

Country Status (1)

Country Link
CN (1) CN111462728A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123932A (en) * 2014-07-29 2014-10-29 科大讯飞股份有限公司 Voice conversion system and method
CN107705783A (en) * 2017-11-27 2018-02-16 北京搜狗科技发展有限公司 A kind of phoneme synthesizing method and device
CN109147758A (en) * 2018-09-12 2019-01-04 科大讯飞股份有限公司 A kind of speaker's sound converting method and device
CN109308892A (en) * 2018-10-25 2019-02-05 百度在线网络技术(北京)有限公司 Voice synthesized broadcast method, apparatus, equipment and computer-readable medium
CN110767210A (en) * 2019-10-30 2020-02-07 四川长虹电器股份有限公司 Method and device for generating personalized voice
CN110808034A (en) * 2019-10-31 2020-02-18 北京大米科技有限公司 Voice conversion method, device, storage medium and electronic equipment

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462727A (en) * 2020-03-31 2020-07-28 北京字节跳动网络技术有限公司 Method, apparatus, electronic device and computer readable medium for generating speech
CN112017685A (en) * 2020-08-27 2020-12-01 北京字节跳动网络技术有限公司 Voice generation method, device, equipment and computer readable medium
CN112017685B (en) * 2020-08-27 2023-12-22 抖音视界有限公司 Speech generation method, device, equipment and computer readable medium
CN112349273A (en) * 2020-11-05 2021-02-09 携程计算机技术(上海)有限公司 Speech synthesis method based on speaker, model training method and related equipment
CN112349273B (en) * 2020-11-05 2024-05-31 携程计算机技术(上海)有限公司 Speech synthesis method based on speaker, model training method and related equipment
CN112382270A (en) * 2020-11-13 2021-02-19 北京有竹居网络技术有限公司 Speech synthesis method, apparatus, device and storage medium
CN112382271A (en) * 2020-11-30 2021-02-19 北京百度网讯科技有限公司 Voice processing method, device, electronic equipment and storage medium
CN112382271B (en) * 2020-11-30 2024-03-26 北京百度网讯科技有限公司 Voice processing method, device, electronic equipment and storage medium
CN113314101A (en) * 2021-04-30 2021-08-27 北京达佳互联信息技术有限公司 Voice processing method and device, electronic equipment and storage medium
CN113314101B (en) * 2021-04-30 2024-05-14 北京达佳互联信息技术有限公司 Voice processing method and device, electronic equipment and storage medium
CN113409767A (en) * 2021-05-14 2021-09-17 北京达佳互联信息技术有限公司 Voice processing method and device, electronic equipment and storage medium
CN113450759A (en) * 2021-06-22 2021-09-28 北京百度网讯科技有限公司 Voice generation method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200728)