CN112381926A - Method and apparatus for generating video - Google Patents

Method and apparatus for generating video

Info

Publication number
CN112381926A
Authority
CN
China
Prior art keywords
target person
text
face
key points
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011270760.6A
Other languages
Chinese (zh)
Inventor
汤本来
姚佳立
毕成
殷翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202011270760.6A priority Critical patent/CN112381926A/en
Publication of CN112381926A publication Critical patent/CN112381926A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present disclosure disclose a method and an apparatus for generating video. One embodiment of the method comprises: acquiring text features extracted from a text; determining features of a target person according to the text features, wherein the features of the target person comprise face key points of the target person for the text; and generating a video of the target person according to the face key points. This embodiment enables convenient conversion from a given text to a video of the target person.

Description

Method and apparatus for generating video
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method and an apparatus for generating a video.
Background
With the increasing maturity of artificial intelligence technologies such as speech recognition, natural language processing and computer vision, and their gradual application to many practical scenes, how to achieve broader application of artificial intelligence technologies and how to develop them further are among the problems currently being considered and explored by many researchers.
Against this background, multimodal intelligence is becoming one of the key research directions in the field of artificial intelligence. For example, many scholars are studying synthesis techniques and applications of multimodal virtual humans to achieve more natural and convenient human-computer interaction.
In recent years, with the rapid development of industries such as online education, online learning and network live broadcast, scenes such as virtual-human teaching, virtual-human live broadcast and virtual-human explanation have appeared. However, the application of virtual human technology in these scenes is still immature, and there is much room for improvement. Nevertheless, it is conceivable that virtual human technology has broad application space and prospects in these industries.
Disclosure of Invention
Embodiments of the present disclosure propose methods and apparatuses for generating video.
In a first aspect, an embodiment of the present disclosure provides a method for generating a video, the method including: acquiring text features extracted from a text; determining features of a target person according to the text features, wherein the features of the target person comprise face key points of the target person for the text; and generating a video of the target person according to the face key points.
In a second aspect, an embodiment of the present disclosure provides an apparatus for generating a video, the apparatus including: an acquisition unit configured to acquire text features extracted from a text; a determining unit configured to determine features of a target person according to the text features, wherein the features of the target person comprise face key points of the target person for the text; and a generating unit configured to generate a video of the target person according to the face key points.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; storage means for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which computer program, when executed by a processor, implements the method as described in any of the implementations of the first aspect.
The method and the apparatus for generating video provided by the embodiments of the present disclosure use the text features of a text to generate features of a target person, such as the face key points of the target person when reading the text, and then use those features to produce a video of the target person. This realizes convenient conversion from a given text to a video of the target person, and can be applied to scenes in which videos of the target person reading given texts are to be generated. Compared with existing video production methods based on three-dimensional face modeling, the method reduces computational complexity and time cost.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for generating video in accordance with the present disclosure;
FIG. 3 is a flow diagram of yet another embodiment of a method for generating video in accordance with the present disclosure;
FIG. 4 is a schematic diagram of one application scenario of a method for generating video in accordance with an embodiment of the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for generating video in accordance with the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the disclosure and are not limiting of the disclosure. It should be further noted that, for the convenience of description, only the portions relevant to the present disclosure are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows an exemplary architecture 100 to which embodiments of the method for generating video or the apparatus for generating video of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 101, 102, 103 interact with a server 105 via a network 104 to receive or send messages or the like. Various client applications may be installed on the terminal devices 101, 102, 103. For example, browser-like applications, search-like applications, social platform software, instant messaging tools, educational-like applications, live-broadcast-like applications, information-flow-like applications, and the like.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices including, but not limited to, smart phones, tablet computers, e-book readers, laptop computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. This is not particularly limited here.
The server 105 may be a server providing various services, such as a server providing back-end support for client applications installed on the terminal devices 101, 102, 103. The server 105 may acquire texts from the terminal devices 101, 102, 103 and extract text features, then generate face key points of the target person for the texts by using the text features, and then generate videos of the target person according to the face key points. Further, the server 105 may also feed the generated video of the target person to the terminal devices 101, 102, 103 for presentation.
Note that the text may be directly stored locally in the server 105, and the server 105 may directly extract the locally stored text and extract text features for processing, in which case, the terminal apparatuses 101, 102, and 103 and the network 104 may not be present.
It should be noted that the method for generating video provided by the embodiment of the present disclosure is generally performed by the server 105, and accordingly, the apparatus for generating video is generally disposed in the server 105.
It should be further noted that the terminal devices 101, 102, 103 may also generate face key points of the target person for the text by using the text features, and then generate a video of the target person according to the face key points. At this time, the method for generating the video may be executed by the terminal apparatuses 101, 102, 103, and accordingly, the apparatus for generating the video may be provided in the terminal apparatuses 101, 102, 103. At this point, the exemplary system architecture 100 may not have the server 105 and the network 104.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for generating video in accordance with the present disclosure is shown. The method for generating video comprises the following steps:
step 201, obtaining text features extracted from a text.
In the present embodiment, the text may be text of arbitrary content. The text may be pre-specified by a technician according to actual application requirements, or may be determined according to actual application scenarios. For example, the text may be text entered at a terminal device by a user of the terminal device (e.g., terminal devices 101, 102, 103, etc. shown in fig. 1).
The characteristics specifically indicated by the text characteristics of the text can be set by a skilled person according to actual application requirements and application scenarios. For example, the text features of the text may include prosodic features, phonetic features, tonal features, pitch features, and the like.
For any text, its text features can be extracted using various existing feature extraction methods (such as a Fourier-transform-based signal processing method, a neural-network-based feature extraction method, and the like).
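As an illustrative, non-limiting sketch of the neural-network-based option, the following Python snippet (all class names, dimensions and the character-level tokenization are assumptions, not part of the disclosure) encodes a text into a sequence of feature vectors:

```python
# A minimal sketch of neural text-feature extraction: characters are mapped to
# IDs, embedded, and encoded into a sequence of feature vectors. Names and
# dimensions are assumptions, not the disclosed implementation.
import torch
import torch.nn as nn

class TextFeatureExtractor(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=64, feat_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # A bidirectional GRU stands in for any sequence encoder; prosody,
        # phoneme or pitch encoders could be used alongside it.
        self.encoder = nn.GRU(embed_dim, feat_dim // 2,
                              batch_first=True, bidirectional=True)

    def forward(self, char_ids):                 # (batch, seq_len)
        x = self.embed(char_ids)                 # (batch, seq_len, embed_dim)
        features, _ = self.encoder(x)            # (batch, seq_len, feat_dim)
        return features

text = "hello world"
char_ids = torch.tensor([[min(ord(c), 127) for c in text]])
text_features = TextFeatureExtractor()(char_ids)  # shape (1, 11, 256)
```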
In this embodiment, the executing agent (e.g., server 105 shown in FIG. 1) of the method for generating video may retrieve text features extracted from the text from a local or other storage device.
It should be noted that the process of extracting text features from the text may be performed by the executing agent itself, which then stores the text features locally; in this case, the executing agent may obtain the text features directly from local storage. The extraction may also be performed by other electronic devices (such as the terminal devices 101, 102, 103 shown in fig. 1 or other servers), which store the results; in this case, the executing agent may obtain the text features from those electronic devices.
Step 202, determining features of the target person according to the text features, wherein the features of the target person comprise face key points of the target person for the text.
In this embodiment, the target person may be a person specified in advance by a technician or a user according to actual application requirements. The face keypoints of the target person for the text may indicate the keypoints of the face of the target person when reading the text.
Since a person's face changes differently when reading different texts, the face key points of the target person may differ for different texts. The face key points can be represented by the position coordinates of the key points on the face.
After the text features of the text are obtained, various methods may be employed to determine the features of the target person based on the text features. For example, videos of the target person reading a large amount of text may be recorded in advance; then, existing key point extraction methods are used to extract the face key points of the target person from the recorded videos, while the text features of each text are extracted. A mapping relation between the text features and the face key points of the target person can then be fitted, for example by curve fitting, so that the face key points corresponding to the text features of any text can be determined.
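A minimal sketch of such a fitted mapping, using ordinary ridge regression on synthetic data as a stand-in for the curve fitting described above (the shapes and the choice of regressor are assumptions):

```python
# A minimal sketch of fitting a mapping from text features to face key points
# by regression, one simple form of the "curve fitting" mentioned above.
# The data here is synthetic; shapes are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

num_samples, feat_dim, num_keypoints = 200, 256, 68

text_feats = np.random.randn(num_samples, feat_dim)          # per-text features
keypoints = np.random.rand(num_samples, num_keypoints * 2)   # (x, y) per key point

mapper = Ridge(alpha=1.0).fit(text_feats, keypoints)

# For a new text, predict its key points and reshape to (68, 2) coordinates.
new_feat = np.random.randn(1, feat_dim)
predicted = mapper.predict(new_feat).reshape(num_keypoints, 2)
```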
Step 203, generating a video of the target person according to the face key points.
In this embodiment, the video of the target person may refer to a video in which the face of the target person is presented. After the face key points of the target person are obtained, the video of the target person can be generated by utilizing various existing image processing methods and video processing methods in combination with actual application requirements.
For example, an image of the target person may be obtained from a local or connected other device, then the key points of the face of the target person displayed in the image are determined, and then the key points are adjusted to the determined key points of the face, so as to obtain an adjusted image of the target person. Then, the adjusted image of the target person can be used to create a video of the target person.
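A minimal sketch of this key-point adjustment, using a piecewise affine warp on synthetic data (the image, key points and library choice are assumptions, not the disclosed implementation):

```python
# A minimal sketch, not the patented method: warp an image of the target
# person so that its detected face key points move to the key points
# determined from the text. Image and key points are synthetic stand-ins;
# a real pipeline would use a face photo and a landmark detector.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))                        # stand-in face image
current_kpts = rng.uniform(64, 192, size=(68, 2))        # detected key points (x, y)
target_kpts = current_kpts + rng.normal(0, 2, (68, 2))   # key points from the text

# warp() samples the input image at tform(output_coordinate), so the transform
# is estimated from the desired (target) key points back to the detected ones.
tform = PiecewiseAffineTransform()
tform.estimate(target_kpts, current_kpts)

# Regions outside the convex hull of the key points are not covered by the
# piecewise transform and would need separate handling (e.g., keeping the
# original background).
adjusted = warp(image, tform, output_shape=image.shape[:2])
```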
In some optional implementations of this embodiment, the characteristics of the target person may further include acoustic characteristics of the target person for the text. The acoustic feature may refer to acoustic information of audio corresponding to the text. The acoustic information indicated by the acoustic signature may be preset by a technician according to the actual application requirements. For example, the acoustic features may include fundamental frequency features, formant features, and the like.
By jointly generating the acoustic features of the target person for the text while the text features are used to generate the face key points of the target person, the consistency between the mouth shape formed by the face key points and the audio of the target person for the text can be improved, thereby improving the naturalness of the produced video.
In some optional implementations of this embodiment, after obtaining the acoustic features of the target person for the text, the audio corresponding to the text may be synthesized according to the acoustic features by using various existing speech synthesis methods (e.g., using a pre-trained vocoder, etc.).
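As an illustrative stand-in for a pre-trained vocoder, the following sketch inverts a (synthetic) mel spectrogram with a Griffin-Lim based routine; the spectrogram shape and parameters are assumptions:

```python
# A minimal sketch of the synthesis step with librosa's Griffin-Lim based
# mel inversion, standing in for whichever pre-trained vocoder is used.
# The mel spectrogram here is random noise; shapes are assumptions.
import numpy as np
import librosa
import soundfile as sf

sample_rate = 22050
mel_spectrogram = np.abs(np.random.randn(80, 400)).astype(np.float32)  # (mels, frames)

# Invert the mel spectrogram predicted from the acoustic features to a waveform.
waveform = librosa.feature.inverse.mel_to_audio(
    mel_spectrogram, sr=sample_rate, n_fft=1024, hop_length=256)

sf.write("target_person.wav", waveform, sample_rate)
```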
Thereafter, audio may be added to the generated video of the target person to obtain a video with audio. Specifically, audio may be added to the generated video of the target person using various existing methods for merging audio and video.
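A minimal sketch of this merging step, invoking the ffmpeg command-line tool (the file names are placeholders, and ffmpeg is only one of many existing ways to combine the two streams):

```python
# A minimal sketch of adding the synthesized audio to the generated video via
# the ffmpeg command line. File names are placeholders and must exist.
import subprocess

def add_audio_to_video(video_path: str, audio_path: str, out_path: str) -> None:
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", video_path,    # silent video of the target person
            "-i", audio_path,    # audio synthesized from the acoustic features
            "-c:v", "copy",      # keep the video stream untouched
            "-c:a", "aac",       # encode the audio stream
            "-shortest",         # stop at the shorter of the two streams
            out_path,
        ],
        check=True,
    )

add_audio_to_video("target_person.mp4", "target_person.wav", "with_audio.mp4")
```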
It should be noted that, since the determined acoustic features are those of the audio produced by the target person when reading the text, they may carry the acoustic information of the target person. In this case, the audio synthesized from the obtained acoustic features has the vocal characteristics of the target person (e.g., the timbre, accent, etc. of the target person).
By adding audio with the vocal features of the target person to the generated video of the target person, the generated information type is further enriched, enabling convenient multi-modal conversion from a given text to audio and video of the target person for the text.
Optionally, the generated audio may include at least one of: speech data of the target person and singing data of the target person. The singing data may refer to data generated for various forms of singing.
Therefore, when the video of the target person is generated, audio formed from the speech data of the target person for the given text and/or audio formed from the singing data of the target person for the given text can be generated, further enriching the types of information generated.
In some optional implementation manners of this embodiment, after obtaining the text features of the text, the features of the target person may be determined according to the text features by using a pre-trained feature determination model corresponding to the target person.
The feature determination model corresponding to the target person may represent the correspondence between the text features of a text and the features of the target person (e.g., the face key points of the target person when reading the text, the acoustic features of the audio produced by the target person when reading the text, etc.). The feature determination model corresponding to the target person can be obtained by training with pre-collected data of the target person.
Taking as an example features of the target person that include the face key points when reading the text and the acoustic features of the audio produced when reading the text, training data may be obtained first, and the feature determination model corresponding to the target person is then obtained by training on the training data.
Specifically, videos and audios of the target person for a large number of texts may be pre-recorded; the face key points of the target person are then extracted from the recorded videos, the acoustic features of the target person are extracted from the recorded audios, and the text features of each text are extracted. The extracted text features and the corresponding key points and acoustic features may then be used as training data.
Then, an untrained or pre-trained artificial neural network of any of various types can be used as the initial feature determination model. The text features in the training data are used as the input of the initial feature determination model, and the key points and acoustic features corresponding to the input text features are used as its expected output. The parameters of the initial feature determination model are continuously adjusted according to the value of the loss function using algorithms such as gradient descent and back propagation, until a preset training stop condition is reached (for example, the value of the loss function satisfies a certain condition). The trained model can then be used as the feature determination model corresponding to the target person.
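A minimal PyTorch sketch of such a training procedure on synthetic data (the architecture, dimensions and hyper-parameters are assumptions, not the disclosed model):

```python
# A minimal sketch of jointly training a feature determination model:
# text features in, face key points and acoustic features out, one loss
# per output head. Synthetic tensors stand in for the recorded data.
import torch
import torch.nn as nn

class FeatureDeterminationModel(nn.Module):
    def __init__(self, text_dim=256, hidden=512, kpt_dim=68 * 2, acoustic_dim=80):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.kpt_head = nn.Linear(hidden, kpt_dim)            # face key points
        self.acoustic_head = nn.Linear(hidden, acoustic_dim)  # acoustic features

    def forward(self, text_feat):
        h = self.backbone(text_feat)
        return self.kpt_head(h), self.acoustic_head(h)

model = FeatureDeterminationModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Synthetic training batch standing in for data collected from the target person.
text_feat = torch.randn(32, 256)
kpt_target = torch.randn(32, 68 * 2)
acoustic_target = torch.randn(32, 80)

for step in range(1000):                      # until a preset stop condition
    kpt_pred, acoustic_pred = model(text_feat)
    loss = loss_fn(kpt_pred, kpt_target) + loss_fn(acoustic_pred, acoustic_target)
    optimizer.zero_grad()
    loss.backward()                           # back propagation
    optimizer.step()                          # gradient descent update
```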
By using the text features of the text to jointly supervise the learning of the face key points of the target person and the acoustic features of the target person for the text, the consistency between the mouth shape formed by the face key points and the audio of the target person for the text is ensured, which improves the naturalness of the produced video. At the same time, realizing an end-to-end conversion from the text features to the features of the target person for the text reduces the time and complexity of the conversion, thereby improving the efficiency of producing the video of the target person.
In some optional implementation manners of this embodiment, the face key points of the target person determined by using the feature determination model corresponding to the target person may specifically include at least one group of face key points, and each group of face key points may represent one frame of face image, that is, each group of face key points may be used to generate one frame of face image of the target person.
Each group of face key points may include a target number of key points to respectively represent different face parts. In particular, the target number may be set by a technician according to the actual application requirements.
At this time, after obtaining at least one group of face key points of the target person, face images corresponding to the respective groups of face key points may be generated to obtain a face image set, and then the face image set is used to generate a video of the target person.
For example, an image of the target person is obtained in advance, and then, for each group of face key points, the face image corresponding to the group of face key points is obtained by adjusting the key points of the face displayed in the image of the target person to be the group of face key points. And then, the obtained face images are used for making a video of the target person.
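A minimal sketch of assembling the per-group face images into a video with OpenCV (generate_face_image() is a hypothetical placeholder for the warping or generation step described above):

```python
# A minimal sketch of writing one frame per face key point group into a video.
# generate_face_image() is a placeholder: a real system would warp a reference
# image or run an image generation model on each (68, 2) key point group.
import cv2
import numpy as np

def generate_face_image(kpt_group: np.ndarray) -> np.ndarray:
    frame = np.zeros((256, 256, 3), dtype=np.uint8)
    for x, y in kpt_group:
        cv2.circle(frame, (int(x), int(y)), 2, (255, 255, 255), -1)
    return frame

keypoint_groups = np.random.randint(0, 256, size=(75, 68, 2))  # ~3 s at 25 fps

writer = cv2.VideoWriter("target_person.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), 25, (256, 256))
for group in keypoint_groups:
    writer.write(generate_face_image(group))
writer.release()
```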
Generating multiple frames of face images of the target person frame by frame, and then producing the video of the target person from these frames, can further improve the flexibility and naturalness of video production, and also helps ensure the smoothness of the video.
According to the method provided by this embodiment of the disclosure, for any given text, the face key points of the target person when reading the text and the acoustic features of the corresponding audio are generated from the text features of the text; the obtained key points are used to produce the video of the target person, and the obtained acoustic features are used to produce the audio of the target person for the text; the audio and video are then combined, realizing multi-modal conversion from the given text to the audio and video of the target person for the text.
With further reference to fig. 3, a flow 300 of yet another embodiment of a method for generating a video is shown. The flow 300 of the method for generating a video comprises the steps of:
Step 301, obtaining text features extracted from a text.
Step 302, determining acoustic features and at least one group of face key points of the target person for the text according to the text features, using a pre-trained feature determination model corresponding to the target person.
Step 303, synthesizing the audio of the target person for the text using the acoustic features.
Step 304, for each group of face key points in the at least one group of face key points, generating a face image corresponding to the group of face key points using a pre-trained image generation model corresponding to the target person.
In this embodiment, the image generation model may represent a correspondence between a group of face key points and a frame of face image. The image generation model corresponding to the target person can be obtained by utilizing the pre-collected data training of the target person.
As an example, training data may be obtained first, and the image generation model corresponding to the target person is then obtained by training on the training data. Specifically, videos of the target person for a large number of texts may be pre-recorded; each frame of image is then extracted from the videos, and the group of face key points corresponding to each frame is determined. The extracted frames and the corresponding key point groups can then be used as training data.
Then, an untrained or pre-trained artificial neural network of any of various types can be used as the initial image generation model. The key point groups in the training data are used as the input of the initial image generation model, and the images corresponding to the input key point groups are used as its expected output. The parameters of the initial image generation model are continuously adjusted according to the value of the loss function using algorithms such as gradient descent and back propagation, until a preset training stop condition is reached (for example, the value of the loss function satisfies a certain condition). The trained model can then be used as the image generation model corresponding to the target person.
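A minimal PyTorch sketch of training such a key-point-to-image generator on synthetic stand-in data (the architecture and loss are illustrative assumptions, not the disclosed model):

```python
# A minimal sketch of training a key-point-to-image generator on frames
# extracted from recorded video of the target person. Synthetic tensors stand
# in for the real frames and key point groups; the architecture is illustrative.
import torch
import torch.nn as nn

class KeypointToImage(nn.Module):
    def __init__(self, kpt_dim=68 * 2):
        super().__init__()
        self.fc = nn.Linear(kpt_dim, 128 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, kpts):                       # (batch, 136)
        x = self.fc(kpts).view(-1, 128, 8, 8)
        return self.up(x)                          # (batch, 3, 64, 64)

model = KeypointToImage()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

kpts = torch.rand(16, 68 * 2)                      # key point groups from video
frames = torch.rand(16, 3, 64, 64)                 # the matching extracted frames

for step in range(1000):                           # until a preset stop condition
    loss = nn.functional.l1_loss(model(kpts), frames)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```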
Step 305, generating a video of the target person using the face images respectively corresponding to the groups of face key points.
Step 306, adding the audio of the target person for the text to the generated video of the target person.
For details not described in steps 301-306, reference may be made to the related description in the embodiment corresponding to fig. 2, which is not repeated here.
With continued reference to fig. 4, fig. 4 is an illustrative application scenario 400 of the method for generating video according to the present embodiment. In the application scenario of fig. 4, the user may specify text and a target person in the terminal 401 he uses. For example, the user may select a lesson as the input text and then designate a favorite star as the target person.
Then, the text features 402 of the input text may be input to the feature determination model 403 of the target person, resulting in acoustic features 404 of the target person for the input text and several face key point groups 405 of the target person for the input text.
The vocoder 406 may then be used to synthesize audio 408 corresponding to the acoustic features 404, i.e., the audio of the target person reading the text selected by the user. Meanwhile, an image 409 of the target person corresponding to each group of face key points is generated by the image generation model 407, and a video 410 of the target person is produced from the obtained images of the target person.
Then, the obtained audio 408 and video 410 of the target person can be combined to obtain an audio/video 411, and the audio/video 411 is sent to the terminal 401 so that the user can watch the video of the favorite star reading the designated text.
The flow of the method for generating a video in this embodiment highlights the process of generating face images of the target person from the face key points of the target person using a pre-trained image generation model corresponding to the target person, i.e., end-to-end face image generation is realized.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for generating a video, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for generating video provided by the present embodiment includes an acquisition unit 501, a determination unit 502, and a generation unit 503. Wherein the obtaining unit 501 is configured to obtain text features extracted from a text; the determining unit 502 is configured to determine the feature of the target person according to the text feature, wherein the feature of the target person comprises a face key point of the target person for the text; the generating unit 503 is configured to generate a video of the target person based on the face key points.
In the present embodiment, in the apparatus 500 for generating a video: the specific processing of the obtaining unit 501, the determining unit 502, and the generating unit 503 and the technical effects thereof can refer to the related descriptions of step 201, step 202, and step 203 in the corresponding embodiment of fig. 2, which are not repeated herein.
In the apparatus for generating a video according to the above embodiment of the present disclosure, the acquiring unit acquires text features extracted from a text; the determining unit determines features of a target person according to the text features, wherein the features of the target person comprise face key points of the target person for the text; and the generating unit generates the video of the target person according to the face key points, thereby realizing multi-modal conversion from a given text to the audio and video of the target person for the text.
Referring now to FIG. 6, a schematic diagram of an electronic device (e.g., the server of FIG. 1) 600 suitable for use in implementing embodiments of the present disclosure is shown. The server shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In accordance with one or more embodiments of the present disclosure, there is provided a method for generating a video, the method comprising: acquiring text features extracted from a text; determining features of a target person according to the text features, wherein the features of the target person comprise face key points of the target person for the text; and generating a video of the target person according to the face key points.
In accordance with one or more embodiments of the present disclosure, the characteristics of the target person further include acoustic characteristics of the target person for the text.
In accordance with one or more embodiments of the present disclosure, the method further comprises: synthesizing audio of the target person aiming at the text by utilizing the acoustic features; audio is added to the video.
According to one or more embodiments of the present disclosure, determining characteristics of a target person from text characteristics includes: and determining the characteristics of the target person according to the text characteristics by using a pre-trained characteristic determination model corresponding to the target person.
According to one or more embodiments of the present disclosure, the face key points include at least one group of face key points, and each group of face key points is used for representing a frame of face image.
According to one or more embodiments of the present disclosure, generating a video of a target person according to face key points includes: generating a face image corresponding to each group of face key points in at least one group of face key points to obtain a face image set; and generating a video of the target person by using the face image set.
According to one or more embodiments of the present disclosure, generating a face image corresponding to each group of face key points in at least one group of face key points includes: and for each group of face key points in at least one group of face key points, generating a face image corresponding to the group of face key points according to the group of face key points by utilizing a pre-trained image generation model corresponding to the target person.
In accordance with one or more embodiments of the present disclosure, the audio includes at least one of: speech data, singing data.
In accordance with one or more embodiments of the present disclosure, there is provided an apparatus for generating a video, the apparatus including: an acquisition unit configured to acquire text features extracted from a text; a determining unit configured to determine features of a target person according to the text features, wherein the features of the target person comprise face key points of the target person for the text; and a generating unit configured to generate a video of the target person according to the face key points.
In accordance with one or more embodiments of the present disclosure, the characteristics of the target person further include acoustic characteristics of the target person for the text.
According to one or more embodiments of the present disclosure, the apparatus further comprises: a synthesizing unit configured to synthesize audio of the target person for the text using the acoustic features; an adding unit configured to add audio for the video.
According to one or more embodiments of the present disclosure, the determining unit is further configured to determine the feature of the target person according to the text feature by using a pre-trained feature determination model corresponding to the target person.
According to one or more embodiments of the present disclosure, the face key points include at least one group of face key points, and each group of face key points is used for representing a frame of face image.
According to one or more embodiments of the present disclosure, the generating unit is further configured to: generating a face image corresponding to each group of face key points in at least one group of face key points to obtain a face image set; and generating a video of the target person by using the face image set.
According to one or more embodiments of the present disclosure, the generating unit is further configured to: and for each group of face key points in at least one group of face key points, generating a face image corresponding to the group of face key points according to the group of face key points by utilizing a pre-trained image generation model corresponding to the target person.
In accordance with one or more embodiments of the present disclosure, the audio includes at least one of: speech data, singing data.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a determination unit, and a generation unit. Where the names of these units do not in some cases constitute a limitation on the unit itself, for example, the acquiring unit may also be described as a "unit that acquires text features extracted from text".
As another aspect, the present disclosure also provides a computer-readable medium. The computer-readable medium may be included in the electronic device described above, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire text features extracted from a text; determine features of a target person according to the text features, wherein the features of the target person comprise face key points of the target person for the text; and generate a video of the target person according to the face key points.
The foregoing description covers only the preferred embodiments of the present disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure in the embodiments of the present disclosure is not limited to technical solutions formed by the particular combination of the above-described features, but also covers other technical solutions formed by any combination of the above-described features or their equivalents without departing from the disclosed concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (11)

1. A method for generating video, comprising:
acquiring text features extracted from a text;
determining the characteristics of a target person according to the text characteristics, wherein the characteristics of the target person comprise face key points of the target person for the text;
and generating the video of the target person according to the face key points.
2. The method of claim 1, wherein the characteristics of the target person further include acoustic characteristics of the target person for the text.
3. The method of claim 2, wherein the method further comprises:
synthesizing audio of the target person for the text using the acoustic features;
adding the audio to the video.
4. The method of any of claims 1-3, wherein said determining a characteristic of the target person from the textual features comprises:
and determining the characteristics of the target person according to the text characteristics by utilizing a pre-trained characteristic determination model corresponding to the target person.
5. The method of any of claims 1-3, wherein the face keypoints comprise at least one group of face keypoints, and each group of face keypoints is used for representing a frame of face image.
6. The method of claim 5, wherein the generating a video of the target person from the face keypoints comprises:
generating face images respectively corresponding to each group of face key points in the at least one group of face key points to obtain a face image set;
and generating the video of the target person by using the face image set.
7. The method of claim 6, wherein the generating of the face images corresponding to the sets of face key points in the at least one set of face key points comprises:
and for each group of face key points in the at least one group of face key points, generating a face image corresponding to the group of face key points according to the group of face key points by utilizing a pre-trained image generation model corresponding to the target person.
8. The method of claim 3, wherein the audio comprises at least one of: speech data, singing data.
9. An apparatus for generating video, comprising:
an acquisition unit configured to acquire text features extracted from a text;
a determining unit configured to determine features of a target person according to the text features, wherein the features of the target person comprise face key points of the target person for the text;
and the generating unit is configured to generate the video of the target person according to the face key points.
10. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
11. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-8.
CN202011270760.6A 2020-11-13 2020-11-13 Method and apparatus for generating video Pending CN112381926A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011270760.6A CN112381926A (en) 2020-11-13 2020-11-13 Method and apparatus for generating video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011270760.6A CN112381926A (en) 2020-11-13 2020-11-13 Method and apparatus for generating video

Publications (1)

Publication Number Publication Date
CN112381926A true CN112381926A (en) 2021-02-19

Family

ID=74582558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011270760.6A Pending CN112381926A (en) 2020-11-13 2020-11-13 Method and apparatus for generating video

Country Status (1)

Country Link
CN (1) CN112381926A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200234690A1 (en) * 2019-01-18 2020-07-23 Snap Inc. Text and audio-based real-time face reenactment
CN109829432A (en) * 2019-01-31 2019-05-31 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN110347867A (en) * 2019-07-16 2019-10-18 北京百度网讯科技有限公司 Method and apparatus for generating lip motion video
CN111429885A (en) * 2020-03-02 2020-07-17 北京理工大学 Method for mapping audio clip to human face-mouth type key point
CN111897976A (en) * 2020-08-18 2020-11-06 北京字节跳动网络技术有限公司 Virtual image synthesis method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
中国计算机学会 (China Computer Federation): 《CCF 2019-2020中国计算机科学技术发展报告》 (CCF 2019-2020 China Computer Science and Technology Development Report), 31 October 2020, 机械工业出版社 (China Machine Press), pages 595-598 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113015002A (en) * 2021-03-04 2021-06-22 天九共享网络科技集团有限公司 Processing method and device for anchor video data
WO2023050650A1 (en) * 2021-09-29 2023-04-06 平安科技(深圳)有限公司 Animation video generation method and apparatus, and device and storage medium

Similar Documents

Publication Publication Date Title
CN109377539B (en) Method and apparatus for generating animation
US11158102B2 (en) Method and apparatus for processing information
CN111415677B (en) Method, apparatus, device and medium for generating video
US20230042654A1 (en) Action synchronization for target object
US10776977B2 (en) Real-time lip synchronization animation
CN111599343B (en) Method, apparatus, device and medium for generating audio
US20200410732A1 (en) Method and apparatus for generating information
KR102116309B1 (en) Synchronization animation output system of virtual characters and text
CN107481715B (en) Method and apparatus for generating information
CN110446066B (en) Method and apparatus for generating video
US11847726B2 (en) Method for outputting blend shape value, storage medium, and electronic device
CN110288682A (en) Method and apparatus for controlling the variation of the three-dimensional portrait shape of the mouth as one speaks
US20240070397A1 (en) Human-computer interaction method, apparatus and system, electronic device and computer medium
CN111402842A (en) Method, apparatus, device and medium for generating audio
CN110880198A (en) Animation generation method and device
CN109754783A (en) Method and apparatus for determining the boundary of audio sentence
CN112383721B (en) Method, apparatus, device and medium for generating video
CN112381926A (en) Method and apparatus for generating video
CN115050354B (en) Digital human driving method and device
CN114999441A (en) Avatar generation method, apparatus, device, storage medium, and program product
CN113282791B (en) Video generation method and device
CN114170648A (en) Video generation method and device, electronic equipment and storage medium
CN113205569A (en) Image drawing method and device, computer readable medium and electronic device
CN111415662A (en) Method, apparatus, device and medium for generating video
WO2023061229A1 (en) Video generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination