WO2023045317A1 - Expression driving method and apparatus, electronic device and storage medium - Google Patents

Expression driving method and apparatus, electronic device and storage medium

Info

Publication number
WO2023045317A1
Authority
WO
WIPO (PCT)
Prior art keywords
facial
image
expression
sample
dimensional
Prior art date
Application number
PCT/CN2022/088311
Other languages
English (en)
French (fr)
Inventor
梁柏荣
郭知智
洪智滨
Original Assignee
北京百度网讯科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京百度网讯科技有限公司 (Beijing Baidu Netcom Science Technology Co., Ltd.)
Publication of WO2023045317A1 publication Critical patent/WO2023045317A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Definitions

  • The present disclosure relates to the field of artificial intelligence, specifically to the fields of computer vision and deep learning, is applicable to scenarios such as face image processing and face recognition, and more specifically relates to an expression driving method and apparatus, an electronic device, and a storage medium.
  • Facial expression driving is one of the important technologies in computer vision. Its task is to drive the facial expression in a target picture with a facial expression picture, so that the two expressions are as consistent as possible. Facial expression driving is widely used in pan-entertainment applications.
  • Embodiments of the present disclosure provide an expression driving method, an apparatus, an electronic device, a non-transitory computer-readable storage medium, a computer program product, and a computer program.
  • According to an aspect, an expression driving method is provided, including: acquiring a source image with an expression and a target image without an expression; inputting the source image and the target image respectively into a three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image; replacing corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes; performing three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image; and inputting the rendered three-dimensional facial image into an expression-driven model to perform expression driving on the face in the target image.
  • According to another aspect, an expression driving apparatus is provided, including: a first acquisition module configured to acquire a source image with an expression and a target image without an expression; a second acquisition module configured to input the source image and the target image respectively into a three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image; a replacement module configured to replace corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes; a processing module configured to perform three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image; and a driving module configured to input the rendered three-dimensional facial image into an expression-driven model to perform expression driving on the face in the target image.
  • According to another aspect, an electronic device is provided, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the expression driving method described in the embodiment of the first aspect of the present disclosure.
  • According to another aspect, a non-transitory computer-readable storage medium storing computer instructions is provided, wherein the computer instructions are used to cause a computer to execute the expression driving method described in the embodiment of the first aspect of the present disclosure.
  • According to another aspect, a computer program product including a computer program is provided. When the computer program is executed by a processor, the expression driving method described in the embodiment of the first aspect of the present disclosure is implemented.
  • According to another aspect, a computer program is provided, wherein the computer program includes computer program code, and when the computer program code is run on a computer, the computer executes the expression driving method described in the embodiment of the first aspect of the present disclosure.
  • FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure.
  • FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure.
  • FIG. 5 is a schematic flowchart of an expression driving method according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram according to a fifth embodiment of the present disclosure.
  • FIG. 7 shows a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure.
  • Facial expression driving is one of the important technologies in computer vision. Its task is to drive the facial expression in a target picture with a facial expression picture, so that the two expressions are as consistent as possible. Facial expression driving is widely used in pan-entertainment applications.
  • In the related art, the facial 2D key points of a driving image are detected, expression transfer is performed on the facial 2D key points, and a corresponding expression-driven facial picture is generated. However, such expression driving based on 2D facial key points cannot decouple expression from facial pose: when the pose of the driving picture differs significantly from the pose of the target image, the pose of the generated picture follows the driving image, the original pose of the target image cannot be kept, and more diverse expression driving cannot be satisfied.
  • In view of the above problems, the present disclosure proposes an expression driving method and apparatus, an electronic device, a non-transitory computer-readable storage medium, a computer program product, and a computer program.
  • FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure.
  • It should be noted that the expression driving method of the embodiments of the present disclosure can be applied to the expression driving apparatus of the embodiments of the present disclosure, and the apparatus can be configured in an electronic device.
  • The electronic device may be a mobile terminal, for example, a mobile phone, a tablet computer, a personal digital assistant, or another hardware device with an operating system.
  • As shown in FIG. 1, the expression driving method may include the following steps:
  • Step 101: acquiring a source image with an expression and a target image without an expression.
  • In the embodiments of the present disclosure, an image acquisition device may be used to photograph an object to obtain the source image with an expression and the target image without an expression, or the two images may be downloaded from a network.
  • The expression in the source image may include facial expressions such as happy, furious, excited, or angry.
  • Step 102: inputting the source image and the target image respectively into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image.
  • In order to decouple the individual facial attributes from one another, the source image and the target image can be respectively input into the three-dimensional expression model, which can output a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image.
  • It should be noted that the first facial attributes and the second facial attributes each include at least one of facial expression, facial pose, facial illumination, and facial shape, and the first facial attributes may differ from the second facial attributes.
  • In addition, the three-dimensional expression model may include a coding layer and a decoding layer. The coding layer receives the source image and the target image and outputs the plurality of first facial attributes corresponding to the source image and the plurality of second facial attributes corresponding to the target image, thereby decoupling the individual facial attributes from one another; the decoding layer performs three-dimensional facial reconstruction on the face in the target image according to the plurality of replaced second facial attributes to obtain a reconstructed three-dimensional facial image, thereby reconstructing a face from the plurality of replaced second facial attributes.
  • As an application scenario, in face image processing and face recognition scenarios, the three-dimensional expression model can be a 3D morphable statistical model of the face (3DMM for short).
  • In order to decouple the individual facial attributes from one another, the face image and the target image can be respectively input into the coding layer of the 3DMM to obtain a plurality of first facial attributes corresponding to the face image and a plurality of second facial attributes corresponding to the target image.
  • Step 103: replacing corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes.
  • In order to keep the original facial pose of the target image and perform only expression driving on it, at least part of the facial attributes among the plurality of first facial attributes can be used to replace the corresponding facial attributes among the plurality of second facial attributes, so as to obtain a plurality of replaced second facial attributes. For example, the facial expression in the first facial attributes may replace the facial expression in the second facial attributes, and the second facial attributes with the replaced facial expression may be taken as the plurality of replaced second facial attributes.
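  • A minimal sketch of this replacement step, assuming the attributes are held in dictionaries as in the hypothetical encoder sketch above (the function name is likewise an assumption):

```python
def replace_attributes(first_attrs: dict, second_attrs: dict,
                       keys: tuple = ("exp",)) -> dict:
    """Copy the selected source-image attributes (by default only the
    facial expression) into the target-image attributes; the target's
    pose, shape, and illumination are kept untouched."""
    replaced = dict(second_attrs)   # shallow copy keeps the other attributes
    for key in keys:
        replaced[key] = first_attrs[key]
    return replaced
```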
  • Step 104: performing three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image.
  • In order to present the plurality of replaced second facial attributes, they may be input into the decoding layer of the three-dimensional expression model to obtain a reconstructed three-dimensional facial image. A rendered three-dimensional facial image is then obtained through 3D rendering technology.
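  • Under the same assumptions, step 104 amounts to decoding the replaced coefficients into face geometry and rasterizing the result; `decoder` and `renderer` below are placeholders for the decoding layer and a 3D renderer, not a specific library API:

```python
def reconstruct_and_render(decoder, renderer, attrs: dict) -> torch.Tensor:
    """Decode the replaced coefficients into a 3D face and render it."""
    mesh = decoder(attrs["shape"], attrs["pose"],
                   attrs["light"], attrs["exp"])   # 3D facial reconstruction
    return renderer(mesh)                          # rendered 3D facial image
```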
  • Step 105: inputting the rendered three-dimensional facial image into the expression-driven model to perform expression driving on the face in the target image.
  • It will be appreciated that the rendered three-dimensional facial image is not very realistic. Therefore, in order to make the expression-driven target image more realistic, the rendered three-dimensional facial image can be input into the expression-driven model to drive the expression of the face in the target image.
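  • Putting the sketches together, one plausible end-to-end inference routine (all component names are the hypothetical ones introduced above) is:

```python
def drive_expression(encoder, decoder, renderer, translator,
                     source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Drive the target face with the source expression (the FIG. 1 flow)."""
    with torch.no_grad():
        first_attrs = encoder(source)    # attributes of the expressive image
        second_attrs = encoder(target)   # attributes of the neutral image
        swapped = replace_attributes(first_attrs, second_attrs)
        rendered = reconstruct_and_render(decoder, renderer, swapped)
        return translator(rendered)      # expression-driven model adds realism
```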
  • In summary, a source image with an expression and a target image without an expression are acquired; the source image and the target image are respectively input into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image; at least part of the facial attributes among the plurality of first facial attributes replace the corresponding facial attributes among the plurality of second facial attributes to obtain a plurality of replaced second facial attributes; three-dimensional facial reconstruction and rendering are performed on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image; and the rendered three-dimensional facial image is input into the expression-driven model to perform expression driving on the face in the target image. In this way, the facial expression and the facial pose in the source image and the target image are decoupled, so that the facial expression and the facial pose of the target image can be controlled independently, better satisfying more diverse expression driving.
  • FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure.
  • In order to keep the original facial pose of the target image and perform only expression driving on it, in the embodiments of the present disclosure the facial expression in the second facial attributes may be replaced with the facial expression in the first facial attributes to obtain the replaced second facial attributes.
  • The embodiment shown in FIG. 2 may include the following steps:
  • Step 201: acquiring a source image with an expression and a target image without an expression.
  • Step 202: inputting the source image and the target image respectively into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image.
  • Step 203: replacing the facial expression in the second facial attributes according to the facial expression in the plurality of first facial attributes.
  • In the embodiments of the present disclosure, the first facial attributes and the second facial attributes may each include facial shape, facial pose, facial expression, and facial illumination, and the facial expression in the first facial attributes can be used to replace the facial expression in the second facial attributes.
  • Step 204: taking the replaced facial expression in the second facial attributes, together with the facial pose, facial shape, and facial illumination retained by the replacement in the second facial attributes, as the plurality of replaced second facial attributes.
  • That is, after the facial expression in the second facial attributes is replaced with the facial expression in the first facial attributes, the replaced facial expression, together with the facial pose, facial shape, and facial illumination originally retained in the second facial attributes, is taken as the plurality of replaced second facial attributes.
  • Step 205: performing three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image.
  • Step 206: inputting the rendered three-dimensional facial image into the expression-driven model to perform expression driving on the face in the target image.
  • It should be noted that steps 201 to 202 and steps 205 to 206 can each be implemented in any of the ways in the embodiments of the present disclosure; this is not limited here and is not repeated.
  • In summary, the facial expression in the second facial attributes is replaced according to the facial expression in the plurality of first facial attributes, and the replaced facial expression, together with the facial pose, facial shape, and facial illumination retained by the replacement, is taken as the plurality of replaced second facial attributes. In this way, the target image keeps its original facial pose and only expression driving is performed on it.
  • FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure. In order to perform facial reconstruction from the plurality of replaced second facial attributes, three-dimensional facial reconstruction and rendering can be performed on the face in the target image according to the plurality of replaced second facial attributes to obtain a reconstructed three-dimensional facial image.
  • The embodiment shown in FIG. 3 may include the following steps:
  • Step 301: acquiring a source image with an expression and a target image without an expression.
  • Step 302: inputting the source image and the target image respectively into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image.
  • Step 303: replacing corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes.
  • Step 304: performing three-dimensional facial reconstruction on the face in the target image according to the plurality of replaced second facial attribute coefficients to obtain a reconstructed three-dimensional facial image.
  • In the embodiments of the present disclosure, the plurality of replaced second facial attribute coefficients may be input into the decoding layer of the three-dimensional expression model, which may output a reconstructed three-dimensional facial image.
  • Step 305: performing three-dimensional facial rendering on the reconstructed three-dimensional facial image to obtain a rendered three-dimensional facial image.
  • In order to make the obtained three-dimensional facial image more accurate and realistic, 3D rendering technology may be used to perform three-dimensional facial rendering on the reconstructed three-dimensional facial image to obtain a rendered three-dimensional facial image.
  • Step 306: inputting the rendered three-dimensional facial image into the expression-driven model to perform expression driving on the face in the target image.
  • It should be noted that steps 301 to 303 and step 306 can each be implemented in any of the ways in the embodiments of the present disclosure; this is not limited here and is not repeated.
  • FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure. In the embodiments of the present disclosure, before the rendered three-dimensional facial image is input into the expression-driven model, the expression-driven model can be trained so that it outputs a more realistic face-driven image.
  • The embodiment shown in FIG. 4 may include the following steps:
  • Step 401: acquiring a source image with an expression and a target image without an expression.
  • Step 402: inputting the source image and the target image respectively into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image.
  • Step 403: replacing corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes.
  • Step 404: performing three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image.
  • Step 405: acquiring multiple frames of sample images with expressions.
  • In the embodiments of the present disclosure, multiple frames of sample images with expressions can be captured with an image acquisition device or downloaded from a network. It should be noted that the multiple frames of sample images may be sample images of the same object with different expressions, or sample images of different objects with different expressions.
  • Step 406: for each frame of sample image, inputting the sample image into the coding layer of the three-dimensional expression model to obtain sample facial attributes corresponding to the sample image, where the sample facial attributes include at least one of sample facial expression, sample facial shape, sample facial pose, and sample facial illumination.
  • Further, each frame of the multiple frames of sample images with expressions can be respectively input into the coding layer of the three-dimensional expression model, which can output the sample facial attributes corresponding to each frame. It should be noted that the sample facial attributes may include at least one of sample facial expression, sample facial shape, sample facial pose, and sample facial illumination.
  • Step 407: inputting the sample facial expression, sample facial shape, sample facial pose, and sample facial illumination into the decoding layer of the three-dimensional expression model to perform three-dimensional facial reconstruction on the face in the sample image and obtain a reconstructed three-dimensional sample facial image.
  • Further, the sample facial expression, sample facial shape, sample facial pose, and sample facial illumination can be input into the decoding layer of the three-dimensional expression model, which performs three-dimensional facial reconstruction from the sample facial attributes to obtain a reconstructed three-dimensional sample facial image.
  • Step 408: performing three-dimensional facial rendering on the reconstructed three-dimensional sample facial image to obtain a rendered three-dimensional sample facial image.
  • In the embodiments of the present disclosure, three-dimensional rendering technology can be used to perform three-dimensional facial rendering on the reconstructed three-dimensional sample facial image to obtain a rendered three-dimensional sample facial image.
  • Step 409: training an initial expression-driven model according to the rendered three-dimensional sample facial image and the sample image to generate the expression-driven model.
  • As an example, the rendered three-dimensional sample facial image is input into the initial expression-driven model to obtain an expression prediction image; a loss function value is determined according to the difference between the sample image and the expression prediction image; and the initial expression-driven model is trained according to the loss function value so as to minimize it.
  • That is, to improve the accuracy of the expression-driven model, the rendered three-dimensional sample facial image can be input into the initial expression-driven model, which outputs an expression prediction image; the sample image is then compared with the expression prediction image to determine the difference between them, and the loss function value is determined from this difference.
  • For example, the loss function value may include a first sub-loss function value and a second sub-loss function value.
  • The first sub-loss function value can be determined from the absolute value of the difference between the sample image and the expression prediction image. Meanwhile, the sample image and the expression prediction image are input into a trained Visual Graphics Generator (VGG) to generate a semantic vector corresponding to the sample image and a semantic vector corresponding to the expression prediction image, and the second sub-loss function value is determined from the absolute value of the difference between the two semantic vectors.
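  • In symbols, writing I for the sample image, Î for the expression prediction image, and φ(·) for the semantic vector produced by the VGG, one consistent reading of the described loss is the following (a reconstruction from the prose above, not a formula given in the source):

```latex
\mathcal{L} \;=\; \underbrace{\lVert I - \hat{I} \rVert_{1}}_{\text{first sub-loss}}
\;+\; \underbrace{\lVert \phi(I) - \phi(\hat{I}) \rVert_{1}}_{\text{second sub-loss}}
```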
  • Then, according to the loss function value, the initial expression-driven model can be trained by means of gradient backpropagation so as to minimize the loss function value.
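  • A minimal training-step sketch along these lines, using torchvision's ImageNet-pretrained VGG feature extractor as a stand-in for the trained Visual Graphics Generator named above (the model and optimizer objects, and this choice of VGG, are assumptions):

```python
import torch
import torchvision

# Fixed feature extractor standing in for the trained VGG.
vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
l1 = torch.nn.L1Loss()

def training_step(initial_model, optimizer,
                  rendered_sample: torch.Tensor,
                  sample_image: torch.Tensor) -> float:
    prediction = initial_model(rendered_sample)        # expression prediction image
    loss = (l1(prediction, sample_image)               # first sub-loss
            + l1(vgg(prediction), vgg(sample_image)))  # second sub-loss
    optimizer.zero_grad()
    loss.backward()                                    # gradient backpropagation
    optimizer.step()
    return loss.item()
```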
  • As another example, in order to put the rendered three-dimensional sample facial image and the sample image on the same data distribution, narrow the gap between them, and ease training of the initial expression-driven model, image normalization can be applied to the rendered three-dimensional sample facial image and the sample image to obtain a target three-dimensional sample facial image.
  • For example, the pixel value of each pixel in the rendered three-dimensional sample facial image and the sample image may be divided by 255 and then reduced by 0.5, so that each pixel value lies in [-0.5, 0.5].
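  • A sketch of this normalization, written so that the result actually lands in the stated [-0.5, 0.5] range (dividing an 8-bit value by 255 maps it to [0, 1], so 0.5 is subtracted):

```python
def normalize(image_uint8: torch.Tensor) -> torch.Tensor:
    """Map 8-bit pixel values into [-0.5, 0.5] so the rendered sample
    image and the sample image share the same data distribution."""
    return image_uint8.float() / 255.0 - 0.5
```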
  • Then, the target three-dimensional sample facial image can be input into the initial expression-driven model, which outputs an expression prediction image; the sample image is compared with the expression prediction image to determine the difference between them, and the loss function value is determined from this difference.
  • According to the loss function value, the initial expression-driven model can be trained by gradient backpropagation so as to minimize the loss function value.
  • Step 410: inputting the rendered three-dimensional facial image into the expression-driven model to perform expression driving on the face in the target image.
  • It should be noted that steps 401 to 404 and step 410 can each be implemented in any of the ways in the embodiments of the present disclosure; this is not limited here and is not repeated.
  • In summary, multiple frames of sample images with expressions are acquired; for each frame, the sample image is input into the coding layer of the three-dimensional expression model to obtain the corresponding sample facial attributes, which include at least one of sample facial expression, sample facial shape, sample facial pose, and sample facial illumination; the sample facial expression, sample facial shape, sample facial pose, and sample facial illumination are input into the decoding layer of the three-dimensional expression model to perform three-dimensional facial reconstruction on the face in the sample image and obtain a reconstructed three-dimensional sample facial image; three-dimensional facial rendering is performed on the reconstructed image to obtain a rendered three-dimensional sample facial image; and the initial expression-driven model is trained on the rendered three-dimensional sample facial image and the sample image to generate the expression-driven model.
  • In this way, the expression-driven model can perform expression driving on the rendered three-dimensional facial image to obtain a more realistic face-driven image.
  • In the example of FIG. 5, the source image represents a source image with an expression, the target image represents a target image without an expression, and 3DMM represents the three-dimensional expression model.
  • The source image and the target image are respectively input into the coding layer of the 3DMM to obtain the shape (facial shape), pose (facial pose), light (facial illumination), and exp (facial expression) corresponding to the source image, and the shape, pose, light, and exp corresponding to the target image.
  • The exp of the source image then replaces the exp of the target image, so the replaced facial attributes corresponding to the target image include the replaced exp together with the original shape, pose, and light retained from the target image. These replaced facial attributes are input into the decoding layer of the 3DMM for three-dimensional facial reconstruction and rendering to obtain a rendered three-dimensional facial image. Finally, the rendered three-dimensional facial image is input into the translator model (the expression-driven model), which outputs the expression-driven image corresponding to the target image.
  • The expression driving method of the embodiments of the present disclosure acquires a source image with an expression and a target image without an expression; inputs the source image and the target image respectively into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image; replaces corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes; performs three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image; and inputs the rendered three-dimensional facial image into the expression-driven model to perform expression driving on the face in the target image.
  • In this way, the facial expression and the facial pose in the source image and the target image are decoupled, so that the facial expression and the facial pose of the target image can be controlled independently, better satisfying more diverse expression driving.
  • In order to implement the above embodiments, the present disclosure further proposes an expression driving apparatus.
  • FIG. 6 is a schematic diagram according to a fifth embodiment of the present disclosure.
  • As shown in FIG. 6, the expression driving apparatus 600 includes: a first acquisition module 610, a second acquisition module 620, a replacement module 630, a processing module 640, and a driving module 650.
  • The first acquisition module 610 is configured to acquire a source image with an expression and a target image without an expression;
  • the second acquisition module 620 is configured to input the source image and the target image respectively into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image;
  • the replacement module 630 is configured to replace corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes;
  • the processing module 640 is configured to perform three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image;
  • the driving module 650 is configured to input the rendered three-dimensional facial image into the expression-driven model to perform expression driving on the face in the target image.
  • As a possible implementation of the embodiments of the present disclosure, the replacement module 630 is specifically configured to: replace the facial expression in the second facial attributes according to the facial expression in the first facial attributes; and take the replaced facial expression in the second facial attributes, together with the facial pose, facial shape, and facial illumination retained by the replacement, as the plurality of replaced second facial attributes.
  • As a possible implementation of the embodiments of the present disclosure, the processing module 640 is specifically configured to: perform three-dimensional facial reconstruction on the face in the target image according to the plurality of replaced second facial attribute coefficients to obtain a reconstructed three-dimensional facial image; and perform three-dimensional facial rendering on the reconstructed three-dimensional facial image to obtain a rendered three-dimensional facial image.
  • As a possible implementation of the embodiments of the present disclosure, the three-dimensional expression model includes a coding layer and a decoding layer. The coding layer receives the source image and the target image and outputs a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image; the decoding layer performs three-dimensional facial reconstruction on the face in the target image according to the plurality of replaced second facial attributes to obtain a reconstructed three-dimensional facial image.
  • As a possible implementation of the embodiments of the present disclosure, the expression driving apparatus 600 further includes: a third acquisition module, a fourth acquisition module, a reconstruction module, a rendering module, and a training module.
  • The third acquisition module is configured to acquire multiple frames of sample images with expressions;
  • the fourth acquisition module is configured to input, for each frame of sample image, the sample image into the coding layer of the three-dimensional expression model to obtain the sample facial attributes corresponding to the sample image;
  • the sample facial attributes include at least one of sample facial expression, sample facial shape, sample facial pose, and sample facial illumination;
  • the reconstruction module is configured to input the sample facial expression, sample facial shape, sample facial pose, and sample facial illumination into the decoding layer of the three-dimensional expression model to perform three-dimensional facial reconstruction on the face in the sample image and obtain a reconstructed three-dimensional sample facial image;
  • the rendering module is configured to perform three-dimensional facial rendering on the reconstructed three-dimensional sample facial image to obtain a rendered three-dimensional sample facial image;
  • the training module is configured to train an initial expression-driven model according to the rendered three-dimensional sample facial image and the sample image to generate the expression-driven model.
  • As a possible implementation of the embodiments of the present disclosure, the training module is specifically configured to: input the rendered three-dimensional sample facial image into the initial expression-driven model to obtain an expression prediction image; determine a loss function value according to the difference between the sample image and the expression prediction image; and train the initial expression-driven model according to the loss function value so as to minimize it.
  • As a possible implementation of the embodiments of the present disclosure, the training module is specifically configured to: perform image normalization on the rendered three-dimensional facial image and the sample image to obtain a target three-dimensional sample facial image; input the target three-dimensional sample facial image into the initial expression-driven model to obtain an expression prediction image; determine a loss function value according to the difference between the sample image and the expression prediction image; and train the initial expression-driven model according to the loss function value so as to minimize it.
  • The expression driving apparatus of the embodiments of the present disclosure acquires a source image with an expression and a target image without an expression; inputs the source image and the target image respectively into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image; replaces corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes; performs three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image; and inputs the rendered three-dimensional facial image into the expression-driven model to perform expression driving on the face in the target image.
  • In this way, the facial expression and the facial pose in the source image and the target image are decoupled, so that the facial expression and the facial pose of the target image can be controlled independently, better satisfying more diverse expression driving.
  • In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information involved are all carried out with the user's consent, comply with relevant laws and regulations, and do not violate public order and good morals.
  • the present disclosure also provides an electronic device, a non-transitory computer-readable storage medium, a computer program product, and a computer program.
  • According to an embodiment of the present disclosure, an electronic device is provided, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the expression driving method described in the above embodiments of the present disclosure.
  • According to an embodiment of the present disclosure, the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to execute the expression driving method described in the above embodiments of the present disclosure.
  • According to an embodiment of the present disclosure, the present disclosure further provides a computer program product, including a computer program which, when executed by a processor, implements the expression driving method described in the above embodiments of the present disclosure.
  • According to an embodiment of the present disclosure, the present disclosure also provides a computer program, wherein the computer program includes computer program code, and when the computer program code is run on a computer, the computer executes the expression driving method described in the above embodiments of the present disclosure.
  • FIG. 7 shows a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure.
  • Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing apparatuses.
  • the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • As shown in FIG. 7, the device 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 702 or loaded from a storage unit 708 into a random access memory (RAM) 703. The RAM 703 can also store various programs and data required for the operation of the device 700.
  • the computing unit 701, ROM 702, and RAM 703 are connected to each other through a bus 704.
  • An input/output (I/O) interface 705 is also connected to the bus 704.
  • Multiple components of the device 700 are connected to the I/O interface 705, including: an input unit 706, such as a keyboard or a mouse; an output unit 707, such as various types of displays and speakers; a storage unit 708, such as a magnetic disk or an optical disc; and a communication unit 709, such as a network card, a modem, or a wireless communication transceiver.
  • the communication unit 709 allows the device 700 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
  • The computing unit 701 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, etc.
  • The computing unit 701 executes the various methods and processes described above, such as the expression driving method.
  • For example, in some embodiments, the expression driving method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708.
  • In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709.
  • When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the expression driving method described above can be performed.
  • Alternatively, in other embodiments, the computing unit 701 may be configured in any other appropriate way (for example, by means of firmware) to execute the expression driving method.
  • Various implementations of the systems and techniques described above can be realized in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
  • These various implementations may include being implemented in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
  • Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, so that when executed by the processor or controller, the program code causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • The program code may execute entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • To provide interaction with a user, the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer.
  • Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, speech input, or tactile input).
  • The systems and techniques described herein can be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer having a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: a local area network (LAN), a wide area network (WAN), and the Internet.
  • a computer system may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact through a communication network.
  • The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • the server can be a cloud server, a server of a distributed system, or a server combined with a blockchain.
  • It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above.
  • For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Provided are an expression driving method and apparatus, an electronic device, a non-transitory computer-readable storage medium, a computer program product, and a computer program. The expression driving method includes: inputting a source image with an expression and a target image without an expression respectively into a three-dimensional expression model to obtain a plurality of first facial attributes and a plurality of second facial attributes; replacing corresponding facial attributes among the second facial attributes with at least part of the first facial attributes; performing three-dimensional facial reconstruction and rendering on the replaced second facial attributes; and performing expression driving on the rendered three-dimensional facial image through an expression-driven model.

Description

Expression driving method and apparatus, electronic device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority to Chinese Patent Application No. 202111117185.0, filed in China on September 23, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of artificial intelligence, specifically to the fields of computer vision and deep learning, is applicable to scenarios such as face image processing and face recognition, and more specifically relates to an expression driving method and apparatus, an electronic device, and a storage medium.
BACKGROUND
Facial expression driving is one of the important technologies in computer vision. Its task is to drive the facial expression in a target picture with a facial expression picture, so that the two expressions are as consistent as possible. Facial expression driving is widely used in pan-entertainment applications.
SUMMARY
Embodiments of the present disclosure provide an expression driving method, an apparatus, an electronic device, a non-transitory computer-readable storage medium, a computer program product, and a computer program.
According to an aspect of the present disclosure, an expression driving method is provided, including: acquiring a source image with an expression and a target image without an expression; inputting the source image and the target image respectively into a three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image; replacing corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes; performing three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image; and inputting the rendered three-dimensional facial image into an expression-driven model to perform expression driving on the face in the target image.
According to another aspect of the present disclosure, an expression driving apparatus is provided, including: a first acquisition module configured to acquire a source image with an expression and a target image without an expression; a second acquisition module configured to input the source image and the target image respectively into a three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image; a replacement module configured to replace corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes; a processing module configured to perform three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image; and a driving module configured to input the rendered three-dimensional facial image into an expression-driven model to perform expression driving on the face in the target image.
According to another aspect of the present disclosure, an electronic device is provided, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the expression driving method described in the embodiment of the first aspect of the present disclosure.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions is provided, wherein the computer instructions are used to cause the computer to execute the expression driving method described in the embodiment of the first aspect of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, including a computer program which, when executed by a processor, implements the expression driving method described in the embodiment of the first aspect of the present disclosure.
According to another aspect of the present disclosure, a computer program is provided, wherein the computer program includes computer program code, and when the computer program code is run on a computer, the computer executes the expression driving method described in the embodiment of the first aspect of the present disclosure.
It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will be easily understood from the following description.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings are provided for a better understanding of the solution and do not constitute a limitation of the present disclosure, in which:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of an expression driving method according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a fifth embodiment of the present disclosure;
FIG. 7 is a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure.
DETAILED DESCRIPTION
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments to facilitate understanding, which should be regarded as merely exemplary. Therefore, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted from the following description.
Facial expression driving is one of the important technologies in computer vision. Its task is to drive the facial expression in a target picture with a facial expression picture, so that the two expressions are as consistent as possible. Facial expression driving is widely used in pan-entertainment applications.
In the related art, the facial 2D key points of a driving image are detected, expression transfer is performed on the facial 2D key points, and a corresponding expression-driven facial picture is generated.
However, the above expression driving technology based on 2D facial key points cannot decouple expression from facial pose. When the pose of the driving picture differs significantly from the pose of the target image, the pose of the generated picture follows the driving image; the original pose of the target image cannot be kept, and more diverse expression driving cannot be satisfied.
In view of the above problems, the present disclosure proposes an expression driving method and apparatus, an electronic device, a non-transitory computer-readable storage medium, a computer program product, and a computer program.
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure. It should be noted that the expression driving method of the embodiments of the present disclosure can be applied to the expression driving apparatus of the embodiments of the present disclosure, and the apparatus can be configured in an electronic device. The electronic device may be a mobile terminal, for example, a mobile phone, a tablet computer, a personal digital assistant, or another hardware device with an operating system.
As shown in FIG. 1, the expression driving method may include the following steps:
Step 101: acquiring a source image with an expression and a target image without an expression.
In the embodiments of the present disclosure, an image acquisition device may be used to photograph an object to obtain the source image with an expression and the target image without an expression, or the two images may be downloaded from a network. The expression in the source image may include facial expressions such as happy, furious, excited, or angry.
Step 102: inputting the source image and the target image respectively into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image.
In order to decouple the individual facial attributes from one another, the source image and the target image can be respectively input into the three-dimensional expression model, which can output a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image. It should be noted that the first facial attributes and the second facial attributes each include at least one of facial expression, facial pose, facial illumination, and facial shape, and the first facial attributes may differ from the second facial attributes.
In addition, it should be noted that the three-dimensional expression model may include a coding layer and a decoding layer. The coding layer receives the source image and the target image and outputs the plurality of first facial attributes corresponding to the source image and the plurality of second facial attributes corresponding to the target image, thereby decoupling the individual facial attributes from one another; the decoding layer performs three-dimensional facial reconstruction on the face in the target image according to the plurality of replaced second facial attributes to obtain a reconstructed three-dimensional facial image, thereby reconstructing a face from the plurality of replaced second facial attributes.
As an application scenario, in face image processing and face recognition scenarios, the three-dimensional expression model can be a 3D morphable statistical model of the face (3DMM for short). In order to decouple the individual facial attributes from one another, the face image and the target image can be respectively input into the coding layer of the 3DMM to obtain a plurality of first facial attributes corresponding to the face image and a plurality of second facial attributes corresponding to the target image.
Step 103: replacing corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes.
In order to keep the original facial pose of the target image and perform only expression driving on it, in the embodiments of the present disclosure, at least part of the facial attributes among the plurality of first facial attributes can be used to replace the corresponding facial attributes among the plurality of second facial attributes to obtain a plurality of replaced second facial attributes. For example, the facial expression in the first facial attributes may replace the facial expression in the second facial attributes, and the second facial attributes with the replaced facial expression may be taken as the plurality of replaced second facial attributes.
Step 104: performing three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image.
In order to present the plurality of replaced second facial attributes, they may be input into the decoding layer of the three-dimensional expression model to obtain a reconstructed three-dimensional facial image. A rendered three-dimensional facial image is then obtained through 3D rendering technology.
Step 105: inputting the rendered three-dimensional facial image into the expression-driven model to perform expression driving on the face in the target image.
It will be appreciated that the rendered three-dimensional facial image is not very realistic. Therefore, in order to make the expression-driven target image more realistic, in the embodiments of the present disclosure the rendered three-dimensional facial image can be input into the expression-driven model to drive the expression of the face in the target image.
In summary, a source image with an expression and a target image without an expression are acquired; the source image and the target image are respectively input into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image; at least part of the facial attributes among the plurality of first facial attributes replace the corresponding facial attributes among the plurality of second facial attributes to obtain a plurality of replaced second facial attributes; three-dimensional facial reconstruction and rendering are performed on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image; and the rendered three-dimensional facial image is input into the expression-driven model to perform expression driving on the face in the target image. In this way, the facial expression and the facial pose in the source image and the target image are decoupled, so that the facial expression and the facial pose of the target image can be controlled independently, better satisfying more diverse expression driving.
In order to keep the original facial pose of the target image and perform only expression driving on it, as shown in FIG. 2, which is a schematic diagram according to a second embodiment of the present disclosure, in the embodiments of the present disclosure the facial expression in the first facial attributes may be used to replace the facial expression in the second facial attributes to obtain the replaced second facial attributes. The embodiment shown in FIG. 2 may include the following steps:
Step 201: acquiring a source image with an expression and a target image without an expression.
Step 202: inputting the source image and the target image respectively into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image.
Step 203: replacing the facial expression in the second facial attributes according to the facial expression in the plurality of first facial attributes.
In the embodiments of the present disclosure, the first facial attributes and the second facial attributes may each include facial shape, facial pose, facial expression, and facial illumination, and the facial expression in the first facial attributes can be used to replace the facial expression in the second facial attributes.
Step 204: taking the replaced facial expression in the second facial attributes, together with the facial pose, facial shape, and facial illumination retained by the replacement in the second facial attributes, as the plurality of replaced second facial attributes.
That is, after the facial expression in the second facial attributes is replaced with the facial expression in the first facial attributes, the replaced facial expression, together with the facial pose, facial shape, and facial illumination originally retained in the second facial attributes, is taken as the plurality of replaced second facial attributes.
Step 205: performing three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image.
Step 206: inputting the rendered three-dimensional facial image into the expression-driven model to perform expression driving on the face in the target image.
It should be noted that steps 201 to 202 and steps 205 to 206 can each be implemented in any of the ways in the embodiments of the present disclosure; this is not limited here and is not repeated.
In summary, the facial expression in the second facial attributes is replaced according to the facial expression in the plurality of first facial attributes, and the replaced facial expression, together with the facial pose, facial shape, and facial illumination retained by the replacement, is taken as the plurality of replaced second facial attributes. In this way, the target image keeps its original facial pose and only expression driving is performed on it.
In order to perform facial reconstruction from the plurality of replaced second facial attributes, as shown in FIG. 3, which is a schematic diagram according to a third embodiment of the present disclosure, in the embodiments of the present disclosure three-dimensional facial reconstruction and rendering can be performed on the face in the target image according to the plurality of replaced second facial attributes to obtain a reconstructed three-dimensional facial image. The embodiment shown in FIG. 3 may include the following steps:
Step 301: acquiring a source image with an expression and a target image without an expression.
Step 302: inputting the source image and the target image respectively into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image.
Step 303: replacing corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes.
Step 304: performing three-dimensional facial reconstruction on the face in the target image according to the plurality of replaced second facial attribute coefficients to obtain a reconstructed three-dimensional facial image.
In the embodiments of the present disclosure, the plurality of replaced second facial attribute coefficients may be input into the decoding layer of the three-dimensional expression model, which may output a reconstructed three-dimensional facial image.
Step 305: performing three-dimensional facial rendering on the reconstructed three-dimensional facial image to obtain a rendered three-dimensional facial image.
In order to make the obtained three-dimensional facial image more accurate and realistic, 3D rendering technology may be used to perform three-dimensional facial rendering on the reconstructed three-dimensional facial image to obtain a rendered three-dimensional facial image.
Step 306: inputting the rendered three-dimensional facial image into the expression-driven model to perform expression driving on the face in the target image.
It should be noted that steps 301 to 303 and step 306 can each be implemented in any of the ways in the embodiments of the present disclosure; this is not limited here and is not repeated.
In summary, three-dimensional facial reconstruction is performed on the face in the target image according to the plurality of replaced second facial attribute coefficients to obtain a reconstructed three-dimensional facial image, and three-dimensional facial rendering is performed on the reconstructed image to obtain a rendered three-dimensional facial image. In this way, facial reconstruction can be performed from the plurality of replaced second facial attributes.
In order to enable the expression-driven model to perform expression driving on the rendered three-dimensional facial image and obtain a more realistic face-driven image, as shown in FIG. 4, which is a schematic diagram according to a fourth embodiment of the present disclosure, in the embodiments of the present disclosure the expression-driven model can be trained before the rendered three-dimensional facial image is input into it, so that the model outputs a more realistic face-driven image. The embodiment shown in FIG. 4 may include the following steps:
Step 401: acquiring a source image with an expression and a target image without an expression.
Step 402: inputting the source image and the target image respectively into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image.
Step 403: replacing corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes.
Step 404: performing three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image.
Step 405: acquiring multiple frames of sample images with expressions.
In the embodiments of the present disclosure, multiple frames of sample images with expressions can be captured with an image acquisition device or downloaded from a network. It should be noted that the multiple frames of sample images may be sample images of the same object with different expressions, or sample images of different objects with different expressions.
Step 406: for each frame of sample image, inputting the sample image into the coding layer of the three-dimensional expression model to obtain sample facial attributes corresponding to the sample image, where the sample facial attributes include at least one of sample facial expression, sample facial shape, sample facial pose, and sample facial illumination.
Further, each frame of the multiple frames of sample images with expressions can be respectively input into the coding layer of the three-dimensional expression model, which can output the sample facial attributes corresponding to each frame. It should be noted that the sample facial attributes may include at least one of sample facial expression, sample facial shape, sample facial pose, and sample facial illumination.
Step 407: inputting the sample facial expression, sample facial shape, sample facial pose, and sample facial illumination into the decoding layer of the three-dimensional expression model to perform three-dimensional facial reconstruction on the face in the sample image and obtain a reconstructed three-dimensional sample facial image.
Further, the sample facial expression, sample facial shape, sample facial pose, and sample facial illumination can be input into the decoding layer of the three-dimensional expression model, which performs three-dimensional facial reconstruction from the sample facial attributes to obtain a reconstructed three-dimensional sample facial image.
Step 408: performing three-dimensional facial rendering on the reconstructed three-dimensional sample facial image to obtain a rendered three-dimensional sample facial image.
In the embodiments of the present disclosure, three-dimensional rendering technology can be used to perform three-dimensional facial rendering on the reconstructed three-dimensional sample facial image to obtain a rendered three-dimensional sample facial image.
Step 409: training an initial expression-driven model according to the rendered three-dimensional sample facial image and the sample image to generate the expression-driven model.
As an example, the rendered three-dimensional sample facial image is input into the initial expression-driven model to obtain an expression prediction image; a loss function value is determined according to the difference between the sample image and the expression prediction image; and the initial expression-driven model is trained according to the loss function value so as to minimize it.
That is, to improve the accuracy of the expression-driven model, the rendered three-dimensional sample facial image can be input into the initial expression-driven model, which outputs an expression prediction image; the sample image is then compared with the expression prediction image to determine the difference between them, and the loss function value is determined from this difference. For example, the loss function value may include a first sub-loss function value and a second sub-loss function value. The first sub-loss function value can be determined from the absolute value of the difference between the sample image and the expression prediction image. Meanwhile, the sample image and the expression prediction image are input into a trained Visual Graphics Generator (VGG) to generate a semantic vector corresponding to the sample image and a semantic vector corresponding to the expression prediction image, and the second sub-loss function value is determined from the absolute value of the difference between the two semantic vectors. Then, according to the loss function value, the initial expression-driven model can be trained by means of gradient backpropagation so as to minimize the loss function value.
As another example, image normalization is performed on the rendered three-dimensional sample facial image and the sample image to obtain a target three-dimensional sample facial image; the target three-dimensional sample facial image is input into the initial expression-driven model to obtain an expression prediction image; a loss function value is determined according to the difference between the sample image and the expression prediction image; and the initial expression-driven model is trained according to the loss function value so as to minimize it.
In order to put the rendered three-dimensional sample facial image and the sample image on the same data distribution, narrow the gap between them, and ease training of the initial expression-driven model, image normalization can be applied to the rendered three-dimensional sample facial image and the sample image to obtain a target three-dimensional sample facial image. For example, the pixel value of each pixel in the rendered three-dimensional sample facial image and the sample image may be divided by 255 and then reduced by 0.5, so that each pixel value lies in [-0.5, 0.5]. Then, the target three-dimensional sample facial image can be input into the initial expression-driven model, which outputs an expression prediction image; the sample image is compared with the expression prediction image to determine the difference between them, and the loss function value is determined from this difference. According to the loss function value, the initial expression-driven model can be trained by gradient backpropagation so as to minimize the loss function value.
Step 410: inputting the rendered three-dimensional facial image into the expression-driven model to perform expression driving on the face in the target image.
It should be noted that steps 401 to 404 and step 410 can each be implemented in any of the ways in the embodiments of the present disclosure; this is not limited here and is not repeated.
In summary, multiple frames of sample images with expressions are acquired; for each frame, the sample image is input into the coding layer of the three-dimensional expression model to obtain the corresponding sample facial attributes, which include at least one of sample facial expression, sample facial shape, sample facial pose, and sample facial illumination; the sample facial expression, sample facial shape, sample facial pose, and sample facial illumination are input into the decoding layer of the three-dimensional expression model to perform three-dimensional facial reconstruction on the face in the sample image and obtain a reconstructed three-dimensional sample facial image; three-dimensional facial rendering is performed on the reconstructed image to obtain a rendered three-dimensional sample facial image; and the initial expression-driven model is trained on the rendered three-dimensional sample facial image and the sample image to generate the expression-driven model. In this way, the expression-driven model can perform expression driving on the rendered three-dimensional facial image to obtain a more realistic face-driven image.
To explain the above embodiments more clearly, an example is now described.
For example, as shown in FIG. 5, the source image represents a source image with an expression, the target image represents a target image without an expression, and 3DMM represents the three-dimensional expression model. The source image and the target image can be respectively input into the coding layer of the 3DMM to obtain the shape (facial shape), pose (facial pose), light (facial illumination), and exp (facial expression) corresponding to the source image, and the shape, pose, light, and exp corresponding to the target image. Next, the exp of the source image replaces the exp of the target image, so that the replaced facial attributes corresponding to the target image include the replaced exp together with the original shape, pose, and light retained from the target image. These replaced facial attributes can then be input into the decoding layer of the 3DMM for three-dimensional facial reconstruction and rendering to obtain a rendered three-dimensional facial image. Finally, the rendered three-dimensional facial image is input into the translator model (the expression-driven model), which outputs the expression-driven image corresponding to the target image.
The expression driving method of the embodiments of the present disclosure acquires a source image with an expression and a target image without an expression; inputs the source image and the target image respectively into the three-dimensional expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image; replaces corresponding facial attributes among the plurality of second facial attributes with at least part of the facial attributes among the plurality of first facial attributes to obtain a plurality of replaced second facial attributes; performs three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of replaced second facial attributes to obtain a rendered three-dimensional facial image; and inputs the rendered three-dimensional facial image into the expression-driven model to perform expression driving on the face in the target image. In this way, the facial expression and the facial pose in the source image and the target image are decoupled, so that the facial expression and the facial pose of the target image can be controlled independently, better satisfying more diverse expression driving.
为了实现上述实施例,本公开还提出一种表情驱动装置。
图6是根据本公开第五实施例的示意图,如图6所示,表情驱动装置600包括:第一获取模块610、第二获取模块620、替换模块630、处理模块640和驱动模块650。
其中,第一获取模块610用于获取具有表情的源图像及无表情的目标图像;第二获取模块620,用于将源图像和目标图像分别输入至三维表情表达模型中,以获取源图像对应的多个第一面部属性以及目标图像对应的多个第二面部属性;替换模块630,用于采用多个第一面部属性中的至少部分面部属性替换多个第二面部属性中对应的面部属性,以得到替换处理后的多个第二面部属性;处理模块640,用于根据替换处理后的多个第二面部属性,对目标图像中的面部进行三维面部重建和渲染,以得到渲染后的 三维面部图像;驱动模块650,用于将渲染后的三维面部图像输入至表情驱动模型中,以对目标图像中的面部进行表情驱动。
作为本公开实施例的一种可能实现方式,替换模块630,具体用于:根据多个第一面部属性中的面部表情,对第二面部属性中的面部表情进行替换处理;将第二面部属性中替换处理后的面部表情,以及第二面部属性中替换处理所保留的面部姿态、面部形状以及面部光照,作为替换处理后的多个第二面部属性。
作为本公开实施例的一种可能实现方式,处理模块640,具体用于:根据替换处理后的多个第二面部属性系数,对目标图像中的面部进行三维面部重建,以得到重建后的三维面部图像;对重建后的三维面部图像进行三维面部渲染,以得到渲染后的三维面部图像。
作为本公开实施例的一种可能实现方式,三维表情表达模型包括编码层和解码层;其中,编码层,用于将源图像和目标图像分别输入至三维表情表达模型中,以获取源图像对应的多个第一面部属性以及目标图像对应的多个第二面部属性;解码层,用于根据替换处理后的多个第二面部属性,对目标图像中的面部进行三维面部重建,以得到重建后的三维面部图像。
作为本公开实施例的一种可能实现方式,表情驱动装置600还包括:第三获取模块、第四获取模块、重建模块、渲染模块和训练模块。
其中,第三获取模块,用于获取多帧具有表情的样本图像;第四获取模块,用于针对每帧样本图像,将样本图像输入至三维表情表达模型的编码层中,以获取样本图像对应的样本面部属性;其中,样本面部属性包括:样本面部表情、样本面部形状、样本面部姿态和样本面部光照中的至少一种;重建模块,用于将样本面部表情、样本面部形状、样本面部姿态和样本面部光照输入至三维表情表达模型的解码层中,以对样本图像中的面部进行三维面部重建,得到重建后的三维样本面部图像;渲染模块,用于对重建后的三维样本面部图像进行三维面部渲染,以得到渲染后的三维样本面部图像;训练模块,用于根据渲染后的三维样本面部图像和样本图像,对初始的表情驱动模型进行训练,以生成表情驱动模型。
As a possible implementation of the embodiments of the present disclosure, the training module is specifically configured to: input the rendered three-dimensional sample facial image into the initial expression-driven model, so as to obtain an expression prediction image; determine a loss function value according to the difference between the sample image and the expression prediction image; and train the initial expression-driven model according to the loss function value, so that the loss function value is minimized.
As a possible implementation of the embodiments of the present disclosure, the training module is specifically configured to: perform image normalization on the rendered three-dimensional sample facial image and the sample image, so as to obtain a target three-dimensional sample facial image; input the target three-dimensional sample facial image into the initial expression-driven model, so as to obtain an expression prediction image; determine a loss function value according to the difference between the sample image and the expression prediction image; and train the initial expression-driven model according to the loss function value, so that the loss function value is minimized.
With the expression driving apparatus of the embodiments of the present disclosure, a source image with an expression and a target image without an expression are acquired; the source image and the target image are respectively input into the three-dimensional expression expression model to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image; at least part of the facial attributes in the plurality of first facial attributes are used to replace the corresponding facial attributes in the plurality of second facial attributes, yielding a plurality of second facial attributes after replacement processing; three-dimensional facial reconstruction and rendering are performed on the face in the target image according to the plurality of second facial attributes after replacement processing, yielding a rendered three-dimensional facial image; and the rendered three-dimensional facial image is input into the expression-driven model, so as to perform expression driving on the face in the target image. The apparatus thereby decouples the facial expression and the facial pose of the source and target images, so that the facial expression and facial pose of the target image can be controlled independently, better satisfying more diverse expression driving needs.
In the technical solutions of the present disclosure, the collection, storage, use, processing, transmission, provision and disclosure of any user personal information involved are all carried out with the consent of the user, comply with the provisions of relevant laws and regulations, and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure further provides an electronic device, a non-transitory computer-readable storage medium, a computer program product and a computer program.
According to an embodiment of the present disclosure, an electronic device is provided, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the expression driving method described in the above embodiments of the present disclosure.
According to an embodiment of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions is provided, wherein the computer instructions are used to cause a computer to perform the expression driving method described in the above embodiments of the present disclosure.
According to an embodiment of the present disclosure, a computer program product is provided, including a computer program that, when executed by a processor, implements the expression driving method described in the above embodiments of the present disclosure.
According to an embodiment of the present disclosure, a computer program is provided, wherein the computer program includes computer program code that, when run on a computer, causes the computer to perform the expression driving method described in the above embodiments of the present disclosure.
FIG. 7 shows a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the present disclosure described and/or claimed herein.
As shown in FIG. 7, the device 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 702 or loaded from a storage unit 708 into a random access memory (RAM) 703. The RAM 703 may also store various programs and data required for the operation of the device 700. The computing unit 701, the ROM 702 and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
A plurality of components in the device 700 are connected to the I/O interface 705, including: an input unit 706, such as a keyboard or a mouse; an output unit 707, such as various types of displays or speakers; a storage unit 708, such as a magnetic disk or an optical disc; and a communication unit 709, such as a network card, a modem or a wireless communication transceiver. The communication unit 709 allows the device 700 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The computing unit 701 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, etc. The computing unit 701 performs the various methods and processes described above, such as the expression driving method. For example, in some embodiments, the expression driving method may be implemented as a computer software program tangibly contained in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the expression driving method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the expression driving method by any other suitable means (for example, by means of firmware).
Various implementations of the systems and techniques described herein above may be realized in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device and at least one output device, and transmit data and instructions to the storage system, the at least one input device and the at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer or other programmable data processing apparatus, so that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on the remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared or semiconductor systems, apparatuses or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and techniques described here may be implemented on a computer having: a display device (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices may also be used to provide interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input or tactile input.
The systems and techniques described here may be implemented in a computing system that includes a back-end component (for example, as a data server), or a computing system that includes a middleware component (for example, an application server), or a computing system that includes a front-end component (for example, a user computer with a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described here), or a computing system that includes any combination of such back-end, middleware or front-end components. The components of the system may be interconnected by digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN) and the Internet.
A computer system may include a client and a server. The client and server are generally remote from each other and typically interact through a communication network. The relationship between client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that steps may be reordered, added or deleted using the various forms of procedures shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially or in a different order, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The above specific implementations do not constitute a limitation on the protection scope of the present disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present disclosure shall be included in the protection scope of the present disclosure.

Claims (18)

  1. An expression driving method, comprising:
    acquiring a source image with an expression and a target image without an expression;
    inputting the source image and the target image respectively into a three-dimensional expression expression model, so as to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image;
    using at least part of the facial attributes in the plurality of first facial attributes to replace the corresponding facial attributes in the plurality of second facial attributes, so as to obtain a plurality of second facial attributes after replacement processing;
    performing three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of second facial attributes after replacement processing, so as to obtain a rendered three-dimensional facial image; and
    inputting the rendered three-dimensional facial image into an expression-driven model, so as to perform expression driving on the face in the target image.
  2. The expression driving method according to claim 1, wherein using at least part of the facial attributes in the plurality of first facial attributes to replace the corresponding facial attributes in the plurality of second facial attributes, so as to obtain the plurality of second facial attributes after replacement processing, comprises:
    performing replacement processing on the facial expression in the second facial attributes according to the facial expression in the plurality of first facial attributes; and
    taking the replaced facial expression in the second facial attributes, together with the facial pose, facial shape and facial lighting retained by the replacement processing in the second facial attributes, as the plurality of second facial attributes after replacement processing.
  3. The expression driving method according to claim 1 or 2, wherein performing three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of second facial attributes after replacement processing, so as to obtain the rendered three-dimensional facial image, comprises:
    performing three-dimensional facial reconstruction on the face in the target image according to the plurality of second facial attributes after replacement processing, so as to obtain a reconstructed three-dimensional facial image; and
    performing three-dimensional facial rendering on the reconstructed three-dimensional facial image, so as to obtain the rendered three-dimensional facial image.
  4. The expression driving method according to claim 3, wherein the three-dimensional expression expression model comprises an encoding layer and a decoding layer;
    wherein the encoding layer is configured to obtain, from the source image and the target image respectively input into the three-dimensional expression expression model, the plurality of first facial attributes corresponding to the source image and the plurality of second facial attributes corresponding to the target image; and
    the decoding layer is configured to perform three-dimensional facial reconstruction on the face in the target image according to the plurality of second facial attributes after replacement processing, so as to obtain the reconstructed three-dimensional facial image.
  5. The expression driving method according to any one of claims 1 to 4, further comprising, before inputting the rendered three-dimensional facial image into the expression-driven model:
    acquiring a plurality of frames of sample images with expressions;
    for each frame of the sample images, inputting the sample image into the encoding layer of the three-dimensional expression expression model, so as to obtain sample facial attributes corresponding to the sample image, wherein the sample facial attributes comprise at least one of a sample facial expression, a sample facial shape, a sample facial pose and sample facial lighting;
    inputting the sample facial expression, the sample facial shape, the sample facial pose and the sample facial lighting into the decoding layer of the three-dimensional expression expression model, so as to perform three-dimensional facial reconstruction on the face in the sample image and obtain a reconstructed three-dimensional sample facial image;
    performing three-dimensional facial rendering on the reconstructed three-dimensional sample facial image, so as to obtain a rendered three-dimensional sample facial image; and
    training an initial expression-driven model according to the rendered three-dimensional sample facial image and the sample image, so as to generate the expression-driven model.
  6. The expression driving method according to claim 5, wherein training the initial expression-driven model according to the rendered three-dimensional sample facial image and the sample image, so as to generate the expression-driven model, comprises:
    inputting the rendered three-dimensional sample facial image into the initial expression-driven model, so as to obtain an expression prediction image;
    determining a loss function value according to the difference between the sample image and the expression prediction image; and
    training the initial expression-driven model according to the loss function value, so that the loss function value is minimized.
  7. The expression driving method according to claim 5, wherein training the initial expression-driven model according to the rendered three-dimensional sample facial image and the sample image, so as to generate the expression-driven model, comprises:
    performing image normalization on the rendered three-dimensional sample facial image and the sample image, so as to obtain a target three-dimensional sample facial image;
    inputting the target three-dimensional sample facial image into the initial expression-driven model, so as to obtain an expression prediction image;
    determining a loss function value according to the difference between the sample image and the expression prediction image; and
    training the initial expression-driven model according to the loss function value, so that the loss function value is minimized.
  8. An expression driving apparatus, comprising:
    a first acquisition module configured to acquire a source image with an expression and a target image without an expression;
    a second acquisition module configured to input the source image and the target image respectively into a three-dimensional expression expression model, so as to obtain a plurality of first facial attributes corresponding to the source image and a plurality of second facial attributes corresponding to the target image;
    a replacement module configured to use at least part of the facial attributes in the plurality of first facial attributes to replace the corresponding facial attributes in the plurality of second facial attributes, so as to obtain a plurality of second facial attributes after replacement processing;
    a processing module configured to perform three-dimensional facial reconstruction and rendering on the face in the target image according to the plurality of second facial attributes after replacement processing, so as to obtain a rendered three-dimensional facial image; and
    a driving module configured to input the rendered three-dimensional facial image into an expression-driven model, so as to perform expression driving on the face in the target image.
  9. The apparatus according to claim 8, wherein the replacement module is specifically configured to:
    perform replacement processing on the facial expression in the second facial attributes according to the facial expression in the plurality of first facial attributes; and
    take the replaced facial expression in the second facial attributes, together with the facial pose, facial shape and facial lighting retained by the replacement processing in the second facial attributes, as the plurality of second facial attributes after replacement processing.
  10. The apparatus according to claim 8 or 9, wherein the processing module is specifically configured to:
    perform three-dimensional facial reconstruction on the face in the target image according to the plurality of second facial attributes after replacement processing, so as to obtain a reconstructed three-dimensional facial image; and
    perform three-dimensional facial rendering on the reconstructed three-dimensional facial image, so as to obtain the rendered three-dimensional facial image.
  11. The apparatus according to claim 10, wherein the three-dimensional expression expression model comprises an encoding layer and a decoding layer;
    wherein the encoding layer is configured to obtain, from the source image and the target image respectively input into the three-dimensional expression expression model, the plurality of first facial attributes corresponding to the source image and the plurality of second facial attributes corresponding to the target image; and
    the decoding layer is configured to perform three-dimensional facial reconstruction on the face in the target image according to the plurality of second facial attributes after replacement processing, so as to obtain the reconstructed three-dimensional facial image.
  12. The apparatus according to any one of claims 8 to 11, further comprising:
    a third acquisition module configured to acquire a plurality of frames of sample images with expressions;
    a fourth acquisition module configured to, for each frame of the sample images, input the sample image into the encoding layer of the three-dimensional expression expression model, so as to obtain sample facial attributes corresponding to the sample image, wherein the sample facial attributes comprise at least one of a sample facial expression, a sample facial shape, a sample facial pose and sample facial lighting;
    a reconstruction module configured to input the sample facial expression, the sample facial shape, the sample facial pose and the sample facial lighting into the decoding layer of the three-dimensional expression expression model, so as to perform three-dimensional facial reconstruction on the face in the sample image and obtain a reconstructed three-dimensional sample facial image;
    a rendering module configured to perform three-dimensional facial rendering on the reconstructed three-dimensional sample facial image, so as to obtain a rendered three-dimensional sample facial image; and
    a training module configured to train an initial expression-driven model according to the rendered three-dimensional sample facial image and the sample image, so as to generate the expression-driven model.
  13. The apparatus according to claim 12, wherein the training module is specifically configured to:
    input the rendered three-dimensional sample facial image into the initial expression-driven model, so as to obtain an expression prediction image;
    determine a loss function value according to the difference between the sample image and the expression prediction image; and
    train the initial expression-driven model according to the loss function value, so that the loss function value is minimized.
  14. The apparatus according to claim 12, wherein the training module is specifically configured to:
    perform image normalization on the rendered three-dimensional sample facial image and the sample image, so as to obtain a target three-dimensional sample facial image;
    input the target three-dimensional sample facial image into the initial expression-driven model, so as to obtain an expression prediction image;
    determine a loss function value according to the difference between the sample image and the expression prediction image; and
    train the initial expression-driven model according to the loss function value, so that the loss function value is minimized.
  15. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the expression driving method according to any one of claims 1 to 7.
  16. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the expression driving method according to any one of claims 1 to 7.
  17. A computer program product, comprising a computer program that, when executed by a processor, implements the steps of the expression driving method according to any one of claims 1 to 7.
  18. A computer program, wherein the computer program comprises computer program code that, when run on a computer, causes the computer to perform the steps of the expression driving method according to any one of claims 1 to 7.
PCT/CN2022/088311 2021-09-23 2022-04-21 Expression driving method and apparatus, electronic device and storage medium WO2023045317A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111117185.0A CN113870399B (zh) 2021-09-23 2021-09-23 Expression driving method and apparatus, electronic device and storage medium
CN202111117185.0 2021-09-23

Publications (1)

Publication Number Publication Date
WO2023045317A1 (zh)

Family

ID=78993646

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/088311 WO2023045317A1 (zh) 2021-09-23 2022-04-21 Expression driving method and apparatus, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN113870399B (zh)
WO (1) WO2023045317A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115317A (zh) * 2023-08-10 2023-11-24 北京百度网讯科技有限公司 Avatar driving and model training method, apparatus, device and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870399B (zh) 2021-09-23 2022-12-02 北京百度网讯科技有限公司 Expression driving method and apparatus, electronic device and storage medium
CN115984947B (zh) * 2023-02-21 2023-06-27 北京百度网讯科技有限公司 Image generation method, training method, apparatus, electronic device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101944238A (zh) * 2010-09-27 2011-01-12 浙江大学 Data-driven facial expression synthesis method based on Laplacian transformation
US20200020173A1 (en) * 2018-07-16 2020-01-16 Zohirul Sharif Methods and systems for constructing an animated 3d facial model from a 2d facial image
CN112215050A (zh) * 2019-06-24 2021-01-12 北京眼神智能科技有限公司 Nonlinear 3DMM face reconstruction and pose normalization method, apparatus, medium and device
CN113327278A (zh) * 2021-06-17 2021-08-31 北京百度网讯科技有限公司 Three-dimensional face reconstruction method, apparatus, device and storage medium
CN113344777A (zh) * 2021-08-02 2021-09-03 中国科学院自动化研究所 Face swapping and reenactment method and apparatus based on three-dimensional face decomposition
CN113870399A (zh) * 2021-09-23 2021-12-31 北京百度网讯科技有限公司 Expression driving method and apparatus, electronic device and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11055514B1 (en) * 2018-12-14 2021-07-06 Snap Inc. Image face manipulation
CN110298917B (zh) * 2019-07-05 2023-07-25 北京华捷艾米科技有限公司 Face reconstruction method and system
CN110399825B (zh) * 2019-07-22 2020-09-29 广州华多网络科技有限公司 Facial expression transfer method and apparatus, storage medium and computer device
GB2586260B (en) * 2019-08-15 2021-09-15 Huawei Tech Co Ltd Facial image processing
CN110868598B (zh) * 2019-10-17 2021-06-22 上海交通大学 Video content replacement method and system based on generative adversarial networks
CN110941332A (zh) * 2019-11-06 2020-03-31 北京百度网讯科技有限公司 Expression driving method and apparatus, electronic device and storage medium
CN111599002A (zh) * 2020-05-15 2020-08-28 北京百度网讯科技有限公司 Method and apparatus for generating images
CN111968203B (zh) * 2020-06-30 2023-11-14 北京百度网讯科技有限公司 Animation driving method and apparatus, electronic device and storage medium
CN112907725B (zh) * 2021-01-22 2023-09-26 北京达佳互联信息技术有限公司 Image generation, image processing model training, and image processing method and apparatus
CN113221847A (zh) * 2021-06-07 2021-08-06 广州虎牙科技有限公司 Image processing method and apparatus, electronic device and computer-readable storage medium
CN113313085B (zh) * 2021-07-28 2021-10-15 北京奇艺世纪科技有限公司 Image processing method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN113870399B (zh) 2022-12-02
CN113870399A (zh) 2021-12-31

Similar Documents

Publication Publication Date Title
WO2023045317A1 (zh) Expression driving method and apparatus, electronic device and storage medium
US20210224993A1 (en) Method for training generative network, method for generating near-infrared image and device
CN113590858B (zh) Target object generation method and apparatus, electronic device and storage medium
CN113222916A (zh) Method, apparatus, device and medium for detecting images using a target detection model
WO2022257487A1 (zh) Training method and apparatus for depth estimation model, electronic device and storage medium
WO2022252674A1 (zh) Drivable three-dimensional character generation method and apparatus, electronic device and storage medium
EP3876204A2 (en) Method and apparatus for generating human body three-dimensional model, device and storage medium
CN113327278A (zh) Three-dimensional face reconstruction method, apparatus, device and storage medium
WO2023024653A1 (zh) Image processing method, image processing apparatus, electronic device and storage medium
CN113365146B (zh) Method, apparatus, device, medium and product for processing video
WO2023050868A1 (zh) Fusion model training method, image fusion method, apparatus, device and medium
US20220398834A1 (en) Method and apparatus for transfer learning
CN114792355B (zh) Avatar generation method and apparatus, electronic device and storage medium
US20230162426A1 (en) Image Processing Method, Electronic Device, and Storage Medium
EP4018411A1 (en) Multi-scale-factor image super resolution with micro-structured masks
CN116432012A (zh) Method, electronic device and computer program product for training a model
US11836836B2 (en) Methods and apparatuses for generating model and generating 3D animation, devices and storage mediums
CN113380269B (zh) Video image generation method, apparatus, device, medium and computer program product
CN113052962B (zh) Model training and information output method, apparatus, device and storage medium
CN117911588A (zh) Virtual object face driving and model training method, apparatus, device and medium
US20230115765A1 (en) Method and apparatus of transferring image, and method and apparatus of training image transfer model
WO2024040870A1 (zh) Text image generation, training, and text image processing method, and electronic device
US20220351455A1 (en) Method of processing image, electronic device, and storage medium
CN113240780B (zh) Method and apparatus for generating animation
CN114926322A (zh) Image generation method and apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22871380

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE