WO2021051605A1 - Expression-driven virtual video synthesis method and apparatus, and storage medium - Google Patents

Expression-driven virtual video synthesis method and apparatus, and storage medium

Info

Publication number
WO2021051605A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, expression, images, synthesized, base
Prior art date
Application number
PCT/CN2019/118285
Other languages
English (en)
French (fr)
Inventor
孙太武
张艳
周超勇
刘玉宇
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021051605A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23424 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone

Definitions

  • This application relates to the field of video synthesis technology, and in particular to an expression-driven virtual video synthesis method, device, system, and computer-readable storage medium.
  • Virtual object driving can be used in virtual social networking to drive personalized characters, thereby enhancing the realism and interactivity of virtual social interaction and improving users' virtual reality experience.
  • Existing virtual video synthesis mainly uses facial motion capture devices in film, animation, and game video production to track the changes of a real human face and map them onto a virtual character, in order to drive the virtual character's mouth shape and expressions.
  • The inventor realized that this approach cannot synthesize a virtual video that resembles the user's own facial features.
  • This application provides an expression-driven virtual video synthesis method, electronic device, system, and computer-readable storage medium. Its main purpose is to synthesize a photo for video chat from the user's own photo and a stranger's photo, and then, through expression driving, turn the synthesized photo into a video that resembles the user, so that the video is close to the user's true appearance while protecting the user's privacy.
  • the present application provides an expression-driven virtual video synthesis method, which is applied to an electronic device, and the method includes:
  • based on a GAN network, the image to be synthesized and the target image are synthesized to form a target photo, where the target image is the user's original image;
  • the present application also provides an electronic device, which includes a memory and a processor.
  • the memory stores an expression-driven virtual video synthesis program which, when executed by the processor, implements the following steps:
  • based on a GAN network, the image to be synthesized and the target image are synthesized to form a target photo, where the target image is the user's original image;
  • This application also provides an expression-driven virtual video synthesis system, including:
  • the composite image determining unit is used to obtain a set of images to be synthesized, and to determine the image to be synthesized from the set of images to be synthesized;
  • the target photo synthesis unit is used to synthesize the image to be synthesized and the target image based on the GAN network to form the target photo, where the target image is the user's original image;
  • the reference image acquisition unit is used to capture multiple frames of the unprocessed original video as reference images;
  • the transmission image determining unit is used to perform expression driving on the target photo based on the reference images, to obtain the transmission images corresponding to the virtual transmission video to be transmitted;
  • the transmission video forming unit is used to stitch the frames of the transmission images together to form the virtual composite video.
  • the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium stores an expression-driven virtual video synthesis program which, when executed by a processor, implements any of the steps of the expression-driven virtual video synthesis method described above.
  • The expression-driven virtual video synthesis method, electronic device, system, and computer-readable storage medium proposed in this application use the user's own photos and a stranger's photos to synthesize photos for video chat, and then use expression driving to turn the synthesized photos into a video that resembles the user, so that the video is close to the user's true appearance while protecting the user's privacy.
  • FIG. 1 is a schematic diagram of an application environment of a preferred embodiment of an expression-driven virtual video synthesis method of this application;
  • FIG. 2 is a schematic diagram of the modules of a preferred embodiment of the expression-driven virtual video synthesis system of this application;
  • FIG. 3 is a flowchart of a preferred embodiment of the expression-driven virtual video synthesis method of the present application.
  • This application provides an expression-driven virtual video synthesis method, which is applied to an electronic device 1.
  • Referring to FIG. 1, it is a schematic diagram of the application environment of the preferred embodiment of the expression-driven virtual video synthesis method of this application.
  • The electronic device 1 may be a terminal device with computing capabilities, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
  • the electronic device 1 includes a processor 12, a memory 11, a camera device 13, a network interface 14, and a communication bus 15.
  • the memory 11 includes at least one type of readable storage medium.
  • the at least one type of readable storage medium may be a non-volatile storage medium such as flash memory, hard disk, multimedia card, card-type memory 11, and the like.
  • the readable storage medium may be an internal storage unit of the electronic device 1, for example, the hard disk of the electronic device 1.
  • the readable storage medium may also be an external memory 11 of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1.
  • the readable storage medium of the memory 11 is generally used to store the expression-driven virtual video synthesis program 10 and the like installed in the electronic device 1.
  • the memory 11 can also be used to temporarily store data that has been output or will be output.
  • the processor 12 may be a central processing unit (CPU), a microprocessor, or another data processing chip, and is used to run the program code stored in the memory 11 or to process data, for example, to execute the expression-driven virtual video synthesis program 10.
  • the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the electronic device 1 and other electronic devices.
  • the communication bus 15 is used to realize the connection and communication between these components.
  • FIG. 1 only shows the electronic device 1 with the components 11-15, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • the electronic device 1 may also include a user interface.
  • the user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or earphones.
  • the user interface may also include a standard wired interface and a wireless interface.
  • the electronic device 1 may also include a display, and the display may also be referred to as a display screen or a display unit.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, and an organic light-emitting diode (Organic Light-Emitting Diode, OLED) touch device, etc.
  • the display is used for displaying information processed in the electronic device 1 and for displaying a visualized user interface.
  • the electronic device 1 further includes a touch sensor.
  • the area provided by the touch sensor for the user to perform touch operations is called the touch area.
  • the touch sensor here may be a resistive touch sensor, a capacitive touch sensor, or the like.
  • the touch sensor includes not only a contact type touch sensor, but also a proximity type touch sensor and the like.
  • the touch sensor may be a single sensor, or may be, for example, a plurality of sensors arranged in an array.
  • the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor.
  • the display and the touch sensor are layered to form a touch display screen. The device detects the touch operation triggered by the user based on the touch screen.
  • the electronic device 1 may also include a radio frequency (RF) circuit, a sensor, an audio circuit, etc., which will not be repeated here.
  • the memory 11, which is a computer storage medium, may include an operating system and the expression-driven virtual video synthesis program 10; when the processor 12 executes the expression-driven virtual video synthesis program 10 stored in the memory 11, the following steps are implemented:
  • obtaining a set of images to be synthesized, and determining the image to be synthesized from the set;
  • synthesizing the image to be synthesized and the target image based on a GAN network to form a target photo, where the target image is the user's original image;
  • capturing multiple frames of the unprocessed original video as reference images;
  • performing expression driving on the target photo based on the reference images to obtain the transmission images corresponding to the virtual transmission video to be transmitted;
  • stitching the frames of the transmission images together to form the virtual composite video.
  • As a specific example, the set of images to be synthesized includes multiple groups of images, each group includes multiple expression images of the same person, and these expression images serve as the expression bases of the image to be synthesized.
  • The target image is a group of images corresponding to the expression bases of the image to be synthesized; the step of synthesizing the image to be synthesized and the target image includes synthesizing the target image with the image of the same expression in the image to be synthesized, where the expression of the synthesized target photo is consistent with the expression of the corresponding image in the image to be synthesized.
  • The step of performing expression driving on the target photo based on the reference image, to obtain the transmission image corresponding to the virtual transmission video to be transmitted, includes: setting an average face, a set of expression bases, and a set of identity bases; setting the coefficients of the average face and the identity bases to fixed values; and making the expression-base coefficients of the target photo follow the expression-base coefficients of the reference image, so as to form the transmission image corresponding to the reference image.
  • The step of forming the transmission image corresponding to the reference image includes: converting the 3D meshes of the average face, the expression bases, and the identity bases into the corresponding 2D images, and obtaining the corresponding face key-point coordinates from the 2D images; and obtaining the face key-point coordinates of the reference image;
  • through iteration, the coefficients of the expression bases are changed to minimize the Euclidean distance between the face key-point coordinates obtained from the 2D images and the face key-point coordinates of the reference image, thereby determining a set of expression-base coefficients; the obtained coefficients are then applied to the target photo so that the coefficients of the same expression bases are identical, yielding the final transmission image.
  • The electronic device 1 proposed in the above embodiment optimizes the expression-driving process so that it can run at real-time or near-real-time speed, and cleverly combines it with the GAN network, which cannot run in real time, thereby resolving the dilemma in video chats with strangers of not wanting to fully reveal one's true self while still having to present oneself.
  • Using the above expression-driven virtual video synthesis method, a live-action video that is closer to the user can be synthesized, reducing the sense of incongruity of the video. At the same time, personal privacy is protected: a similar portrait is used in place of the real one for video conversations, and the expressions are more realistic and natural, closer to the feelings the user wants to express.
  • the present application also provides an expression-driven virtual video synthesis system.
  • As shown in FIG. 2, a schematic diagram of the modules of a preferred embodiment of the expression-driven virtual video synthesis system of the present application, the expression-driven virtual video synthesis system of this embodiment includes: a to-be-synthesized image determining unit 11, a target photo synthesizing unit 12, a reference image acquiring unit 13, a transmission image determining unit 14, and a transmission video forming unit 15.
  • The functions or operation steps implemented by modules 11-15 are similar to those described above and are not detailed here again; illustratively:
  • the to-be-synthesized image determining unit 11 is configured to obtain a set of images to be synthesized, and to determine the image to be synthesized from the set;
  • the target photo synthesizing unit 12 is used to synthesize the image to be synthesized and the target image based on the GAN network to form a target photo, where the target image is the user's original image;
  • the reference image acquisition unit 13 is used to capture multiple frames of the unprocessed original video as reference images;
  • the transmission image determining unit 14 is configured to perform expression driving on the target photo based on the reference images, to obtain the transmission images corresponding to the virtual transmission video to be transmitted;
  • the transmission video forming unit 15 is used to stitch the frames of the transmission images together to form the virtual composite video.
  • the transmission image determining unit 14 includes:
  • the setting module 141 is used to set an average face, a set of expression bases, and a set of identity bases;
  • the transmission image forming module 142 is used to set the coefficients of the average face and the identity base to a fixed value, and control the expression base coefficient of the target photo to change with the change of the expression base coefficient of the reference image to form a transmission image corresponding to the reference image .
  • the transmission image forming module 142 further includes:
  • the dimensional conversion module is used to convert the 3D grid map of the average face, expression base, and identity base into the corresponding 2D image, and obtain the corresponding key point coordinates of the face based on the 2D image;
  • the key point acquisition module is used to acquire the face key point coordinates of the reference image
  • the expression-base coefficient determination module is used to change, through iteration, the coefficients of the expression bases so as to minimize the Euclidean distance between the face key-point coordinates obtained from the 2D image and the face key-point coordinates of the reference image, thereby determining a set of expression-base coefficients;
  • the transmission image acquisition module is used to apply the obtained expression-base coefficients to the target photo so that the coefficients of the same expression bases are identical, and thereby obtain the final transmission image.
  • the dimension conversion module projects the three-dimensional coordinates of the 3D meshes of the average face, the expression bases, and the identity bases onto the 2D plane through the identity matrix, the rotation matrix, and the projection matrix, so as to obtain the two-dimensional coordinates corresponding to the 2D image.
  • the target photo synthesis unit 12 further includes:
  • the network construction module 121 is used to construct a GAN network and initialize the parameters of the GAN network;
  • the generator generating image module 122 is used to input the to-be-composited image and the target image into the generator of the GAN network to obtain the generator-generated image;
  • the target photo determination module 123 is used to input the image to be synthesized, the target image, and the generator-generated image into the discriminator of the GAN network, and to obtain the optimal image result as the target photo by maximizing the discriminator's discrimination capability and minimizing the generator's distribution loss function.
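  • As a concrete illustration of the modules 121-123 described above, the following is a minimal PyTorch-style sketch of a two-input GAN training step of this kind. The network architectures, layer sizes, loss weights, and the feature-matching term are illustrative assumptions, not the implementation of this application.

```python
# Illustrative sketch only: a two-input GAN step of the kind described above.
# Architecture sizes, the patch discriminator, and the feature-matching loss are assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Takes the image to be synthesized and the target image stacked on the
        # channel axis (3 + 3 = 6 channels) and outputs one synthesized image.
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, img_a, img_b):
        return self.net(torch.cat([img_a, img_b], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.head = nn.Conv2d(128, 1, 4)  # patch-level real/fake scores

    def forward(self, x):
        feat = self.features(x)
        return self.head(feat), feat

def train_step(gen, disc, opt_g, opt_d, img_a, img_b, real):
    bce = nn.BCEWithLogitsLoss()
    # 1) Maximize the discriminator's ability to tell real photos from generated ones.
    fake = gen(img_a, img_b).detach()
    d_real, _ = disc(real)
    d_fake, _ = disc(fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) Minimize the generator loss, plus a supervised feature-matching term that
    #    keeps the generated image close to both input images in feature space.
    fake = gen(img_a, img_b)
    d_fake, f_fake = disc(fake)
    with torch.no_grad():
        _, f_a = disc(img_a)
        _, f_b = disc(img_b)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) \
             + (f_fake - f_a).abs().mean() + (f_fake - f_b).abs().mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```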
  • this application also provides a virtual video synthesis method based on expression drive.
  • FIG. 3 is a flowchart of a preferred embodiment of a virtual video synthesis method based on expression-driven in this application.
  • the method can be executed by a device, and the device can be implemented by software and/or hardware.
  • the virtual video synthesis method based on expression drive includes:
  • S110 Obtain a set of images to be synthesized, and determine the images to be synthesized from the set of images to be synthesized.
  • This step can also be understood as the selection of materials. The set of images to be synthesized contains multiple groups of images; each group consists of photos of the same person with different expressions, and the number of photos equals the number of expression bases. For example, if there are 47 expression bases, the person has 47 expressions in total, and these 47 expressions are mutually independent.
  • The number of expression bases can be set according to requirements. The more expression bases there are, the more delicate and accurate the fitted expressions will be; at the same time, however, the computational complexity is higher and the time needed to process one image grows, which may lower the frame rate (the number of frames processed per second) and prevent real-time operation. Conversely, the fewer the expression bases, the faster the processing, although the expression error is likely to be larger; the specific number of expression bases can be set according to actual needs.
  • In this step, the set of images to be synthesized is recommended according to the user's preferences and usage history; the set includes multiple image groups, each group contains the expression images of the same person, and each expression image serves as an expression base of the image to be synthesized. The user may also take a set of expression-base photos of themselves according to the expression-base requirements. A hypothetical layout for such a set is sketched below.
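  • The following is a hypothetical sketch of how such a set of images to be synthesized could be organized, assuming one folder per person and one photo per expression base; the directory layout, file names, and the 47-base default are assumptions for illustration only.

```python
# Hypothetical organization of the image set to be synthesized: one group per person,
# one photo per expression base. Layout and names are illustrative assumptions.
from pathlib import Path
from typing import Dict

NUM_EXPRESSION_BASES = 47  # configurable; more bases = finer fit, higher cost

def load_image_groups(root: Path) -> Dict[str, Dict[int, Path]]:
    """Each sub-directory is one person; files 0.jpg .. 46.jpg are that
    person's expression-base photos."""
    groups = {}
    for person_dir in sorted(root.iterdir()):
        if person_dir.is_dir():
            groups[person_dir.name] = {
                i: person_dir / f"{i}.jpg" for i in range(NUM_EXPRESSION_BASES)
            }
    return groups

def select_group(groups: Dict[str, Dict[int, Path]], person_id: str) -> Dict[int, Path]:
    """Determine the images to be synthesized: the chosen person's full set,
    accepted only if every expression base is present."""
    group = groups[person_id]
    missing = [i for i, p in group.items() if not p.exists()]
    if missing:
        raise ValueError(f"missing expression bases: {missing}")
    return group
```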
  • S120 Synthesize the to-be-combined image and the target image based on the GAN network to form a target photo, and the target image is the user's original image.
  • the network input of the GAN network is two images, and the output is an image;
  • the target image is a group of images or photos corresponding to the expression base of the image to be synthesized;
  • the steps of synthesizing the image to be synthesized and the target image include: controlling the expression characteristics during synthesis, and synthesizing the target image with the image of the same expression in the image to be synthesized, so that the expression of the synthesized target photo is consistent with the expression of the target image.
  • the GAN network, that is, the generative adversarial network, trains two neural networks against each other: one tries to generate synthetic images that are indistinguishable from real photos, and the other tries to tell them apart.
  • after training for some time, the image-generating network can produce images that look convincingly real.
  • at the same time, to ensure that the generated image is as similar as possible to the two input images, the feature space can be adjusted so that the sum of the differences between the generated image's features and the features of the two input images is minimized, which serves as a supervised loss.
  • the target image mainly refers to a group of images or photos, taken by the user, that correspond to the expression bases of the image to be synthesized.
  • during synthesis, the expression characteristics are controlled so that each expression in the photos the user has taken of themselves is paired with the corresponding expression in the selected images or photos to be synthesized,
  • and the corresponding expressions are synthesized in a one-to-one correspondence, with the synthesized expression remaining consistent with the expression before synthesis.
  • Specifically, the steps of synthesizing the image to be synthesized and the target image based on the GAN network to form the target photo include: constructing the GAN network and initializing its parameters; inputting the image to be synthesized and the target image into the generator of the GAN network to obtain a generator-generated image; and inputting the image to be synthesized, the target image, and the generator-generated image into the discriminator of the GAN network, and obtaining the optimal image result as the target photo by maximizing the discriminator's discrimination capability and minimizing the generator's distribution loss function.
  • S130: Capture multiple frames of the unprocessed original video as reference images.
  • S140: Perform expression driving on the target photo based on the reference images to obtain the transmission images corresponding to the virtual transmission video to be transmitted.
  • S150: Stitch the frames of the transmission images together to form the virtual composite video.
  • The step of performing expression driving on the target photo based on the reference image, to obtain the transmission image corresponding to the virtual transmission video to be transmitted, includes: setting an average face, a set of expression bases, and a set of identity bases; setting the coefficients of the average face and the identity bases to fixed values; and controlling the expression-base coefficients of the target photo to change with the expression-base coefficients of the reference image, to form the transmission image corresponding to the reference image.
  • The step of forming the transmission image corresponding to the reference image includes: converting the 3D meshes of the average face, the expression bases, and the identity bases into corresponding 2D images and obtaining the face key-point coordinates from them; obtaining the face key-point coordinates of the reference image; iteratively changing the expression-base coefficients to minimize the Euclidean distance between the two sets of key points, thereby determining a set of expression-base coefficients; and applying the obtained coefficients to the target photo so that the coefficients of the same expression bases are identical, to obtain the final transmission image.
  • Each frame in the actual video is processed as described above, and the processed frames (a set of transmission images) are combined into the final transmission video.
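  • A minimal sketch of that per-frame pipeline is shown below, using OpenCV to read the original video and write the transmission video; drive_target_photo is a hypothetical stand-in for the expression-driving step described above, and the codec and fallback frame rate are assumptions.

```python
# Sketch of the per-frame pipeline: each captured frame of the unprocessed original
# video serves as a reference image, drives the target photo, and the resulting
# transmission images are stitched back into the transmission video.
import cv2

def synthesize_transmission_video(src_path, dst_path, target_photo, drive_target_photo):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # preset frame-rate fallback (assumption)
    writer = None
    while True:
        ok, reference = cap.read()            # one reference image per frame
        if not ok:
            break
        transmission = drive_target_photo(target_photo, reference)
        if writer is None:
            h, w = transmission.shape[:2]
            writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        writer.write(transmission)            # stitch frames into the final video
    cap.release()
    if writer is not None:
        writer.release()
```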
  • Specifically, first set an average face S0 (mean face), a set of expression bases Sexp (expression base), and a set of identity bases Sid (identity base).
  • The expression bases, also called expression texture maps, are the images of the "real self" and the "false self" under the different expressions required above, which were already obtained in step S120. Because the identity is fixed, the coefficients of the average face and the identity bases can be determined once and kept unchanged, so they can be ignored here. Therefore, it is only necessary to change the coefficients of the expression bases so that the transmission image (the false self) changes with the expression changes of the reference image (the real self); this is the expression driving mentioned above.
  • the above average face, expression base, and identity base are all 3D grid graphs, that is, 3D mesh.
  • the 3D grid map is projected onto 2D to obtain the corresponding 2D image.
  • the coordinates of the key points of the face of the corresponding image are obtained.
  • the three-dimensional coordinates corresponding to the 3D grid map of the average face, the expression base, and the identity base are projected onto the 2D plane through the identity matrix, the rotation matrix, and the projection matrix to obtain the two-dimensional coordinates corresponding to the 2D image.
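  • A minimal numpy sketch of such a projection is given below, assuming an orthographic projection with a global scale and a 2D translation; the exact camera model is not specified in this application, so these parameters are illustrative assumptions.

```python
# Minimal numpy sketch of projecting 3D mesh vertices (average face, expression bases,
# identity bases) onto the 2D image plane using a rotation matrix and a projection
# matrix. The scale and translation terms are assumptions for illustration.
import numpy as np

def project_to_2d(vertices_3d: np.ndarray,
                  rotation: np.ndarray,
                  scale: float = 1.0,
                  translation_2d: np.ndarray = np.zeros(2)) -> np.ndarray:
    """vertices_3d: (N, 3) mesh coordinates; rotation: (3, 3); returns (N, 2)."""
    projection = np.array([[1.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0]])    # orthographic: drop the z axis
    rotated = vertices_3d @ rotation.T           # (x, y, z) in camera coordinates
    return scale * (rotated @ projection.T) + translation_2d
```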
  • During a video call, the camera detects the reference image, and a set of key-point positions can likewise be obtained from the reference image.
  • When determining the expression-base coefficients, the coefficients are changed through continuous iteration so that the L2 loss (that is, the Euclidean distance) between the key-point positions obtained by projecting the 3D mesh (the aforementioned x', y') and the key-point positions of the reference image is minimized; the expression-base coefficients are thereby determined, as in the sketch below.
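  • The following sketch illustrates one way such an iterative fit could look, assuming the projected landmarks are linear in the expression-base coefficients; the learning rate, iteration count, and [0, 1] clipping are assumptions and would need tuning to the landmark scale.

```python
# Illustrative fitting loop: adjust the expression-base coefficients so that the
# Euclidean distance between the landmarks projected from the 3D mesh and the
# landmarks detected in the reference image is minimized.
import numpy as np

def fit_expression_coefficients(base_landmarks_2d: np.ndarray,  # (L, 2) projected S0 + Sid landmarks
                                exp_landmarks_2d: np.ndarray,   # (K, L, 2) projected expression offsets
                                ref_landmarks_2d: np.ndarray,   # (L, 2) key points of the reference image
                                lr: float = 1e-6,
                                steps: int = 200) -> np.ndarray:
    K = exp_landmarks_2d.shape[0]
    A = exp_landmarks_2d.reshape(K, -1).T        # (2L, K) linear landmark model
    b = base_landmarks_2d.reshape(-1)            # (2L,)
    y = ref_landmarks_2d.reshape(-1)             # (2L,)
    w = np.zeros(K)
    for _ in range(steps):
        residual = A @ w + b - y                 # landmark error driving the L2 loss
        w -= lr * 2.0 * (A.T @ residual)         # gradient step on the squared distance
        w = np.clip(w, 0.0, 1.0)                 # keep blend weights in a plausible range
    return w
```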
  • It should be noted that the "synthesis" in step S120 combines photos of two people with the same expression,
  • whereas the "synthesis" of the transmission image combines different expression bases of the same person to obtain the final expression.
  • In other words, for the photos synthesized in step S120, the person finally obtained differs from the two input persons, but the expressions are the same;
  • for the photos synthesized for the transmission images, the person is the same, but the final synthesized expression does not belong to any of the 47 expression bases; instead, it is the same expression as that of the reference image in the frame captured from the video.
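  • As an illustration of applying the fitted coefficients to the target photo, the sketch below blends the target person's expression-base images with those coefficients; blending in pixel space is a simplification for illustration, whereas the application combines expression bases of the 3D model.

```python
# Hedged sketch of applying the fitted coefficients to the target photo: the
# "final false self" frame is a blend of the target person's own expression-base
# images with the same coefficients, so its expression matches the reference frame.
import numpy as np

def blend_target_expression(target_exp_images: np.ndarray,  # (K, H, W, 3) target photo's expression bases
                            coefficients: np.ndarray         # (K,) fitted expression-base coefficients
                            ) -> np.ndarray:
    weights = coefficients / max(coefficients.sum(), 1e-8)   # normalize so the blend stays in range
    blended = np.tensordot(weights, target_exp_images.astype(np.float32), axes=1)
    return np.clip(blended, 0, 255).astype(np.uint8)
```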
  • Finally, each frame of the actual video is processed according to the above steps and at the preset frame rate, and the processed frames are then combined into the final transmission video.
  • During synthesis, the above calculation process is simplified through the least squares method and linear regression, so that real-time video transmission can be achieved.
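  • Because the projected landmarks are linear in the expression-base coefficients, the iterative fit above can be replaced by an ordinary least squares / linear regression solve per frame, which is one plausible reading of the simplification mentioned here; the clipping of the solution is an assumption.

```python
# One way the least-squares simplification could look: solve for the per-frame
# expression-base coefficients in closed form instead of iterating.
import numpy as np

def solve_expression_coefficients(A: np.ndarray,  # (2L, K) projected expression-offset matrix
                                  b: np.ndarray,  # (2L,) projected base landmarks
                                  y: np.ndarray   # (2L,) reference-image landmarks
                                  ) -> np.ndarray:
    w, *_ = np.linalg.lstsq(A, y - b, rcond=None)  # minimizes ||A w - (y - b)||^2
    return np.clip(w, 0.0, 1.0)
```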
  • At present, because the GAN network is difficult to run in real time, it is mainly used to generate data for data augmentation, image super-resolution, style transfer, and the like, while expression driving is mainly used to drive virtual avatars, such as a cartoon character resembling oneself, a cat, a dog, and so on. This gives a sense of unreality in actual chats and degrades the user experience.
  • For this reason, this application optimizes the expression-driving process so that it reaches real-time or near-real-time speed, and cleverly combines it with the GAN network, which cannot run in real time, thereby resolving the dilemma in video chats with strangers of not wanting to fully reveal one's true self while still having to present oneself.
  • Using the above expression-driven virtual video synthesis method, a live-action video that is closer to the user can be synthesized, reducing the sense of incongruity of the video. At the same time, personal privacy is protected: a similar portrait is used in place of the real one for video conversations, and the expressions are more realistic and natural, closer to the feelings the user wants to express.
  • An embodiment of the present application also proposes a computer-readable storage medium that contains an expression-driven virtual video synthesis program which, when executed by a processor, implements the following operations:
  • obtaining a set of images to be synthesized and determining the image to be synthesized from it; synthesizing the image to be synthesized and the target image based on a GAN network to form a target photo, where the target image is the user's original image; capturing multiple frames of the unprocessed original video as reference images; performing expression driving on the target photo based on the reference images to obtain the transmission images corresponding to the virtual transmission video to be transmitted; and stitching the frames of the transmission images together to form the virtual composite video.
  • As a specific example, the set of images to be synthesized includes multiple groups of images, each group includes multiple expression images of the same person, and these expression images serve as the expression bases of the image to be synthesized.
  • The specific implementation of the computer-readable storage medium of the present application is substantially the same as that of the expression-driven virtual video synthesis method and the electronic device described above, and is not repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

This application relates to the field of video synthesis technology, and proposes an expression-driven virtual video synthesis method, apparatus, system, and storage medium. The method includes: obtaining a set of images to be synthesized, and determining the image to be synthesized from the set; synthesizing the image to be synthesized and a target image based on a GAN network to form a target photo, where the target image is the user's original image; capturing multiple frames of the unprocessed original video as reference images; performing expression driving on the target photo based on the reference images to obtain the transmission images corresponding to the virtual transmission video to be transmitted; and stitching the frames of the transmission images together to form a virtual composite video. This application synthesizes a photo for video chat from the user's own photo and a stranger's photo, and then, through expression driving, turns the synthesized photo into a video that resembles the user, which is close to the user's true appearance while protecting the user's privacy.

Description

Expression-driven virtual video synthesis method and apparatus, and storage medium
This application claims priority to the Chinese patent application No. 201910885913.9, filed on September 19, 2019 and entitled "Expression-driven virtual video synthesis method and apparatus, and storage medium".
TECHNICAL FIELD
This application relates to the field of video synthesis technology, and in particular to an expression-driven virtual video synthesis method, apparatus, system, and computer-readable storage medium.
BACKGROUND
At present, virtual video synthesis is widely used in many fields and has a large market. Virtual social networking, which has arisen from it, is an important application in the field of virtual reality. Virtual object driving can be applied in virtual social networking to drive personalized characters, thereby enhancing the realism and interactivity of virtual social interaction and improving users' virtual reality experience.
However, existing virtual video synthesis mainly uses facial motion capture devices in film, animation, and game video production to track the changes of a real human face and map them onto a virtual character, in order to drive the virtual character's mouth shape and expressions. The inventor realized that this cannot achieve virtual video synthesis that resembles the user's own facial features.
Likewise, in today's social applications it is common for strangers to video chat with each other; how, in this situation, to select a video that looks close to oneself yet is not one's real face is also a technical problem that urgently needs to be solved.
SUMMARY
This application provides an expression-driven virtual video synthesis method, electronic device, system, and computer-readable storage medium. Its main purpose is to synthesize a photo for video chat from the user's own photo and a stranger's photo, and then, through expression driving, turn the synthesized photo into a video that resembles the user, which is close to the user's true appearance while protecting the user's privacy.
To achieve the above purpose, this application provides an expression-driven virtual video synthesis method, applied to an electronic device, the method including:
obtaining a set of images to be synthesized, and determining the image to be synthesized from the set of images to be synthesized;
synthesizing the image to be synthesized and a target image based on a GAN network to form a target photo, the target image being the user's original image;
capturing multiple frames of the unprocessed original video as reference images;
performing expression driving on the target photo based on the reference images, to obtain the transmission images corresponding to the virtual transmission video to be transmitted; and
stitching the frames of the transmission images together to form a virtual composite video.
To achieve the above purpose, this application also provides an electronic device, which includes a memory and a processor; the memory stores an expression-driven virtual video synthesis program which, when executed by the processor, implements the following steps:
obtaining a set of images to be synthesized, and determining the image to be synthesized from the set of images to be synthesized;
synthesizing the image to be synthesized and a target image based on a GAN network to form a target photo, the target image being the user's original image;
capturing multiple frames of the unprocessed original video as reference images;
performing expression driving on the target photo based on the reference images, to obtain the transmission images corresponding to the virtual transmission video to be transmitted; and
stitching the frames of the transmission images together to form a virtual composite video.
This application also provides an expression-driven virtual video synthesis system, including:
a composite image determining unit, used to obtain a set of images to be synthesized and determine the image to be synthesized from the set of images to be synthesized;
a target photo synthesis unit, used to synthesize the image to be synthesized and a target image based on a GAN network to form a target photo, the target image being the user's original image;
a reference image acquisition unit, used to capture multiple frames of the unprocessed original video as reference images;
a transmission image determining unit, used to perform expression driving on the target photo based on the reference images, to obtain the transmission images corresponding to the virtual transmission video to be transmitted; and
a transmission video forming unit, used to stitch the frames of the transmission images together to form a virtual composite video.
In addition, to achieve the above purpose, this application also provides a computer-readable storage medium that contains an expression-driven virtual video synthesis program which, when executed by a processor, implements any of the steps of the expression-driven virtual video synthesis method described above.
The expression-driven virtual video synthesis method, electronic device, system, and computer-readable storage medium proposed in this application synthesize a photo for video chat from the user's own photo and a stranger's photo, and then, through expression driving, turn the synthesized photo into a video that resembles the user, which is close to the user's true appearance while protecting the user's privacy.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of the application environment of a preferred embodiment of the expression-driven virtual video synthesis method of this application;
FIG. 2 is a schematic diagram of the modules of a preferred embodiment of the expression-driven virtual video synthesis system of this application;
FIG. 3 is a flowchart of a preferred embodiment of the expression-driven virtual video synthesis method of this application.
The realization of the purpose, functional features, and advantages of this application will be further described with reference to the accompanying drawings in combination with the embodiments.
DETAILED DESCRIPTION
It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
This application provides an expression-driven virtual video synthesis method applied to an electronic device 1. Referring to FIG. 1, it is a schematic diagram of the application environment of a preferred embodiment of the expression-driven virtual video synthesis method of this application.
In this embodiment, the electronic device 1 may be a terminal device with computing capabilities, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
The electronic device 1 includes a processor 12, a memory 11, a camera device 13, a network interface 14, and a communication bus 15.
The memory 11 includes at least one type of readable storage medium. The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, or a card-type memory 11. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, for example the hard disk of the electronic device 1. In other embodiments, the readable storage medium may also be an external memory 11 of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1.
In this embodiment, the readable storage medium of the memory 11 is generally used to store the expression-driven virtual video synthesis program 10 and the like installed on the electronic device 1. The memory 11 can also be used to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), a microprocessor, or another data processing chip, and is used to run the program code stored in the memory 11 or to process data, for example, to execute the expression-driven virtual video synthesis program 10.
The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the electronic device 1 and other electronic devices.
The communication bus 15 is used to realize the connection and communication between these components.
FIG. 1 only shows the electronic device 1 with the components 11-15, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
Optionally, the electronic device 1 may also include a user interface. The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or earphones; optionally, the user interface may also include a standard wired interface and a wireless interface.
Optionally, the electronic device 1 may also include a display, which may also be called a display screen or a display unit. In some embodiments it may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch device, or the like. The display is used to show the information processed in the electronic device 1 and to display a visualized user interface.
Optionally, the electronic device 1 also includes a touch sensor. The area provided by the touch sensor for the user to perform touch operations is called the touch area. The touch sensor here may be a resistive touch sensor, a capacitive touch sensor, or the like. Moreover, the touch sensor includes not only contact-type touch sensors but also proximity-type touch sensors and the like. In addition, the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
In addition, the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor. Optionally, the display and the touch sensor are stacked to form a touch display screen, and the device detects touch operations triggered by the user based on the touch display screen.
Optionally, the electronic device 1 may also include a radio frequency (RF) circuit, a sensor, an audio circuit, and so on, which are not repeated here.
In the device embodiment shown in FIG. 1, the memory 11, which is a computer storage medium, may include an operating system and the expression-driven virtual video synthesis program 10; when the processor 12 executes the expression-driven virtual video synthesis program 10 stored in the memory 11, the following steps are implemented:
obtaining a set of images to be synthesized, and determining the image to be synthesized from the set of images to be synthesized;
synthesizing the image to be synthesized and a target image based on a GAN network to form a target photo, the target image being the user's original image;
capturing multiple frames of the unprocessed original video as reference images corresponding to the actual video;
performing expression driving on the target photo based on the reference images, to obtain the transmission images corresponding to the virtual transmission video to be transmitted; and
stitching the frames of the transmission images together to form a virtual composite video.
As a specific example, the set of images to be synthesized contains multiple groups of images, each group includes multiple expression images of the same person, and these expression images serve as the expression bases of the image to be synthesized.
The target image is a group of images corresponding to the expression bases of the image to be synthesized; the step of synthesizing the image to be synthesized and the target image includes: synthesizing the target image with the image of the same expression in the image to be synthesized, where the expression of the synthesized target photo is consistent with the expression of the corresponding image in the image to be synthesized.
Further, the step of performing expression driving on the target photo based on the reference image, to obtain the transmission image corresponding to the virtual transmission video to be transmitted, includes:
setting an average face, a set of expression bases, and a set of identity bases; and
setting the coefficients of the average face and the identity bases to fixed values, and controlling the expression-base coefficients of the target photo to change with the expression-base coefficients of the reference image, to form the transmission image corresponding to the reference image.
In addition, the step of forming the transmission image corresponding to the reference image includes:
converting the 3D meshes of the average face, the expression bases, and the identity bases into corresponding 2D images, and obtaining the corresponding face key-point coordinates from the 2D images;
obtaining the face key-point coordinates of the reference image;
through iteration, changing the coefficients of the expression bases to minimize the Euclidean distance between the face key-point coordinates obtained from the 2D images and the face key-point coordinates of the reference image, thereby determining a set of expression-base coefficients; and
applying the obtained expression-base coefficients to the target photo so that the coefficients of the same expression bases are identical, to obtain the final transmission image.
The electronic device 1 proposed in the above embodiment optimizes the expression-driving process so that it can run at real-time or near-real-time speed, and cleverly combines it with the GAN network, which cannot run in real time, thereby resolving the dilemma in video chats with strangers of not wanting to fully reveal one's true self while still having to present oneself. Using the above expression-driven virtual video synthesis method, a live-action video that is closer to the user can be synthesized, reducing the sense of incongruity of the video; at the same time, personal privacy is protected, since a similar portrait is used in place of the real one for video conversations, and the expressions are more realistic and natural, closer to the feelings the user wants to express.
This application also provides an expression-driven virtual video synthesis system. As shown in FIG. 2, a schematic diagram of the modules of a preferred embodiment of the expression-driven virtual video synthesis system of this application, the expression-driven virtual video synthesis system of the embodiment of this application includes: a to-be-synthesized image determining unit 11, a target photo synthesizing unit 12, a reference image acquiring unit 13, a transmission image determining unit 14, and a transmission video forming unit 15. The functions or operation steps implemented by modules 11-15 are similar to those described above and are not detailed here again; illustratively:
the to-be-synthesized image determining unit 11 is used to obtain a set of images to be synthesized and determine the image to be synthesized from the set;
the target photo synthesizing unit 12 is used to synthesize the image to be synthesized and the target image based on the GAN network to form a target photo, the target image being the user's original image;
the reference image acquiring unit 13 is used to capture multiple frames of the unprocessed original video as reference images;
the transmission image determining unit 14 is used to perform expression driving on the target photo based on the reference images, to obtain the transmission images corresponding to the virtual transmission video to be transmitted; and
the transmission video forming unit 15 is used to stitch the frames of the transmission images together to form a virtual composite video.
As a specific example, the transmission image determining unit 14 includes:
a setting module 141, used to set an average face, a set of expression bases, and a set of identity bases; and
a transmission image forming module 142, used to set the coefficients of the average face and the identity bases to fixed values, and to control the expression-base coefficients of the target photo to change with the expression-base coefficients of the reference image, forming the transmission image corresponding to the reference image.
The transmission image forming module 142 further includes:
a dimension conversion module, used to convert the 3D meshes of the average face, the expression bases, and the identity bases into corresponding 2D images, and to obtain the corresponding face key-point coordinates from the 2D images;
a key point acquisition module, used to obtain the face key-point coordinates of the reference image;
an expression-base coefficient determination module, used to change the expression-base coefficients through iteration so as to minimize the Euclidean distance between the face key-point coordinates obtained from the 2D images and the face key-point coordinates of the reference image, thereby determining a set of expression-base coefficients; and
a transmission image acquisition module, used to apply the obtained expression-base coefficients to the target photo so that the coefficients of the same expression bases are identical, to obtain the final transmission image.
Further, the dimension conversion module projects the three-dimensional coordinates of the 3D meshes of the average face, the expression bases, and the identity bases onto the 2D plane through the identity matrix, the rotation matrix, and the projection matrix, so as to obtain the two-dimensional coordinates corresponding to the 2D images.
In addition, the target photo synthesizing unit 12 further includes:
a network construction module 121, used to construct the GAN network and initialize its parameters;
a generator image generation module 122, used to input the image to be synthesized and the target image into the generator of the GAN network to obtain a generator-generated image; and
a target photo determination module 123, used to input the image to be synthesized, the target image, and the generator-generated image into the discriminator of the GAN network, and to obtain the optimal image result as the target photo by maximizing the discriminator's discrimination capability and minimizing the generator's distribution loss function.
In addition, this application also provides an expression-driven virtual video synthesis method. Referring to FIG. 3, it is a flowchart of a preferred embodiment of the expression-driven virtual video synthesis method of this application. The method may be executed by a device, and the device may be implemented by software and/or hardware.
In this embodiment, the expression-driven virtual video synthesis method includes:
S110: Obtain a set of images to be synthesized, and determine the image to be synthesized from the set of images to be synthesized.
This step can also be understood as the selection of materials. The set of images to be synthesized contains multiple groups of images; each group consists of photos of the same person with different expressions, and the number of photos equals the number of expression bases. For example, if there are 47 expression bases, the person has 47 expressions in total, and these 47 expressions are mutually independent. The number of expression bases can be set according to requirements: the more expression bases there are, the more delicate and accurate the fitted expressions will be, but the computational complexity is also higher and the time needed to process one image grows, which may lower the frame rate (the number of frames processed per second) and prevent real-time operation. Conversely, the fewer the expression bases, the faster the processing, although the expression error is likely to be larger; the specific number of expression bases can be set according to actual needs.
In this step, the set of images to be synthesized is recommended according to the user's preferences and usage history; the set includes multiple image groups, each group contains the expression images of the same person, and each expression image serves as an expression base of the image to be synthesized. The user may also take a set of expression-base photos of themselves according to the expression-base requirements.
S120: Synthesize the image to be synthesized and the target image based on the GAN network to form a target photo, the target image being the user's original image.
The network input of the GAN network is two images, and the output is one image; the target image is a group of images or photos corresponding to the expression bases of the image to be synthesized; the step of synthesizing the image to be synthesized and the target image includes: controlling the expression characteristics during synthesis, and synthesizing the target image with the image of the same expression in the image to be synthesized, so that the expression of the synthesized target photo is consistent with the expression of the target image.
Specifically, the GAN network, that is, the generative adversarial network, trains two neural networks against each other: one tries to generate synthetic images that are indistinguishable from real photos, and the other tries to tell them apart. After training for some time, the image-generating network can produce images that look convincingly real. At the same time, to ensure that the generated image is as similar as possible to the two input images, the feature space can be adjusted so that the sum of the differences between the generated image's features and the features of the two input images is minimized, which serves as a supervised loss.
During synthesis, the target image mainly refers to a group of images or photos, taken by the user, that correspond to the expression bases of the image to be synthesized. In the synthesis process, the expression characteristics are controlled so that each expression in the photos the user has taken is synthesized in a one-to-one correspondence with the corresponding expression in the selected images or photos to be synthesized, and the synthesized expression remains consistent with the expression before synthesis.
In other words, to ensure that images of every expression of the synthesized portrait are available, two groups of data are needed: one group is the data of each expression of the "real self" (the target image, i.e., photos taken of oneself), and the other is the data of each expression of the "auxiliary image" (the selected images or photos to be synthesized); by synthesizing them expression by expression, images of each expression of the "false self" (the target photo) can be obtained.
Finally, material images, that is, 3D mesh images, need to be produced according to these different expressions.
Specifically, the step of synthesizing the image to be synthesized and the target image based on the GAN network to form the target photo includes:
1. constructing the GAN network, and initializing the parameters of the GAN network;
2. inputting the image to be synthesized and the target image into the generator of the GAN network to obtain a generator-generated image; and
3. inputting the image to be synthesized, the target image, and the generator-generated image into the discriminator of the GAN network, and obtaining the optimal image result as the target photo by maximizing the discriminator's discrimination capability and minimizing the generator's distribution loss function.
S130: Capture multiple frames of the unprocessed original video as reference images.
S140: Perform expression driving on the target photo based on the reference images, to obtain the transmission images corresponding to the virtual transmission video to be transmitted.
S150: Stitch the frames of the transmission images together to form a virtual composite video.
The step of performing expression driving on the target photo based on the reference image, to obtain the transmission image corresponding to the virtual transmission video to be transmitted, includes:
1. setting an average face, a set of expression bases, and a set of identity bases; and
2. setting the coefficients of the average face and the identity bases to fixed values, and controlling the expression-base coefficients of the target photo to change with the expression-base coefficients of the reference image, to form the transmission image corresponding to the reference image.
Further, the step of setting the coefficients of the average face and the identity bases to fixed values and controlling the expression-base coefficients of the target photo to change with the expression-base coefficients of the reference image, to form the transmission image corresponding to the reference image, includes:
1. converting the 3D meshes of the average face, the expression bases, and the identity bases into corresponding 2D images, and obtaining the corresponding face key-point coordinates from the 2D images;
2. obtaining the face key-point coordinates of the reference image;
3. through iteration, changing the coefficients of the expression bases to minimize the Euclidean distance between the face key-point coordinates obtained from the 2D images and the face key-point coordinates of the reference image, thereby determining a set of expression-base coefficients; and
4. applying the obtained expression-base coefficients to the target photo so that the coefficients of the same expression bases are identical, to obtain the final transmission image.
Each frame of the actual video is processed as described above, and the processed frames (a set of transmission images) are combined into the final transmission video.
Specifically, first set an average face S0 (mean face), a set of expression bases Sexp (expression base), and a set of identity bases Sid (identity base).
The expression bases, also called expression texture maps, are the images of the "real self" and the "false self" under the different expressions required above, which were already obtained in step S120. Because the identity is fixed, the coefficients of the average face and the identity bases can be determined and kept unchanged, so they can be ignored here. Therefore, it is only necessary to change the coefficients of the expression bases so that the transmission image (the false self) changes with the expression changes of the reference image (the real self); this is the expression driving described above.
It should be noted that the above average face, expression bases, and identity bases are all 3D grid maps, that is, 3D meshes. During fitting, the 3D meshes are projected onto 2D through three matrices (the identity matrix, the rotation matrix, and the projection matrix) to obtain the corresponding 2D images, and the face key-point coordinates of the corresponding images (the positions of the eyes, mouth, nose, and so on) are then obtained from the 2D images. In other words, the three-dimensional coordinates of the 3D meshes of the average face, the expression bases, and the identity bases are projected onto the 2D plane through the identity matrix, the rotation matrix, and the projection matrix, so as to obtain the two-dimensional coordinates corresponding to the 2D images.
The above process of converting a 3D image into a 2D image involves a series of coordinate-system changes, which can be simply understood as multiplying the (x, y, z) coordinates in 3D by the projection matrix, the rotation matrix, and the identity matrix to obtain the 2D coordinates (x', y', 0).
During a video call, the camera detects the reference image, and a set of key-point positions can likewise be obtained from the reference image. When determining the expression-base coefficients, the coefficients are changed through continuous iteration so that the L2 loss (that is, the Euclidean distance) between the key-point positions obtained by projecting the 3D mesh (the aforementioned x', y') and the key-point positions of the reference image is minimized; the expression-base coefficients can then be determined.
Then, the obtained set of expression-base coefficients is applied to the target photo so that the coefficients of the same expression bases are identical, which ensures that the expression of the false transmission image is consistent with the expression of the reference image; that is, the expression of the false self is driven by the expression of the real self. A photo of the "final false self" synthesized from the expression bases is thereby obtained, which is the image to be output into the other party's video.
It should be noted that the "synthesis" in step S120 combines photos of two people with the same expression, whereas the "synthesis" of the transmission image combines different expression bases of the same person to obtain the final expression. In other words, in the photo synthesized in step S120, the person finally obtained differs from the two input persons, but the expressions are the same; for the photo synthesized for the transmission image, the person is the same, but the final synthesized expression does not belong to any of the 47 expression bases and is instead the same as the expression of the reference image in the frame captured from the video.
Finally, each frame of the actual video is processed according to the above steps and the preset frame rate, and the processed frames are then combined into the final transmission video. During synthesis, the above calculation process is simplified through the least squares method and linear regression, so that real-time video transmission can be achieved.
It should be noted that if the GAN network were only used to synthesize a single plain face photo that is subsequently pasted onto the video according to the face key points, the face shown in the video would look very unnatural. To avoid this, the BlendShape technique or a skeleton-based method is used for the expression-driving operation, which enables real-time expression driving of the video, for example as sketched below.
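As a hedged sketch of BlendShape-style driving at the mesh level, the driven face below is the neutral mesh plus a weighted sum of expression-base offsets, which is cheap enough to evaluate once per frame; the array shapes and the offset formulation are assumptions rather than the exact model used in this application.

```python
# Hedged sketch of BlendShape-style expression driving at the mesh level.
import numpy as np

def blendshape_mesh(neutral_vertices: np.ndarray,   # (N, 3) neutral 3D mesh
                    expression_bases: np.ndarray,   # (K, N, 3) expression-base meshes
                    coefficients: np.ndarray        # (K,) per-frame expression coefficients
                    ) -> np.ndarray:
    offsets = expression_bases - neutral_vertices             # per-base displacement fields
    return neutral_vertices + np.tensordot(coefficients, offsets, axes=1)
```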
At present, because the GAN network is difficult to run in real time, it is mainly used to generate data for data augmentation, image super-resolution, style transfer, and so on, while expression driving is mainly used to drive virtual avatars, such as a cartoon character resembling oneself, a cat, a dog, and the like. This gives a sense of unreality in actual chats and degrades the user experience.
For this reason, this application optimizes the expression-driving process so that it reaches real-time or near-real-time speed, and cleverly combines it with the GAN network, which cannot run in real time, thereby resolving the dilemma in video chats with strangers of not wanting to fully reveal one's true self while still having to present oneself. Using the above expression-driven virtual video synthesis method, a live-action video that is closer to the user can be synthesized, reducing the sense of incongruity of the video; at the same time, personal privacy is protected, since a similar portrait is used in place of the real one for video conversations, and the expressions are more realistic and natural, closer to the feelings the user wants to express.
In addition, an embodiment of this application also proposes a computer-readable storage medium, the computer-readable storage medium containing an expression-driven virtual video synthesis program which, when executed by a processor, implements the following operations:
obtaining a set of images to be synthesized, and determining the image to be synthesized from the set of images to be synthesized;
synthesizing the image to be synthesized and a target image based on a GAN network to form a target photo, the target image being the user's original image;
capturing multiple frames of the unprocessed original video as reference images;
performing expression driving on the target photo based on the reference images, to obtain the transmission images corresponding to the virtual transmission video to be transmitted; and
stitching the frames of the transmission images together to form the virtual composite video.
As a specific example, the set of images to be synthesized contains multiple groups of images, each group includes multiple expression images of the same person, and these expression images serve as the expression bases of the image to be synthesized.
The specific implementation of the computer-readable storage medium of this application is substantially the same as that of the expression-driven virtual video synthesis method and the electronic device described above, and is not repeated here.
It should be noted that, in this document, the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, apparatus, article, or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article, or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, apparatus, article, or method that includes that element.
The serial numbers of the above embodiments of this application are for description only and do not represent the relative merits of the embodiments. Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, or by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of this application.
The above are only preferred embodiments of this application and do not limit the patent scope of this application; any equivalent structural or process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. An expression-driven virtual video synthesis method, applied to an electronic device, wherein the method comprises:
    obtaining a set of images to be synthesized, and determining the image to be synthesized from the set of images to be synthesized;
    synthesizing the image to be synthesized and a target image based on a GAN network to form a target photo, the target image being the user's original image;
    capturing multiple frames of the unprocessed original video as reference images;
    performing expression driving on the target photo based on the reference images, to obtain the transmission images corresponding to the virtual transmission video to be transmitted; and
    stitching the frames of the transmission images together to form the virtual composite video.
  2. The expression-driven virtual video synthesis method according to claim 1, wherein the set of images to be synthesized contains multiple groups of images, each group of images comprises multiple expression images of the same person, and the multiple expression images serve as the expression bases of the image to be synthesized.
  3. The expression-driven virtual video synthesis method according to claim 2, wherein
    the target image is a group of images corresponding to the expression bases of the image to be synthesized; and
    the step of synthesizing the image to be synthesized and the target image comprises: synthesizing the target image with the image of the same expression in the image to be synthesized, wherein the expression of the synthesized target photo is consistent with the expression of the image of the same expression in the image to be synthesized.
  4. The expression-driven virtual video synthesis method according to claim 1, wherein the step of performing expression driving on the target photo based on the reference image, to obtain the transmission image corresponding to the virtual transmission video to be transmitted, comprises:
    setting an average face, a set of expression bases, and a set of identity bases; and
    setting the coefficients of the average face and the identity bases to fixed values, and controlling the expression-base coefficients of the target photo to change with the expression-base coefficients of the reference image, to form the transmission image corresponding to the reference image.
  5. The expression-driven virtual video synthesis method according to claim 4, wherein the step of setting the coefficients of the average face and the identity bases to fixed values, and controlling the expression-base coefficients of the target photo to change with the expression-base coefficients of the reference image, to form the transmission image corresponding to the reference image, comprises:
    converting the 3D meshes of the average face, the expression bases, and the identity bases into corresponding 2D images, and obtaining the corresponding face key-point coordinates based on the 2D images;
    obtaining the face key-point coordinates of the reference image;
    through iteration, changing the coefficients of the expression bases to minimize the Euclidean distance between the face key-point coordinates obtained based on the 2D images and the face key-point coordinates of the reference image, thereby determining a set of expression-base coefficients; and
    applying the obtained expression-base coefficients to the target photo so that the coefficients of the same expression bases are identical, to obtain the final transmission image.
  6. The expression-driven virtual video synthesis method according to claim 5, wherein the step of converting the 3D meshes of the average face, the expression bases, and the identity bases into corresponding 2D images comprises:
    projecting the three-dimensional coordinates of the 3D meshes of the average face, the expression bases, and the identity bases onto the 2D plane through the identity matrix, the rotation matrix, and the projection matrix, so as to obtain the two-dimensional coordinates corresponding to the 2D images.
  7. The expression-driven virtual video synthesis method according to claim 1, wherein the step of stitching the frames of the transmission images together to form the virtual composite video comprises:
    stitching the transmission images together based on the least squares method and linear regression, to form a virtual composite video transmitted in real time.
  8. The expression-driven virtual video synthesis method according to claim 1, wherein the step of synthesizing the image to be synthesized and the target image based on the GAN network to form the target photo comprises:
    constructing the GAN network, and initializing the parameters of the GAN network;
    inputting the image to be synthesized and the target image into the generator of the GAN network, to obtain a generator-generated image; and
    inputting the image to be synthesized, the target image, and the generator-generated image into the discriminator of the GAN network, and obtaining the optimal image result as the target photo by maximizing the discriminator's discrimination capability and minimizing the generator's distribution loss function.
  9. An electronic device, comprising a memory and a processor, wherein the memory stores an expression-driven virtual video synthesis program which, when executed by the processor, implements the following steps:
    obtaining a set of images to be synthesized, and determining the image to be synthesized from the set of images to be synthesized;
    synthesizing the image to be synthesized and a target image based on a GAN network to form a target photo, the target image being the user's original image;
    capturing multiple frames of the unprocessed original video as reference images;
    performing expression driving on the target photo based on the reference images, to obtain the transmission images corresponding to the virtual transmission video to be transmitted; and
    stitching the frames of the transmission images together to form the virtual composite video.
  10. The electronic device according to claim 9, wherein
    the set of images to be synthesized contains multiple groups of images, each group of images comprises multiple expression images of the same person, and the multiple expression images serve as the expression bases of the image to be synthesized.
  11. The electronic device according to claim 10, wherein
    the target image is a group of images corresponding to the expression bases of the image to be synthesized; and
    the step of synthesizing the image to be synthesized and the target image comprises: synthesizing the target image with the image of the same expression in the image to be synthesized, wherein the expression of the synthesized target photo is consistent with the expression of the image of the same expression in the image to be synthesized.
  12. The electronic device according to claim 9, wherein
    the step of performing expression driving on the target photo based on the reference image, to obtain the transmission image corresponding to the virtual transmission video to be transmitted, comprises:
    setting an average face, a set of expression bases, and a set of identity bases; and
    setting the coefficients of the average face and the identity bases to fixed values, and controlling the expression-base coefficients of the target photo to change with the expression-base coefficients of the reference image, to form the transmission image corresponding to the reference image.
  13. The electronic device according to claim 12, wherein the step of setting the coefficients of the average face and the identity bases to fixed values, and controlling the expression-base coefficients of the target photo to change with the expression-base coefficients of the reference image, to form the transmission image corresponding to the reference image, comprises:
    converting the 3D meshes of the average face, the expression bases, and the identity bases into corresponding 2D images, and obtaining the corresponding face key-point coordinates based on the 2D images;
    obtaining the face key-point coordinates of the reference image;
    through iteration, changing the coefficients of the expression bases to minimize the Euclidean distance between the face key-point coordinates obtained based on the 2D images and the face key-point coordinates of the reference image, thereby determining a set of expression-base coefficients; and
    applying the obtained expression-base coefficients to the target photo so that the coefficients of the same expression bases are identical, to obtain the final transmission image.
  14. The electronic device according to claim 13, wherein the step of converting the 3D meshes of the average face, the expression bases, and the identity bases into corresponding 2D images comprises:
    projecting the three-dimensional coordinates of the 3D meshes of the average face, the expression bases, and the identity bases onto the 2D plane through the identity matrix, the rotation matrix, and the projection matrix, so as to obtain the two-dimensional coordinates corresponding to the 2D images.
  15. An expression-driven virtual video synthesis system, comprising:
    a composite image determining unit, used to obtain a set of images to be synthesized and determine the image to be synthesized from the set of images to be synthesized;
    a target photo synthesis unit, used to synthesize the image to be synthesized and a target image based on a GAN network to form a target photo, the target image being the user's original image;
    a reference image acquisition unit, used to capture multiple frames of the unprocessed original video as reference images;
    a transmission image determining unit, used to perform expression driving on the target photo based on the reference images, to obtain the transmission images corresponding to the virtual transmission video to be transmitted; and
    a transmission video forming unit, used to stitch the frames of the transmission images together to form the virtual composite video.
  16. The expression-driven virtual video synthesis system according to claim 15, wherein the transmission image determining unit comprises:
    a setting module, used to set an average face, a set of expression bases, and a set of identity bases; and
    a transmission image forming module, used to set the coefficients of the average face and the identity bases to fixed values, and to control the expression-base coefficients of the target photo to change with the expression-base coefficients of the reference image, forming the transmission image corresponding to the reference image.
  17. The expression-driven virtual video synthesis system according to claim 16, wherein the transmission image forming module comprises:
    a dimension conversion module, used to convert the 3D meshes of the average face, the expression bases, and the identity bases into corresponding 2D images, and to obtain the corresponding face key-point coordinates based on the 2D images;
    a key point acquisition module, used to obtain the face key-point coordinates of the reference image;
    an expression-base coefficient determination module, used to change, through iteration, the coefficients of the expression bases so as to minimize the Euclidean distance between the face key-point coordinates obtained based on the 2D images and the face key-point coordinates of the reference image, thereby determining a set of expression-base coefficients; and
    a transmission image acquisition module, used to apply the obtained expression-base coefficients to the target photo so that the coefficients of the same expression bases are identical, to obtain the final transmission image.
  18. The expression-driven virtual video synthesis system according to claim 17, wherein the dimension conversion module projects the three-dimensional coordinates of the 3D meshes of the average face, the expression bases, and the identity bases onto the 2D plane through the identity matrix, the rotation matrix, and the projection matrix, so as to obtain the two-dimensional coordinates corresponding to the 2D images.
  19. The expression-driven virtual video synthesis system according to claim 15, wherein the target photo synthesis unit comprises:
    a network construction module, used to construct the GAN network and initialize the parameters of the GAN network;
    a generator image generation module, used to input the image to be synthesized and the target image into the generator of the GAN network, to obtain a generator-generated image; and
    a target photo determination module, used to input the image to be synthesized, the target image, and the generator-generated image into the discriminator of the GAN network, and to obtain the optimal image result as the target photo by maximizing the discriminator's discrimination capability and minimizing the generator's distribution loss function.
  20. A computer-readable storage medium, wherein the computer-readable storage medium contains an expression-driven virtual video synthesis program which, when executed by a processor, implements the steps of the expression-driven virtual video synthesis method according to any one of claims 1 to 8.
PCT/CN2019/118285 2019-09-19 2019-11-14 Expression-driven virtual video synthesis method and apparatus, and storage medium WO2021051605A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910885913.9 2019-09-19
CN201910885913.9A CN110620884B (zh) 2019-09-19 2019-09-19 Expression-driven virtual video synthesis method and apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2021051605A1 true WO2021051605A1 (zh) 2021-03-25

Family

ID=68923758

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118285 WO2021051605A1 (zh) 2019-09-19 2019-11-14 Expression-driven virtual video synthesis method and apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN110620884B (zh)
WO (1) WO2021051605A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428672A (zh) * 2020-03-31 2020-07-17 北京市商汤科技开发有限公司 交互对象的驱动方法、装置、设备以及存储介质
CN111614925B (zh) * 2020-05-20 2022-04-26 广州视源电子科技股份有限公司 人物图像处理方法、装置、相应终端及存储介质
CN113559503B (zh) * 2021-06-30 2024-03-12 上海掌门科技有限公司 视频生成方法、设备及计算机可读介质
CN114429611B (zh) * 2022-04-06 2022-07-08 北京达佳互联信息技术有限公司 视频合成方法、装置、电子设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257195A (zh) * 2018-02-23 2018-07-06 深圳市唯特视科技有限公司 一种基于几何对比生成对抗网络的面部表情合成方法
CN108288072A (zh) * 2018-01-26 2018-07-17 深圳市唯特视科技有限公司 一种基于生成对抗网络的面部表情合成方法
CN108389239A (zh) * 2018-02-23 2018-08-10 深圳市唯特视科技有限公司 一种基于条件多模式网络的微笑脸部视频生成方法
CN109448083A (zh) * 2018-09-29 2019-03-08 浙江大学 一种从单幅图像生成人脸动画的方法
WO2019056000A1 (en) * 2017-09-18 2019-03-21 Board Of Trustees Of Michigan State University GENERATIVE ANTAGONIST NETWORK WITH DISSOCATED REPRESENTATION LEARNING FOR INDEPENDENT POSTURE FACIAL RECOGNITION
CN110097086A (zh) * 2019-04-03 2019-08-06 平安科技(深圳)有限公司 图像生成模型训练方法、图像生成方法、装置、设备及存储介质
CN110148191A (zh) * 2018-10-18 2019-08-20 腾讯科技(深圳)有限公司 视频虚拟表情生成方法、装置及计算机可读存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875633B (zh) * 2018-06-19 2022-02-08 北京旷视科技有限公司 表情检测与表情驱动方法、装置和系统及存储介质
CN109147017A (zh) * 2018-08-28 2019-01-04 百度在线网络技术(北京)有限公司 动态图像生成方法、装置、设备及存储介质
CN109308727B (zh) * 2018-09-07 2020-11-10 腾讯科技(深圳)有限公司 虚拟形象模型生成方法、装置及存储介质


Also Published As

Publication number Publication date
CN110620884A (zh) 2019-12-27
CN110620884B (zh) 2022-04-22

Similar Documents

Publication Publication Date Title
WO2021051605A1 (zh) Expression-driven virtual video synthesis method and apparatus, and storage medium
WO2018153267A1 (zh) Group video session method and network device
US10789453B2 (en) Face reenactment
US11830118B2 (en) Virtual clothing try-on
US20180232929A1 (en) Method for sharing emotions through the creation of three-dimensional avatars and their interaction
CN114930399A (zh) 使用基于表面的神经合成的图像生成
US11335069B1 (en) Face animation synthesis
EP4222961A1 (en) Method, system and computer-readable storage medium for image animation
US20220207819A1 (en) Light estimation using neural networks
KR20230156953A (ko) 실시간에서의 실제 크기 안경류 경험
US11477397B2 (en) Media content discard notification system
WO2023220163A1 (en) Multi-modal human interaction controlled augmented reality
US20220319125A1 (en) User-aligned spatial volumes
US20220319059A1 (en) User-defined contextual spaces
US20220207786A1 (en) Flow-guided motion retargeting
KR20240056556A (ko) 참가자 당 엔드-투-엔드 암호화된 메타데이터
KR20230157494A (ko) 실시간에서의 실제 크기 안경류
KR20230160926A (ko) 사용자-정의 맥락 공간들
CN117136404A (zh) 从歌曲中提取伴奏的神经网络
US20220319124A1 (en) Auto-filling virtual content
KR102462947B1 (ko) 디지털 휴먼의 표정 변화에 따라 피부 텍스처를 변화시키기 위한 방법 및 이를 위한 3차원 그래픽 인터페이스 장치
US20230410479A1 (en) Domain changes in generative adversarial networks
US20230069614A1 (en) High-definition real-time view synthesis
US20240029346A1 (en) Single image three-dimensional hair reconstruction
US20240048359A1 (en) Coordinating data access among multiple services

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19945981

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19945981

Country of ref document: EP

Kind code of ref document: A1