WO2016110199A1 - Expression migration method, electronic device and system - Google Patents

Expression migration method, electronic device and system

Info

Publication number
WO2016110199A1
WO2016110199A1 (application PCT/CN2015/099485)
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
user
parameter
expression model
dimensional expression
Prior art date
Application number
PCT/CN2015/099485
Other languages
English (en)
French (fr)
Inventor
武俊敏
卢俊杰
冯加伟
薛涵凜
周强
周世威
Original Assignee
掌赢信息科技(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 掌赢信息科技(上海)有限公司
Publication of WO2016110199A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition

Definitions

  • the present invention relates to the field of image processing, and in particular, to an expression migration method, an electronic device, and a system.
  • the prior art provides an expression migration method that uses image recognition technology to identify the facial expression of a user in a video frame containing at least the user's face, and then migrates that expression onto a device for display.
  • because the prior-art method places high demands on device hardware, mobile terminals such as smartphones and tablets cannot meet those requirements, so a mobile terminal either cannot use the prior-art method to migrate the user's facial expression in real-time video, or, when it does use the prior-art method, a large amount of the device's processing and storage resources is occupied, affecting the use of the device and thereby reducing the user experience.
  • the embodiment of the invention provides an expression migration method, an electronic device and a system.
  • the technical solution is as follows:
  • an expression migration method comprising:
  • the establishing a three-dimensional expression model corresponding to the user includes:
  • the generating a three-dimensional expression model corresponding to the user according to the feature point parameter and the posture parameter includes:
  • the feature point parameters and the posture parameters are fitted and normalized to generate the three-dimensional expression model.
  • the obtaining the driving parameter corresponding to the three-dimensional expression model from the instant video includes:
  • the drive parameter is generated based on the deviation value.
  • the generating the driving parameter according to the deviation value includes:
  • the generating the driving parameter according to the deviation value further includes:
  • the driving the three-dimensional expression model to display an expression corresponding to the user according to the driving parameter includes:
  • the driving the three-dimensional expression model to display the expression corresponding to the user according to the driving parameter further includes:
  • an electronic device comprising:
  • model building module for establishing a three-dimensional expression model corresponding to the user
  • An obtaining module configured to acquire, from the instant video, a driving parameter corresponding to the three-dimensional expression model
  • a driving module configured to drive the three-dimensional expression model to display an expression corresponding to the user according to the driving parameter.
  • the acquiring module is further configured to acquire a feature point parameter and a posture parameter of a facial expression of the user;
  • the model building module further includes a generating submodule, and the generating submodule is configured to generate a three-dimensional emoticon model corresponding to the user according to the feature point parameter and the posture parameter.
  • the generating sub-module is further configured to:
  • the feature point parameters and the posture parameters are fitted and normalized to generate the three-dimensional expression model.
  • the device further includes:
  • An identification module configured to identify and fit a facial feature point parameter of the user in the instant video
  • a calculation module configured to calculate a deviation value between a facial feature point parameter in the instant video and a facial feature point parameter corresponding to the three-dimensional expression model
  • the generating submodule is further configured to generate the driving parameter according to the deviation value.
  • the generating sub-module is further configured to:
  • the acquiring module is further configured to:
  • the driving module is configured to:
  • the driving module is further configured to:
  • a third aspect provides an electronic device, including: a display screen, a transmitting module, a receiving module, a memory, and a processor respectively connected to the display screen, the transmitting module, the receiving module, and the memory,
  • the memory stores a set of program codes
  • the processor is configured to invoke program code stored in the memory, and perform the following operations:
  • the processor is further configured to invoke program code stored in the memory, and perform the following operations:
  • the processor is further configured to invoke program code stored in the memory, and perform the following operations:
  • the feature point parameters and the posture parameters are fitted and normalized to generate the three-dimensional expression model.
  • the processor is further configured to invoke program code stored in the memory, and perform the following operations:
  • the drive parameter is generated based on the deviation value.
  • the processor is further configured to invoke the program code stored in the memory, and perform the following operations:
  • the processor is further configured to invoke the program code stored in the memory, and perform the following operations:
  • the processor is further configured to invoke the program code stored in the memory, and perform the following operations:
  • the processor is further configured to invoke the program code stored in the memory, and perform the following operations:
  • an expression migration system includes:
  • model establishing device for establishing a three-dimensional expression model corresponding to the user
  • Obtaining a device configured to acquire, from the instant video, a driving parameter corresponding to the three-dimensional expression model
  • a driving device configured to drive the three-dimensional expression model to display an expression corresponding to the user according to the driving parameter.
  • the embodiment of the invention provides an expression migration method, an electronic device and a system, comprising: establishing a three-dimensional expression model corresponding to a user; acquiring a driving parameter corresponding to the three-dimensional expression model from the instant video; driving the three-dimensional expression model display according to the driving parameter The expression corresponding to the user.
  • the three-dimensional expression model is driven to display the facial expression of the user in the instant video, and the expression migration is realized on the mobile device, thereby improving the user experience.
  • FIG. 1 is a schematic diagram of an interaction system according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an interaction system according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an expression migration method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an expression migration method according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the embodiment of the present invention provides an expression migration method applied to an interaction system that includes at least two mobile terminals and a server; the interaction system is shown in FIG. 1, in which mobile terminal 1 is the instant-video sender, mobile terminal 2 is the instant-video receiver, and the instant video sent by mobile terminal 1 is forwarded to mobile terminal 2 via the server. The mobile terminal may be a smart phone, a tablet personal computer, or another mobile terminal; the specific type of mobile terminal is not limited in the embodiment of the present invention.
  • the mobile terminal includes at least a video input module and a video display module.
  • the video input module can include a camera.
  • the video display module can include a display screen; at least one instant-video program can run on the mobile terminal, and this program controls the mobile terminal's video input module and video display module to carry out instant video.
  • the execution subject of the method provided by the embodiment of the present invention may be any one of mobile terminal 1, mobile terminal 2, and the server. If the execution subject is mobile terminal 1, mobile terminal 1 performs the expression migration on the instant video that the user inputs through its own video input module and sends the instant video containing the migrated expression to mobile terminal 2 via the server. If the execution subject is the server, mobile terminal 1 inputs the instant video through its own video input module and sends it to the server; after the server performs the expression migration using that instant video, it sends the instant video containing the migrated expression to mobile terminal 2. If the execution subject is mobile terminal 2, mobile terminal 1 inputs the instant video through its own video input module and sends it to the server, the server forwards it to mobile terminal 2, and mobile terminal 2 performs the expression migration using that instant video.
  • the specific implementation body of the method in the interaction system is not limited in the embodiment of the present invention.
  • the method provided by the embodiment of the present invention can also be applied to an interactive system including only the mobile terminal 1 and the mobile terminal 2.
  • the interactive system is shown in FIG. 2, in which mobile terminal 1 is the instant-video sender and mobile terminal 2 is the instant-video receiver. Each mobile terminal includes at least a video input module and a video display module; the video input module can include a camera, and the video display module can include a display screen. At least one instant-video program can run on the mobile terminal, and this program controls the mobile terminal's video input module and video display module to carry out instant video.
  • the execution subject of the method provided by the embodiment of the present invention may be either mobile terminal 1 or mobile terminal 2. If the execution subject is mobile terminal 1, mobile terminal 1 performs the expression migration on the instant video that the user inputs through its own video input module and sends the instant video containing the migrated expression to mobile terminal 2. If the execution subject is mobile terminal 2, mobile terminal 1 inputs the instant video through its own video input module and sends it to mobile terminal 2, and mobile terminal 2 performs the expression migration using that instant video.
  • the specific implementation body of the method in the interaction system is not limited in the embodiment of the present invention.
  • An embodiment of the present invention provides a method for migrating an expression. Referring to FIG. 3, the method includes:
  • a three-dimensional expression model corresponding to the user is generated according to the feature point parameter and the posture parameter.
  • the generating a three-dimensional expression model corresponding to the user according to the feature point parameter and the posture parameter includes:
  • the feature point parameters and the attitude parameters are fitted and normalized to generate a three-dimensional expression model.
  • a driving parameter is generated based on the deviation value.
  • generating the driving parameters includes:
  • a drive parameter is generated according to the moving position of the feature point.
  • generating the driving parameters further includes:
  • driving the three-dimensional expression model to display an expression corresponding to the user includes:
  • the three-dimensional expression model is driven to display an expression corresponding to the user.
  • driving the three-dimensional expression model to display the expression corresponding to the user further includes:
  • the three-dimensional expression model is driven to display an expression corresponding to the user according to the parameters of the user's eyes and the parameters of the user's mouth.
  • the embodiment of the present invention provides an expression migration method, which drives a three-dimensional expression model to display a facial expression of a user in an instant video by using a driving parameter obtained from an instant video, and implements an expression migration on the mobile device, thereby improving the user experience.
  • An embodiment of the present invention provides an expression migration method. Referring to FIG. 4, the method includes:
  • the feature points are used at least to describe the contours of facial details
  • the facial details include at least the eyes, mouth, eyebrows, and nose
  • the feature point parameter may be the coordinates of the feature point within a vector representing a frame that includes at least the user's face.
  • the specific acquisition manner of the feature point parameter is not limited in the embodiment of the present invention.
  • the posture parameter is used at least to describe the distribution of the feature point parameters in three-dimensional space, and the posture parameter may be a projection of the feature point vector.
  • the specific acquisition manner is not limited in the embodiment of the present invention.
  • acquiring the feature point parameters and the posture parameter of the user's facial expression further includes determining the user's face from a frame that includes at least the user.
  • the specific determination manner is not limited in the embodiment of the present invention.
  • a rotation matrix is used to rotate the posture parameter corresponding to the current frame that includes at least the user's face, so that the posture parameter is set to a fixed posture parameter and the influence of head pose during expression migration is removed.
  • since the feature points describe the contours of facial details and the posture parameter describes the distribution of the feature point parameters in three-dimensional space, a three-dimensional expression sub-model corresponding to the current frame that includes at least the user's face can be generated from the feature point parameters and posture parameter of that frame;
  • a fitting parameter is obtained according to a preset formula, which can be: Y = Xᵀθ
  • where Y denotes the feature point parameters, X = {x1, x2, ..., xn} denotes the vectors respectively corresponding to the n three-dimensional expression sub-models, and θ denotes the fitting parameter
  • according to the fitting parameter, all the three-dimensional expression sub-models are fitted into a total three-dimensional expression model; the specific fitting manner is not limited in the embodiment of the present invention.
  • steps 401 to 402 are processes of establishing a three-dimensional expression model corresponding to the user.
  • besides the above process, the three-dimensional expression model corresponding to the user may also be established in other manners; the specific process is not limited in the embodiment of the present invention.
  • the process of identifying the facial feature point parameter of the user in the instant video frame is the same as that in step 401, and details are not described herein.
  • the process of fitting the facial feature point parameters of the user in the instant video frame may be implemented by multiplying the facial feature point parameters by the fitting parameter θ; in addition, the fitting may be implemented in other manners, and the specific process is not limited in the embodiment of the present invention.
  • the posture parameter corresponding to the facial feature point parameter in the instant video is rotated by using a rotation matrix.
  • the pose parameter is set as the pose parameter of the facial feature point parameter corresponding to the three-dimensional expression model.
  • the facial feature point parameter in the instant video is normalized by using a scaling matrix to set the feature point parameter as a facial feature point parameter corresponding to the three-dimensional expression model.
  • the deviation value is multiplied by a preset coefficient to generate the moving position of the feature point, wherein the preset coefficient indicates the degree of exaggeration of the expression migration set by the user.
  • the driving parameter is used to indicate the moving position of the feature point on the three-dimensional expression model, and the specific generation manner is not limited in the embodiment of the present invention.
  • the steps 405 to 406 are processes for generating a driving parameter according to the deviation value, and in addition, the driving parameter may be generated according to the deviation value by:
  • the parameter of the user's eye is used to indicate the position of the user's eye and whether the eye is closed.
  • the position of the user's eye can be determined from the coordinates of the user's pupil in the instant video frame, and whether the eye is closed can be determined from the grayscale values of the user's eye region; the specific manner is not limited in the embodiment of the present invention.
  • the parameter of the mouth of the user is used to indicate the position of the mouth of the user and the color of the inside of the mouth.
  • the specific acquisition manner is not limited in the embodiment of the present invention.
  • the feature points on the three-dimensional expression model are moved according to the moving position of the feature points.
  • the process can also be:
  • the position of the eye on the three-dimensional expression model and the closed state of the eye are set according to the parameters of the user's eye.
  • the specific setting process is not limited in the embodiment of the present invention.
  • the position of the mouth on the three-dimensional expression model and the color inside the mouth are set according to the parameters of the user's mouth.
  • the specific setting process is not limited in the embodiment of the present invention.
  • the embodiment of the present invention provides an expression migration method, which drives a three-dimensional expression model to display a facial expression of a user in an instant video by using a driving parameter obtained from an instant video, and implements an expression migration on the mobile device, thereby improving the user experience.
  • An embodiment of the present invention provides an electronic device 5.
  • the device includes:
  • a model building module 51 configured to establish a three-dimensional expression model corresponding to the user
  • the obtaining module 52 is configured to obtain, from the instant video, a driving parameter corresponding to the three-dimensional expression model
  • the driving module 53 is configured to drive the three-dimensional expression model to display an expression corresponding to the user according to the driving parameter.
  • the obtaining module 52 is further configured to acquire a feature point parameter and a posture parameter of the facial expression of the user;
  • the model establishing module 51 further includes a generating submodule, and the generating submodule is configured to generate a three-dimensional emoticon model corresponding to the user according to the feature point parameter and the posture parameter.
  • the generating submodule is also used to:
  • the feature point parameters and the attitude parameters are fitted and normalized to generate a three-dimensional expression model.
  • the device further includes:
  • An identification module configured to identify and fit a facial feature point parameter of the user in the instant video
  • a calculation module configured to calculate a deviation value between a facial feature point parameter in the instant video and a facial feature point parameter corresponding to the three-dimensional expression model
  • the generation sub-module is also used to generate drive parameters based on the deviation values.
  • the generating submodule is also used to:
  • a drive parameter is generated according to the moving position of the feature point.
  • the obtaining module 52 is further configured to:
  • the driving module is configured to:
  • the three-dimensional expression model is driven to display an expression corresponding to the user according to the moving position of the feature point and the processed three-dimensional expression model.
  • the driving module is further configured to:
  • the three-dimensional expression model is driven to display an expression corresponding to the user according to the parameters of the user's eyes and the parameters of the user's mouth.
  • An embodiment of the present invention provides an electronic device that drives a three-dimensional expression model to display a facial expression of a user in an instant video by using a driving parameter obtained from an instant video, and implements an expression migration on the mobile device, thereby improving a user experience.
  • An embodiment of the present invention provides an electronic device 6. Referring to FIG. 6, the device includes a display screen 61, a transmitting module 62, a receiving module 63, a memory 64, and a processor 65 connected to the display screen 61, the transmitting module 62, the receiving module 63, and the memory 64, respectively; the memory 64 stores a set of program codes, and the processor 65 is configured to call the program code stored in the memory 64 to perform the following operations:
  • the three-dimensional expression model is driven to display an expression corresponding to the user.
  • the processor 65 is configured to call the program code stored in the memory 64, and perform the following operations:
  • a three-dimensional expression model corresponding to the user is generated according to the feature point parameter and the posture parameter.
  • the processor 65 is configured to call the program code stored in the memory 64, and perform the following operations:
  • the feature point parameters and the attitude parameters are fitted and normalized to generate a three-dimensional expression model.
  • the processor 65 is configured to call the program code stored in the memory 64, and perform the following operations:
  • a driving parameter is generated based on the deviation value.
  • the processor 65 is configured to call the program code stored in the memory 64, and perform the following operations:
  • the processor 65 is configured to call the program code stored in the memory 64, and perform the following operations:
  • the processor 65 is configured to call the program code stored in the memory 64, and perform the following operations:
  • the three-dimensional expression model is driven to display an expression corresponding to the user.
  • the processor 65 is configured to call the program code stored in the memory 64, and perform the following operations:
  • the three-dimensional expression model is driven to display an expression corresponding to the user according to the parameters of the user's eyes and the parameters of the user's mouth.
  • An embodiment of the present invention provides an electronic device that drives a three-dimensional expression model to display a facial expression of a user in an instant video by using a driving parameter obtained from an instant video, and implements an expression migration on the mobile device, thereby improving a user experience.
  • An embodiment of the present invention provides an expression migration system, where the system includes:
  • model establishing device for establishing a three-dimensional expression model corresponding to the user
  • Obtaining a device configured to obtain a driving parameter corresponding to the three-dimensional expression model from the instant video
  • a driving device for driving the three-dimensional expression model to display an expression corresponding to the user according to the driving parameter
  • An embodiment of the present invention provides an expression migration system, which drives a three-dimensional expression model to display a facial expression of a user in an instant video by using a driving parameter obtained from an instant video, and implements expression migration on the mobile device, thereby improving user experience.
  • when the electronic device provided by the foregoing embodiments performs expression migration, the division into the functional modules described above is only an example; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to perform all or part of the functions described above.
  • the device embodiments and the method embodiments provided above belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not described herein again.
  • a person skilled in the art may understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium.
  • the storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An expression migration method, an electronic device, and a system, belonging to the field of image processing. The method includes: establishing a three-dimensional expression model corresponding to a user (301); acquiring, from an instant video, driving parameters corresponding to the three-dimensional expression model (302); and driving, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user (303). By using the driving parameters acquired from the instant video to drive the three-dimensional expression model to display the user's facial expression in the instant video, expression migration is achieved on a mobile device and the user experience is improved.

Description

Expression migration method, electronic device and system
Technical Field
The present invention relates to the field of image processing, and in particular to an expression migration method, an electronic device, and a system.
Background Art
With the popularity of instant video applications on mobile terminals, more and more users interact with others through instant video applications. In stranger social networking and other application scenarios, users need an expression migration method that migrates their own expressions onto a device for display.
The prior art provides an expression migration method that uses image recognition technology to identify the user's facial expression in a video frame containing at least the user's face, and then migrates that expression onto a device.
Because the prior-art method places high demands on device hardware, the hardware of mobile terminals such as smartphones and tablets cannot meet its requirements. As a result, mobile terminals either cannot use the prior-art method to migrate the user's facial expression in instant video, or, when they do use it, the method consumes a large amount of the device's processing and storage resources, interferes with use of the device, and thus degrades the user experience.
Summary of the Invention
In order to achieve expression migration on mobile devices and improve the user experience, embodiments of the present invention provide an expression migration method, an electronic device, and a system. The technical solutions are as follows:
According to a first aspect, an expression migration method is provided, the method including:
establishing a three-dimensional expression model corresponding to a user;
acquiring, from an instant video, driving parameters corresponding to the three-dimensional expression model;
driving, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
With reference to the first aspect, in a first possible implementation, establishing the three-dimensional expression model corresponding to the user includes:
acquiring feature point parameters and a posture parameter of the user's facial expression;
generating, according to the feature point parameters and the posture parameter, the three-dimensional expression model corresponding to the user.
With reference to the first possible implementation of the first aspect, in a second possible implementation, generating the three-dimensional expression model corresponding to the user according to the feature point parameters and the posture parameter includes:
fitting and normalizing the feature point parameters and the posture parameter to generate the three-dimensional expression model.
With reference to the first aspect, in a third possible implementation, acquiring, from the instant video, the driving parameters corresponding to the three-dimensional expression model includes:
identifying and fitting facial feature point parameters of the user in the instant video;
calculating deviation values between the facial feature point parameters in the instant video and the corresponding facial feature point parameters of the three-dimensional expression model;
generating the driving parameters according to the deviation values.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation, generating the driving parameters according to the deviation values includes:
generating moving positions of the feature points according to the deviation values;
generating the driving parameters according to the moving positions of the feature points.
With reference to the third possible implementation of the first aspect, in a fifth possible implementation, generating the driving parameters according to the deviation values further includes:
acquiring parameters of the user's eyes in the instant video;
acquiring parameters of the user's mouth in the instant video.
With reference to the fourth possible implementation of the first aspect, in a sixth possible implementation, driving the three-dimensional expression model to display the expression corresponding to the user according to the driving parameters includes:
driving, according to the moving positions of the feature points and the three-dimensional expression model, the three-dimensional expression model to display the expression corresponding to the user.
With reference to the fifth possible implementation of the first aspect, in a seventh possible implementation, driving the three-dimensional expression model to display the expression corresponding to the user according to the driving parameters further includes:
driving, according to the parameters of the user's eyes and the parameters of the user's mouth, the three-dimensional expression model to display the expression corresponding to the user.
According to a second aspect, an electronic device is provided, the device including:
a model establishing module, configured to establish a three-dimensional expression model corresponding to a user;
an acquiring module, configured to acquire, from an instant video, driving parameters corresponding to the three-dimensional expression model;
a driving module, configured to drive, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
With reference to the second aspect, in a first possible implementation, the acquiring module is further configured to acquire feature point parameters and a posture parameter of the user's facial expression;
the model establishing module further includes a generating sub-module, configured to generate, according to the feature point parameters and the posture parameter, the three-dimensional expression model corresponding to the user.
With reference to the first possible implementation of the second aspect, in a second possible implementation, the generating sub-module is further configured to:
fit and normalize the feature point parameters and the posture parameter to generate the three-dimensional expression model.
With reference to the second aspect, in a third possible implementation, the device further includes:
an identifying module, configured to identify and fit facial feature point parameters of the user in the instant video;
a calculating module, configured to calculate deviation values between the facial feature point parameters in the instant video and the corresponding facial feature point parameters of the three-dimensional expression model;
the generating sub-module is further configured to generate the driving parameters according to the deviation values.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation, the generating sub-module is further configured to:
generate moving positions of the feature points according to the deviation values;
generate the driving parameters according to the moving positions of the feature points.
With reference to the third possible implementation of the second aspect, in a fifth possible implementation, the acquiring module is further configured to:
acquire parameters of the user's eyes in the instant video;
acquire parameters of the user's mouth in the instant video.
With reference to the fourth possible implementation of the second aspect, in a sixth possible implementation, the driving module is configured to:
drive, according to the moving positions of the feature points and the three-dimensional expression model, the three-dimensional expression model to display the expression corresponding to the user.
With reference to the fifth possible implementation of the second aspect, in a seventh possible implementation, the driving module is further configured to:
drive, according to the parameters of the user's eyes and the parameters of the user's mouth, the three-dimensional expression model to display the expression corresponding to the user.
According to a third aspect, an electronic device is provided, including a display screen, a transmitting module, a receiving module, a memory, and a processor connected to the display screen, the transmitting module, the receiving module, and the memory, respectively, wherein the memory stores a set of program code, and the processor is configured to call the program code stored in the memory to perform the following operations:
establishing a three-dimensional expression model corresponding to a user;
acquiring, from an instant video, driving parameters corresponding to the three-dimensional expression model;
driving, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
With reference to the third aspect, in a first possible implementation, the processor is further configured to call the program code stored in the memory to perform the following operations:
acquiring feature point parameters and a posture parameter of the user's facial expression;
generating, according to the feature point parameters and the posture parameter, the three-dimensional expression model corresponding to the user.
With reference to the first possible implementation of the third aspect, in a second possible implementation, the processor is further configured to call the program code stored in the memory to perform the following operation:
fitting and normalizing the feature point parameters and the posture parameter to generate the three-dimensional expression model.
With reference to the third aspect, in a third possible implementation, the processor is further configured to call the program code stored in the memory to perform the following operations:
identifying and fitting facial feature point parameters of the user in the instant video;
calculating deviation values between the facial feature point parameters in the instant video and the corresponding facial feature point parameters of the three-dimensional expression model;
generating the driving parameters according to the deviation values.
With reference to the third possible implementation of the third aspect, in a fourth possible implementation, the processor is further configured to call the program code stored in the memory to perform the following operations:
generating moving positions of the feature points according to the deviation values;
generating the driving parameters according to the moving positions of the feature points.
With reference to the third possible implementation of the third aspect, in a fifth possible implementation, the processor is further configured to call the program code stored in the memory to perform the following operations:
acquiring parameters of the user's eyes in the instant video;
acquiring parameters of the user's mouth in the instant video.
With reference to the fourth possible implementation of the third aspect, in a sixth possible implementation, the processor is further configured to call the program code stored in the memory to perform the following operation:
driving, according to the moving positions of the feature points and the three-dimensional expression model, the three-dimensional expression model to display the expression corresponding to the user.
With reference to the fifth possible implementation of the third aspect, in a seventh possible implementation, the processor is further configured to call the program code stored in the memory to perform the following operation:
driving, according to the parameters of the user's eyes and the parameters of the user's mouth, the three-dimensional expression model to display the expression corresponding to the user.
According to a fourth aspect, an expression migration system is provided, the system including:
a model establishing device, configured to establish a three-dimensional expression model corresponding to a user;
an acquiring device, configured to acquire, from an instant video, driving parameters corresponding to the three-dimensional expression model;
a driving device, configured to drive, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
Embodiments of the present invention provide an expression migration method, an electronic device, and a system, including: establishing a three-dimensional expression model corresponding to a user; acquiring, from an instant video, driving parameters corresponding to the three-dimensional expression model; and driving, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user. By using the driving parameters acquired from the instant video to drive the three-dimensional expression model to display the user's facial expression in the instant video, expression migration is achieved on a mobile device and the user experience is improved.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an interaction system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an interaction system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an expression migration method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an expression migration method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides an expression migration method applied to an interaction system that includes at least two mobile terminals and a server; the interaction system is shown in FIG. 1. Mobile terminal 1 is the instant-video sender, mobile terminal 2 is the instant-video receiver, and the instant video sent by mobile terminal 1 is forwarded to mobile terminal 2 via the server. The mobile terminal may be a smart phone, a tablet personal computer, or another mobile terminal; the specific type of mobile terminal is not limited in this embodiment of the present invention. The mobile terminal includes at least a video input module and a video display module; the video input module may include a camera, and the video display module may include a display screen. At least one instant-video program can run on the mobile terminal, and this program controls the mobile terminal's video input module and video display module to carry out instant video.
In particular, the execution subject of the method provided by this embodiment, that is, the electronic device, may be any one of mobile terminal 1, mobile terminal 2, and the server. If the execution subject is mobile terminal 1, mobile terminal 1 performs the expression migration on the instant video that the user inputs through its own video input module and sends the instant video containing the migrated expression to mobile terminal 2 via the server. If the execution subject is the server, mobile terminal 1 inputs the instant video through its own video input module and sends it to the server; after the server performs the expression migration using that instant video, it sends the instant video containing the migrated expression to mobile terminal 2. If the execution subject is mobile terminal 2, mobile terminal 1 inputs the instant video through its own video input module and sends it to the server, the server forwards it to mobile terminal 2, and mobile terminal 2 performs the expression migration using that instant video. The specific execution subject of the method in this interaction system is not limited in this embodiment of the present invention.
In addition, the method provided by this embodiment of the present invention may also be applied to an interaction system that includes only mobile terminal 1 and mobile terminal 2; the interaction system is shown in FIG. 2. Mobile terminal 1 is the instant-video sender and mobile terminal 2 is the instant-video receiver. Each mobile terminal includes at least a video input module and a video display module; the video input module may include a camera, and the video display module may include a display screen. At least one instant-video program can run on the mobile terminal, and this program controls the mobile terminal's video input module and video display module to carry out instant video.
In particular, the execution subject of the method provided by this embodiment, that is, the electronic device, may be either mobile terminal 1 or mobile terminal 2. If the execution subject is mobile terminal 1, mobile terminal 1 performs the expression migration on the instant video that the user inputs through its own video input module and sends the instant video containing the migrated expression to mobile terminal 2. If the execution subject is mobile terminal 2, mobile terminal 1 inputs the instant video through its own video input module and sends it to mobile terminal 2, and mobile terminal 2 performs the expression migration using that instant video. The specific execution subject of the method in this interaction system is not limited in this embodiment of the present invention.
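In either interaction system, whichever device is the execution subject ultimately only needs the per-frame driving parameters in order to drive the model. The following is a hedged sketch, not taken from the patent, of how the driving parameters described in Embodiment 2 below could be packaged into a small per-frame payload for the transmitting and receiving modules; the serialization format and every field name are assumptions made for illustration only.

```python
# Illustrative only: a compact per-frame payload of driving parameters. The patent
# does not prescribe a serialization format; JSON keeps the sketch simple, and all
# field names are assumptions.
import json
from dataclasses import dataclass, asdict
from typing import List, Tuple


@dataclass
class DrivingParameters:
    feature_point_positions: List[Tuple[float, float, float]]  # moving positions (step 406)
    eye_position: Tuple[float, float]                           # pupil coordinates
    eyes_closed: bool                                           # from a grayscale test
    mouth_position: Tuple[float, float]
    mouth_interior_color: Tuple[int, int, int]

    def to_bytes(self) -> bytes:
        """Serialize for the transmitting module."""
        return json.dumps(asdict(self)).encode("utf-8")

    @staticmethod
    def from_bytes(payload: bytes) -> "DrivingParameters":
        """Deserialize on the receiving side before driving the model."""
        return DrivingParameters(**json.loads(payload.decode("utf-8")))
```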
Embodiment 1
An embodiment of the present invention provides an expression migration method. Referring to FIG. 3, the method includes:
301. Establish a three-dimensional expression model corresponding to a user.
Specifically, acquire feature point parameters and a posture parameter of the user's facial expression;
generate, according to the feature point parameters and the posture parameter, the three-dimensional expression model corresponding to the user.
Generating the three-dimensional expression model corresponding to the user according to the feature point parameters and the posture parameter includes:
fitting and normalizing the feature point parameters and the posture parameter to generate the three-dimensional expression model.
302. Acquire, from an instant video, driving parameters corresponding to the three-dimensional expression model.
Specifically, identify and fit facial feature point parameters of the user in the instant video;
calculate deviation values between the facial feature point parameters in the instant video and the corresponding facial feature point parameters of the three-dimensional expression model;
generate the driving parameters according to the deviation values.
Generating the driving parameters according to the deviation values includes:
generating moving positions of the feature points according to the deviation values;
generating the driving parameters according to the moving positions of the feature points.
Generating the driving parameters according to the deviation values further includes:
acquiring parameters of the user's eyes in the instant video;
acquiring parameters of the user's mouth in the instant video.
303. Drive, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
Specifically, driving the three-dimensional expression model to display the expression corresponding to the user according to the driving parameters includes:
driving, according to the moving positions of the feature points and the three-dimensional expression model, the three-dimensional expression model to display the expression corresponding to the user.
Driving the three-dimensional expression model to display the expression corresponding to the user according to the driving parameters further includes:
driving, according to the parameters of the user's eyes and the parameters of the user's mouth, the three-dimensional expression model to display the expression corresponding to the user.
This embodiment of the present invention provides an expression migration method in which driving parameters acquired from an instant video drive a three-dimensional expression model to display the user's facial expression in the instant video, so that expression migration is achieved on a mobile device and the user experience is improved.
Embodiment 2
An embodiment of the present invention provides an expression migration method. Referring to FIG. 4, the method includes:
401. Acquire feature point parameters and a posture parameter of the user's facial expression.
Specifically, the feature points are used at least to describe the contours of facial details, and the facial details include at least the eyes, mouth, eyebrows, and nose. A feature point parameter may be the coordinates of the feature point within a vector representing a frame that includes at least the user's face; the specific manner of acquiring the feature point parameters is not limited in this embodiment of the present invention.
The posture parameter is used at least to describe the distribution of the feature point parameters in three-dimensional space, and may be a projection of the feature point vector; the specific acquisition manner is not limited in this embodiment of the present invention.
Acquiring the feature point parameters and posture parameter of the user's facial expression further includes determining the user's face from a frame that includes at least the user; the specific determination manner is not limited in this embodiment of the present invention.
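As a concrete illustration of step 401, the sketch below obtains 2D facial feature points and a head-pose rotation from one frame. The patent does not name any detector or pose estimator, so dlib's 68-point landmark predictor (which needs its external shape_predictor_68_face_landmarks.dat model file) and OpenCV's solvePnP are used purely as stand-ins, and the generic 3D reference points are rough approximations.

```python
# A minimal sketch of step 401 under the assumptions stated above.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # external model file


def feature_points_and_pose(frame_bgr: np.ndarray):
    """Return (feature point parameters, posture parameter) for one frame, or (None, None)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)                       # determine the user's face in the frame
    if not faces:
        return None, None
    shape = predictor(gray, faces[0])
    points = np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)

    # Rough head-pose estimate from six landmarks against a generic 3D face model.
    image_pts = points[[30, 8, 36, 45, 48, 54]]     # nose tip, chin, eye corners, mouth corners
    model_pts = np.array([[0.0, 0.0, 0.0], [0.0, -63.6, -12.5],
                          [-43.3, 32.7, -26.0], [43.3, 32.7, -26.0],
                          [-28.9, -28.9, -24.1], [28.9, -28.9, -24.1]])
    h, w = gray.shape
    camera = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(model_pts, image_pts, camera, np.zeros(4))
    rotation, _ = cv2.Rodrigues(rvec)               # posture parameter as a rotation matrix
    return points, rotation
```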
402. Generate, according to the feature point parameters and the posture parameter, a three-dimensional expression model corresponding to the user.
Specifically, a rotation matrix is used to rotate the posture parameter corresponding to the current frame that includes at least the user's face, so that the posture parameter is set to a fixed posture parameter; this removes the influence of head pose during expression migration.
A scaling matrix is used to normalize the feature point parameters corresponding to the current frame that includes at least the user's face, so that the feature point parameters are set to fixed size values; this removes the influence of nonlinear relationships between facial details during expression migration.
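A sketch of this normalization follows. It assumes the feature point parameters are available as 3D coordinates (lifting the 2D landmarks to 3D is outside the sketch) and uses the inter-eye-corner distance as the fixed size reference; the patent only requires that some rotation matrix and some scaling matrix be applied.

```python
# Pose and size normalization for one frame's feature points (step 402), as a sketch.
import numpy as np


def normalize_landmarks(points_3d: np.ndarray, rotation: np.ndarray,
                        left_eye: int = 36, right_eye: int = 45) -> np.ndarray:
    """points_3d: (N, 3) feature points; rotation: (3, 3) posture parameter for the frame."""
    centered = points_3d - points_3d.mean(axis=0)
    # Rotation matrix: apply the inverse rotation so every frame ends up in the same, fixed posture.
    frontal = (np.linalg.inv(rotation) @ centered.T).T
    # Scaling matrix: make the eye-corner distance equal to 1 to fix the size.
    size = np.linalg.norm(frontal[right_eye] - frontal[left_eye])
    scaling = np.eye(3) / size if size > 0 else np.eye(3)
    return (scaling @ frontal.T).T
```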
Since the feature points describe the contours of facial details and the posture parameter describes the distribution of the feature point parameters in three-dimensional space, a three-dimensional expression sub-model corresponding to the current frame that includes at least the user's face can be generated from the feature point parameters and posture parameter of that frame;
multiple three-dimensional expression sub-models corresponding to multiple frames that each include at least the user's face are acquired;
a fitting parameter is acquired according to a preset formula, which may be:
Y = Xᵀθ
where Y denotes the feature point parameters, X = {x1, x2, ..., xn} denotes the vectors respectively corresponding to the n three-dimensional expression sub-models, and θ denotes the fitting parameter;
according to the fitting parameter, all the three-dimensional expression sub-models are fitted into a total three-dimensional expression model; the specific fitting manner is not limited in this embodiment of the present invention.
The feature point parameters and posture parameter corresponding to the total three-dimensional expression model are acquired, and the three-dimensional expression model corresponding to the user is generated from them.
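The fit to the preset formula Y = Xᵀθ can be read as an ordinary least-squares problem, with X stacking one flattened sub-model per row and Y holding the target feature point parameters. Treating the combination of sub-models as this linear fit is an assumption; the patent deliberately leaves the specific fitting manner open.

```python
# Least-squares sketch of fitting the sub-models into a total model via Y = Xᵀθ.
import numpy as np


def fit_total_model(sub_models: np.ndarray, feature_points: np.ndarray):
    """
    sub_models:     X, shape (n, d), one flattened 3D expression sub-model per row.
    feature_points: Y, shape (d,), the feature point parameters to reproduce.
    Returns the fitting parameter theta and the blended total model Xᵀ·θ.
    """
    theta, _, _, _ = np.linalg.lstsq(sub_models.T, feature_points, rcond=None)
    total_model = sub_models.T @ theta
    return theta, total_model
```

Step 403 below then reuses this fitting parameter θ when fitting the facial feature point parameters found in each instant-video frame.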
In particular, steps 401 to 402 constitute the process of establishing the three-dimensional expression model corresponding to the user. Besides the above process, the three-dimensional expression model corresponding to the user may also be established in other manners; the specific process is not limited in this embodiment of the present invention.
403. Identify and fit facial feature point parameters of the user in an instant video frame.
Specifically, since an instant video frame is the same kind of image as a frame that includes at least the user's face, the process of identifying the user's facial feature point parameters in the instant video frame is the same as in step 401 and is not repeated here.
After the user's facial feature point parameters in the instant video frame are identified, fitting them may be implemented by multiplying the facial feature point parameters by the fitting parameter θ; the fitting may also be implemented in other manners, and the specific process is not limited in this embodiment of the present invention.
404. Calculate deviation values between the facial feature point parameters in the instant video and the corresponding facial feature point parameters of the three-dimensional expression model.
Specifically, before the deviation values are calculated, the posture parameter corresponding to the facial feature point parameters in the instant video is rotated by a rotation matrix, so that it is set to the posture parameter of the corresponding facial feature point parameters of the three-dimensional expression model.
The facial feature point parameters in the instant video are normalized by a scaling matrix, so that they are set to the scale of the corresponding facial feature point parameters of the three-dimensional expression model.
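The alignment and comparison of step 404 might look like the following sketch. It assumes 3D feature points, 3×3 rotation matrices for both the video frame and the model, and the 68-landmark indexing of the earlier sketches for the eye-corner scale reference; all of these are illustrative choices rather than requirements of the patent.

```python
# Sketch of step 404: align the instant-video feature points to the model's posture
# and scale, then take the per-point difference as the deviation values.
import numpy as np


def deviation_values(video_points: np.ndarray, video_rotation: np.ndarray,
                     model_points: np.ndarray, model_rotation: np.ndarray) -> np.ndarray:
    """All point arrays are (N, 3); both rotations are (3, 3)."""
    # Rotation matrix: take the video landmarks from their own posture into the model's posture.
    to_model_pose = model_rotation @ np.linalg.inv(video_rotation)
    aligned = (to_model_pose @ (video_points - video_points.mean(axis=0)).T).T

    # Scaling matrix: match the model's scale, using the eye-corner distance as reference.
    video_size = np.linalg.norm(aligned[45] - aligned[36])
    model_size = np.linalg.norm(model_points[45] - model_points[36])
    scale = (model_size / video_size) if video_size > 0 else 1.0
    aligned = ((np.eye(3) * scale) @ aligned.T).T + model_points.mean(axis=0)

    return aligned - model_points        # deviation value per feature point
```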
405. Generate moving positions of the feature points according to the deviation values.
Specifically, the deviation values are multiplied by a preset coefficient to generate the moving positions of the feature points, where the preset coefficient indicates the degree of exaggeration of the expression migration set by the user.
406. Generate driving parameters according to the moving positions of the feature points.
Specifically, the driving parameters indicate the moving positions of the feature points on the three-dimensional expression model; the specific generation manner is not limited in this embodiment of the present invention.
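Steps 405 and 406 then reduce to a single scaled offset per feature point, as in this small sketch; the dictionary layout of the driving parameters is an assumption, since the patent only states that they indicate where the model's feature points should move.

```python
# Sketch of steps 405-406: exaggerate the deviations and package the moving positions.
import numpy as np


def make_driving_parameters(model_points: np.ndarray, deviations: np.ndarray,
                            exaggeration: float = 1.0) -> dict:
    """exaggeration is the user-set preset coefficient controlling how strong the expression looks."""
    moving_positions = model_points + exaggeration * deviations   # step 405
    return {"moving_positions": moving_positions}                  # step 406
```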
In particular, steps 405 to 406 constitute the process of generating the driving parameters according to the deviation values. In addition, the driving parameters may also be generated according to the deviation values in the following manner:
acquire parameters of the user's eyes in the instant video;
specifically, the parameters of the user's eyes indicate the position of the user's eyes and whether the eyes are closed, where the eye position may be determined from the coordinates of the user's pupils in the instant video frame and whether the eyes are closed may be determined from the grayscale values of the user's eye region; the specific manner is not limited in this embodiment of the present invention.
acquire parameters of the user's mouth in the instant video;
specifically, the parameters of the user's mouth indicate the position of the user's mouth and the color inside the mouth; the specific acquisition manner is not limited in this embodiment of the present invention.
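The eye and mouth parameters can be estimated directly from the frame, for example as below. The landmark index ranges assume the 68-point scheme used in the earlier sketches, and the brightness test for eye closure is only a crude placeholder for whatever grayscale criterion an implementer actually chooses.

```python
# Sketch of the alternative driving parameters: eye position / closure and mouth position / interior color.
import cv2
import numpy as np


def eye_and_mouth_parameters(frame_bgr: np.ndarray, points: np.ndarray,
                             closed_threshold: float = 60.0) -> dict:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Eye parameters: the landmark centre approximates the pupil position; a bright eye
    # region (no dark pupil visible) is taken here to mean the eyelids are closed.
    eye_pts = points[36:48].astype(np.int32)
    eye_position = eye_pts.mean(axis=0)
    x0, y0 = eye_pts.min(axis=0)
    x1, y1 = eye_pts.max(axis=0)
    eye_region = gray[y0:y1 + 1, x0:x1 + 1]
    eyes_closed = bool(eye_region.size) and float(eye_region.mean()) > closed_threshold

    # Mouth parameters: inner-lip landmarks give the position and a mask for the interior color.
    mouth_pts = points[60:68].astype(np.int32)
    mouth_position = mouth_pts.mean(axis=0)
    mask = np.zeros_like(gray)
    cv2.fillConvexPoly(mask, mouth_pts, 255)
    mouth_interior_color = cv2.mean(frame_bgr, mask=mask)[:3]

    return {"eye_position": eye_position, "eyes_closed": eyes_closed,
            "mouth_position": mouth_position, "mouth_interior_color": mouth_interior_color}
```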
407. Drive, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
Specifically, the feature points on the three-dimensional expression model are moved according to the moving positions of the feature points.
In addition, the process may also be:
driving, according to the parameters of the user's eyes and the parameters of the user's mouth, the three-dimensional expression model to display the expression corresponding to the user;
according to the parameters of the user's eyes, the position of the eyes on the three-dimensional expression model and their closed state are set; the specific setting process is not limited in this embodiment of the present invention;
according to the parameters of the user's mouth, the position of the mouth on the three-dimensional expression model and the color inside the mouth are set; the specific setting process is not limited in this embodiment of the present invention.
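Applying the driving parameters in step 407 can be as simple as overwriting the model's feature point positions and flags, as in the toy model below; how a real renderer maps the eye-closure flag and mouth color onto a mesh and texture is left open by the patent, so plain attribute assignments stand in for that step.

```python
# Toy sketch of step 407: apply driving parameters and eye/mouth parameters to a model object.
from dataclasses import dataclass
from typing import Optional
import numpy as np


@dataclass
class ExpressionModel:
    feature_points: np.ndarray                      # (N, 3) current feature point positions
    eyes_closed: bool = False
    mouth_interior_color: tuple = (0, 0, 0)


def drive_model(model: ExpressionModel, driving: dict, eye_mouth: Optional[dict] = None) -> None:
    # Move the model's feature points to the moving positions given by the driving parameters.
    model.feature_points = np.asarray(driving["moving_positions"], dtype=float)
    if eye_mouth is not None:
        model.eyes_closed = bool(eye_mouth["eyes_closed"])
        model.mouth_interior_color = tuple(eye_mouth["mouth_interior_color"])
```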
This embodiment of the present invention provides an expression migration method in which driving parameters acquired from an instant video drive a three-dimensional expression model to display the user's facial expression in the instant video, so that expression migration is achieved on a mobile device and the user experience is improved.
Embodiment 3
An embodiment of the present invention provides an electronic device 5. Referring to FIG. 5, the device includes:
a model establishing module 51, configured to establish a three-dimensional expression model corresponding to a user;
an acquiring module 52, configured to acquire, from an instant video, driving parameters corresponding to the three-dimensional expression model;
a driving module 53, configured to drive, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
Optionally,
the acquiring module 52 is further configured to acquire feature point parameters and a posture parameter of the user's facial expression;
the model establishing module 51 further includes a generating sub-module, configured to generate, according to the feature point parameters and the posture parameter, the three-dimensional expression model corresponding to the user.
Optionally, the generating sub-module is further configured to:
fit and normalize the feature point parameters and the posture parameter to generate the three-dimensional expression model.
Optionally, the device further includes:
an identifying module, configured to identify and fit facial feature point parameters of the user in the instant video;
a calculating module, configured to calculate deviation values between the facial feature point parameters in the instant video and the corresponding facial feature point parameters of the three-dimensional expression model;
the generating sub-module is further configured to generate the driving parameters according to the deviation values.
Optionally, the generating sub-module is further configured to:
generate moving positions of the feature points according to the deviation values;
generate the driving parameters according to the moving positions of the feature points.
Optionally, the acquiring module 52 is further configured to:
acquire parameters of the user's eyes in the instant video;
acquire parameters of the user's mouth in the instant video.
Optionally, the driving module is configured to:
drive, according to the moving positions of the feature points and the processed three-dimensional expression model, the three-dimensional expression model to display the expression corresponding to the user.
Optionally, the driving module is further configured to:
drive, according to the parameters of the user's eyes and the parameters of the user's mouth, the three-dimensional expression model to display the expression corresponding to the user.
This embodiment of the present invention provides an electronic device in which driving parameters acquired from an instant video drive a three-dimensional expression model to display the user's facial expression in the instant video, so that expression migration is achieved on a mobile device and the user experience is improved.
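A hedged sketch of how the module split of this embodiment could be wired together in code follows; the three callables stand in for the model establishing module 51, the acquiring module 52, and the driving module 53, and their concrete implementations (steps 401 to 407 above) are intentionally left out.

```python
# Illustrative wiring of the modules in FIG. 5; not an implementation prescribed by the patent.
from typing import Any, Callable


class ExpressionMigrationDevice:
    def __init__(self,
                 establish_model: Callable[[Any], Any],        # model establishing module 51
                 acquire_driving: Callable[[Any, Any], Any],    # acquiring module 52
                 drive_model: Callable[[Any, Any], None]):      # driving module 53
        self._establish_model = establish_model
        self._acquire_driving = acquire_driving
        self._drive_model = drive_model
        self._model = None

    def on_enrollment_frames(self, frames: Any) -> None:
        """Build the user's three-dimensional expression model (steps 401-402)."""
        self._model = self._establish_model(frames)

    def on_instant_video_frame(self, frame: Any) -> None:
        """Acquire driving parameters from the frame and drive the model (steps 403-407)."""
        if self._model is None:
            return
        params = self._acquire_driving(frame, self._model)
        self._drive_model(self._model, params)
```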
Embodiment 4
An embodiment of the present invention provides an electronic device 6. Referring to FIG. 6, the device includes a display screen 61, a transmitting module 62, a receiving module 63, a memory 64, and a processor 65 connected to the display screen 61, the transmitting module 62, the receiving module 63, and the memory 64, respectively. The memory 64 stores a set of program code, and the processor 65 is configured to call the program code stored in the memory 64 to perform the following operations:
establishing a three-dimensional expression model corresponding to a user;
acquiring, from an instant video, driving parameters corresponding to the three-dimensional expression model;
driving, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
Optionally, the processor 65 is configured to call the program code stored in the memory 64 to perform the following operations:
acquiring feature point parameters and a posture parameter of the user's facial expression;
generating, according to the feature point parameters and the posture parameter, the three-dimensional expression model corresponding to the user.
Optionally, the processor 65 is configured to call the program code stored in the memory 64 to perform the following operation:
fitting and normalizing the feature point parameters and the posture parameter to generate the three-dimensional expression model.
Optionally, the processor 65 is configured to call the program code stored in the memory 64 to perform the following operations:
identifying and fitting facial feature point parameters of the user in the instant video;
calculating deviation values between the facial feature point parameters in the instant video and the corresponding facial feature point parameters of the three-dimensional expression model;
generating the driving parameters according to the deviation values.
Optionally, the processor 65 is configured to call the program code stored in the memory 64 to perform the following operations:
generating moving positions of the feature points according to the deviation values;
generating the driving parameters according to the moving positions of the feature points.
Optionally, the processor 65 is configured to call the program code stored in the memory 64 to perform the following operations:
acquiring parameters of the user's eyes in the instant video;
acquiring parameters of the user's mouth in the instant video.
Optionally, the processor 65 is configured to call the program code stored in the memory 64 to perform the following operation:
driving, according to the moving positions of the feature points and the three-dimensional expression model, the three-dimensional expression model to display the expression corresponding to the user.
Optionally, the processor 65 is configured to call the program code stored in the memory 64 to perform the following operation:
driving, according to the parameters of the user's eyes and the parameters of the user's mouth, the three-dimensional expression model to display the expression corresponding to the user.
This embodiment of the present invention provides an electronic device in which driving parameters acquired from an instant video drive a three-dimensional expression model to display the user's facial expression in the instant video, so that expression migration is achieved on a mobile device and the user experience is improved.
Embodiment 5
An embodiment of the present invention provides an expression migration system, the system including:
a model establishing device, configured to establish a three-dimensional expression model corresponding to a user;
an acquiring device, configured to acquire, from an instant video, driving parameters corresponding to the three-dimensional expression model;
a driving device, configured to drive, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
This embodiment of the present invention provides an expression migration system in which driving parameters acquired from an instant video drive a three-dimensional expression model to display the user's facial expression in the instant video, so that expression migration is achieved on a mobile device and the user experience is improved.
It should be noted that when the electronic device provided by the above embodiments performs expression migration, the division into the functional modules described above is only an example. In practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to perform all or part of the functions described above. In addition, the device embodiments and the method embodiments provided above belong to the same concept; for the specific implementation process, refer to the method embodiments, which is not repeated here.
A person of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (13)

  1. An expression migration method, wherein the method comprises:
    establishing a three-dimensional expression model corresponding to a user;
    acquiring, from an instant video, driving parameters corresponding to the three-dimensional expression model;
    driving, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
  2. The method according to claim 1, wherein establishing the three-dimensional expression model corresponding to the user comprises:
    acquiring feature point parameters and a posture parameter of the user's facial expression;
    generating, according to the feature point parameters and the posture parameter, the three-dimensional expression model corresponding to the user.
  3. The method according to claim 2, wherein generating the three-dimensional expression model corresponding to the user according to the feature point parameters and the posture parameter comprises:
    fitting and normalizing the feature point parameters and the posture parameter to generate the three-dimensional expression model.
  4. The method according to claim 1, wherein acquiring, from the instant video, the driving parameters corresponding to the three-dimensional expression model comprises:
    identifying and fitting facial feature point parameters of the user in the instant video;
    calculating deviation values between the facial feature point parameters in the instant video and the corresponding facial feature point parameters of the three-dimensional expression model;
    generating moving positions of the feature points according to the deviation values;
    generating the driving parameters according to the moving positions of the feature points.
  5. An electronic device, wherein the device comprises:
    a model establishing module, configured to establish a three-dimensional expression model corresponding to a user;
    an acquiring module, configured to acquire, from an instant video, driving parameters corresponding to the three-dimensional expression model;
    a driving module, configured to drive, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
  6. The device according to claim 5, wherein
    the acquiring module is further configured to acquire feature point parameters and a posture parameter of the user's facial expression;
    the model establishing module further comprises a generating sub-module, configured to generate, according to the feature point parameters and the posture parameter, the three-dimensional expression model corresponding to the user.
  7. The device according to claim 6, wherein the generating sub-module is further configured to:
    fit and normalize the feature point parameters and the posture parameter to generate the three-dimensional expression model.
  8. The device according to claim 5, wherein the device further comprises:
    an identifying module, configured to identify and fit facial feature point parameters of the user in the instant video;
    a calculating module, configured to calculate deviation values between the facial feature point parameters in the instant video and the corresponding facial feature point parameters of the three-dimensional expression model;
    the generating sub-module is further configured to generate moving positions of the feature points according to the deviation values;
    the generating sub-module is further configured to generate the driving parameters according to the moving positions of the feature points.
  9. An electronic device, comprising a display screen, a transmitting module, a receiving module, a memory, and a processor connected to the display screen, the transmitting module, the receiving module, and the memory, respectively, wherein the memory stores a set of program code, and the processor is configured to call the program code stored in the memory to perform the following operations:
    establishing a three-dimensional expression model corresponding to a user;
    acquiring, from an instant video, driving parameters corresponding to the three-dimensional expression model;
    driving, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
  10. The electronic device according to claim 9, wherein the processor is further configured to call the program code stored in the memory to perform the following operations:
    acquiring feature point parameters and a posture parameter of the user's facial expression;
    generating, according to the feature point parameters and the posture parameter, the three-dimensional expression model corresponding to the user.
  11. The electronic device according to claim 10, wherein the processor is further configured to call the program code stored in the memory to perform the following operation:
    fitting and normalizing the feature point parameters and the posture parameter to generate the three-dimensional expression model.
  12. The electronic device according to claim 9, wherein the processor is further configured to call the program code stored in the memory to perform the following operations:
    identifying and fitting facial feature point parameters of the user in the instant video;
    calculating deviation values between the facial feature point parameters in the instant video and the corresponding facial feature point parameters of the three-dimensional expression model;
    generating moving positions of the feature points according to the deviation values;
    generating the driving parameters according to the moving positions of the feature points.
  13. An expression migration system, wherein the system comprises:
    a model establishing device, configured to establish a three-dimensional expression model corresponding to a user;
    an acquiring device, configured to acquire, from an instant video, driving parameters corresponding to the three-dimensional expression model;
    a driving device, configured to drive, according to the driving parameters, the three-dimensional expression model to display an expression corresponding to the user.
PCT/CN2015/099485 2015-01-05 2015-12-29 Expression migration method, electronic device and system WO2016110199A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510005672.6A CN104616347A (zh) 2015-01-05 2015-01-05 Expression migration method, electronic device and system
CN201510005672.6 2015-01-05

Publications (1)

Publication Number Publication Date
WO2016110199A1 true WO2016110199A1 (zh) 2016-07-14

Family

ID=53150779

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/099485 WO2016110199A1 (zh) 2015-01-05 2015-12-29 Expression migration method, electronic device and system

Country Status (2)

Country Link
CN (1) CN104616347A (zh)
WO (1) WO2016110199A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163063A (zh) * 2018-11-28 2019-08-23 腾讯数码(天津)有限公司 Expression processing method and apparatus, computer-readable storage medium, and computer device
CN111027438A (zh) * 2019-12-03 2020-04-17 Oppo广东移动通信有限公司 Human body posture migration method, mobile terminal, and computer storage medium

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616347A (zh) * 2015-01-05 2015-05-13 掌赢信息科技(上海)有限公司 Expression migration method, electronic device and system
WO2016202286A1 (zh) * 2015-06-19 2016-12-22 美国掌赢信息科技有限公司 Method for transmitting instant video, and electronic device
CN104967867A (zh) * 2015-06-19 2015-10-07 美国掌赢信息科技有限公司 Method for transmitting instant video, and electronic device
CN106815547A (zh) * 2015-12-02 2017-06-09 掌赢信息科技(上海)有限公司 Method for obtaining standardized model motion through multi-level fitting, and electronic device
CN106997450B (zh) * 2016-01-25 2020-07-17 深圳市微舞科技有限公司 Chin motion fitting method in expression migration, and electronic device
CN107123079A (zh) * 2016-02-24 2017-09-01 掌赢信息科技(上海)有限公司 Expression migration method and electronic device
CN107203962B (zh) * 2016-03-17 2021-02-19 掌赢信息科技(上海)有限公司 Method for producing a pseudo-3D image from a 2D picture, and electronic device
CN107292811A (zh) * 2016-04-01 2017-10-24 掌赢信息科技(上海)有限公司 Expression migration method and electronic device
CN107292812A (zh) * 2016-04-01 2017-10-24 掌赢信息科技(上海)有限公司 Expression migration method and electronic device
CN107291214B (zh) * 2016-04-01 2020-04-24 掌赢信息科技(上海)有限公司 Method for driving mouth movement, and electronic device
CN107292219A (zh) * 2016-04-01 2017-10-24 掌赢信息科技(上海)有限公司 Method for driving eye movement, and electronic device
CN106056650A (zh) * 2016-05-12 2016-10-26 西安电子科技大学 Facial expression synthesis method based on fast expression information extraction and Poisson fusion
CN106484511A (zh) * 2016-09-30 2017-03-08 华南理工大学 Spectral pose transfer method
CN106952217B (zh) * 2017-02-23 2020-11-17 北京光年无限科技有限公司 Facial expression enhancement method and apparatus for intelligent robots
CN109427105A (zh) * 2017-08-24 2019-03-05 Tcl集团股份有限公司 Virtual video generation method and apparatus
CN110163054B (zh) * 2018-08-03 2022-09-27 腾讯科技(深圳)有限公司 Method and apparatus for generating a three-dimensional face image
CN109147024A (zh) 2018-08-16 2019-01-04 Oppo广东移动通信有限公司 Expression replacement method and apparatus based on a three-dimensional model
CN111435546A (zh) * 2019-01-15 2020-07-21 北京字节跳动网络技术有限公司 Model action method and apparatus, speaker with screen, electronic device, and storage medium
CN110008911B (zh) * 2019-04-10 2021-08-17 北京旷视科技有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110458121B (zh) * 2019-08-15 2023-03-14 京东方科技集团股份有限公司 Face image generation method and apparatus
CN112233012B (zh) * 2020-08-10 2023-10-31 上海交通大学 Face generation system and method
CN112330805B (zh) * 2020-11-25 2023-08-08 北京百度网讯科技有限公司 Face 3D model generation method, apparatus, device, and readable storage medium
CN112927328B (zh) * 2020-12-28 2023-09-01 北京百度网讯科技有限公司 Expression migration method and apparatus, electronic device, and storage medium
CN112800869B (zh) * 2021-01-13 2023-07-04 网易(杭州)网络有限公司 Image facial expression migration method and apparatus, electronic device, and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101944238A (zh) * 2010-09-27 2011-01-12 浙江大学 Data-driven facial expression synthesis method based on the Laplacian transform
CN103268623A (zh) * 2013-06-18 2013-08-28 西安电子科技大学 Static facial expression synthesis method based on frequency-domain analysis
US20140009465A1 (en) * 2012-07-05 2014-01-09 Samsung Electronics Co., Ltd. Method and apparatus for modeling three-dimensional (3d) face, and method and apparatus for tracking face
CN104008564A (zh) * 2014-06-17 2014-08-27 河北工业大学 Facial expression cloning method
CN104616347A (zh) * 2015-01-05 2015-05-13 掌赢信息科技(上海)有限公司 Expression migration method, electronic device and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8235215B2 (en) * 2009-02-24 2012-08-07 Tomasello Melinda K Gift pail kit
KR101640767B1 (ko) * 2010-02-09 2016-07-29 삼성전자주식회사 이종 수행 환경을 위한 네트워크 기반의 실시간 가상 현실 입출력 시스템 및 가상 현실 입출력 방법
AU2012254944B2 (en) * 2012-03-21 2018-03-01 Commonwealth Scientific And Industrial Research Organisation Method and system for facial expression transfer

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101944238A (zh) * 2010-09-27 2011-01-12 浙江大学 Data-driven facial expression synthesis method based on the Laplacian transform
US20140009465A1 (en) * 2012-07-05 2014-01-09 Samsung Electronics Co., Ltd. Method and apparatus for modeling three-dimensional (3d) face, and method and apparatus for tracking face
CN103268623A (zh) * 2013-06-18 2013-08-28 西安电子科技大学 Static facial expression synthesis method based on frequency-domain analysis
CN104008564A (zh) * 2014-06-17 2014-08-27 河北工业大学 Facial expression cloning method
CN104616347A (zh) * 2015-01-05 2015-05-13 掌赢信息科技(上海)有限公司 Expression migration method, electronic device and system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163063A (zh) * 2018-11-28 2019-08-23 腾讯数码(天津)有限公司 Expression processing method and apparatus, computer-readable storage medium, and computer device
CN110163063B (zh) * 2018-11-28 2024-05-28 腾讯数码(天津)有限公司 Expression processing method and apparatus, computer-readable storage medium, and computer device
CN111027438A (zh) * 2019-12-03 2020-04-17 Oppo广东移动通信有限公司 Human body posture migration method, mobile terminal, and computer storage medium
CN111027438B (zh) * 2019-12-03 2023-06-02 Oppo广东移动通信有限公司 Human body posture migration method, mobile terminal, and computer storage medium

Also Published As

Publication number Publication date
CN104616347A (zh) 2015-05-13

Similar Documents

Publication Publication Date Title
WO2016110199A1 (zh) Expression migration method, electronic device and system
CN112541963B (zh) Three-dimensional virtual avatar generation method and apparatus, electronic device, and storage medium
US11455765B2 (en) Method and apparatus for generating virtual avatar
WO2016165614A1 (zh) Expression recognition method in instant video, and electronic device
KR20230113370A (ko) Facial animation synthesis
KR20230003555A (ko) Texture-based pose verification
US20210192192A1 (en) Method and apparatus for recognizing facial expression
US11989348B2 (en) Media content items with haptic feedback augmentations
US20220300728A1 (en) True size eyewear experience in real time
US11997422B2 (en) Real-time video communication interface with haptic feedback response
KR20210010517A (ko) Posture correction
CN114445562A (zh) Three-dimensional reconstruction method and apparatus, electronic device, and storage medium
US20230120037A1 (en) True size eyewear in real time
CN112714337A (zh) Video processing method and apparatus, electronic device, and storage medium
CN111314627B (zh) Method and apparatus for processing video frames
CN110266937A (zh) Terminal device and camera control method
CN110678904A (zh) Beautification processing method and apparatus, unmanned aerial vehicle, and handheld platform
US11922587B2 (en) Dynamic augmented reality experience
CN117115321B (zh) Method, apparatus, device, and storage medium for adjusting the eye pose of a virtual character
CN115083000B (zh) Face model training method, face swapping method, apparatus, and electronic device
US20240193875A1 (en) Augmented reality shared screen space
CN113344812A (zh) Image processing method and apparatus, and electronic device
KR20230124703A (ko) Body UI for augmented reality components

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15876699

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.11.2017)

122 Ep: pct application non-entry in european phase

Ref document number: 15876699

Country of ref document: EP

Kind code of ref document: A1