CN108335345B - Control method and device of facial animation model and computing equipment


Info

Publication number
CN108335345B
Authority
CN
China
Prior art keywords
model
image
key points
group
key point
Prior art date
Legal status
Active
Application number
CN201810145903.7A
Other languages
Chinese (zh)
Other versions
CN108335345A (en)
Inventor
眭一帆 (Sui Yifan)
邱学侃 (Qiu Xuekan)
Current Assignee
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd
Priority to CN201810145903.7A
Publication of CN108335345A
Application granted
Publication of CN108335345B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition
    • G06V 40/176 - Dynamic expression

Abstract

The invention discloses a control method and apparatus for a facial animation model, and a computing device. The method comprises the following steps: determining first position information of each image key point contained in acquired image data and second position information of each model key point contained in the facial animation model; determining the correspondence between each image key point and each model key point according to a preset key point mapping rule, and comparing the first position information of each image key point with the second position information of each model key point according to the correspondence; and controlling each model key point contained in the facial animation model to be displaced according to the comparison result, so that the expression of the facial animation model changes. With this method, facial expressions can drive the facial animation model to make corresponding expressions or actions, improving the interactivity between the model and the user and greatly increasing the entertainment value of the animation model.

Description

Control method and device of facial animation model and computing equipment
Technical Field
The invention relates to the field of image processing, and in particular to a control method and apparatus for a facial animation model, and a computing device.
Background
Character animation based on 3D animation model technology has been widely applied in virtual scene systems across various industries; lifelike creatures of all kinds can be modeled with this technology, effectively improving the realism of virtual scenes. In the prior art, a virtual creature is typically created with an animation model and then controlled by a preset program to make various expressions or movements.
However, in the course of implementing the invention, the inventors found that controlling an animation model through a preset program is rigid: the model can change only as the programmer has configured it, resulting in a poor user experience.
Disclosure of Invention
In view of the above, the present invention is proposed to provide a control method and apparatus for a facial animation model, and a computing device, that overcome or at least partially solve the above problems.
According to an aspect of the present invention, there is provided a method of controlling a facial animation model, including:
determining first position information of each image key point contained in the acquired image data and second position information of each model key point contained in the facial animation model;
determining the correspondence between each image key point and each model key point according to a preset key point mapping rule, and comparing the first position information of each image key point with the second position information of each model key point according to the correspondence;
and controlling each model key point contained in the facial animation model to be displaced according to the comparison result, so that the expression of the facial animation model changes.
Optionally, the image key points contained in the image data further comprise a plurality of groups of image part key points divided according to facial parts, and the model key points contained in the facial animation model further comprise a plurality of groups of model part key points divided according to model parts;
the step of comparing the first location information of each image keypoint with the second location information of each model keypoint specifically includes:
for each group of image part key points, determining the corresponding group of model part key points, and determining a first part expression coefficient corresponding to the group of image part key points and a second part expression coefficient corresponding to the group of model part key points;
and comparing the first part expression coefficient of each group of image part key points with the second part expression coefficient of the group of model part key points corresponding to it.
Optionally, the step of determining a first part expression coefficient corresponding to the group of image part key points and a second part expression coefficient corresponding to the group of model part key points respectively includes:
determining first distribution information of the group of image part key points according to the first position information of each image key point, and determining a first part expression coefficient corresponding to the first distribution information according to a preset first expression calculation rule;
and determining second distribution information of the group of model part key points according to the second position information of each model key point, and determining a second part expression coefficient corresponding to the second distribution information according to a preset second expression calculation rule.
Optionally, the step of controlling displacement of each model key point included in the facial animation model according to the comparison result specifically includes:
judging, for each group of image part key points, whether the difference between the first part expression coefficient of the group of image part key points and the second part expression coefficient of the corresponding group of model part key points is larger than a preset threshold;
and if so, determining the group of model part key points corresponding to the group of image part key points as target part key points, and controlling the target part key points to be displaced.
Optionally, the step of controlling the target part key points to be displaced specifically includes:
determining a target expression coefficient of the target part key points according to the first part expression coefficient and a preset model linkage rule, and determining the displacement direction and/or displacement amplitude of the target part key points according to the target expression coefficient;
wherein the model linkage rule is used to set the correspondence between the first part expression coefficient of each group of image part key points and the target expression coefficient of the corresponding model part key points.
Optionally, the model linkage rule is further used to set the associated part points linked to each group of model part key points and the association displacement rules of those associated part points;
the step of controlling the target part key points to be displaced further comprises: determining the associated part points linked to the target part key points and their association displacement rules, and controlling the associated part points to be displaced according to those rules.
Optionally, the model part key points comprise eye part key points and/or mouth part key points; and the associated part points comprise eyebrow part points, face part points, and/or ear part points.
Optionally, the step of determining the first location information of each image keypoint included in the acquired image data specifically includes:
the method comprises the steps of acquiring image data corresponding to a current frame image contained in a live video stream in real time, and determining first position information of each image key point contained in the image data corresponding to the current frame image.
Optionally, the step of determining the first location information of each image keypoint included in the acquired image data specifically includes:
the method comprises the steps of sequentially obtaining image data corresponding to each frame of image contained in a recorded video stream, and determining first position information of each image key point contained in the image data corresponding to the frame of image.
Optionally, the facial animation model is a skeletal animation model, and each model key point in the facial animation model corresponds to a preset bone part.
Optionally, the skeletal animation model further comprises an animal skeletal animation model, such as a cat skeletal animation model, a horse skeletal animation model, or a rabbit skeletal animation model.
According to another aspect of the present invention, there is provided a control apparatus of a facial animation model, including:
the determining module is adapted to determine first position information of each image key point contained in the acquired image data and second position information of each model key point contained in the facial animation model;
the comparison module is adapted to determine the correspondence between each image key point and each model key point according to a preset key point mapping rule, and to compare the first position information of each image key point with the second position information of each model key point according to the correspondence;
and the changing module is adapted to control each model key point contained in the facial animation model to be displaced according to the comparison result, so that the expression of the facial animation model changes.
Optionally, the image key points contained in the image data further comprise a plurality of groups of image part key points divided according to facial parts, and the model key points contained in the facial animation model further comprise a plurality of groups of model part key points divided according to model parts;
the comparison module is specifically adapted to:
for each group of image part key points, determine the corresponding group of model part key points, and determine a first part expression coefficient corresponding to the group of image part key points and a second part expression coefficient corresponding to the group of model part key points;
and compare the first part expression coefficient of each group of image part key points with the second part expression coefficient of the group of model part key points corresponding to it.
Optionally, wherein the comparing module is specifically adapted to:
determine first distribution information of the group of image part key points according to the first position information of each image key point, and determine a first part expression coefficient corresponding to the first distribution information according to a preset first expression calculation rule;
and determine second distribution information of the group of model part key points according to the second position information of each model key point, and determine a second part expression coefficient corresponding to the second distribution information according to a preset second expression calculation rule.
Optionally, wherein the changing module is specifically adapted to:
judge, for each group of image part key points, whether the difference between the first part expression coefficient of the group of image part key points and the second part expression coefficient of the corresponding group of model part key points is larger than a preset threshold;
and if so, determine the group of model part key points corresponding to the group of image part key points as target part key points, and control the target part key points to be displaced.
Optionally, wherein the changing module is specifically adapted to:
determine a target expression coefficient of the target part key points according to the first part expression coefficient and a preset model linkage rule, and determine the displacement direction and/or displacement amplitude of the target part key points according to the target expression coefficient;
wherein the model linkage rule is used to set the correspondence between the first part expression coefficient of each group of image part key points and the target expression coefficient of the corresponding model part key points.
Optionally, the model linkage rule is further used to set the associated part points linked to each group of model part key points and the association displacement rules of those associated part points;
the changing module is further adapted to: determine the associated part points linked to the target part key points and their association displacement rules, and control the associated part points to be displaced according to those rules.
Optionally, the model part key points comprise eye part key points and/or mouth part key points; and the associated part points comprise eyebrow part points, face part points, and/or ear part points.
Optionally, wherein the determining module is specifically adapted to:
the method comprises the steps of acquiring image data corresponding to a current frame image contained in a live video stream in real time, and determining first position information of each image key point contained in the image data corresponding to the current frame image.
Optionally, wherein the determining module is specifically adapted to:
the method comprises the steps of sequentially obtaining image data corresponding to each frame of image contained in a recorded video stream, and determining first position information of each image key point contained in the image data corresponding to the frame of image.
Optionally, the facial animation model is a skeletal animation model, and each model key point in the facial animation model corresponds to a preset bone part.
Optionally, the skeletal animation model further comprises an animal skeletal animation model, such as a cat skeletal animation model, a horse skeletal animation model, or a rabbit skeletal animation model.
According to yet another aspect of the present invention, there is provided a computing device comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the control method of the facial animation model.
According to still another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the control method of the above-described facial animation model.
According to the control method and apparatus for a facial animation model and the computing device provided by the invention, first position information of each image key point contained in acquired image data and second position information of each model key point contained in the facial animation model are determined; the first position information and the second position information are then compared according to the correspondence between image key points and model key points, and each model key point contained in the facial animation model is controlled to be displaced according to the comparison result, so that the expression of the facial animation model changes. In this way, various facial animation models can be controlled to make similar expressions, or other expressions and actions, according to the facial expression of the human face in the acquired image data, which adds interest. The facial animation model is thus driven by facial expressions to make corresponding expressions or actions, improving the interactivity between the model and the user and greatly increasing the entertainment value of the animation model.
The foregoing is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly, and that the above and other objects, features, and advantages of the present invention may become more readily apparent, embodiments of the present invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a flow diagram of a method of controlling a facial animation model according to one embodiment of the invention;
FIG. 2 shows a flowchart of a method of controlling a facial animation model according to another embodiment of the invention;
FIG. 3 shows a functional block diagram of a control apparatus of a facial animation model according to one embodiment of the present invention;
FIG. 4 shows a schematic structural diagram of a computing device according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
FIG. 1 shows a flowchart of a method of controlling a facial animation model according to one embodiment of the invention. As shown in fig. 1, the method for controlling a facial animation model specifically includes the following steps:
Step S101, determining first position information of each image key point contained in the acquired image data and second position information of each model key point contained in the facial animation model.
The acquired image data may be image data corresponding to a current frame image contained in a live video stream acquired in real time, or image data corresponding to each frame image contained in a recorded video stream. The image key points contained in the acquired image data may include feature points corresponding to facial features or the facial contour, and may be obtained by a deep learning method or by other methods. After each image key point is acquired, its first position information may be obtained, represented, for example, as position coordinates in a coordinate system established on the image data; the origin and orientation of that coordinate system may be set by a person skilled in the art according to the actual situation. The first position information may also be represented in forms other than position coordinates, depending on how the positions of the image key points are expressed. Accordingly, the model key points contained in the facial animation model may include feature points corresponding to the facial features or facial contour of the animation model. The second position information may be expressed as position coordinates of each model key point in a coordinate system established on the animation model, or in other forms, which are not limited here.
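As an illustrative sketch of step S101 (not part of the patent; the grouping, point counts, and function names below are assumptions), the first and second position information can be represented as coordinates grouped by facial part:

```python
import numpy as np

# Hypothetical grouping of key point indices by facial part; the patent
# leaves the exact grouping and point count to the implementer.
IMAGE_PART_GROUPS = {
    "eye":   list(range(0, 12)),   # indices of eye key points
    "mouth": list(range(12, 33)),  # e.g., 21 mouth key points, as in step S203
}

def first_position_info(landmarks_xy: np.ndarray) -> dict:
    """First position information: part name -> (N, 2) coordinates of the
    image key points in the coordinate system established on the image."""
    return {part: landmarks_xy[idx] for part, idx in IMAGE_PART_GROUPS.items()}

def second_position_info(mesh_vertices: np.ndarray, model_groups: dict) -> dict:
    """Second position information: part name -> (N, 3) coordinates of the
    model key points on the 3D mesh of the facial animation model."""
    return {part: mesh_vertices[idx] for part, idx in model_groups.items()}
```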
Step S102, determining the correspondence between each image key point and each model key point according to a preset key point mapping rule, and comparing the first position information of each image key point with the second position information of each model key point according to the correspondence.
Each image key point may belong to one of a plurality of groups of image part key points divided by facial part; accordingly, the model key points contained in the facial animation model may include a plurality of groups of model part key points divided by model part. The image part key points may be divided by facial part type into image eye part key points, image mouth part key points, and so on; correspondingly, the model part key points may be divided by facial model part type into model eye part key points, model mouth part key points, and so on. Other divisions are also possible. Since the facial animation model may be at the same scale as the acquired images, or at a uniformly scaled size, the preset key point mapping rule may, for a facial animation model of any size, place each image part key point in one-to-one correspondence with the model part key points of the same type, for example mapping the image eye part key points to the model eye part key points. According to this correspondence, the first position information of each image key point is compared with the second position information of each model key point. When making the comparison, each image or the facial animation model may be scaled so that the first position information and the second position information are directly comparable, and the comparison then performed. Alternatively, a first part expression coefficient corresponding to each group of image part key points and a second part expression coefficient corresponding to the corresponding group of model part key points may be determined, and the first part expression coefficient of each group of image part key points compared with the second part expression coefficient of the group of model part key points corresponding to it. Beyond these approaches, a person skilled in the art may compare the first position information of each image key point with the second position information of each model key point in other ways.
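A minimal sketch of a preset key point mapping rule and the pairing it enables (the dictionary shape and part names are illustrative assumptions, not the patent's required format):

```python
# Hypothetical key point mapping rule: each group of image part key points
# corresponds one-to-one to the model part key points of the same type.
KEYPOINT_MAPPING_RULE = {
    "eye":   "eye",    # image eye part key points   -> model eye part key points
    "mouth": "mouth",  # image mouth part key points -> model mouth part key points
}

def paired_part_groups(image_info: dict, model_info: dict):
    """Yield (part, image coordinates, model coordinates) triples according
    to the preset key point mapping rule, ready for comparison either as
    scaled positions or as part expression coefficients."""
    for image_part, model_part in KEYPOINT_MAPPING_RULE.items():
        yield image_part, image_info[image_part], model_info[model_part]
```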
Step S103, controlling each model key point contained in the facial animation model to be displaced according to the comparison result, so that the expression of the facial animation model changes.
Specifically, take as an example the comparison of the first part expression coefficient of each group of image part key points with the second part expression coefficient of the corresponding group of model part key points. The model key points contained in the facial animation model may be controlled to be displaced, changing the expression of the facial animation model, whenever the first part expression coefficient differs from the second part expression coefficient; or only when the difference between the two coefficients is greater than a preset threshold. The preset threshold can be set according to the required control precision. When high control precision is required, the preset threshold can be set to a small value: taking the first part expression coefficient of the image eye part key points and the second part expression coefficient of the model eye part key points as an example, even a slight difference between the expression of the eye part in the image and that of the eye part in the facial model causes the eye part key points of the facial animation model, and the model key points associated with them, to be displaced, so that the expression of the facial animation model changes. When the requirement on control precision is relatively low, the preset threshold may be set to a larger value, so that displacement occurs only when the eye expression in the image differs considerably from that in the facial model. If the comparison is performed in another way, the displacement of each model key point contained in the facial animation model can likewise be controlled in a manner suited to the specific comparison result, so that the expression of the facial animation model changes.
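The threshold logic just described, as a hedged sketch (the threshold value and the coefficient dictionaries are assumed inputs computed elsewhere):

```python
PRESET_THRESHOLD = 0.1  # illustrative; smaller for higher control precision

def parts_to_displace(first_coeffs: dict, second_coeffs: dict) -> list:
    """Return the parts whose model key points should be displaced: those
    where the first part expression coefficient (image) and the second part
    expression coefficient (model) differ by more than the preset threshold."""
    return [part for part, c1 in first_coeffs.items()
            if abs(c1 - second_coeffs[part]) > PRESET_THRESHOLD]
```

For instance, `parts_to_displace({"mouth": 0.8}, {"mouth": 0.1})` selects the mouth for displacement.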
In addition, when the model key points contained in the facial animation model are controlled to be displaced, they may be displaced so that the expression of the facial animation model is the same as, or similar to, the facial expression in the image, thereby mimicking the face in the image: when the face in the image blinks, the facial animation model blinks, and when the face opens or closes its mouth, the facial animation model opens or closes its mouth correspondingly. Beyond such mirroring, a person skilled in the art may customize how each model key point moves, so that the expression of the facial animation model changes arbitrarily; for example, when the face in the image cries, the facial animation model may make a smiling expression, raise its ears, and so on.
According to the control method of the facial animation model provided by this embodiment, first position information of each image key point contained in the acquired image data and second position information of each model key point contained in the facial animation model are determined; the first position information and the second position information are then compared according to the correspondence between image key points and model key points, and each model key point contained in the facial animation model is controlled to be displaced according to the comparison result, so that the expression of the facial animation model changes. In this way, various facial animation models can be controlled to make similar expressions, or other expressions and actions, according to the facial expression of the human face in the acquired image data, adding interest, improving the interactivity between the model and the user, and greatly increasing the entertainment value of the animation model.
FIG. 2 shows a flowchart of a method of controlling a facial animation model according to another embodiment of the invention. As shown in fig. 2, the method for controlling the facial animation model specifically includes the following steps:
Step S201, determining first position information of each image key point contained in the acquired image data and second position information of each model key point contained in the facial animation model.
Specifically, image data corresponding to the current frame image contained in a live video stream may be acquired in real time, and first position information of each image key point contained in that image data determined; or image data corresponding to each frame image contained in a recorded video stream may be acquired in turn, and first position information of each image key point contained in the image data of that frame determined. When the image data corresponding to each current frame image contained in a live video stream is acquired in real time, it can be uploaded to a cloud video platform server (for example, iQIYI, Youku, or Kuaishou) so that the video platform server can display the image data corresponding to each current frame of the video data on the cloud video platform. Optionally, the current frame images can be uploaded to a cloud live-broadcast server, so that the live-broadcast server pushes the image data corresponding to each current frame image to the viewing users' clients in real time. Optionally, the current frame images can be uploaded to a cloud public-account server, so that the public-account server pushes them to the clients of users following the account: when a user follows the public account, the cloud public-account server pushes the video data to that user's client, and it may further push video data matching the viewing habits of the users who follow the account. In short, the image data in the invention may be acquired in real time or not; the invention is not limited to specific application scenarios.
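An illustrative way to acquire image data frame by frame from either a live camera or a recorded video stream; OpenCV is an assumed choice here, not one prescribed by the patent:

```python
import cv2  # assumed dependency; any frame source would serve

def iter_frames(source=0):
    """Yield the image data of each current frame: pass a device index for a
    live video stream, or a file path for a recorded video stream."""
    capture = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break  # end of the recorded stream, or camera failure
            yield frame
    finally:
        capture.release()
```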
In addition, the facial animation model provided by this embodiment may be a skeletal animation model or another type of facial animation model. In a skeletal animation model, each model key point corresponds to a preset bone part; for example, each model key point of the eye part corresponds to a preset bone of the eye part. The skeletal animation model may be a human skeletal animation model or one of various animal skeletal animation models, such as a cat, horse, or rabbit skeletal animation model. Providing skeletal animation models for various creatures enriches the types of facial animation model that can be controlled: driven by a facial expression, the various models can be controlled to make the same or similar expressions, as well as other expressions and actions, which adds interest.
Specifically, the image key points may include feature points corresponding to facial features or the facial contour, and the image key points contained in the image data may further comprise a plurality of groups of image part key points divided by facial part. The division may follow the types of facial features or contours, such as eye part key points, nose part key points, and mouth part key points, or may instead be made according to how much each facial feature or contour can change. Accordingly, each model key point contained in the facial animation model further belongs to a group of model part key points, which may include eye part key points and/or mouth part key points. The image key points can be obtained by a deep learning method or by other methods. For example, 95 key points (or any other number) may be arranged in advance on the facial features and facial contour of one frame image; a coordinate system is then established to obtain the coordinate information, i.e., the first position information, of each key point of that frame. The position of the coordinate origin may be set by a person skilled in the art according to the actual situation and is not limited here. The first position information of the image key points contained in the image data of the other frames may be acquired in the same way. Correspondingly, 95 key points (or any other number) can be arranged on the facial features and face contour of the facial animation model. In an optional implementation, to achieve a better visual effect, the facial animation model is a three-dimensional model formed from a 3D mesh; by obtaining the distribution of each model key point on the 3D mesh, the three-dimensional coordinate information of each model key point on the three-dimensional model can be determined, and from this information the second position information of each model key point is obtained. Therefore, even without a depth camera, the facial animation model can be controlled by executing the subsequent steps using the first position information of the image key points and the second position information of the model key points, which lowers the hardware requirements on the device's camera and makes the method simpler and more practical.
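Because the comparison works on key point distributions rather than depth data, a simple normalization suffices to bring image key points and model key points into comparable coordinates; the normalization below is an assumption of this sketch, not the patent's prescribed rule:

```python
import numpy as np

def normalize_keypoints(points: np.ndarray) -> np.ndarray:
    """Map key points (2D image points or 3D mesh points) into a translation-
    and scale-invariant frame so their distributions can be compared without
    a depth camera."""
    centered = points - points.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / (scale + 1e-8)
```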
Step S202, determining the correspondence between each image key point and each model key point according to a preset key point mapping rule.
The preset key point mapping rule may map each image key point to the model key point of the same type, or map each group of image part key points to the group of model part key points of the same type; for example, the image eye part key points correspond to the model eye part key points, and the image mouth part key points correspond to the model mouth part key points. According to the preset key point mapping rule, the correspondence between each image key point and each model key point, and between each group of image part key points and each group of model part key points, can be determined.
Step S203, for each group of image part key points, determining the corresponding group of model part key points, and determining a first part expression coefficient corresponding to the group of image part key points and a second part expression coefficient corresponding to the group of model part key points.
Specifically, first distribution information of a group of image part key points may be determined from the first position information of each image key point, and the first part expression coefficient corresponding to that first distribution information determined according to a preset first expression calculation rule. For example, 21 image key points may be set at the mouth, and the first distribution information of the group of image mouth part key points determined from their first position information: when the lips are closed, the image key points distributed along the contact surfaces of the upper and lower lips essentially coincide, and as the lips gradually open, the distances and positions of the key points distributed on the lips change accordingly. After the first distribution information of the image mouth part key points is determined, the first part expression coefficient corresponding to it is determined according to the preset first expression calculation rule. Likewise, second distribution information of the group of model part key points may be determined from the second position information of each model key point, and the second part expression coefficient corresponding to that second distribution information determined according to a preset second expression calculation rule. The first and second expression calculation rules may compute the expression coefficients from the relative positions of, and distances between, the key points in the first and second distribution information respectively. The first and second part expression coefficients may be values in the range 0 to 1, based on the expression coefficients of an established expression system, such as a blendshape expression system or another type of expression system. For example, when the mouth is opened to the maximum, the corresponding expression coefficient is 1; when the mouth is closed, it is 0.
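A minimal sketch of a first expression calculation rule for the mouth, mapping the key point distribution to a coefficient in [0, 1] as in a blendshape-style system; the normalization by mouth width and the landmark indices are assumptions of this sketch:

```python
import numpy as np

def mouth_open_coefficient(mouth_pts: np.ndarray,
                           upper_lip: int = 0, lower_lip: int = 1,
                           left_corner: int = 2, right_corner: int = 3) -> float:
    """Part expression coefficient from the distribution of mouth key points:
    0 when the lips are closed (upper/lower lip points coincide), 1 at
    maximum opening. Index arguments say which rows are which landmarks."""
    gap = np.linalg.norm(mouth_pts[upper_lip] - mouth_pts[lower_lip])
    width = np.linalg.norm(mouth_pts[left_corner] - mouth_pts[right_corner])
    return float(np.clip(gap / (width + 1e-8), 0.0, 1.0))
```

The same rule applied to the model's mouth key points yields the second part expression coefficient, so the two are directly comparable.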
Step S204, comparing the first part expression coefficient of each group of image part key points with the second part expression coefficient of the group of model part key points corresponding to it.
After the first part expression coefficients of the groups of image part key points and the second part expression coefficients of the corresponding groups of model part key points have been determined in step S203, this step compares them group by group.
Step S205, judging, for each group of image part key points, whether the difference between the first part expression coefficient of the group of image part key points and the second part expression coefficient of the corresponding group of model part key points is greater than a preset threshold.
When the facial expression in a frame of image is identical or very close to the current expression of the facial model, the first part expression coefficients of each group of image part key points are equal to, or differ only slightly from, the second part expression coefficients of the corresponding groups of model part key points, and there is no need to control the model part key points to be displaced to change the expression of the facial animation model. Likewise, when the degree of change of one or more organs in the image matches that of the corresponding organ in the facial model, for example when the degree of mouth opening of the face in the image is the same as, or differs little from, the degree of mouth opening in the animation model, the first part expression coefficient of that group of image part key points is equal to, or differs little from, the second part expression coefficient of the corresponding group of model part key points; the expression of the model part already matches that of the image part, and that group of model part key points need not be displaced. Therefore, to set the precision with which the facial animation model is controlled in such cases, it may be judged, for each group of image part key points, whether the difference between the first part expression coefficient of the group and the second part expression coefficient of the corresponding group of model part key points is greater than a preset threshold. The preset threshold may be set by a person skilled in the art according to the required control precision: when high precision is required, it can be set to 0 or a small value, so that the facial model is adjusted even when the difference between the coefficients is very small; conversely, when lower precision suffices, a larger value can be used, so that the facial model is adjusted only when the difference is relatively large. This step makes it possible to flexibly realize global or local control of the animation model according to the required precision, to control the facial animation model more accurately, and to reduce unnecessary operations. Optionally, the preset threshold may be a plurality of thresholds, one for each group of model part key points, with each threshold sized according to how deformable the corresponding model part is.
For example, the eye part and the mouth part change noticeably, so their thresholds can be set small to improve control precision; the ear part and the cheek part change little, so their thresholds can be set large to avoid the performance cost of adjustments the user would not notice.
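Extending the earlier threshold sketch, per-part thresholds can encode the deformability argument above (all values are illustrative assumptions):

```python
# Smaller thresholds for highly deformable parts (eyes, mouth); larger ones
# for subtle parts (ears, cheeks) to skip imperceptible adjustments.
PART_THRESHOLDS = {
    "eye":   0.02,
    "mouth": 0.02,
    "ear":   0.20,
    "cheek": 0.20,
}

def target_part_groups(first_coeffs: dict, second_coeffs: dict) -> list:
    """Select the target part key point groups using per-part thresholds."""
    return [part for part, c1 in first_coeffs.items()
            if abs(c1 - second_coeffs[part]) > PART_THRESHOLDS[part]]
```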
Step S206, if so, determining the group of model part key points corresponding to the group of image part key points as target part key points, and controlling the target part key points to be displaced, so that the expression of the facial animation model changes.
If, for a group of image part key points, the difference between its first part expression coefficient and the second part expression coefficient of the corresponding group of model part key points is judged to be greater than the preset threshold, that group of model part key points can be determined as target part key points and controlled to be displaced, so that the expression of the facial animation model changes.
Specifically, the target expression coefficient of the target part key points may be determined from the first part expression coefficient according to a preset model linkage rule, and the displacement direction and displacement amplitude of the target part key points determined from the target expression coefficient. The model linkage rule sets the correspondence between the first part expression coefficient of each group of image part key points and the target expression coefficient of the corresponding model part key points. This correspondence may be one-to-one and equal, or any correspondence a person skilled in the art requires. For example, setting the first part expression coefficients equal to the target expression coefficients makes each group of model part key points track the corresponding group of image part key points synchronously: when the mouth of the face in the image opens, the mouth of the facial model opens equally, and when the eyes of the face open, the model's eyes open equally. Optionally, the two coefficients can instead be set to unequal values, so that when the eyes of the person in the image open, the eyeballs of the facial model can be made to rotate, or its eyes to close; or when the mouth in the image opens, the mouth of the facial model closes. In short, the model linkage rule sets the correspondence between the first part expression coefficient of each group of image part key points and the target expression coefficient of the corresponding model part key points; the invention does not limit how that correspondence is configured. After the target expression coefficient of the target part key points is determined from the first part expression coefficient and the preset model linkage rule, the corresponding displacement direction and displacement amplitude can be determined from the target expression coefficient according to the expression coefficient calculation rule, and the target part key points controlled to be displaced along that direction and by that amplitude, so that the expression of the facial animation model changes.
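A sketch of a model linkage rule as described above; representing each correspondence as a callable is this sketch's assumption, and the example mappings merely illustrate the equal and unequal correspondences mentioned in the text:

```python
# Hypothetical model linkage rule: first part expression coefficient ->
# target expression coefficient for the corresponding model part key points.
MODEL_LINKAGE_RULE = {
    "mouth": lambda c: c,        # synchronous: model mouth opens as image mouth opens
    "eye":   lambda c: 1.0 - c,  # unequal mapping: model eyes close as image eyes open
}

def target_expression_coefficients(first_coeffs: dict) -> dict:
    """Apply the preset model linkage rule; the displacement direction and
    amplitude of the target part key points are then derived from these
    target expression coefficients."""
    return {part: MODEL_LINKAGE_RULE[part](c)
            for part, c in first_coeffs.items() if part in MODEL_LINKAGE_RULE}
```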
In addition, the model linkage rule may further set, for each group of model part key points, the associated part points linked to it and the association displacement rules of those points. The associated part points may include eyebrow part points, face part points, and/or ear part points. When the target part key points are controlled to be displaced, the associated part points linked to them and the corresponding association displacement rules can be determined, and the associated part points controlled to be displaced according to those rules. Specifically, the association displacement rules can be set according to the connection relationships and linkage laws between each group of model part key points and the associated part points. The connection relationships between the bones in the skeletal animation model can be preset according to the connection relationships between the bones of a human face, or of various animal faces, so that the motion of each bone in the skeletal animation model follows the actual motion laws of the human or animal facial skeleton and the skeletal facial animation model looks more lifelike. Thus, when the model face smiles, the linkage laws of the bones make the cheeks lift as the mouth corners rise and the eyes curve; when the face cries, the cheeks are drawn downward as the mouth is pulled down, and the eyebrows furrow. With association displacement rules set according to these linkage laws, when the target part key points are displaced, the associated part points linked to them and their association displacement rules are determined, and the associated part points are displaced accordingly. For example, when the model mouth contains the target key points and the mouth corners turn upward, the associated part points can be determined to be eyebrow part points, face part points, and so on, together with their specific association displacement rules, e.g., as the mouth corners rise, the eyebrow part points rise and the face part points move with them; the associated part points are then controlled to be displaced according to those rules.
Besides setting association displacement rules in the above manner, a person skilled in the art can customize the associated part points linked to each group of model part key points and their association displacement rules. For example, the associated part points linked to the model mouth part key points may include ear part points in addition to eyebrow part points and face part points. Then, when the face in the image smiles, the ears of the animation model can be controlled to stand up, or to perform other actions, as the mouth corners rise; and when the face in the image cries, the ears can be controlled to droop, or to perform other actions, as the mouth corners fall. By customizing the associated part points and their association displacement rules, the linkage is no longer limited to the connection relationships and linkage laws of the bones in the skeletal animation model; it becomes more diverse and rich, which greatly enhances the interest.
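A sketch of association displacement rules, including the customized ear linkage just described (part names, directions, and amplitudes are all illustrative assumptions):

```python
# Hypothetical association displacement rules: for each group of model part
# key points, the associated part points and how they move along with it.
ASSOCIATION_RULES = {
    "mouth": [
        ("eyebrow", lambda c: ("up", 0.3 * c)),  # eyebrows rise with a smile
        ("cheek",   lambda c: ("up", 0.5 * c)),  # cheeks lift as mouth corners rise
        ("ear",     lambda c: ("up", 1.0 * c)),  # customized: ears perk up
    ],
}

def associated_displacements(target_part: str, target_coeff: float) -> list:
    """Resolve the associated part points of a group of target part key points
    and the (direction, amplitude) each should move, per the association
    displacement rules."""
    return [(point, rule(target_coeff))
            for point, rule in ASSOCIATION_RULES.get(target_part, [])]
```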
An associated part point may itself be a key point or a non-key point. By setting association displacement rules, parts of the model that would otherwise never move can take part in expression changes, enriching the model's animation effects. For example, if the ears do not move by themselves and no association displacement rule is set, the ears of the model remain still forever. Yet for some cartoon models (such as a rabbit model), the movement of the ears conveys expression very vividly; setting an association displacement rule lets such otherwise immobile parts change with the facial expression, improving the visual effect.
Moreover, the facial features of the facial animation model need not match those of a human face. For example, the facial animation model may be a small-fish animation model, in which case the human mouth and the fish's cheeks can be mapped to each other through the key point mapping rule to achieve a better entertainment effect. In short, the key point mapping rule can flexibly define the correspondence between image key points and model key points as needed; similarly, the model linkage rules and the association displacement rules they contain can be set flexibly.
According to the control method of the facial animation model provided by this embodiment, first position information of each image key point contained in the acquired image data and second position information of each model key point contained in the facial animation model are determined; for each group of image part key points, the corresponding group of model part key points is determined, along with a first part expression coefficient for the image group and a second part expression coefficient for the model group; the first part expression coefficient of each group of image part key points is then compared with the second part expression coefficient of the corresponding group of model part key points to judge whether the difference exceeds a preset threshold, through which global or local control of the animation model can be realized flexibly and accurately according to the required precision. Finally, if the difference is judged to exceed the preset threshold, the corresponding group of model part key points is determined as target part key points and controlled to move, so that the expression of the facial animation model changes. In this way, various facial animation models can be controlled to make similar expressions, or other expressions and actions, according to the facial expression of the human face in the acquired image data, adding interest, improving the interactivity between the model and the user, and greatly increasing the entertainment value of the animation model.
FIG. 3 shows a functional block diagram of a control apparatus of a facial animation model according to one embodiment of the present invention. As shown in fig. 3, the apparatus includes:
a determining module 31 adapted to determine first position information of each image key point contained in the acquired image data and second position information of each model key point contained in the facial animation model;
a comparison module 32 adapted to determine the correspondence between each image key point and each model key point according to a preset key point mapping rule, and to compare the first position information of each image key point with the second position information of each model key point according to the correspondence;
and a changing module 33 adapted to control each model key point contained in the facial animation model to be displaced according to the comparison result, so that the expression of the facial animation model changes.
Optionally, the image key points contained in the image data are further divided into a plurality of groups of image part key points according to facial parts, and the model key points contained in the facial animation model are further divided into a plurality of groups of model part key points according to model parts;
the comparison module 32 is specifically adapted to:
for each group of image part key points, determining a corresponding group of model part key points, and determining a first part expression coefficient corresponding to the group of image part key points and a second part expression coefficient corresponding to the group of model part key points;
and comparing the first part expression coefficient of each group of image part key points with the second part expression coefficient of the corresponding group of model part key points.
Optionally, the comparison module 32 is further adapted to:
determining first distribution information of the group of image part key points according to the first position information of each image key point, and determining a first part expression coefficient corresponding to the first distribution information according to a preset first expression calculation rule;
and determining second distribution information of the group of model part key points according to the second position information of each model key point, and determining a second part expression coefficient corresponding to the second distribution information according to a preset second expression calculation rule.
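As a concrete, purely illustrative instance of such an expression calculation rule, the sketch below derives an eye-openness coefficient from the mutual distances of four eye key points and clamps it to the 0-1 range; the 2.0 scale factor is an assumption:

```python
import math

def eye_expression_coefficient(outer, inner, upper, lower):
    """Hypothetical expression calculation rule for an eye group: lid-to-lid
    opening relative to corner-to-corner width, normalized into [0, 1]
    (0 = closed, 1 = wide open). Points are (x, y) tuples."""
    width = math.dist(outer, inner)    # horizontal extent from the distribution
    opening = math.dist(upper, lower)  # vertical opening from the distribution
    if width == 0:
        return 0.0
    # The 2.0 scale (an assumption) maps a typical wide-open ratio of ~0.5 to 1.
    return max(0.0, min(1.0, 2.0 * opening / width))

# A half-open eye: width 40 px, opening 10 px -> coefficient 0.5
print(eye_expression_coefficient((0, 0), (40, 0), (20, -5), (20, 5)))
```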
Optionally, the changing module 33 is further adapted to:
judging, for each group of image part key points, whether the difference between the first part expression coefficient of the group of image part key points and the second part expression coefficient of the corresponding group of model part key points is greater than a preset threshold;
and if so, determining the corresponding group of model part key points as target part key points, and controlling the target part key points to displace.
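A minimal sketch of this judgment, assuming per-part thresholds so that some parts can be controlled more finely than others (all names are hypothetical):

```python
def select_target_part_keypoints(coefficient_pairs, thresholds):
    """Hypothetical gate implementing the judgment above.

    coefficient_pairs: {model_part: (first_coeff, second_coeff)}
    thresholds: per-part preset thresholds, e.g. a tight 0.05 for the eyes
                (fine control) and a looser 0.2 for the mouth (coarse control).
    """
    return [part for part, (c1, c2) in coefficient_pairs.items()
            if abs(c1 - c2) > thresholds[part]]

# Only the eye group exceeds its threshold here:
print(select_target_part_keypoints(
    {"eye": (0.9, 0.4), "mouth": (0.5, 0.45)},
    {"eye": 0.05, "mouth": 0.2}))
```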
Optionally, the changing module 33 is further adapted to:
determining a target expression coefficient of the target part key points according to the first part expression coefficient and a preset model linkage rule, and determining the displacement direction and/or displacement amplitude of the target part key points according to the target expression coefficient;
wherein the model linkage rule sets the correspondence between the first part expression coefficient of each group of image part key points and the target expression coefficient of the corresponding group of model part key points.
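One plausible (assumed, not prescribed) form of such a linkage rule is an identity mapping from the first part expression coefficient to the target expression coefficient, with the displacement amplitude proportional to the coefficient gap and the direction fixed per part:

```python
def apply_model_linkage_rule(first_coeff, current_coeff, base_positions,
                             direction=(0.0, -1.0), max_amplitude=10.0):
    """Hypothetical model linkage rule: the target expression coefficient
    simply tracks the first part expression coefficient, and the displacement
    amplitude grows with the coefficient gap; direction and max_amplitude are
    assumed per-part constants (here: straight up, at most 10 px)."""
    target_coeff = first_coeff                        # identity linkage
    amplitude = max_amplitude * (target_coeff - current_coeff)
    dx, dy = direction[0] * amplitude, direction[1] * amplitude
    return [(x + dx, y + dy) for (x, y) in base_positions]

# Raise two upper-eyelid key points by 5 px (coefficient gap of 0.5):
print(apply_model_linkage_rule(0.9, 0.4, [(20, 50), (30, 50)]))
```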
Optionally, the model linkage rule is further used to set, for each group of model part key points, the associated part points and the associated displacement rules of those associated part points;
the changing module is further adapted to: determine the associated part points of the target part key points and their associated displacement rules, and control the associated part points to displace according to the associated displacement rules.
Optionally, the model part key points comprise eye part key points and/or mouth part key points; and the associated part points include eyebrow part points, face part points, and/or ear part points.
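A sketch of how such associations might be tabulated; the part names and amplitude fractions are illustrative assumptions:

```python
# Hypothetical association table for the linkage rule: each driven group of
# model part key points drags its associated part points along, each under a
# simple associated displacement rule (a fraction of the main amplitude).
ASSOCIATED_PART_RULES = {
    "eye":   [("eyebrow", 0.5),   # eyebrow follows the eye at half amplitude
              ("ear",     0.2)],  # ear twitches slightly with the eye
    "mouth": [("face",    0.3)],  # cheek area follows the mouth
}

def associated_displacements(target_part, main_amplitude):
    return [(point, factor * main_amplitude)
            for point, factor in ASSOCIATED_PART_RULES.get(target_part, [])]

print(associated_displacements("eye", 5.0))  # [('eyebrow', 2.5), ('ear', 1.0)]
```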
Optionally, the determining module 31 is further adapted to:
acquire, in real time, image data corresponding to the current frame image contained in a live video stream, and determine first position information of each image key point contained in the image data corresponding to the current frame image.
Optionally, the determining module 31 is further adapted to:
sequentially acquire image data corresponding to each frame image contained in a recorded video stream, and determine first position information of each image key point contained in the image data corresponding to that frame image.
Optionally, the facial animation model is a skeletal animation model, and each model key point in the facial animation model corresponds to a preset bone part.
Optionally, the skeletal animation model includes an animal skeletal animation model, such as a cat skeletal animation model, a courser skeletal animation model, or a rabbit skeletal animation model.
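For illustration, such a correspondence might be tabulated as follows; the key point group and bone names are hypothetical:

```python
# Hypothetical correspondence for a skeletal (bone) animation model: each group
# of model key points is bound to a preset bone part, so displacing the key
# points drives the bound bone (a cat model is used as the example here).
CAT_KEYPOINT_TO_BONE = {
    "left_eye":  "left_eyelid_bone",
    "right_eye": "right_eyelid_bone",
    "mouth":     "jaw_bone",
    "ear":       "ear_bone",
}
```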
FIG. 4 is a schematic structural diagram of a computing device according to an embodiment of the present invention; the specific embodiments of the present invention do not limit the concrete implementation of the computing device.
As shown in FIG. 4, the computing device may include: a processor 402, a communication interface 404, a memory 406, and a communication bus 408.
Wherein:
The processor 402, the communication interface 404, and the memory 406 communicate with each other via the communication bus 408.
The communication interface 404 is used for communicating with network elements of other devices, such as clients or other servers.
The processor 402 is configured to execute the program 410, and may specifically execute relevant steps in the above-described control method embodiment of the facial animation model.
In particular, program 410 may include program code comprising computer operating instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The computing device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 406 is used for storing a program 410. The memory 406 may comprise high-speed RAM and may also include non-volatile memory, such as at least one disk storage.
The program 410 may specifically be configured to cause the processor 402 to perform the following operations:
determining first position information of each image key point contained in the acquired image data and second position information of each model key point contained in the facial animation model;
determining a corresponding relation between each image key point and each model key point according to a preset key point mapping rule, and comparing first position information of each image key point with second position information of each model key point according to the corresponding relation;
and controlling each model key point contained in the facial animation model to displace according to the comparison result so as to change the expression of the facial animation model.
In an optional manner, the image key points contained in the image data are further divided into a plurality of groups of image part key points according to facial parts, and the model key points contained in the facial animation model are further divided into a plurality of groups of model part key points according to model parts;
the program 410 may specifically be further configured to cause the processor 402 to perform the following operations:
for each group of image part key points, determining a corresponding group of model part key points, and determining a first part expression coefficient corresponding to the group of image part key points and a second part expression coefficient corresponding to the group of model part key points;
and comparing the first part expression coefficient of each group of image part key points with the second part expression coefficient of the corresponding group of model part key points.
In an alternative manner, the program 410 may be further specifically configured to cause the processor 402 to perform the following operations:
determining first distribution information of the group of image part key points according to the first position information of each image key point, and determining a first part expression coefficient corresponding to the first distribution information according to a preset first expression calculation rule;
and determining second distribution information of the group of model part key points according to the second position information of each model key point, and determining a second part expression coefficient corresponding to the second distribution information according to a preset second expression calculation rule.
In an alternative manner, the program 410 may be further specifically configured to cause the processor 402 to perform the following operations:
judging, for each group of image part key points, whether the difference between the first part expression coefficient of the group of image part key points and the second part expression coefficient of the corresponding group of model part key points is greater than a preset threshold;
and if so, determining the corresponding group of model part key points as target part key points, and controlling the target part key points to displace.
In an alternative manner, the program 410 may be further specifically configured to cause the processor 402 to perform the following operations:
determining a target expression coefficient of the target part key points according to the first part expression coefficient and a preset model linkage rule, and determining the displacement direction and/or displacement amplitude of the target part key points according to the target expression coefficient;
wherein the model linkage rule is used for setting the correspondence between the first part expression coefficient of each group of image part key points and the target expression coefficient of the corresponding group of model part key points.
In an optional manner, the model linkage rule is further used to set, for each group of model part key points, the associated part points and the associated displacement rules of those associated part points;
the program 410 may specifically be further configured to cause the processor 402 to perform the following operations:
determining the associated part points of the target part key points and their associated displacement rules, and controlling the associated part points to displace according to the associated displacement rules.
In an alternative manner, the program 410 may be further specifically configured to cause the processor 402 to perform the following operations: wherein the model part key points comprise eye part key points and/or mouth part key points; and the associated part points include an eyebrow part point, a face part point, and/or an ear part point.
In an alternative manner, the program 410 may be further specifically configured to cause the processor 402 to perform the following operations: acquiring, in real time, image data corresponding to the current frame image contained in a live video stream, and determining first position information of each image key point contained in the image data corresponding to the current frame image.
In an alternative manner, the program 410 may be further specifically configured to cause the processor 402 to perform the following operations: sequentially acquiring image data corresponding to each frame image contained in a recorded video stream, and determining first position information of each image key point contained in the image data corresponding to that frame image.
In an alternative manner, the program 410 may be further specifically configured to cause the processor 402 to perform the following operations: the facial animation model is a skeletal animation model, and each model key point in the facial animation model corresponds to a preset bone part.
In an alternative manner, the program 410 may be further specifically configured to cause the processor 402 to perform the following operations: the skeletal animation model includes an animal skeletal animation model, such as a cat skeletal animation model, a courser skeletal animation model, or a rabbit skeletal animation model.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the control apparatus of a facial animation model according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.

Claims (18)

1. A control method of a facial animation model, comprising:
determining first position information of each image key point contained in the acquired image data and second position information of each model key point contained in the facial animation model;
determining a corresponding relation between each image key point and each model key point according to a preset key point mapping rule, and comparing first position information of each image key point with second position information of each model key point according to the corresponding relation;
controlling each model key point contained in the facial animation model to displace according to the comparison result so as to change the expression of the facial animation model;
wherein the image key points contained in the image data are further divided into a plurality of groups of image part key points according to facial parts, and the model key points contained in the facial animation model are further divided into a plurality of groups of model part key points according to model parts;
the step of comparing the first position information of each image key point with the second position information of each model key point specifically comprises:
for each group of image part key points, determining a corresponding group of model part key points, and determining a first part expression coefficient corresponding to the group of image part key points and a second part expression coefficient corresponding to the group of model part key points;
wherein the step of determining the first part expression coefficient corresponding to the group of image part key points and the second part expression coefficient corresponding to the group of model part key points specifically comprises:
determining first distribution information of the group of image part key points according to the first position information of each image key point, and determining a first part expression coefficient corresponding to the first distribution information according to a preset first expression calculation rule; and determining second distribution information of the group of model part key points according to the second position information of each model key point, and determining a second part expression coefficient corresponding to the second distribution information according to a preset second expression calculation rule; wherein the first expression calculation rule and the second expression calculation rule calculate the first and second part expression coefficients according to the mutual position and distance relationships of the key points in the first distribution information and the second distribution information, and the first and second part expression coefficients each take a value between 0 and 1;
comparing the first part expression coefficient of each group of image part key points with the second part expression coefficient of the corresponding group of model part key points;
the step of controlling the displacement of each model key point included in the facial animation model according to the comparison result specifically comprises:
judging, for each group of image part key points, whether the difference between the first part expression coefficient of the group of image part key points and the second part expression coefficient of the corresponding group of model part key points is greater than a preset threshold, wherein the preset thresholds are a plurality of thresholds respectively corresponding to different groups of model part key points;
and if so, determining the corresponding group of model part key points as target part key points, and controlling the target part key points to displace.
2. The method according to claim 1, wherein the step of controlling the target part key points to displace specifically comprises:
determining a target expression coefficient of the target part key points according to the first part expression coefficient and a preset model linkage rule, and determining the displacement direction and/or displacement amplitude of the target part key points according to the target expression coefficient;
wherein the model linkage rule is used for setting the correspondence between the first part expression coefficient of each group of image part key points and the target expression coefficient of the corresponding group of model part key points.
3. The method of claim 2, wherein the model linkage rule is further used to set, for each group of model part key points, the associated part points and the associated displacement rules of those associated part points;
the step of controlling the target part key points to displace further comprises: determining the associated part points of the target part key points and their associated displacement rules, and controlling the associated part points to displace according to the associated displacement rules.
4. The method of claim 3, wherein the model part key points comprise eye part key points and/or mouth part key points; and the associated part points include an eyebrow part point, a face part point, and/or an ear part point.
5. The method according to any one of claims 1 to 4, wherein the step of determining the first position information of each image key point included in the acquired image data specifically includes:
the method comprises the steps of acquiring image data corresponding to a current frame image contained in a live video stream in real time, and determining first position information of each image key point contained in the image data corresponding to the current frame image.
6. The method according to any one of claims 1 to 4, wherein the step of determining the first position information of each image key point included in the acquired image data specifically includes:
the method comprises the steps of sequentially obtaining image data corresponding to each frame of image contained in a recorded video stream, and determining first position information of each image key point contained in the image data corresponding to the frame of image.
7. The method according to any one of claims 1-6, wherein the facial animation model is a skeletal animation model, and each model key point in the facial animation model corresponds to a preset skeletal part.
8. The method of claim 7, wherein the skeletal animation model comprises an animal skeletal animation model, and the animal skeletal animation model comprises: a cat skeletal animation model, a courser skeletal animation model, and a rabbit skeletal animation model.
9. A control apparatus of a facial animation model, comprising:
a determining module, adapted to determine first position information of each image key point contained in the acquired image data and second position information of each model key point contained in the facial animation model;
a comparison module, adapted to determine the correspondence between each image key point and each model key point according to a preset key point mapping rule, and to compare the first position information of each image key point with the second position information of each model key point according to the correspondence;
a changing module, adapted to control displacement of each model key point contained in the facial animation model according to the comparison result, so as to change the expression of the facial animation model;
wherein the image key points contained in the image data are further divided into a plurality of groups of image part key points according to facial parts, and the model key points contained in the facial animation model are further divided into a plurality of groups of model part key points according to model parts;
the comparison module is specifically adapted to:
for each group of image part key points, determining a corresponding group of model part key points, and determining a first part expression coefficient corresponding to the group of image part key points and a second part expression coefficient corresponding to the group of model part key points;
wherein determining the first part expression coefficient corresponding to the group of image part key points and the second part expression coefficient corresponding to the group of model part key points specifically includes:
determining first distribution information of the group of image part key points according to the first position information of each image key point, and determining a first part expression coefficient corresponding to the first distribution information according to a preset first expression calculation rule; and determining second distribution information of the group of model part key points according to the second position information of each model key point, and determining a second part expression coefficient corresponding to the second distribution information according to a preset second expression calculation rule; wherein the first expression calculation rule and the second expression calculation rule calculate the first and second part expression coefficients according to the mutual position and distance relationships of the key points in the first distribution information and the second distribution information, and the first and second part expression coefficients each take a value between 0 and 1;
comparing the first part expression coefficient of each group of image part key points with the second part expression coefficient of the corresponding group of model part key points;
the changing module is specifically adapted to:
judging, for each group of image part key points, whether the difference between the first part expression coefficient of the group of image part key points and the second part expression coefficient of the corresponding group of model part key points is greater than a preset threshold, wherein the preset thresholds are a plurality of thresholds respectively corresponding to different groups of model part key points;
and if so, determining the corresponding group of model part key points as target part key points, and controlling the target part key points to displace.
10. The apparatus of claim 9, wherein the changing module is specifically adapted to:
determining a target expression coefficient of the target part key points according to the first part expression coefficient and a preset model linkage rule, and determining the displacement direction and/or displacement amplitude of the target part key points according to the target expression coefficient;
wherein the model linkage rule is used for setting the correspondence between the first part expression coefficient of each group of image part key points and the target expression coefficient of the corresponding group of model part key points.
11. The apparatus of claim 10, wherein the model linkage rule is further used to set, for each group of model part key points, the associated part points and the associated displacement rules of those associated part points;
the changing module is further adapted to: determine the associated part points of the target part key points and their associated displacement rules, and control the associated part points to displace according to the associated displacement rules.
12. The apparatus of claim 11, wherein the model part key points comprise eye part key points and/or mouth part key points; and the associated part points include an eyebrow part point, a face part point, and/or an ear part point.
13. The apparatus according to any of claims 9-12, wherein the determining means is specifically adapted to:
the method comprises the steps of acquiring image data corresponding to a current frame image contained in a live video stream in real time, and determining first position information of each image key point contained in the image data corresponding to the current frame image.
14. The apparatus according to any of claims 9-12, wherein the determining means is specifically adapted to:
the method comprises the steps of sequentially obtaining image data corresponding to each frame of image contained in a recorded video stream, and determining first position information of each image key point contained in the image data corresponding to the frame of image.
15. The apparatus according to any one of claims 9-14, wherein the facial animation model is a skeletal animation model, and each model key point in the facial animation model corresponds to a preset skeletal part.
16. The apparatus of claim 15, wherein the skeletal animation model comprises an animal skeletal animation model, and the animal skeletal animation model comprises: a cat skeletal animation model, a courser skeletal animation model, and a rabbit skeletal animation model.
17. A computing device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the control method of the facial animation model as claimed in any one of claims 1-8.
18. A computer storage medium having stored therein at least one executable instruction that causes a processor to perform operations corresponding to the method of controlling a facial animation model as claimed in any one of claims 1 to 8.
CN201810145903.7A 2018-02-12 2018-02-12 Control method and device of facial animation model and computing equipment Active CN108335345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810145903.7A CN108335345B (en) 2018-02-12 2018-02-12 Control method and device of facial animation model and computing equipment


Publications (2)

Publication Number Publication Date
CN108335345A CN108335345A (en) 2018-07-27
CN108335345B true CN108335345B (en) 2021-08-24

Family

ID=62929265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810145903.7A Active CN108335345B (en) 2018-02-12 2018-02-12 Control method and device of facial animation model and computing equipment

Country Status (1)

Country Link
CN (1) CN108335345B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117770A (en) * 2018-08-01 2019-01-01 吉林盘古网络科技股份有限公司 FA Facial Animation acquisition method, device and terminal device
CN109147017A (en) * 2018-08-28 2019-01-04 百度在线网络技术(北京)有限公司 Dynamic image generation method, device, equipment and storage medium
CN109191548A (en) * 2018-08-28 2019-01-11 百度在线网络技术(北京)有限公司 Animation method, device, equipment and storage medium
CN109147012B (en) * 2018-09-20 2023-04-14 麒麟合盛网络技术股份有限公司 Image processing method and device
CN111507143B (en) * 2019-01-31 2023-06-02 北京字节跳动网络技术有限公司 Expression image effect generation method and device and electronic equipment
CN110321008B (en) * 2019-06-28 2023-10-24 北京百度网讯科技有限公司 Interaction method, device, equipment and storage medium based on AR model
CN110728621B (en) * 2019-10-17 2023-08-25 北京达佳互联信息技术有限公司 Face changing method and device of face image, electronic equipment and storage medium
US20220375258A1 (en) * 2019-10-29 2022-11-24 Guangzhou Huya Technology Co., Ltd Image processing method and apparatus, device and storage medium
CN110941332A (en) * 2019-11-06 2020-03-31 北京百度网讯科技有限公司 Expression driving method and device, electronic equipment and storage medium
CN110941333A (en) * 2019-11-12 2020-03-31 北京字节跳动网络技术有限公司 Interaction method, device, medium and electronic equipment based on eye movement
CN111553286B (en) * 2020-04-29 2024-01-26 北京攸乐科技有限公司 Method and electronic device for capturing ear animation features
CN111614925B (en) * 2020-05-20 2022-04-26 广州视源电子科技股份有限公司 Figure image processing method and device, corresponding terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104205171A (en) * 2012-04-09 2014-12-10 英特尔公司 System and method for avatar generation, rendering and animation
CN105900144A (en) * 2013-06-07 2016-08-24 费斯史福特股份公司 Online modeling for real-time facial animation
CN107154069A (en) * 2017-05-11 2017-09-12 上海微漫网络科技有限公司 A kind of data processing method and system based on virtual role
CN107291214A (en) * 2016-04-01 2017-10-24 掌赢信息科技(上海)有限公司 A kind of method for driving face to move and electronic equipment
CN107679519A (en) * 2017-10-27 2018-02-09 北京光年无限科技有限公司 A kind of multi-modal interaction processing method and system based on visual human

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9552510B2 (en) * 2015-03-18 2017-01-24 Adobe Systems Incorporated Facial expression capture for character animation




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant