CN111651033B - Face driving display method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111651033B
CN111651033B CN201910562988.3A
Authority
CN
China
Prior art keywords
face
image data
face key
vertex
key points
Prior art date
Legal status
Active
Application number
CN201910562988.3A
Other languages
Chinese (zh)
Other versions
CN111651033A
Inventor
王云刚
华路延
Current Assignee
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd
Priority to CN201910562988.3A
Publication of CN111651033A
Application granted
Publication of CN111651033B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention discloses a face driving display method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring first image data and video data, wherein the video data has multiple frames of second image data; dividing the first image data into a plurality of first grids; dividing each frame of second image data into a plurality of second grids; sequentially adjusting the first face key points according to the second face key points, so as to adjust the first grids where the first face key points are located; and sequentially drawing the adjusted first grids to drive the display of the first image data. Compared with deep learning such as neural networks, the drawing and adjustment of the first grids are simpler, which can improve processing speed and reduce processing time, making the method suitable for scenes with high real-time requirements such as live broadcasting.

Description

Face driving display method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to an image processing technology, in particular to a method and a device for driving and displaying a human face, electronic equipment and a storage medium.
Background
With the development of society, electronic devices such as mobile phones and tablet computers have been widely used in learning, entertainment, work and other fields, and play an increasingly important role.
Cameras are configured in many electronic devices, and can be used for photographing, video recording, live broadcasting and other operations.
In applications such as AR (Augmented Reality) and expression creation, the face state of the current user is recognized using deep learning such as a neural network, so that another face is driven to express that face state.
However, deep learning has high complexity, low processing speed, and long processing time, and its performance becomes a bottleneck in scenes with high real-time requirements such as live broadcasting.
Disclosure of Invention
The embodiment of the invention provides a method, a device, electronic equipment and a storage medium for driving and displaying a human face, which are used for solving the problems of low processing speed and long processing time of driving the human face display by using deep learning.
In a first aspect, an embodiment of the present invention provides a method for driving and displaying a face, including:
acquiring first image data and video data, wherein the video data is provided with a plurality of frames of second image data, the first image data is provided with first face data, and the second image data is provided with second face data;
Dividing the first image data into a plurality of first grids, wherein first vertexes of the first grids at least comprise first face key points of the first face data;
dividing the second image data of each frame into a plurality of second grids, wherein second vertexes of the second grids at least comprise second face key points of the second face data;
sequentially adjusting the first face key points according to the second face key points to adjust a first grid where the first face key points are located;
and drawing the adjusted first grid in sequence to drive and display the first image data.
In a second aspect, an embodiment of the present invention further provides a face driving display device, including:
the data acquisition module is used for acquiring first image data and video data, wherein the video data is provided with a plurality of frames of second image data, the first image data is provided with first face data, and the second image data is provided with second face data;
the first grid division module is used for dividing the first image data into a plurality of first grids, and the first vertexes of the first grids at least comprise first face key points of the first face data;
The second grid division module is used for dividing the second image data of each frame into a plurality of second grids, and second vertexes of the second grids at least comprise second face key points of the second face data;
the face key point adjusting module is used for adjusting the first face key points according to the second face key points in sequence so as to adjust first grids where the first face key points are located;
and the grid drawing module is used for drawing the adjusted first grids in sequence so as to drive the display of the first image data.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for driving and displaying a face as described in the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the method for driving and displaying a face according to the first aspect.
In the embodiment of the invention, the first image data is divided into a plurality of first grids, and each frame of second image data in the video data is divided into a plurality of second grids; the first face key points are sequentially adjusted according to the second face key points so as to adjust the first grids where the first face key points are located, and the adjusted first grids are sequentially drawn to drive the display of the first image data. When a first face key point is adjusted, its first grid is adjusted along with it, and the pixel points inside the first grid are adjusted uniformly, so the adjustment of the face data is smoother, deformation is reduced, and face distortion is avoided. In this case, the first grids can be used for face adjustment and rendering at the same time, reducing the amount of computation. In addition, compared with deep learning such as neural networks, the drawing of the first grids and their adjustment are simpler, which can improve the processing speed and reduce the processing time, making the method suitable for scenes with high real-time requirements such as live broadcasting.
Drawings
Fig. 1 is a flowchart of a face driving display method according to a first embodiment of the present invention;
fig. 2A to fig. 2C are exemplary diagrams of a face key point according to a first embodiment of the present invention;
FIGS. 3A-3C are exemplary diagrams of a mesh sequence provided in accordance with a first embodiment of the present invention;
fig. 4 is an exemplary diagram of neighboring face key points according to a first embodiment of the present invention;
FIG. 5 is an exemplary diagram of a Voronoi diagram provided in accordance with a first embodiment of the present invention;
FIG. 6 is an exemplary diagram of a grid provided in accordance with one embodiment of the present invention;
fig. 7 is a schematic structural diagram of a face driving display device according to a second embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flowchart of a face driving display method according to a first embodiment of the present invention. The embodiment is applicable to constructing grids based on face key points and driving a face through those grids. The method may be executed by a face driving display device, which may be implemented by software and/or hardware and configured in an electronic device. The electronic device may be a mobile terminal such as a mobile phone, a tablet, a PDA (Personal Digital Assistant), or a smart wearable device (such as smart glasses or a smart watch), or a non-mobile terminal such as a smart television or a personal computer. The electronic device may include processors such as a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit), and may be configured with an API (Application Programming Interface) or rendering engine for rendering 2D vector graphics, such as OpenGL (Open Graphics Library), OpenGL ES (OpenGL for Embedded Systems), Metal, Vulkan, Unity3D, or UE4. The method specifically includes the following steps:
S101, acquiring first image data and video data.
In particular implementations, the operating system of the electronic device may include Android, iOS, Windows, and the like.
In one aspect, applications capable of image processing, such as live applications, image editing applications, camera applications, instant messaging tools, gallery applications, and the like, are supported in these operating systems.
For applications such as image editing applications, instant messaging tools, and gallery applications, the UI (User Interface) may provide an import control. The user may operate the import control through a peripheral such as a touch screen or mouse to select locally stored image data (represented by a thumbnail or path) or network-stored image data (represented by a URL (Uniform Resource Locator)), so that the application obtains the image data as the first image data.
For applications such as live broadcast applications, image editing applications, camera applications, and instant messaging tools, the UI may provide controls for photographing and video recording. The user may operate these controls through a peripheral such as a touch screen or mouse to instruct the application to call the camera to collect image data as the first image data.
On the other hand, the application can call a camera of the electronic device to collect video data, wherein the video data comprises a plurality of frames of second image data, and part or all of the second image data comprises pixels used for representing a user.
S102, dividing the first image data into a plurality of first grids.
The first image data has first face data, which may refer to pixels in the first image data for representing a face.
In the embodiment of the invention, the face detection is performed on the first image data, and the first face key points contained in the first face data are identified.
Face detection, also called face key point detection, localization, or face alignment, refers to locating the key regions of a given face, including the eyebrows, eyes, nose, mouth, face contour, and the like.
Face detection generally uses the following method:
1. Manually extracting features, such as Haar features, training a classifier with the features, and performing face detection with the classifier.
2. Adapting a generic object detection algorithm to face detection, for example, using Faster R-CNN to detect faces.
3. Using convolutional neural networks with a cascade structure, for example, Cascade CNN (cascade convolutional neural network) or MTCNN (Multi-task Cascaded Convolutional Networks).
In a specific implementation, the methods for face detection may be integrated in a module of the application, so that the application can directly call the module to detect the first face key points in the image data. They may also be integrated in an SDK (Software Development Kit) serving as assembly data of the application: the application requests the SDK to perform face detection on the first image data, and the SDK detects the first face key points and returns them to the application.
It should be noted that the number of first face key points can be set by those skilled in the art according to the actual situation. For static image processing, the real-time requirement is low, so denser first face key points, such as 1000, can be detected; besides locating the important feature points of the face, these can accurately describe the contours of the five sense organs. For live broadcasting and the like, the real-time requirement is high, so sparse first face key points, such as 68, 81, or 106, can be detected to locate the obvious and important feature points on the face (such as eye key points, eyebrow key points, nose key points, mouth key points, and contour key points), thereby reducing the processing amount and shortening the processing time. The embodiment of the present invention is not limited in this respect.
In order to enable those skilled in the art to better understand the embodiments of the present invention, in the embodiments of the present invention, a sparse first face key point is taken as an example to describe the embodiments of the present invention.
For example, face detection is performed on the first image data shown in fig. 2A, and 68 first face key points as shown in fig. 2B may be output.
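For illustration, the following is a minimal sketch of detecting 68 first face key points. The embodiment does not prescribe a specific detector; dlib's pretrained 68-point shape predictor (and its model path) is an assumption here, standing in for any module or SDK described above.

```python
# Minimal sketch: detect 68 face key points per face, assuming dlib's
# pretrained 68-point shape predictor. The detector choice and model
# path are assumptions; any equivalent module or SDK works the same way.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_face_keypoints(image_rgb: np.ndarray) -> np.ndarray:
    """Return an (n_faces, 68, 2) array of (x, y) key point coordinates."""
    faces = detector(image_rgb, 1)  # upsample once to find smaller faces
    points = [
        [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        for shape in (predictor(image_rgb, face) for face in faces)
    ]
    return np.asarray(points, dtype=np.float32)
```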
A first Mesh represents a single entity capable of being drawn, and its first vertices at least comprise first face key points; that is, the first face key points serve as at least part of the first vertices of the first grids, so that the first image data is meshed and divided into a plurality of (two or more) first grids.
Gridding is the division of concave polygons or polygons intersected by edges into convex polygons, such as triangles, for rendering by an API or rendering engine such as OpenGL.
It should be noted that, the first grids are orderly arranged to form a grid sequence, so as to conform to the rendering specifications of the API or the rendering engine such as OpenGL.
For example, for OpenGL, there are typically three modes for drawing a series of triangles (grids):
1. GL_TRIANGLES
Every three points form a triangle, and the triangles are independent of each other.
As shown in FIG. 3A, the first triangle uses vertices v0, v1, v2; the second triangle uses vertices v3, v4, v5; and so on.
2. GL_TRIANGLE_STRIP
Starting from the third point, each point, combined with the previous two points, draws a triangle, forming a strip of linearly continuous triangles:
As shown in FIG. 3B, the vertices of the first triangle are arranged in the order v0, v1, v2; the second triangle, v2, v1, v3; the third triangle, v2, v3, v4; and the fourth triangle, v4, v3, v5.
This order is to ensure that the triangles are drawn in the same direction so that the sequence of triangles correctly forms part of the surface.
3. GL_TRIANGLE_FAN
Starting from the third point, each point, combined with the previous point and the first point, draws a triangle, forming a fan of continuous triangles.
As shown in FIG. 3C, the vertex order of the first triangle is v2, v1, v0; the second triangle, v3, v2, v0; and the third triangle, v4, v3, v0.
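As a plain illustration of these three modes, the following sketch enumerates which vertex-index triples form triangles under each one, mirroring the orders described for figs. 3A to 3C (the alternating winding in the strip case keeps all triangles facing the same direction):

```python
# How OpenGL's three triangle modes group a vertex sequence into
# triangles; a plain enumeration of the index patterns described above.
def triangles(mode: str, n: int):
    """Yield (i, j, k) vertex-index triples for n vertices under the given mode."""
    if mode == "GL_TRIANGLES":         # independent groups of three
        for i in range(0, n - 2, 3):
            yield (i, i + 1, i + 2)
    elif mode == "GL_TRIANGLE_STRIP":  # each new vertex + the previous two,
        for i in range(n - 2):         # winding flipped on odd triangles
            yield (i, i + 1, i + 2) if i % 2 == 0 else (i + 1, i, i + 2)
    elif mode == "GL_TRIANGLE_FAN":    # each new vertex + previous + first
        for i in range(1, n - 1):
            yield (i + 1, i, 0)

print(list(triangles("GL_TRIANGLE_STRIP", 6)))
# [(0, 1, 2), (2, 1, 3), (2, 3, 4), (4, 3, 5)]
```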
In one embodiment of the present invention, S102 may include the steps of:
s1021, determining a first face key point with adjacent positions as a first target key point.
And S1022, taking the first target key point as a first vertex in the first image data, and connecting the first target key point to obtain a first grid.
In the embodiment of the invention, for two adjacent first face key points, the two adjacent first face key points can be used as a pair of first target key points, and the first target key points are sequentially used as the first vertexes of the first grids, and the first grids can be generated by connecting the pair of first target key points.
In one approach to detecting positional adjacency, the first image data containing the first face key points may be converted into a first Voronoi diagram by the half-plane intersection method (Intersect of Halfplanes), the incremental algorithm, the divide-and-conquer method, the plane sweep algorithm, or the like.
The Voronoi diagram, also known as a Thiessen polygon or Dirichlet tessellation, consists of a set of continuous polygons (also known as cells) formed by the perpendicular bisectors of the line segments connecting adjacent points.
In the first Voronoi diagram, the Euclidean distance between any two first face key points p and q is denoted dist(p, q).
Let P = {p1, p2, …, pn} be any n mutually distinct first face key points on the plane, called base points. The Voronoi diagram corresponding to P is a subdivision of the plane into n units with the following property:
any point q lies in the unit corresponding to the first face key point pi if and only if dist(q, pi) < dist(q, pj) holds for every pj ∈ P with j ≠ i. The Voronoi diagram corresponding to P is then denoted Vor(P).
Here "Vor(P)" (the Voronoi diagram) refers to the edges and vertices that make up the subdivision. In Vor(P), the unit corresponding to base point pi is denoted V(pi) and is called the Voronoi cell corresponding to pi.
In the embodiment of the invention, the first Voronoi diagram comprises a plurality of first units, each first unit contains one first face key point, and the first units have a plurality of first edges. The first face key points located on the two sides of the same first edge can then be determined to be adjacent, and connecting the first face key points on the two sides of the same edge generates a first grid.
For example, referring to fig. 4, the first image data is converted into a first Voronoi diagram; fig. 4 shows some of the first face key points (black dots) and some of the edges of the first units (solid lines).
In the first unit 400, the first face key point 411 on one side of the first edge 401 is adjacent to the first face key point 412 on the other side, so the two may be connected; similarly, the first face key point 411 is connected with the first face key point 413, and the first face key point 413 with the first face key point 412, generating the first grid (dotted edges) 420.
Further, the first image data shown in fig. 2B, which contains 68 first face key points, may be converted into a first Voronoi diagram as shown in fig. 5, so as to determine mutually adjacent first face key points and connect them in a predetermined order, obtaining first grids as shown in fig. 6.
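The adjacency rule above can be sketched with an off-the-shelf computational-geometry library; the use of scipy here is an assumption for illustration. Connecting every pair of key points whose Voronoi units share an edge is exactly the Delaunay triangulation (the dual of the Voronoi diagram), so the triangulation can also be obtained directly:

```python
# Sketch of the adjacency rule: key points on the two sides of the same
# Voronoi edge are adjacent; connecting all such pairs yields the
# Delaunay triangulation, which gives the first grids directly.
import numpy as np
from scipy.spatial import Voronoi, Delaunay

keypoints = np.random.rand(68, 2) * 256      # stand-in for detected key points

vor = Voronoi(keypoints)
adjacent_pairs = vor.ridge_points            # (m, 2): indices of the key points
                                             # on either side of each edge

first_grids = Delaunay(keypoints).simplices  # (k, 3): triangle vertex indices
```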
In another embodiment of the present invention, S102 may include the steps of:
s1023, determining a point on the edge of the first image data as a first edge point.
S1024, determining a first face key point adjacent to the first edge point as a third target key point.
S1025, in the first image data, the first edge point and the third target key point are used as vertexes, and the edge point and the third target key point are connected to obtain a first grid.
The first grid formed by the key points of the first face generally cannot completely cover the first image data, and at this time, some points on the edge of the first image data may be selected as first edge points.
It should be noted that, for convenience of operation, the first edge points are selected symmetrically.
For example, as shown in fig. 6, four vertices of the first image data and a midpoint between every two vertices are selected as the first edge points.
The first mesh may be generated by sequentially connecting the first edge point and its neighboring first face key points as a pair of third target key points, and sequentially using the third target key points as the first vertices of the first mesh.
It should be noted that, the adjacent relationship between the first edge point and the first face key point may be set by those skilled in the art according to the actual situation, which is not limited in the embodiment of the present invention.
In general, the eyebrow key points and the contour key points in the first face key points are adjacent to the first edge points, and the number of the first edge points is smaller than that of the first face key points adjacent to the first edge points.
For example, the first image data shown in fig. 2B includes 68 first face key points and 8 first edge points, and 25 first face key points are adjacent to 8 first edge points, and the first face key points and the first edge points adjacent to each other are connected in a predetermined order, so that a first grid as shown in fig. 6 can be obtained.
In other words, the first vertices of a first grid may include both first face key points and first edge points: the points adjacent to each first face key point and each first edge point (which may themselves be first face key points or first edge points) are determined, and the adjacent points are connected in a predetermined order, thereby generating the first grids; a sketch follows.
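For illustration, the following sketch (again assuming scipy) appends the four image corners and the four edge midpoints as first edge points before triangulating, so the resulting first grids cover the whole of the first image data, as in fig. 6:

```python
# Sketch: add the 4 corners and 4 edge midpoints as first edge points,
# then triangulate key points and edge points together, so the first
# grids tile the entire image rather than only the face region.
import numpy as np
from scipy.spatial import Delaunay

def build_first_grids(face_keypoints: np.ndarray, width: int, height: int):
    w, h = float(width - 1), float(height - 1)
    edge_points = np.array([
        [0, 0], [w / 2, 0], [w, 0], [w, h / 2],
        [w, h], [w / 2, h], [0, h], [0, h / 2],
    ], dtype=np.float32)
    vertices = np.vstack([face_keypoints.astype(np.float32), edge_points])
    grids = Delaunay(vertices).simplices      # triangle index triples
    return vertices, grids
```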
In yet another embodiment of the present invention, the first vertices of the first grids include the first face key points and first edge points located on the edge of the first image data. The first edge points and the first face key points each have a first number: the first numbers of the first edge points are preset, while the first numbers of the first face key points are generated during face detection or obtained from a number map generated during face detection. The first numbers of the first edge points and those of the first face key points do not repeat.
For example, among the 68 first face key points shown in fig. 2C, the numbering is as follows:
17 contour key points, with first numbers 1 to 17;
10 eyebrow key points, with first numbers 18 to 27;
9 nose key points, with first numbers 28 to 36;
12 eye key points, with first numbers 37 to 48;
20 mouth key points, with first numbers 49 to 68;
and 8 first edge points, with first numbers 69 to 76.
At this time, S102 may include the steps of:
s1026, inquiring the preset grid variable with sequence.
Wherein vertices in each mesh variable are marked with a third number.
S1027, if the first number is the same as the third number, a first edge point or a first face key point to which the first number belongs is used as a first vertex in the first image data, and the first edge point or the first face key point to which the first number belongs is connected to obtain a first grid.
Because the face detection method is preset, the first face key points it outputs are generally fixed, and the first edge points are also generally fixed. Therefore, the points adjacent to each first face key point and each first edge point (which may themselves be first face key points or first edge points) are all fixed; that is, the order of the first vertices in each first grid is fixed.
Therefore, when the first image data of a certain frame is divided into a plurality of first grids in a certain manner for the first time (offline), the number of each first vertex (which may be a first face key point or a first edge point) in each first grid is recorded in order as the third number; each first grid thus retains the third numbers of its first vertices as a grid variable.
For other (real-time) first image data, if the first image data is divided into first grids in the same manner, the first numbers of points (which may be first face key points or first edge points) thereof may be matched with the third numbers of grid variables, and if the first numbers are the same, the points (which may be first face key points or first edge points) indicated by the first numbers may be connected in the order defined in the grid variables, so that the first image data is divided into a plurality of first grids.
For example, as shown in fig. 2C, the grid variables may be represented as (1, 76, 2), (2, 76, 3), (3, 76, 4), (4, 76, 5), and so on.
For the first image data shown in fig. 2B, the first face key points with first numbers 1 and 2 and the first edge point with first number 76 match the third numbers in one of the grid variables, so they can be connected in order to form a first grid (1, 76, 2).
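The reuse of grid variables can be sketched as follows (scipy again assumed for the one-time offline division; indices here are 0-based, whereas the numbering in the text is 1-based):

```python
# Sketch of grid variables: record the triangle index triples once from a
# reference frame (offline), then reuse them unchanged for every new
# frame, since the detector always numbers the points in the same order.
import numpy as np
from scipy.spatial import Delaunay

# Offline: divide one reference frame; keep only the index triples.
reference_vertices = np.random.rand(76, 2)   # 68 key points + 8 edge points
grid_variables = Delaunay(reference_vertices).simplices  # (k, 3) index triples

# Online: the same triples index each new frame's coordinates directly.
def grids_for_frame(frame_vertices: np.ndarray) -> np.ndarray:
    """Return (k, 3, 2) triangle coordinates using the precomputed topology."""
    return frame_vertices[grid_variables]
```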
S103, dividing the second image data of each frame into a plurality of second grids.
The second image data has second face data, and the second face data may refer to pixels in the second image data for representing a face.
In the embodiment of the invention, face detection is performed on the second image data, and the second face key points contained in the second face data are identified.
It should be noted that the manner of performing face detection on the second image data is consistent with that on the first image data, so as to ensure that the second face key points are consistent with the first face key points.
And a second Mesh (Mesh) representing a single entity capable of being drawn, wherein the second vertex at least comprises a second face key point, namely, the second face key point is used as at least part of the second vertex of the second Mesh, and the second image data is meshed and divided into a plurality of (two or more) second meshes.
In one embodiment of the present invention, S103 includes:
determining second face key points adjacent to each other in position as second target key points;
and in the second image data, the second target key point is used as a second vertex, and the second target key point is connected to obtain a second grid.
Further, the determining the second face key point with adjacent positions as the second target key point includes:
converting the second image data into a second Voronoi diagram, wherein the second Voronoi diagram comprises a plurality of second units, each second unit comprises a second face key point, and the second units have a plurality of second edges;
And determining that the second face key points positioned on two sides of the same second edge are adjacent.
In another embodiment of the present invention, S103 includes:
determining a point located on an edge of the second image data as a second edge point;
determining a second face key point adjacent to the second edge point position as a fourth target key point;
and in the second image data, the second edge point and the fourth target key point are used as second vertexes, and the second edge point and the fourth target key point are connected to obtain a second grid.
In yet another embodiment of the present invention, the second vertex of the second mesh further includes a second edge point located on the second image data edge, the second edge point having a second number with the second face key point; s103 includes:
inquiring preset grid variables with sequences, wherein the vertex in each grid variable is marked with a third number;
and if the second number is the same as the third number, a second edge point or a second face key point to which the second number belongs is used as a second vertex in the second image data, and the second edge point or the second face key point to which the second number belongs is connected to obtain a second grid.
It should be noted that, since the dividing manner of the second grid is consistent with the dividing manner of the first grid, the description of the dividing manner of the second grid is relatively simple, and the relevant points only need to be referred to in the part of the description of the dividing manner of the first grid, which is not described in detail herein.
And S104, adjusting the first face key points according to the second face key points in sequence so as to adjust the first grids where the first face key points are located.
In general, the user's face changes constantly, so the second face key points of the second face data in each frame of second image data also change.
The first face key points are adjusted sequentially so that they align in turn with the second face key points in each frame of second image data. Since the first face key points remain the first vertices of the first grids while being adjusted, the first grids are adjusted together with them, and the face data in the first grids changes along with the second face data, thereby achieving the effect of the second face data driving the first face data.
In one embodiment of the present invention, S104 includes:
S1041, determining first vertex coordinates of the first face key points.
S1042, determining a second vertex coordinate of the second face key point in the second image data of each frame.
S1043, referring to the offset of the second vertex coordinates of the second face key points between every two adjacent frames of the second image data, adjusting the first vertex coordinates of the first face key points.
In a specific implementation, the difference between two adjacent frames of second image data in the video data is small. For example, if one second of video data includes 60 frames of second image data, the interval between every two frames is 16.67 ms, and the change of the user within 16.67 ms is small. Face driving can therefore be performed through frame synchronization: for a second face key point and a first face key point with the same number (that is, the first number of the first face key point is the same as the second number of the second face key point), the first vertex coordinates of the first face key point can be adjusted by referring to the offset of the second vertex coordinates of the second face key point between every two adjacent frames of second image data.
Further, S1043 includes:
S10431, determining a first offset distance of the second vertex coordinates of the second face key points between every two adjacent frames of second image data.
For second face key points having the same second number, the offset between the second vertex coordinates in the second image data of the current frame and the second vertex coordinates in the second image data of the previous frame may be calculated in sequence as the first offset distance.
S10432, mapping the first offset distance into the first image data to obtain a second offset distance.
If the first face data is the same size as the second face data, the first offset distance may be directly applied in the first image data as the second offset distance.
If the sizes of the first face data and the second face data are different, the size ratio between the first face data and the second face data is calculated.
In one example, the first face keypoints comprise first eye keypoints and the second face keypoints comprise second eye keypoints.
In this example, the distance between the first eye key points (the face key points numbered 40 and 43 as shown in fig. 2C) may be determined as the first eye distance.
Correspondingly, the distance between the second eye key points is determined as the second eye distance.
And calculating the distance ratio between the second eye distance and the first eye distance as the size ratio between the first face data and the second face data.
In this example, since there is little muscle between the eyes, the inter-eye distance is relatively stable, so calculating the size ratio between the first face data and the second face data from the inter-eye distance has high accuracy.
Of course, other manners may be used to calculate the size ratio between the first face data and the second face data besides the interocular distance, for example, calculate the size ratio between the first face data and the second face data with the distance between the eyebrow and the tip of the nose, and so on, which is not limited by the embodiment of the present invention.
And calculating the product of the first offset distance and the size proportion as a second offset distance of the first face key point.
S10433, adding the second offset distance to the first vertex coordinates of the first face key points, so as to update the first vertex coordinates of the first face key points.
The offsets of the second face key points in each frame of second image data are added to the corresponding first face key points in the first image data, so that the first face data to be driven moves along with the real face (the second face data): the nose, eyes, mouth, eyebrows, and so on synchronize with the movements of the real face.
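A sketch of this adjustment is given below. The eye key point indices follow fig. 2C (numbers 40 and 43, so 0-based indices 39 and 42), and the direction of the size ratio here is an assumption: it rescales offsets measured on the driving (second) face into the driven (first) face's coordinate space.

```python
# Sketch of S1043: offsets of the driving face's key points between two
# adjacent frames are rescaled by an eye-distance ratio and added to the
# driven face's key points. The ratio direction is an assumption: it maps
# movement from the driving face's scale onto the driven face's scale.
import numpy as np

LEFT_EYE, RIGHT_EYE = 39, 42   # 0-based indices of points 40 and 43 (fig. 2C)

def eye_distance(keypoints: np.ndarray) -> float:
    return float(np.linalg.norm(keypoints[LEFT_EYE] - keypoints[RIGHT_EYE]))

def drive_keypoints(first_kps, second_kps_prev, second_kps_cur):
    # First offset distance: per-point movement between adjacent frames.
    first_offset = second_kps_cur - second_kps_prev
    # Size ratio between the two faces, estimated from the eye distances.
    size_ratio = eye_distance(first_kps) / eye_distance(second_kps_cur)
    # Second offset distance: the movement mapped into the first image.
    return first_kps + first_offset * size_ratio
```

Applied frame by frame, this keeps the first grids' vertices, and hence the rendered face, tracking the real face.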
And S105, sequentially drawing the adjusted first grids to drive and display the first image data.
In a specific implementation, an API or a rendering engine for rendering the 2D vector graphics is called, and each first grid is drawn in sequence, so that first image data is displayed on a screen, multiple frames of first image data are continuously displayed, and the change of first face data in the first image data can be displayed.
Further, to increase the display speed of the first image data, the first mesh may be drawn in the GPU.
In one embodiment of the present invention, S105 includes the steps of:
s1051, for each first grid after adjustment, determining texture coordinates of each first vertex in the first grid in sequence.
S1052, for each first mesh after adjustment, sequentially determining first vertex coordinates of each first vertex in the first mesh.
And S1053, drawing the first grid according to the texture coordinates and the first vertex coordinates in sequence to display the first image data.
In rendering a texture mapped scene, in addition to defining geometric coordinates (i.e., vertex coordinates) for each vertex, texture coordinates are also defined. After various transformations, the geometric coordinates determine the location of the vertex drawn on the screen, while the texture coordinates determine which texel in the texture image is assigned to the vertex.
A texture image is a rectangular array. Texture coordinates can generally be defined in one-, two-, three-, or four-dimensional form, called the s, t, r, and q coordinates: one-dimensional textures are represented by the s coordinate, two-dimensional textures by (s, t) coordinates, and the r coordinate is currently ignored. The q coordinate, like w, is typically 1 and is mainly used to establish homogeneous coordinates. OpenGL defines the texture coordinate functions:
void glTexCoord{1234}{sifd}[v](TYPE coords);
These functions set the current texture coordinates, and vertices generated by calls to glVertex*() are assigned the current texture coordinates. For glTexCoord1*(), the s coordinate is set to the given value, t and r are set to 0, and q is set to 1; glTexCoord2*() sets the s and t coordinates, with r set to 0 and q set to 1; for glTexCoord3*(), q is set to 1 and the other coordinates are set to the given values; glTexCoord4*() gives all four coordinates.
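In the grid-drawing step of this embodiment, each first vertex's texture coordinate can simply be its position in the original (unadjusted) first image data normalized to [0, 1], while the adjusted first vertex coordinates determine where the vertex lands on screen; this mismatch is what deforms the face. A minimal sketch:

```python
# Sketch of S1051: texture coordinates for the first vertices. Since the
# texture is the first image itself, a vertex's (s, t) coordinate is its
# original pixel position normalized to [0, 1]. The adjusted vertex
# coordinates (S1052) then determine where that texel is drawn on screen.
import numpy as np

def texture_coords(original_vertices: np.ndarray, width: int, height: int) -> np.ndarray:
    """Map (n, 2) pixel coordinates to (n, 2) (s, t) texture coordinates."""
    scale = np.array([width - 1, height - 1], dtype=np.float32)
    return original_vertices.astype(np.float32) / scale
```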
In the embodiment of the invention, the flow of drawing the grids is explained taking OpenGL ES as an example; this is a programmable pipeline comprising the following operations:
1. VBO/VAO (Vertex Buffer Objects / Vertex Array Objects)
VBO/VAO is vertex information provided to the GPU by the CPU, including vertex coordinates, color (only the color of the vertex, independent of the color of the texture), texture coordinates (for texture mapping), and the like.
2. VertexShader (vertex shader)
The vertex shader is a program that processes the vertex information provided by the VBO/VAO. Each vertex provided by the VBO/VAO makes one pass through the vertex shader. A uniform (a variable type) remains consistent across all vertices, while an attribute differs for each vertex (it can be understood as an input vertex attribute). Each execution of the vertex shader outputs varying variables and gl_Position.
Wherein the vertex shader inputs include:
2.1, shader program: vertex shader program source code or executable file describing operations performed on vertices
2.2, vertex shader input (or attribute): data for each vertex provided by a vertex array
2.3, unified variables (uniform): invariant data used by the vertex/fragment shader
2.4, samplers (Samplers): special unified variable types representing textures used by vertex shaders
Here, the vertex shader is the programmable stage that controls the transformation of the vertex coordinates, while the fragment shader controls the computation of each pixel's color.
3. Primitive Assembly (primitive assembly):
The stage after the vertex shader is primitive assembly, where primitives are geometric objects such as triangles, lines, or points. At this stage, the vertices output by the vertex shader are combined into primitives.
The vertex data is restored into a grid structure according to the original connection relationships. A grid consists of vertices and indices; at this stage, the vertices are linked together according to the indices to form the three kinds of primitives (points, lines, and faces), and triangles extending beyond the screen are then clipped.
For example, if a triangle (grid) has three vertices, one outside the screen and the other two inside, a quadrilateral should be visible on the screen; the quadrilateral is then cut into two small triangles (grids).
In short, the points obtained after vertex shader computation are grouped into points, lines, and faces (triangles) according to their connection relationships.
4. Rasterization (rasterization)
Rasterization is the process of converting a primitive into a set of two-dimensional fragments, which are then processed by the fragment shader (they form the fragment shader's input). These fragments represent pixels that can be rendered on the screen. The mechanism that generates a value for each fragment from the vertex shader outputs assigned to each primitive vertex is called interpolation.
The vertices after primitive assembly can be understood as forming a shape, and during rasterization the pixels of the shape's area (texture coordinates v_texcoord, color, and so on) are interpolated according to the shape. Note that at this point these are not yet pixels on the screen and have no color; the fragment shader performs the coloring next.
5. FragmentShader (fragment shader)
The fragment shader implements a general programmable method for operating on fragments (pixels). Each fragment produced by the rasterization stage executes one pass of this shader, generating one or more color values (for multiple render targets) as output.
6. Per-Fragment Operations (fragment by fragment operation)
At this stage, each fragment goes through the following five operations:
6.1, PixelOwnershipTest (pixel ownership test)
It is determined whether the pixel at position (x, y) in the frame buffer is owned by the current context.
For example, if one display frame buffer window is obscured by another window, the windowing system may determine that the obscured pixels do not belong to the context of this OpenGL and thus do not display those pixels.
6.2, ScissorTest (scissor test):
if the segment is outside the clipping region, it is discarded.
6.3, StencilTest and DepthTest (stencil and depth test):
if the shape returned by the fragment shader is not a shape in the stencil, then it is discarded.
If the depth returned by the fragment shader is less than the depth in the buffer, then it is discarded.
6.4, blending (mixing):
The newly generated fragment color values are combined with the color values stored in the frame buffer to produce new RGBA (Red, Green, Blue, Alpha) values.
6.5, Dithering:
At the end of the per-fragment operations phase, a fragment is either discarded, or its color, depth, or stencil value is written at position (x, y) of the frame buffer. Whether the fragment's color, depth, and stencil values are written depends on the corresponding write masks, which allow precise control over writing to the related buffers. For example, the write mask of the color buffer can be set so that no red value can be written into the color buffer.
Finally, the generated fragment is placed in the frame buffer (the front buffer, the back buffer, or an FBO (Frame Buffer Object)). If no FBO is used, the fragments in the screen rendering buffer generate the pixels on the screen.
In the embodiment of the invention, the first image data is divided into a plurality of first grids, and each frame of second image data in the video data is divided into a plurality of second grids; the first face key points are sequentially adjusted according to the second face key points so as to adjust the first grids where the first face key points are located, and the adjusted first grids are sequentially drawn to drive the display of the first image data. When a first face key point is adjusted, its first grid is adjusted along with it, and the pixel points inside the first grid are adjusted uniformly, so the adjustment of the face data is smoother, deformation is reduced, and face distortion is avoided. In this case, the first grids can be used for face adjustment and rendering at the same time, reducing the amount of computation. In addition, compared with deep learning such as neural networks, the drawing of the first grids and their adjustment are simpler, which can improve the processing speed and reduce the processing time, making the method suitable for scenes with high real-time requirements such as live broadcasting.
Example two
Fig. 7 is a schematic structural diagram of a face driving display device according to a second embodiment of the present invention, where the device may specifically include the following modules:
a data acquisition module 701, configured to acquire first image data and video data, where the video data has multiple frames of second image data, the first image data has first face data, and the second image data has second face data;
a first mesh division module 702, configured to divide the first image data into a plurality of first meshes, where a first vertex of the first mesh includes at least a first face key point of the first face data;
a second mesh dividing module 703, configured to divide the second image data of each frame into a plurality of second meshes, where a second vertex of the second mesh includes at least a second face key point of the second face data;
a face key point adjustment module 704, configured to adjust the first face key points sequentially according to the second face key points, so as to adjust a first grid where the first face key points are located;
the grid drawing module 705 is configured to draw the adjusted first grid sequentially, so as to drive and display the first image data.
In one embodiment of the present invention, the facial key point adjustment module 704 includes:
a first vertex coordinate determining sub-module, configured to determine a first vertex coordinate of the first face key point;
a second vertex coordinate determining sub-module, configured to determine, in each frame of the second image data, a second vertex coordinate of the second face key point;
and the offset adjustment sub-module is used for adjusting the first vertex coordinates of the first face key points by referring to the offset of the second vertex coordinates of the second face key points between every two adjacent frames of the second image data.
In one embodiment of the present invention, the offset adjustment submodule includes:
a first offset distance determining unit, configured to determine a first offset distance between second vertex coordinates of the second face key point and the second image data of every two adjacent frames;
a second offset distance calculating unit, configured to map the first offset distance to the first image data, to obtain a second offset distance;
and the vertex coordinate offset unit is used for adding the second offset distance on the basis of the first vertex coordinates of the first face key points so as to update the first vertex coordinates of the first face key points.
In one embodiment of the present invention, the second offset distance calculation unit includes:
a size ratio calculating subunit, configured to calculate a size ratio between the first face data and the second face data;
and the product calculating subunit is used for calculating the product between the first offset distance and the size proportion and taking the product as a second offset distance of the first face key point.
In one example of an embodiment of the present invention, the first face keypoints comprise first eye keypoints and the second face keypoints comprise second eye keypoints;
the size ratio calculation subunit is further configured to:
determining a first eye separation between the first eye keypoints;
determining a second eye separation between the second eye keypoints;
and calculating the distance ratio between the second eye distance and the first eye distance as the size ratio between the first face data and the second face data.
In one embodiment of the present invention, the first meshing module 702 includes:
the first target key point determining submodule is used for determining first face key points adjacent in position and used as first target key points;
The first connection sub-module is used for taking the first target key point as a first vertex in the first image data and connecting the first target key point to obtain a first grid;
the second meshing module 703 includes:
the second target key point determining submodule is used for determining second face key points adjacent in position and used as second target key points;
and the second connection sub-module is used for taking the second target key point as a second vertex in the second image data and connecting the second target key point to obtain a second grid.
In one embodiment of the present invention, the first target keypoint determination submodule includes:
a first dimension norgram conversion unit, configured to convert the first image data into a first dimension norgram, where the first dimension norgram includes a plurality of first units, each first unit includes a first face key point, and the first units have a plurality of first edges;
the first position adjacent determining unit is used for determining that the first face key points positioned on two sides of the same first edge are adjacent;
the second target key point determining submodule includes:
a second northgraph converting unit, configured to convert the second image data into a second northgraph, where the second northgraph includes a plurality of second units, each of the second units includes a second face key point, and the second units have a plurality of second edges;
And the second position adjacent determining unit is used for determining that the second face key points positioned on two sides of the same second edge are adjacent.
In another embodiment of the present invention, the first meshing module 702 includes:
a first edge point determination submodule for determining a point located on the edge of the first image data as a first edge point;
a third target key point determining submodule, configured to determine a first face key point adjacent to the first edge point position as a third target key point;
the third connection sub-module is used for taking the first edge point and the third target key point as a first vertex in the first image data and connecting the first edge point and the third target key point to obtain a first grid;
the second meshing module 703 includes:
a second edge point determination sub-module for determining a point located on an edge of the second image data as a second edge point;
a fourth target key point determining submodule, configured to determine a second face key point adjacent to the second edge point position as a fourth target key point;
and the fourth connection sub-module is used for taking the second edge point and the fourth target key point as second vertexes in the second image data and connecting the second edge point and the fourth target key point to obtain a second grid.
In yet another embodiment of the present invention, the first vertex of the first mesh further includes a first edge point located on the first image data edge, the first edge point having a first number with the first face key point;
the second vertex of the second mesh further comprises a second edge point positioned on the second image data side, and the second edge point and the second face key point are provided with a second number;
the first meshing module 702 includes:
the first grid variable searching sub-module is used for searching preset grid variables with sequences, and the vertex in each grid variable is marked with a third number;
a fifth connection sub-module, configured to, if the first number is the same as the third number, obtain a first mesh in the first image data by using a first edge point or a first face key point to which the first number belongs as a first vertex and connecting the first edge point or the first face key point to which the first number belongs;
the second meshing module 703 includes:
the second grid variable searching sub-module is used for inquiring preset grid variables with sequences, and the vertex in each grid variable is marked with a third number;
And a sixth connection sub-module, configured to, if the second number is the same as the third number, obtain a second grid in the second image data by using a second edge point or a second face key point to which the second number belongs as a second vertex and connecting the second edge point or the second face key point to which the second number belongs.
In one embodiment of the present invention, the mesh drawing module 705 includes:
the texture coordinate determining submodule is used for sequentially determining texture coordinates of each first vertex in each first grid after adjustment;
the vertex coordinate determining submodule is used for sequentially determining first vertex coordinates of all first vertexes in each first grid after adjustment;
and the sitting plot submodule is used for sequentially drawing the first grid according to the texture coordinates and the first vertex coordinates so as to display the first image data.
The face driving display device provided by the embodiment of the invention can execute the face driving display method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Example III
Fig. 8 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention. As shown in fig. 8, the electronic device includes a processor 800, a memory 801, a communication module 802, an input device 803, and an output device 804; the number of processors 800 in the electronic device may be one or more, one processor 800 being taken as an example in fig. 8; the processor 800, the memory 801, the communication module 802, the input device 803, and the output device 804 in the electronic apparatus may be connected by a bus or other means, which is exemplified in fig. 8 by a bus connection.
The memory 801 is a computer-readable storage medium, and may be used to store a software program, a computer-executable program, and modules corresponding to a driving display method of a face in the present embodiment (for example, a data acquisition module 701, a first meshing module 702, a second meshing module 703, a face key point adjustment module 704, and a mesh drawing module 705 in a driving display device of a face as shown in fig. 7). The processor 800 executes various functional applications and data processing of the electronic device by executing software programs, instructions, and modules stored in the memory 801, that is, implements the above-described face driving display method.
The memory 801 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system and at least one application program required for functions, and the storage data area may store data created according to the use of the electronic device, etc. In addition, the memory 801 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 801 may further include memory remotely located relative to the processor 800, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The communication module 802 is used for establishing a connection with the display screen and realizing data interaction with the display screen. The input device 803 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device.
The electronic device provided by the embodiment of the invention can execute the face driving display method provided by any embodiment of the invention, and has the corresponding functions and beneficial effects.
Example IV
The fourth embodiment of the present invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a method for driving and displaying a face, the method comprising:
acquiring first image data and video data, wherein the video data is provided with a plurality of frames of second image data, the first image data is provided with first face data, and the second image data is provided with second face data;
dividing the first image data into a plurality of first grids, wherein first vertexes of the first grids at least comprise first face key points of the first face data;
dividing the second image data of each frame into a plurality of second grids, wherein second vertexes of the second grids at least comprise second face key points of the second face data;
sequentially adjusting the first face key points according to the second face key points to adjust a first grid where the first face key points are located;
and drawing the adjusted first grid in sequence to drive and display the first image data.
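Putting these five steps together, the stored program amounts to a driver loop of roughly the following shape. This is a minimal sketch under stated assumptions: key points follow the common 68-point layout (eye-corner indices 36 and 45), meshing uses a Delaunay triangulation, `detect` and `render` are caller-supplied callables, and the size ratio is taken exactly as the claims define it (second eye distance over first). None of these specifics are prescribed by the patent text.

```python
import numpy as np
from scipy.spatial import Delaunay

def drive_face(first_pts, frames, detect, render):
    """Drive the first image's mesh from key-point motion in a video.

    first_pts: (N, 2) first face key points; frames: iterable of video
    frames; detect(frame) -> (N, 2) second face key points;
    render(points, triangles) draws the adjusted first grids.
    """
    first_pts = np.asarray(first_pts, np.float32)
    tris = Delaunay(first_pts).simplices          # the first grids
    pts = first_pts.copy()
    prev = None
    for frame in frames:
        cur = np.asarray(detect(frame), np.float32)
        if prev is not None:
            # Size ratio per the claims: second eye distance / first.
            ratio = (np.linalg.norm(cur[36] - cur[45]) /
                     np.linalg.norm(first_pts[36] - first_pts[45]))
            pts += (cur - prev) * ratio           # mapped offset distance
            render(pts, tris)                     # draw the adjusted grids
        prev = cur
```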
Of course, the computer program stored on the computer-readable storage medium provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the face driving display method provided by any embodiment of the present invention.
From the above description of the embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software plus necessary general-purpose hardware, or of course by hardware alone, although in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk of a computer, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the face driving display device above, the units and modules included are divided only according to functional logic, and the division is not limited to the above as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only used to distinguish them from each other and are not intended to limit the protection scope of the present invention.
It should be noted that the above are only preferred embodiments of the present invention and the technical principles applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, while the invention has been described in some detail through the above embodiments, the invention is not limited to these embodiments and may include other equivalent embodiments without departing from the concept of the invention, the scope of which is defined by the appended claims.

Claims (9)

1. A method for driving and displaying a face, comprising:
acquiring first image data and video data, wherein the video data is provided with a plurality of frames of second image data, the first image data is provided with first face data, and the second image data is provided with second face data;
dividing the first image data into a plurality of first grids, wherein first vertexes of the first grids at least comprise first face key points of the first face data;
dividing the second image data of each frame into a plurality of second grids, wherein second vertexes of the second grids at least comprise second face key points of the second face data;
sequentially adjusting the first face key points according to the second face key points to adjust a first grid where the first face key points are located, including:
determining first vertex coordinates of the first face key points;
determining second vertex coordinates of the second face key points in each frame of the second image data;
adjusting first vertex coordinates of the first face key points by referring to the offset of second vertex coordinates of the second face key points between every two adjacent frames of the second image data;
sequentially drawing the adjusted first grid to drive and display the first image data;
the adjusting the first vertex coordinates of the first face key point with reference to the offset of the second vertex coordinates of the second face key point between every two adjacent frames of the second image data includes:
determining a first offset distance of the second vertex coordinates of the second face key points between every two adjacent frames of the second image data;
mapping the first offset distance into the first image data to obtain a second offset distance;
adding the second offset distance on the basis of the first vertex coordinates of the first face key points to update the first vertex coordinates of the first face key points;
The mapping the first offset distance to the first image data to obtain a second offset distance includes:
calculating the size ratio between the first face data and the second face data;
calculating the product of the first offset distance and the size proportion to serve as a second offset distance of the first face key point;
the first face keypoints comprise first eye keypoints, and the second face keypoints comprise second eye keypoints;
the calculating the size ratio between the first face data and the second face data includes:
determining a first eye separation between the first eye keypoints;
determining a second eye separation between the second eye keypoints;
and calculating the distance ratio between the second eye distance and the first eye distance as the size ratio between the first face data and the second face data.
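A worked numeric pass through this mapping may help; every value below is invented for illustration, and the ratio follows the claim's definition exactly (second eye distance divided by first eye distance).

```python
# All values invented for illustration.
first_eye_distance = 100.0    # distance between the first eye key points
second_eye_distance = 50.0    # distance between the second eye key points
size_ratio = second_eye_distance / first_eye_distance    # 0.5

first_offset = (4.0, -2.0)    # second key point motion between two frames
second_offset = (first_offset[0] * size_ratio,
                 first_offset[1] * size_ratio)           # (2.0, -1.0)

vertex = (120.0, 160.0)       # first vertex coordinates before adjustment
vertex = (vertex[0] + second_offset[0],
          vertex[1] + second_offset[1])
print(vertex)                 # (122.0, 159.0): updated first vertex
```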
2. The method of claim 1, wherein:
the dividing the first image data into a plurality of first grids includes:
determining first face key points adjacent in position as first target key points;
the first target key point is used as a first vertex in the first image data, and the first target key point is connected to obtain a first grid;
Said dividing said second image data per frame into a plurality of second grids, comprising:
determining second face key points adjacent to each other in position as second target key points;
and in the second image data, the second target key point is used as a second vertex, and the second target key point is connected to obtain a second grid.
3. The method of claim 2, wherein:
the determining the first face key points with adjacent positions as the first target key points comprises the following steps:
converting the first image data into a first Voronoi diagram, wherein the first Voronoi diagram comprises a plurality of first cells, each first cell contains one first face key point, and the first cells have a plurality of first edges;
determining that first face key points positioned on two sides of the same first edge are adjacent;
the determining the second face key points adjacent to the positions as the second target key points comprises the following steps:
converting the second image data into a second Voronoi diagram, wherein the second Voronoi diagram comprises a plurality of second cells, each second cell contains one second face key point, and the second cells have a plurality of second edges;
and determining that the second face key points positioned on two sides of the same second edge are adjacent.
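Claim 3's adjacency test maps directly onto a standard computational-geometry routine. The sketch below uses SciPy's Voronoi implementation as one possible realization; the library choice is an assumption, not part of the patent.

```python
import numpy as np
from scipy.spatial import Voronoi

def adjacent_keypoint_pairs(keypoints):
    """Return index pairs of key points whose Voronoi cells share an edge.

    Each cell contains exactly one key point, and vor.ridge_points lists,
    for every Voronoi edge, the two input points on its two sides -- which
    is exactly the adjacency criterion described in claim 3.
    """
    vor = Voronoi(np.asarray(keypoints, dtype=np.float64))
    return [tuple(pair) for pair in vor.ridge_points]

# Tiny usage example with five made-up key points.
print(adjacent_keypoint_pairs([(0, 0), (2, 0), (1, 1), (0, 2), (2, 2)]))
```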
4. The method of claim 1, wherein the dividing the first image data into a plurality of first grids comprises:
determining a point located on an edge of the first image data as a first edge point;
determining a first face key point adjacent to the first edge point position as a third target key point;
the first edge point and the third target key point are used as a first vertex in the first image data, and the first edge point and the third target key point are connected to obtain a first grid;
said dividing said second image data per frame into a plurality of second grids, comprising:
determining a point located on an edge of the second image data as a second edge point;
determining a second face key point adjacent to the second edge point position as a fourth target key point;
and in the second image data, the second edge point and the fourth target key point are used as second vertexes, and the second edge point and the fourth target key point are connected to obtain a second grid.
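One way to realize claim 4, sketched here under the assumption that border points sit at the image corners and edge midpoints and that the connection step is a Delaunay triangulation (neither detail is fixed by the patent), is to append the border points to the key-point list before meshing, so that the grids tile the full image:

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_with_edge_points(keypoints, width, height):
    """Append image-border points as extra vertices, then triangulate.

    The appended border points play the role of the claim's edge points;
    returns the combined vertex array and the triangle index list.
    """
    border = np.float32([
        [0, 0], [width / 2, 0], [width - 1, 0],
        [0, height / 2], [width - 1, height / 2],
        [0, height - 1], [width / 2, height - 1], [width - 1, height - 1],
    ])
    pts = np.vstack([np.float32(keypoints), border])
    return pts, Delaunay(pts).simplices
```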
5. The method of claim 1, wherein:
the first vertex of the first grid further comprises a first edge point located on the edge of the first image data, and the first edge point and the first face key point are provided with a first number;
the second vertex of the second grid further comprises a second edge point located on the edge of the second image data, and the second edge point and the second face key point are provided with a second number;
the dividing the first image data into a plurality of first grids includes:
querying preset, ordered grid variables, wherein the vertex in each grid variable is marked with a third number;
if the first number is the same as the third number, a first edge point or a first face key point to which the first number belongs is used as a first vertex in the first image data, and the first edge point or the first face key point to which the first number belongs is connected to obtain a first grid;
said dividing said second image data per frame into a plurality of second grids, comprising:
querying preset, ordered grid variables, wherein the vertex in each grid variable is marked with a third number;
and if the second number is the same as the third number, a second edge point or a second face key point to which the second number belongs is used as a second vertex in the second image data, and the second edge point or the second face key point to which the second number belongs is connected to obtain a second grid.
6. The method of claim 1, wherein the sequentially drawing the adjusted first grid to drive and display the first image data comprises:
for each first grid after adjustment, determining texture coordinates of each first vertex in the first grid in turn;
for each first grid after adjustment, sequentially determining first vertex coordinates of each first vertex in the first grid;
and drawing the first grid according to the texture coordinates and the first vertex coordinates in sequence to display the first image data.
7. A face-driven display device, comprising:
the data acquisition module is used for acquiring first image data and video data, wherein the video data is provided with a plurality of frames of second image data, the first image data is provided with first face data, and the second image data is provided with second face data;
the first grid division module is used for dividing the first image data into a plurality of first grids, and the first vertexes of the first grids at least comprise first face key points of the first face data;
the second grid division module is used for dividing the second image data of each frame into a plurality of second grids, and second vertexes of the second grids at least comprise second face key points of the second face data;
The face key point adjusting module is used for adjusting the first face key points according to the second face key points in sequence so as to adjust first grids where the first face key points are located;
the face key point adjusting module comprises:
a first vertex coordinate determining sub-module, configured to determine a first vertex coordinate of the first face key point;
a second vertex coordinate determining sub-module, configured to determine, in each frame of the second image data, a second vertex coordinate of the second face key point;
an offset adjustment sub-module, configured to adjust the first vertex coordinates of the first face key points with reference to the offset of the second vertex coordinates of the second face key points between every two adjacent frames of the second image data;
the grid drawing module is used for drawing the adjusted first grids in sequence so as to drive the display of the first image data;
the offset adjustment submodule includes:
a first offset distance determining unit, configured to determine a first offset distance of the second vertex coordinates of the second face key points between every two adjacent frames of the second image data;
a second offset distance calculating unit, configured to map the first offset distance to the first image data, to obtain a second offset distance;
The vertex coordinate offset unit is used for adding the second offset distance on the basis of the first vertex coordinates of the first face key points so as to update the first vertex coordinates of the first face key points;
the second offset distance calculation unit includes:
a size ratio calculating subunit, configured to calculate a size ratio between the first face data and the second face data;
a product calculating subunit, configured to calculate a product between the first offset distance and the size ratio, as a second offset distance of the first face key point;
the first face keypoints comprise first eye keypoints, and the second face keypoints comprise second eye keypoints;
the size ratio calculation subunit is further configured to:
determining a first eye separation between the first eye keypoints;
determining a second eye separation between the second eye keypoints;
and calculating the distance ratio between the second eye distance and the first eye distance as the size ratio between the first face data and the second face data.
8. An electronic device, the electronic device comprising:
one or more processors;
A memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the face driving display method as claimed in any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the face driving display method as claimed in any one of claims 1 to 6.
CN201910562988.3A 2019-06-26 2019-06-26 Face driving display method and device, electronic equipment and storage medium Active CN111651033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910562988.3A CN111651033B (en) 2019-06-26 2019-06-26 Face driving display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111651033A CN111651033A (en) 2020-09-11
CN111651033B true CN111651033B (en) 2024-03-05

Family

ID=72343498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910562988.3A Active CN111651033B (en) 2019-06-26 2019-06-26 Face driving display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111651033B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112905943B (en) * 2020-12-09 2021-12-10 广州市玄武无线科技股份有限公司 Dynamic chart display method and system based on mobile terminal

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005055930A2 (en) * 2003-12-03 2005-06-23 University Of Rochester Recombinant factor viii having increased specific activity
EP1754198A1 (en) * 2004-05-26 2007-02-21 Gameware Europe Limited Animation systems
CN101944238A (en) * 2010-09-27 2011-01-12 浙江大学 Data driving face expression synthesis method based on Laplace transformation
CN102999929A (en) * 2012-11-08 2013-03-27 大连理工大学 Triangular gridding based human image face-lift processing method
CN104008564A (en) * 2014-06-17 2014-08-27 河北工业大学 Human face expression cloning method
CN106919899A (en) * 2017-01-18 2017-07-04 北京光年无限科技有限公司 The method and system for imitating human face expression output based on intelligent robot
CN108062783A (en) * 2018-01-12 2018-05-22 北京蜜枝科技有限公司 FA Facial Animation mapped system and method
CN108876879A (en) * 2017-05-12 2018-11-23 腾讯科技(深圳)有限公司 Method, apparatus, computer equipment and the storage medium that human face animation is realized
CN109147017A (en) * 2018-08-28 2019-01-04 百度在线网络技术(北京)有限公司 Dynamic image generation method, device, equipment and storage medium
CN109685873A (en) * 2018-12-14 2019-04-26 广州市百果园信息技术有限公司 A kind of facial reconstruction method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN111651033A (en) 2020-09-11

Similar Documents

Publication Publication Date Title
CN111652791B (en) Face replacement display method, face replacement live broadcast device, electronic equipment and storage medium
CN111652794B (en) Face adjusting and live broadcasting method and device, electronic equipment and storage medium
CN109840881A (en) A kind of 3D special efficacy image generating method, device and equipment
CN111462205B (en) Image data deformation, live broadcast method and device, electronic equipment and storage medium
CN111652022B (en) Image data display method, image data live broadcast device, electronic equipment and storage medium
CN112967381B (en) Three-dimensional reconstruction method, apparatus and medium
CN108694719A (en) image output method and device
US11557086B2 (en) Three-dimensional (3D) shape modeling based on two-dimensional (2D) warping
CN111652795A (en) Face shape adjusting method, face shape adjusting device, live broadcast method, live broadcast device, electronic equipment and storage medium
US20240282024A1 (en) Training method, method of displaying translation, electronic device and storage medium
CN111951368A (en) Point cloud, voxel and multi-view fusion deep learning method
CN107203962B (en) Method for making pseudo-3D image by using 2D picture and electronic equipment
CN111652807B (en) Eye adjusting and live broadcasting method and device, electronic equipment and storage medium
CN111651033B (en) Face driving display method and device, electronic equipment and storage medium
CN107203961B (en) Expression migration method and electronic equipment
CN111652025B (en) Face processing and live broadcasting method and device, electronic equipment and storage medium
Cardona et al. Hybrid-space localized stylization method for view-dependent lines extracted from 3D models.
CN111652024B (en) Face display and live broadcast method and device, electronic equipment and storage medium
CN111652792B (en) Local processing method, live broadcasting method, device, equipment and storage medium for image
CN111652978B (en) Grid generation method and device, electronic equipment and storage medium
JP2009122998A (en) Method for extracting outline from solid/surface model, and computer software program
CN107730577B (en) Line-hooking rendering method, device, equipment and medium
CN112465692A (en) Image processing method, device, equipment and storage medium
CN116977539A (en) Image processing method, apparatus, computer device, storage medium, and program product
CN115578495A (en) Special effect image drawing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant