CN111651033A - Driving display method and device for human face, electronic equipment and storage medium - Google Patents

Driving display method and device for human face, electronic equipment and storage medium

Info

Publication number
CN111651033A
Authority
CN
China
Prior art keywords
face
image data
key points
vertex
mesh
Prior art date
Legal status
Granted
Application number
CN201910562988.3A
Other languages
Chinese (zh)
Other versions
CN111651033B (en)
Inventor
王云刚
华路延
Current Assignee
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd
Priority to CN201910562988.3A
Publication of CN111651033A
Application granted
Publication of CN111651033B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present invention disclose a method and device for driving and displaying a human face, an electronic device, and a storage medium. The method includes: acquiring first image data and video data, where the video data includes multiple frames of second image data; dividing the first image data into a plurality of first meshes; dividing each frame of second image data into a plurality of second meshes; sequentially adjusting the first face key points according to the second face key points, so as to adjust the first meshes where the first face key points are located; and sequentially drawing the adjusted first meshes to drive the display of the first image data. Compared with deep learning such as neural networks, drawing and adjusting the first meshes is simpler, which increases the processing speed and shortens the processing time, making the method suitable for scenes with high real-time requirements, such as live streaming.

Description

Driving display method and device for human face, electronic equipment and storage medium
Technical Field
The embodiments of the present invention relate to image processing technologies, and in particular, to a method and an apparatus for driving and displaying a human face, an electronic device, and a storage medium.
Background
With the development of society, electronic devices such as mobile phones and tablet computers have been widely used in learning, entertainment, work, and the like, playing an increasingly important role.
Cameras are arranged in many electronic devices, and can be used for operations such as photographing, video recording, live broadcasting and the like.
In applications such as AR (Augmented Reality) and expression creation, deep learning methods such as neural networks are used to recognize the face state of the current user, so that another face is driven to express that face state.
However, deep learning has high complexity, a low processing speed, and a long processing time, and its performance becomes a bottleneck in scenes with high real-time requirements, such as live streaming.
Disclosure of Invention
The embodiment of the present invention provides a method and a device for driving and displaying a human face, an electronic device, and a storage medium, so as to solve the problems of low processing speed and long processing time when deep learning is used to drive the display of a human face.
In a first aspect, an embodiment of the present invention provides a method for driving and displaying a human face, including:
acquiring first image data and video data, wherein the video data comprises a plurality of frames of second image data, the first image data comprises first face data, and the second image data comprises second face data;
dividing the first image data into a plurality of first meshes, wherein first vertices of the first meshes at least comprise first face key points of the first face data;
dividing each frame of the second image data into a plurality of second meshes, wherein second vertices of the second meshes at least comprise second face key points of the second face data;
sequentially adjusting the first face key points according to the second face key points, so as to adjust the first meshes where the first face key points are located;
and sequentially drawing the adjusted first meshes to drive the display of the first image data.
In a second aspect, an embodiment of the present invention further provides a driving display device for a human face, including:
the data acquisition module is used for acquiring first image data and video data, wherein the video data comprises a plurality of frames of second image data, the first image data comprises first face data, and the second image data comprises second face data;
a first mesh dividing module, configured to divide the first image data into multiple first meshes, where a first vertex of each first mesh at least includes a first face key point of the first face data;
a second mesh dividing module, configured to divide each frame of the second image data into a plurality of second meshes, where a second vertex of each second mesh at least includes a second face key point of the second face data;
a face key point adjusting module, configured to sequentially adjust the first face key points according to the second face key points, so as to adjust the first meshes where the first face key points are located;
and a mesh drawing module, configured to sequentially draw the adjusted first meshes, so as to drive the display of the first image data.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for driving and displaying the human face according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for driving and displaying the human face according to the first aspect.
In the embodiment of the present invention, the first image data is divided into a plurality of first meshes, and each frame of second image data in the video data is divided into a plurality of second meshes. The first face key points are sequentially adjusted according to the second face key points so as to adjust the first meshes where the first face key points are located, and the adjusted first meshes are sequentially drawn to drive the display of the first image data. When a first face key point is adjusted, the pixel points inside its first meshes are adjusted along with the meshes, so the adjustment is more uniform: the face data changes more smoothly, deformation is reduced, and face distortion is avoided. Moreover, the first meshes serve both the face adjustment and the rendering operation, so the meshing work is reused and the amount of computation is reduced. Compared with deep learning such as a neural network, drawing and adjusting the first meshes is simpler, which increases the processing speed and shortens the processing time, making the method suitable for scenes with high real-time requirements, such as live streaming.
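To make the overall flow concrete, it can be summarized in the following Python-style sketch. This is an editorial illustration only: the helper names (detect_keypoints, build_meshes, draw_meshes) are hypothetical, and the face-size scaling described later in S1043 is omitted for brevity.

def drive_display(first_image, video_frames):
    # Divide the still first image data into first meshes whose
    # vertices include the detected first face key points.
    first_pts = detect_keypoints(first_image)
    first_meshes = build_meshes(first_image, first_pts)

    prev_pts = None
    for frame in video_frames:
        # Detect the second face key points in each frame of second
        # image data (second meshes are built the same way).
        second_pts = detect_keypoints(frame)
        if prev_pts is not None:
            # Adjust each first face key point by the offset of the
            # matching second face key point between adjacent frames,
            # which also moves every first mesh using it as a vertex.
            for i in range(len(first_pts)):
                first_pts[i] = first_pts[i] + (second_pts[i] - prev_pts[i])
        prev_pts = second_pts
        # Redraw the adjusted first meshes to display the driven
        # first image data.
        draw_meshes(first_image, first_meshes, first_pts)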
Drawings
Fig. 1 is a flowchart of a method for driving and displaying a human face according to a first embodiment of the present invention;
fig. 2A to fig. 2C are exemplary diagrams of a face key point according to an embodiment of the present invention;
fig. 3A to fig. 3C are exemplary diagrams of a grid sequence according to an embodiment of the present invention;
fig. 4 is an exemplary diagram of key points of adjacent faces according to an embodiment of the present invention;
Fig. 5 is an exemplary diagram of a Voronoi diagram according to an embodiment of the present invention;
Fig. 6 is an exemplary diagram of meshes according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a driving display device for a human face according to a second embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a method for driving and displaying a human face according to a first embodiment of the present invention. The method is applicable to cases where meshes are constructed based on face key points and a face is driven based on those meshes. The method may be executed by a device for driving and displaying a human face, which may be implemented in software and/or hardware and configured in an electronic device. The electronic device may be a mobile terminal such as a mobile phone, a tablet, a PDA (Personal Digital Assistant), or a smart wearable device (e.g., smart glasses or a smart watch), or a non-mobile terminal such as a smart television or a personal computer. The electronic device includes processors such as a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit), and is configured with an API (Application Programming Interface) or rendering engine for rendering 2D vector graphics, such as OpenGL (Open Graphics Library), OpenGL ES (OpenGL for Embedded Systems), Metal, Vulkan, U3D, or UE4. The method specifically includes the following steps:
S101, acquiring first image data and video data.
In particular implementations, the operating system of the electronic device may include Android (Android), IOS, Windows, and so on.
In one aspect, applications that enable image processing, such as live applications, image editing applications, camera applications, instant messaging tools, gallery applications, and the like, are supported for execution in these operating systems.
An application such as an image editing application, an instant messaging tool, or a gallery application may provide an import control in its UI (User Interface). A user may operate the import control by touch, mouse, or another peripheral to select image data stored locally (represented by a thumbnail or a path) or image data stored on a network (represented by a URL (Uniform Resource Locator)), so that the application acquires the image data as the first image data.
The UI of the application may also provide controls for photographing and video recording. A user may operate these controls by touch, mouse, or another peripheral to instruct the application to call a camera to capture image data as the first image data.
On the other hand, the application may call a camera of the electronic device to collect video data. The video data includes multiple frames of second image data, and some or all of the second image data contains a user, that is, pixel points representing the user.
S102, dividing the first image data into a plurality of first meshes.
The first image data may include first face data, which may be pixels representing a face of a person in the first image data.
In the embodiment of the present invention, the application performs face detection on the first image data, and identifies first face key points included in the first face data.
The face detection is also called face key point detection, positioning or face alignment, and refers to positioning key region positions of a face, including eyebrows, eyes, a nose, a mouth, a face contour, and the like, given face data.
Face detection typically uses the following methods:
1. Manually extract features, such as Haar features, train a classifier with the features, and use the classifier for face detection.
2. Inherit from generic object detection algorithms, for example, using Fast R-CNN to detect faces.
3. Use convolutional neural networks with a cascade structure, for example, Cascade CNN (Cascaded Convolutional Neural Network) or MTCNN (Multi-Task Cascaded Convolutional Neural Network).
In a specific implementation, these face detection methods may be integrated in a module of the application, which the application calls directly to detect the first face key points in the image data; they may also be integrated in an SDK (Software Development Kit) serving as assembly data of the application, in which case the application requests the SDK to perform face detection on the first image data, and the SDK detects the first face key points and returns them to the application.
It should be noted that the number of first face key points may be set by those skilled in the art according to the actual situation. For static image processing, the real-time requirement is low, so dense first face key points, such as 1000, may be detected; besides locating the important feature points of the face, these can also accurately describe the contours of the five sense organs. For live streaming and the like, the real-time requirement is high, so sparse first face key points, such as 68, 81, or 106, may be detected to locate the obvious, important feature points of the face (such as eye key points, eyebrow key points, nose key points, mouth key points, and contour key points), reducing the processing load and the processing time. The embodiment of the present invention is not limited in this respect.
To help those skilled in the art better understand the embodiment of the present invention, sparse first face key points are taken as an example in the following description.
For example, by performing face detection on the first image data shown in fig. 2A, 68 first face key points as shown in fig. 2B may be output.
A first Mesh represents a single drawable entity; its first vertices at least include first face key points. That is, with the first face key points as at least part of the first vertices, the first image data is meshed, i.e., divided into a plurality of (two or more) first meshes.
In so-called meshing, concave polygons, or polygons with intersecting edges, are divided into convex polygons such as triangles, to be rendered by an API or rendering engine such as OpenGL.
It should be noted that the first meshes are ordered, forming a mesh sequence, so as to conform to the rendering specification of an API or rendering engine such as OpenGL.
For example, for OpenGL, there are generally three modes for rendering a series of triangles (meshes):
1. GL_TRIANGLES
Every three points are grouped to draw a triangle; the triangles are independent.
As shown in Fig. 3A, the first triangle uses vertices v0, v1, v2; the second triangle uses vertices v3, v4, v5; and so on.
2. GL_TRIANGLE_STRIP
Starting from the third point, each point combines with the previous two points to draw a triangle, forming a linear, continuous strip of triangles.
As shown in Fig. 3B, the first triangle has vertex order v0, v1, v2; the second triangle has vertex order v2, v1, v3; the third triangle has vertex order v2, v3, v4; and the fourth triangle has vertex order v4, v3, v5.
This order ensures that the triangles are all drawn with the same winding direction, so that the strip can correctly form part of a surface.
3. GL_TRIANGLE_FAN
Starting from the third point, each point combines with the previous point and the first point to draw a triangle, forming a fan of continuous triangles.
As shown in Fig. 3C, the first triangle has vertex order v2, v1, v0; the second triangle has vertex order v3, v2, v0; and the third triangle has vertex order v4, v3, v0.
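As a concrete illustration of the three modes (an editorial example, not part of the original text), the following Python sketch expands one vertex list into triangles exactly as the figures above describe:

def gl_triangles(v):
    # GL_TRIANGLES: independent groups of three points
    return [(v[i], v[i + 1], v[i + 2]) for i in range(0, len(v) - 2, 3)]

def gl_triangle_strip(v):
    # GL_TRIANGLE_STRIP: each point plus the previous two; the order
    # alternates so every triangle is drawn with the same winding
    return [(v[i], v[i + 1], v[i + 2]) if i % 2 == 0
            else (v[i + 1], v[i], v[i + 2])
            for i in range(len(v) - 2)]

def gl_triangle_fan(v):
    # GL_TRIANGLE_FAN: each point plus the previous point and the
    # first point, listed in the order used for Fig. 3C
    return [(v[i + 1], v[i], v[0]) for i in range(1, len(v) - 1)]

vs = ["v0", "v1", "v2", "v3", "v4", "v5"]
print(gl_triangles(vs))       # (v0,v1,v2), (v3,v4,v5)
print(gl_triangle_strip(vs))  # (v0,v1,v2), (v2,v1,v3), (v2,v3,v4), (v4,v3,v5)
print(gl_triangle_fan(vs))    # (v2,v1,v0), (v3,v2,v0), (v4,v3,v0), (v5,v4,v0)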
In one embodiment of the present invention, S102 may include the steps of:
S1021, determining first face key points that are adjacent in position as first target key points.
S1022, connecting the first target key points in the first image data, with the first target key points as first vertices, to obtain first meshes.
In the embodiment of the present invention, two first face key points adjacent in position may be regarded as a pair of first target key points. The first target key points are taken in turn as first vertices of a first mesh and connected pairwise, thereby generating the first mesh.
In one way of detecting adjacency, the first image data containing the first face key points may be converted into a first Voronoi diagram by the half-plane intersection method (Intersect of Halfplanes), an incremental algorithm, a divide-and-conquer method, a plane sweep algorithm, or the like.
The first Voronoi diagram, also called a Thiessen polygon or Dirichlet diagram, consists of a set of continuous polygons (also called cells) formed by the perpendicular bisectors of the line segments connecting neighboring points.
In the first Voronoi diagram, the Euclidean distance between any two first face key points p and q is denoted dist(p, q).
Let P = {p1, p2, ..., pn} be any n distinct first face key points in the plane, used as base points. The Voronoi diagram corresponding to P is a subdivision of the plane: the entire plane is divided into n cells, which have the property that:
any first face key point q lies in the cell corresponding to first face key point pi if and only if dist(q, pi) < dist(q, pj) holds for every pj ∈ P with j ≠ i. The Voronoi diagram corresponding to P is denoted Vor(P).
Vor(P), the Voronoi diagram, refers to the edges and vertices that make up this subdivision. In Vor(P), the cell corresponding to base point pi is denoted V(pi), the Voronoi cell of pi.
In the embodiment of the present invention, the first Voronoi diagram includes a plurality of first cells, each first cell contains one first face key point, and each first cell has a plurality of first edges; first face key points located on the two sides of the same first edge are adjacent in position.
For example, referring to Fig. 4, the first image data is converted into a first Voronoi diagram; Fig. 4 shows a portion of the first face key points (black dots) and a portion of the first cell edges (solid lines).
In the first cell 400, first face key point 411 and first face key point 412, located on the two sides of the first edge 401, are adjacent in position, so key point 411 may be connected with key point 412; similarly, key point 411 is connected with key point 413, and key point 413 with key point 412, thereby generating a first mesh (dotted edges) 420.
Further, the first image data shown in Fig. 2B, which includes 68 first face key points, may be converted into the first Voronoi diagram shown in Fig. 5 to determine first face key points that are adjacent in position; connecting the adjacent first face key points in a predetermined order yields the first meshes shown in Fig. 6.
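In practice, connecting key points whose cells share an edge is exactly the Delaunay triangulation, the dual of the Voronoi diagram, so the first meshes can be obtained directly. A minimal sketch with scipy (an editorial illustration; the coordinates are stand-in data):

import numpy as np
from scipy.spatial import Delaunay, Voronoi

# Stand-in for the 68 detected first face key points, as (x, y) pairs.
first_pts = np.random.rand(68, 2) * [720, 1280]

# The Delaunay triangulation joins two points exactly when their
# Voronoi cells share an edge, so its triangles are the first meshes.
first_meshes = Delaunay(first_pts).simplices   # shape (n_meshes, 3)

# Equivalently, adjacency can be read off the Voronoi diagram itself:
vor = Voronoi(first_pts)
adjacent_pairs = vor.ridge_points  # rows of key point indices whose
                                   # cells share one edge (adjacent)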
In another embodiment of the present invention, S102 may include the steps of:
S1023, determining points on the edge of the first image data as first edge points.
S1024, determining first face key points adjacent in position to the first edge points as third target key points.
S1025, connecting the first edge points and the third target key points in the first image data, with the first edge points and the third target key points as first vertices, to obtain first meshes.
The first meshes formed from the first face key points alone generally do not completely cover the first image data; in this case, some points may be selected on the edge of the first image data as first edge points.
It should be noted that, for convenience of operation, the first edge points are selected to be symmetrical.
For example, as shown in fig. 6, four vertices of the first image data and a midpoint between every two vertices are selected as the first edge point.
The first edge points and their adjacent first face key points are taken as pairs of third target key points; the third target key points are taken in turn as first vertices of a first mesh and connected pairwise, thereby generating the first mesh.
It should be noted that, the relationship between the first edge point and the first face key point may be set by those skilled in the art according to actual situations, and the embodiment of the present invention is not limited thereto.
In general, eyebrow keypoints and contour keypoints of the first face keypoints are adjacent to first edge points, and the number of the first edge points is less than that of the first face keypoints adjacent to the first edge points.
For example, the first image data shown in Fig. 2B includes 68 first face key points and 8 first edge points; 25 of the first face key points are adjacent in position to the 8 first edge points, and connecting the adjacent first face key points and first edge points in a predetermined order yields the first meshes shown in Fig. 6.
In this case, the first mesh may be generated by connecting the first face key points and the first edge points in a predetermined order.
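Continuing the sketch above, the eight symmetric first edge points of Fig. 6 (four corners plus four edge midpoints, an arrangement assumed here) can be appended before triangulating so the meshes cover the whole image:

import numpy as np
from scipy.spatial import Delaunay

def add_edge_points(pts, w, h):
    # Append 8 symmetric first edge points: 4 corners + 4 midpoints.
    edge = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1],
                     [w // 2, 0], [w - 1, h // 2],
                     [w // 2, h - 1], [0, h // 2]], dtype=float)
    return np.vstack([pts, edge])

all_pts = add_edge_points(first_pts, 720, 1280)  # 68 + 8 = 76 vertices
first_meshes = Delaunay(all_pts).simplices       # now covers the image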
In yet another embodiment of the present invention, the first vertices of the first meshes include first face key points and first edge points located on the edge of the first image data, and the first face key points and the first edge points each have a first number. The first numbers of the first face key points are generated during face detection, or are obtained from a number map generated during face detection; the first numbers of the first edge points are preset and do not repeat the first numbers of the first face key points.
For example, as shown in fig. 2C, the first face key points are 68 in number, as follows:
the contour has 17 key points, and the first numbers are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 and 17 respectively.
The key points of the eyebrows are 10, and the first numbers are 18, 19, 20, 21, 22, 23, 24, 25, 26 and 27 respectively.
The number of the key points of the nose is 9, and the first numbers are 28, 29, 30, 31, 32, 33, 34, 35 and 36 respectively.
The eye key points are 12 in number, and the first numbers are 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47 and 48 respectively.
The number of the key points of the mouth is 20, and the first numbers are 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67 and 68 respectively.
The number of the first edge points is 8, and the first numbers are 69, 70, 71, 72, 73, 74, 75 and 76 respectively.
At this time, S102 may include the steps of:
S1026, querying the preset, ordered mesh variables.
Each vertex in a mesh variable is marked with a third number.
S1027, if a first number is the same as a third number, connecting the first edge points or first face key points to which the first numbers belong in the first image data, as first vertices, to obtain a first mesh.
Because the face detection method is preset, the first face key points it outputs are generally fixed, and the first edge points are also generally fixed. Therefore the points adjacent to each first face key point and each first edge point (which may themselves be first face key points or first edge points) are fixed; that is, the order of the first vertices in each first mesh is fixed.
Therefore, when a frame of first image data is first divided (offline) into a plurality of first meshes in a certain manner, the number of each first vertex (a first face key point or a first edge point) in each first mesh is recorded in order as a third number; each first mesh thus retains the third numbers of its first vertices as a mesh variable.
For other first image data (processed in real time), if it is divided into first meshes in the same manner, the first numbers of the points (first face key points or first edge points) can be matched against the third numbers of the mesh variables; where they are the same, the points indicated by the first numbers are connected in the order defined in the mesh variables, thereby dividing the first image data into a plurality of first meshes.
For example, as shown in Fig. 2C, the mesh variables may be represented as (1, 76, 2), (2, 76, 3), (3, 76, 4), (4, 76, 5), and so on.
For the first image data shown in Fig. 2B, if the first face key points with first numbers 1 and 2 and the first edge point with first number 76 successfully match the third numbers in one of the mesh variables, these points may be connected in that order to form the first mesh (1, 76, 2).
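The number matching of S1026 and S1027 amounts to a lookup from third numbers to current coordinates; a hedged sketch, using the 1-based numbering of Fig. 2C and the all_pts array from the previous sketch:

# Offline: ordered mesh variables, each a triple of third numbers,
# recorded when a reference image was first divided into meshes.
mesh_variables = [(1, 76, 2), (2, 76, 3), (3, 76, 4), (4, 76, 5)]

# Online: map each point's first number (1..68 for face key points,
# 69..76 for edge points) to its coordinates in the current image.
coords_by_number = {i + 1: all_pts[i] for i in range(len(all_pts))}

first_meshes = [tuple(coords_by_number[n] for n in mesh_var)
                for mesh_var in mesh_variables
                if all(n in coords_by_number for n in mesh_var)]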
S103, dividing each frame of the second image data into a plurality of second meshes.
The second image data includes second face data, and the second face data may refer to pixel points in the second image data for representing a face.
In the embodiment of the invention, the second image data is subjected to face detection, and second face key points contained in the second face data are identified.
It should be noted that the manner of performing face detection on the second image data is the same as that performed on the first image data, so that the second face key points remain consistent with the first face key points.
A second Mesh likewise represents a single drawable entity; its second vertices at least include second face key points. That is, with the second face key points as at least part of the second vertices, the second image data is meshed, i.e., divided into a plurality of (two or more) second meshes.
In one embodiment of the present invention, S103 includes:
determining second face key points with adjacent positions as second target key points;
and connecting the second target key points in the second image data, with the second target key points as second vertices, to obtain second meshes.
Further, determining second face key points that are adjacent in position as second target key points includes:
converting the second image data into a second Voronoi diagram, where the second Voronoi diagram includes a plurality of second cells, each second cell contains one second face key point, and each second cell has a plurality of second edges;
and determining that second face key points located on the two sides of the same second edge are adjacent in position.
In another embodiment of the present invention, S103 includes:
determining points located on the edge of the second image data as second edge points;
determining a second face key point adjacent to the second edge point position as a fourth target key point;
and connecting the second edge point and the fourth target key point in the second image data by taking the second edge point and the fourth target key point as a second vertex to obtain a second mesh.
In yet another embodiment of the present invention, the second vertices of the second meshes further include second edge points located on the edge of the second image data, and the second edge points and the second face key points each have a second number; S103 includes:
querying the preset, ordered mesh variables, where each vertex in a mesh variable is marked with a third number;
and, if a second number is the same as a third number, connecting the second edge points or second face key points to which the second numbers belong in the second image data, as second vertices, to obtain a second mesh.
It should be noted that the manner of dividing the second meshes is consistent with that of dividing the first meshes, so the description here is relatively brief; for the relevant points, refer to the description of dividing the first meshes, which is not repeated in this embodiment of the present invention.
And S104, sequentially adjusting the first face key points according to the second face key points, so as to adjust the first meshes where the first face key points are located.
In general, the face of the user changes constantly, and accordingly the second face key points of the second face data change from frame to frame of the second image data.
The first face key points are adjusted in sequence so that they align in turn with the second face key points in each frame of second image data. Since the first face key points remain the first vertices of the first meshes, adjusting the first face key points adjusts the first meshes along with them, so that the face data in the first meshes changes as the second face data changes, achieving the effect of the second face data driving the first face data.
In one embodiment of the present invention, S104 includes:
S1041, determining the first vertex coordinates of the first face key points.
S1042, determining second vertex coordinates of the second face key points in each frame of the second image data.
S1043, referring to the offset of the second vertex coordinates of the second face key points between every two adjacent frames of the second image data, adjusting the first vertex coordinates of the first face key points.
In a specific implementation, the difference between two adjacent frames of second image data in the video data is relatively small. For example, one second of video data includes 60 frames of second image data, so adjacent frames are 16.67 ms apart, and the user changes little within 16.67 ms. The face can therefore be driven by frame synchronization: for a second face key point and a first face key point with the same number (that is, the first number of the first face key point equals the second number of the second face key point), the first vertex coordinates of the first face key point are adjusted by the offset of the second vertex coordinates of the second face key point between every two adjacent frames of second image data.
Further, S1043 includes:
S14031, determining the first offset distance of the second vertex coordinates of each second face key point between every two adjacent frames of second image data.
For second face key points with the same second number, the offset between the second vertex coordinates in the current frame of second image data and those in the previous frame may be calculated in sequence as the first offset distance.
S14032, mapping the first offset distance to the first image data to obtain a second offset distance.
If the first face data and the second face data are the same size, the first offset distance may be directly applied in the first image data as the second offset distance.
If the sizes of the first face data and the second face data are different, the size ratio between the first face data and the second face data is calculated.
In one example, the first face keypoints comprise first eye keypoints and the second face keypoints comprise second eye keypoints.
In this example, the distance between first eye keypoints (e.g., first number 40, 43 face keypoints as shown in fig. 2C) may be determined as the first eye distance.
Correspondingly, the distance between the second eye key points is determined as the second eye distance.
The distance ratio between the second eye distance and the first eye distance is calculated as the size ratio between the first face data and the second face data.
In this example, because the muscles between the eyes are sparse and that region deforms little, calculating the size ratio between the first face data and the second face data from the eye distance gives high accuracy.
Of course, besides the eye distance, the size ratio between the first face data and the second face data may also be calculated in other manners, for example, the size ratio between the first face data and the second face data is calculated by the distance between the eyebrow center and the nose tip, and the like, which is not limited in this embodiment of the present invention.
The product of the first offset distance and the size ratio is calculated as a second offset distance for the first face keypoints.
S14033, adding the second offset distance to the first vertex coordinates of the first face keypoint to update the first vertex coordinates of the first face keypoint.
The offsets of the second face key points in each frame of second image data are added to the corresponding first face key points in the first image data, so that the first face data to be driven moves along with the real face (the second face data), and the nose, eyes, mouth, eyebrows, and so on synchronize with the movements of the real face.
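Steps S14031 to S14033 can be condensed into a few lines of numpy. This is an editorial sketch: key points with equal numbers are assumed to sit at equal row indices, rows 39 and 42 stand for the inner eye key points numbered 40 and 43 in Fig. 2C, and the size ratio is written as first eye distance over second eye distance so that an offset measured in the video frame is expressed at the first image's scale:

import numpy as np

def drive_step(first_pts, second_prev, second_cur, eye_a=39, eye_b=42):
    # S14031: first offset distance of each second face key point
    # between two adjacent frames of second image data.
    first_offset = second_cur - second_prev

    # S14032: map the offset into the first image data through the
    # size ratio between the first and second face data.
    first_eye = np.linalg.norm(first_pts[eye_a] - first_pts[eye_b])
    second_eye = np.linalg.norm(second_cur[eye_a] - second_cur[eye_b])
    second_offset = first_offset * (first_eye / second_eye)

    # S14033: add the second offset distance to the first vertex
    # coordinates, updating the first face key points (and with them
    # every first mesh that uses them as vertices).
    return first_pts + second_offset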
And S105, sequentially drawing the adjusted first meshes to drive the display of the first image data.
In specific implementation, the API or rendering engine for rendering 2D vector graphics is called to draw each first mesh in sequence, so that the first image data is displayed on the screen; displaying multiple frames of first image data continuously shows the change of the first face data in the first image data.
Further, to increase the display speed of the first image data, the first meshes may be rendered in the GPU.
In one embodiment of the present invention, S105 includes the steps of:
S1051, for each adjusted first mesh, sequentially determining the texture coordinates of each first vertex in the first mesh.
S1052, for each adjusted first mesh, sequentially determining the first vertex coordinates of each first vertex in the first mesh.
S1053, sequentially drawing the first meshes according to the texture coordinates and the first vertex coordinates, to display the first image data.
When rendering a texture-mapped scene, each vertex is given not only geometric coordinates (i.e., vertex coordinates) but also texture coordinates. After various transformations, the geometric coordinates determine where the vertex is drawn on the screen, and the texture coordinates determine which texel in the texture image is assigned to the vertex.
A texture image is a rectangular array, and texture coordinates are usually defined in one-, two-, three-, or four-dimensional form, called the s, t, r, and q coordinates. A one-dimensional texture is usually expressed by the s coordinate, and a two-dimensional texture by the (s, t) coordinates; the r coordinate is currently ignored. The q coordinate, like w, is typically 1 and is mainly used to establish homogeneous coordinates. The OpenGL function that defines texture coordinates is:
void glTexCoord{1234}{sifd}[v](TYPE coords);
It sets the current texture coordinates, and vertices produced by subsequent calls to glVertex*() are all assigned the current texture coordinates. For glTexCoord1*(), the s coordinate is set to the given value, t and r are set to 0, and q is set to 1; glTexCoord2*() sets the s and t coordinates, with r set to 0 and q set to 1; for glTexCoord3*(), q is set to 1 and the other coordinates are set to the given values; glTexCoord4*() gives all four coordinates.
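For illustration, S1051 to S1053 can be sketched with PyOpenGL's legacy immediate mode (hedged: a valid OpenGL context and a bound texture holding the first image data are assumed to exist; a production pipeline would use VBOs as described below):

from OpenGL.GL import GL_TRIANGLES, glBegin, glEnd, glTexCoord2f, glVertex2f

def draw_meshes(meshes):
    # meshes: triangles whose vertices are ((s, t), (x, y)) pairs;
    # texture coordinates pick the texel, vertex coordinates place it.
    glBegin(GL_TRIANGLES)
    for tri in meshes:
        for (s, t), (x, y) in tri:
            glTexCoord2f(s, t)  # which texel of the first image to use
            glVertex2f(x, y)    # where the adjusted vertex is drawn
    glEnd()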
In the embodiment of the present invention, OpenGL ES is taken as an example to explain the process of drawing the meshes. The process is a programmable pipeline and specifically involves the following stages:
1. VBO/VAO (Vertex Buffer Objects / Vertex Array Objects)
VBO/VAO is the vertex information the CPU provides to the GPU, including vertex coordinates, color (only the vertex color, not the texture color), texture coordinates (for texture mapping), and so on.
2. VertexShader (vertex shader)
The vertex shader is a program that processes the vertex information provided by the VBO/VAO; it is executed once for each vertex. A Uniform (one variable type) remains the same across all vertices, while an Attribute differs for each vertex (it can be understood as an input vertex attribute). Each execution of the vertex shader outputs Varyings and gl_Position.
Wherein, the input of the vertex shader comprises:
2.1, shader program: vertex shader program source code or executable file describing operations performed on vertices
2.2, vertex shader input (or attributes): data for each vertex provided by a vertex array
2.3, uniform variable (uniform): invariant data used by vertex/fragment shaders
2.4, Samplers (Samplers): special uniform variable types representing textures used by vertex shaders
In short, the vertex shader is a programmable stage that controls the transformation of vertex coordinates, while the fragment shader controls the computation of each pixel's color.
3. Primitive Assembly
The next stage after the vertex shader is primitive assembly. A primitive is a geometric object such as a triangle, a line, or a point. At this stage, the vertices output by the vertex shader are grouped into primitives.
The vertex data is restored to a mesh structure according to the primitive type (the original connectivity). A mesh consists of vertices and indices; at this stage the vertices are linked together according to the indices into the three kinds of primitives, namely points, lines, and surfaces, and triangles extending beyond the screen are then clipped.
For example, if a triangle (mesh) has one vertex outside the screen and two inside, the visible part on the screen is a quadrilateral, which is cut into two smaller triangles (meshes).
In short, the points obtained from the vertex shader computation are assembled into points, lines, and surfaces (triangles) according to their connectivity.
4. Rasterization
Rasterization is the process of converting primitives into a set of two-dimensional fragments, which are then processed by the fragment shader (they are the fragment shader's input). These two-dimensional fragments represent pixels that can be drawn on the screen; the mechanism that generates a value for each fragment from the vertex shader outputs assigned to each primitive vertex is called interpolation.
After primitive assembly the vertices can be understood as forming a shape, and during rasterization the pixels within the shape's area (texture coordinates v_texCoord, color, and other information) are interpolated according to the shape. Note that at this point the fragments are not yet pixels on the screen and are not yet colored; the fragment shader completes the coloring next.
5. FragmentShader (fragment shader)
The fragment shader implements a general programmable method for operating on fragments (pixels). It is executed once for each fragment generated during the rasterization stage and produces one or more color values (with multiple render targets) as output.
6. Per-Fragment Operations (fragment-by-fragment operations)
At this stage, each fragment goes through the following five operations:
6.1, PixelOwnershipTest (pixel ownership test):
Determines whether the pixel at location (x, y) in the frame buffer is owned by the current context.
For example, if a window displaying a frame buffer is occluded by another window, the window system may determine that the occluded pixels do not belong to this OpenGL context, and those pixels are therefore not displayed.
6.2, ScissorTest (scissor test):
If the fragment lies outside the scissor region, it is discarded.
6.3, StencilTest and DepthTest (stencil and depth tests):
If the fragment does not pass the stencil test, it is discarded.
If the fragment fails the depth comparison with the value in the depth buffer (for example, it lies behind what has already been drawn), it is discarded.
6.4 Blending:
the newly generated fragment color values are combined with the color values stored in the frame buffer to generate new RGBA (Red, Green, Blue, and Alpha color spaces).
6.5, Dithering:
At the end of the fragment-by-fragment operation stage, a fragment is either discarded, or its color, depth, or stencil value is written to the frame buffer at location (x, y). Whether the fragment's color, depth, and stencil values are written depends on the corresponding write masks, which allow finer control over the values written into the associated buffers. For example, the write mask of the color buffer can be set so that no red value can be written into it.
Finally, the generated fragments are placed in a frame buffer (the front buffer, the back buffer, or an FBO (Frame Buffer Object)); if no FBO is used, the fragments in the screen drawing buffer become the pixels on the screen.
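The stages above can be tied together in a compressed sketch (hedged: PyOpenGL with an OpenGL ES 2.0-style shader pair; context creation, texture upload, and error handling are assumed to happen elsewhere):

import ctypes
import numpy as np
from OpenGL.GL import *
from OpenGL.GL.shaders import compileProgram, compileShader

VERT = """
attribute vec2 a_position;   // per-vertex attribute from the VBO
attribute vec2 a_texCoord;
varying vec2 v_texCoord;     // interpolated during rasterization
void main() {
    v_texCoord = a_texCoord;
    gl_Position = vec4(a_position, 0.0, 1.0);
}
"""

FRAG = """
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_texture; // sampler: the first image data
void main() {
    gl_FragColor = texture2D(u_texture, v_texCoord);
}
"""

def make_pipeline(vertices):
    # vertices: float32 rows of (x, y, s, t), one per mesh vertex.
    program = compileProgram(compileShader(VERT, GL_VERTEX_SHADER),
                             compileShader(FRAG, GL_FRAGMENT_SHADER))
    vbo = glGenBuffers(1)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, vertices.nbytes, vertices, GL_DYNAMIC_DRAW)
    stride = 4 * vertices.itemsize
    for name, offs in (("a_position", 0), ("a_texCoord", 2)):
        loc = glGetAttribLocation(program, name)
        glEnableVertexAttribArray(loc)
        glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, stride,
                              ctypes.c_void_p(offs * vertices.itemsize))
    return program, vbo

# Per frame: re-upload the adjusted vertex coordinates, then
# glUseProgram(program) and glDrawArrays(GL_TRIANGLES, 0, n_vertices).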
In the embodiment of the present invention, the first image data is divided into a plurality of first meshes, and each frame of second image data in the video data is divided into a plurality of second meshes. The first face key points are sequentially adjusted according to the second face key points so as to adjust the first meshes where the first face key points are located, and the adjusted first meshes are sequentially drawn to drive the display of the first image data. When a first face key point is adjusted, the pixel points inside its first meshes are adjusted along with the meshes, so the adjustment is more uniform: the face data changes more smoothly, deformation is reduced, and face distortion is avoided. Moreover, the first meshes serve both the face adjustment and the rendering operation, so the meshing work is reused and the amount of computation is reduced. Compared with deep learning such as a neural network, drawing and adjusting the first meshes is simpler, which increases the processing speed and shortens the processing time, making the method suitable for scenes with high real-time requirements, such as live streaming.
Example two
Fig. 7 is a schematic structural diagram of a driving display device for a human face according to a second embodiment of the present invention, where the device may specifically include the following modules:
a data obtaining module 701, configured to obtain first image data and video data, where the video data includes multiple frames of second image data, the first image data includes first face data, and the second image data includes second face data;
a first mesh dividing module 702, configured to divide the first image data into a plurality of first meshes, where a first vertex of each first mesh at least includes a first face key point of the first face data;
a second mesh dividing module 703, configured to divide each frame of the second image data into a plurality of second meshes, where a second vertex of each second mesh at least includes a second face key point of the second face data;
a face key point adjusting module 704, configured to sequentially adjust the first face key points according to the second face key points, so as to adjust the first meshes where the first face key points are located;
and a mesh drawing module 705, configured to sequentially draw the adjusted first meshes, so as to drive the display of the first image data.
In an embodiment of the present invention, the face keypoint adjusting module 704 includes:
a first vertex coordinate determination submodule for determining first vertex coordinates of the first face keypoints;
a second vertex coordinate determination submodule, configured to determine a second vertex coordinate of the second face key point in each frame of the second image data;
and the offset adjusting submodule is used for adjusting the first vertex coordinates of the first face key points by referring to the offset of the second vertex coordinates of the second face key points between every two adjacent frames of the second image data.
In one embodiment of the present invention, the offset adjustment sub-module includes:
a first offset distance determining unit, configured to determine the first offset distance of the second vertex coordinates of each second face key point between every two adjacent frames of second image data;
a second offset distance calculation unit, configured to map the first offset distance into the first image data to obtain a second offset distance;
and the vertex coordinate offset unit is used for adding the second offset distance on the basis of the first vertex coordinates of the first face key points so as to update the first vertex coordinates of the first face key points.
In one embodiment of the present invention, the second offset distance calculation unit includes:
a size ratio calculation subunit configured to calculate a size ratio between the first face data and the second face data;
a product calculating subunit, configured to calculate a product between the first offset distance and the size ratio as a second offset distance of the first face keypoint.
In one example of an embodiment of the present invention, the first face keypoints comprise first eye keypoints and the second face keypoints comprise second eye keypoints;
the size scale calculation subunit is further configured to:
determining a first eye distance between the first eye keypoints;
determining a second eye distance between the second eye key points;
and calculating the distance ratio between the second eye distance and the first eye distance as the size ratio between the first face data and the second face data.
In one embodiment of the present invention, the first meshing module 702 includes:
the first target key point determining submodule is used for determining first face key points adjacent in position as first target key points;
a first connecting submodule, configured to connect the first target key points in the first image data, with the first target key points as first vertices, to obtain first meshes;
the second meshing module 703 includes:
the second target key point determining submodule is used for determining second face key points with adjacent positions as second target key points;
and the second connecting submodule is used for connecting the second target key point in the second image data by taking the second target key point as a second vertex to obtain a second mesh.
In one embodiment of the present invention, the first target keypoint determination submodule includes:
a first Voronoi diagram conversion unit, configured to convert the first image data into a first Voronoi diagram, where the first Voronoi diagram includes a plurality of first cells, each of the first cells contains one first face key point, and the first cells have a plurality of first edges;
a first position adjacency determining unit, configured to determine that first face key points located on the two sides of the same first edge are adjacent in position;
the second target keypoint determination submodule includes:
a second Voronoi diagram conversion unit, configured to convert the second image data into a second Voronoi diagram, where the second Voronoi diagram includes a plurality of second cells, each of the second cells contains one second face key point, and the second cells have a plurality of second edges;
and a second position adjacency determining unit, configured to determine that second face key points located on the two sides of the same second edge are adjacent in position.
In another embodiment of the present invention, the first meshing module 702 includes:
a first edge point determination submodule for determining a point located on an edge of the first image data as a first edge point;
a third target key point determining submodule, configured to determine a first face key point adjacent to the first edge point as a third target key point;
a third connection submodule, configured to connect the first edge point and the third target key point in the first image data as a first vertex to obtain a first mesh;
the second meshing module 703 includes:
a second edge point determining submodule for determining a point located on an edge of the second image data as a second edge point;
a fourth target key point determining submodule, configured to determine a second face key point adjacent to the second edge point position as a fourth target key point;
and a fourth connecting submodule, configured to connect the second edge points and the fourth target key points in the second image data, with the second edge points and the fourth target key points as second vertices, to obtain second meshes.
In yet another embodiment of the present invention, the first vertices of the first meshes further include first edge points located on the edge of the first image data, and the first edge points and the first face key points each have a first number;
the second vertices of the second meshes further include second edge points located on the edge of the second image data, and the second edge points and the second face key points each have a second number;
the first meshing module 702 includes:
a first mesh variable query submodule, configured to query the preset, ordered mesh variables, where each vertex in a mesh variable is marked with a third number;
a fifth connecting sub-module, configured to, if the first number is the same as the third number, connect the first edge point or the first face key point to which the first number belongs in the first image data with the first edge point or the first face key point to which the first number belongs as a first vertex to obtain a first mesh;
the second meshing module 703 includes:
a second mesh variable query submodule, configured to query the preset, ordered mesh variables, where each vertex in a mesh variable is marked with a third number;
and a sixth connecting sub-module, configured to, if the second number is the same as the third number, connect the second edge point or the second face key point to which the second number belongs in the second image data with the second edge point or the second face key point to which the second number belongs as a second vertex to obtain a second mesh.
In one embodiment of the present invention, the mesh drawing module 705 includes:
a texture coordinate determination submodule, configured to sequentially determine, for each adjusted first mesh, the texture coordinates of each first vertex in the first mesh;
a vertex coordinate determination submodule, configured to sequentially determine, for each adjusted first mesh, the first vertex coordinates of each first vertex in the first mesh;
and a coordinate drawing submodule, configured to sequentially draw the first meshes according to the texture coordinates and the first vertex coordinates, to display the first image data.
The driving display device for the human face provided by the embodiment of the invention can execute the driving display method for the human face provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example three
Fig. 8 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention. As shown in fig. 8, the electronic apparatus includes a processor 800, a memory 801, a communication module 802, an input device 803, and an output device 804; the number of the processors 800 in the electronic device may be one or more, and one processor 800 is taken as an example in fig. 8; the processor 800, the memory 801, the communication module 802, the input device 803 and the output device 804 in the electronic apparatus may be connected by a bus or other means, and fig. 8 illustrates an example of connection by a bus.
The memory 801 may be used as a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as modules corresponding to the driving display method of the human face in this embodiment (for example, a data acquisition module 701, a first mesh division module 702, a second mesh division module 703, a human face key point adjustment module 704, and a mesh drawing module 705 in the driving display device of the human face shown in fig. 7). The processor 800 executes various functional applications and data processing of the electronic device by running software programs, instructions and modules stored in the memory 801, so as to implement the above-mentioned driving display method for the human face.
The memory 801 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 801 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 801 may further include memory located remotely from processor 800, which may be connected to an electronic device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The communication module 802 is configured to establish a connection with the display screen and implement data interaction with the display screen. The input device 803 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device.
The electronic device provided in this embodiment can execute the method for driving and displaying the face provided in any embodiment of the present invention, and has corresponding functions and advantages.
EXAMPLE IV
The fourth embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for driving and displaying a human face, and the method includes:
acquiring first image data and video data, wherein the video data comprises a plurality of frames of second image data, the first image data comprises first face data, and the second image data comprises second face data;
dividing the first image data into a plurality of first meshes, wherein first vertexes of the first meshes at least comprise first face key points of the first face data;
dividing each frame of the second image data into a plurality of second grids, wherein second vertexes of the second grids at least comprise second face key points of the second face data;
sequentially adjusting the first face key points according to the second face key points so as to adjust a first grid where the first face key points are located;
and sequentially drawing the adjusted first grids to drive and display the first image data.
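Read purely as an illustration, the five steps above can be sketched in Python as below; detect_keypoints, build_mesh and render_mesh are assumed helper names standing in for the landmark detection, mesh division and mesh drawing steps described elsewhere in this disclosure, not functions it defines.

```python
import numpy as np

def drive_display(first_image, video_frames,
                  detect_keypoints, build_mesh, render_mesh):
    """first_image: still image to be driven; video_frames: iterable of
    second image data whose face motion drives the first face data."""
    first_pts = detect_keypoints(first_image)      # first face key points
    first_meshes = build_mesh(first_image, first_pts)
    prev_pts = None
    for frame in video_frames:
        cur_pts = detect_keypoints(frame)          # second face key points
        if prev_pts is not None:
            # Move each first key point by the offset of the matching
            # second key point between two adjacent frames.
            first_pts = first_pts + (cur_pts - prev_pts)
        prev_pts = cur_pts
        # Redraw the adjusted first meshes to display the first image data.
        yield render_mesh(first_image, first_meshes, first_pts)
```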
Of course, the computer program of the computer-readable storage medium provided in the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the method for driving and displaying a human face provided in any embodiment of the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention may be implemented by software plus the necessary general-purpose hardware, and certainly may also be implemented by hardware alone, although the former is in many cases the preferred implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk or an optical disk of a computer, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the driving display device for the human face, the included units and modules are divided only according to functional logic; the division is not limited thereto as long as the corresponding functions can be implemented. In addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (13)

1. A method for driving and displaying a human face is characterized by comprising the following steps:
acquiring first image data and video data, wherein the video data comprises a plurality of frames of second image data, the first image data comprises first face data, and the second image data comprises second face data;
dividing the first image data into a plurality of first meshes, wherein first vertexes of the first meshes at least comprise first face key points of the first face data;
dividing each frame of the second image data into a plurality of second grids, wherein second vertexes of the second grids at least comprise second face key points of the second face data;
sequentially adjusting the first face key points according to the second face key points so as to adjust a first grid where the first face key points are located;
and sequentially drawing the adjusted first grids to drive and display the first image data.
2. The method according to claim 1, wherein said adjusting the first face keypoints according to the second face keypoints in sequence to adjust the first mesh where the first face keypoints are located comprises:
determining first vertex coordinates of the first face keypoints;
determining second vertex coordinates of the second face key points in each frame of the second image data;
and adjusting the first vertex coordinates of the first face key points by referring to the offset of the second vertex coordinates of the second face key points between every two adjacent frames of the second image data.
3. The method of claim 2, wherein said adjusting the first vertex coordinates of the first face keypoints with reference to the offset of the second vertex coordinates of the second face keypoints between every two adjacent frames of the second image data comprises:
determining a first offset distance between the second vertex coordinates of the second face key points in every two adjacent frames of the second image data;
mapping the first offset distance to the first image data to obtain a second offset distance;
adding the second offset distance to the first vertex coordinates of the first face keypoints to update the first vertex coordinates of the first face keypoints.
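Purely as an illustration of claims 2 and 3, the per-frame update might look like the following Python sketch; map_offset stands in for the mapping detailed in claims 4 and 5, and all names are assumptions rather than the claimed implementation.

```python
import numpy as np

def update_first_keypoints(first_pts, second_prev, second_cur, map_offset):
    """first_pts: (N, 2) first vertex coordinates of the first face key
    points; second_prev, second_cur: (N, 2) second vertex coordinates of
    the second face key points in two adjacent frames; map_offset maps a
    first offset distance into the first image data."""
    first_offset = second_cur - second_prev    # offset between adjacent frames
    second_offset = map_offset(first_offset)   # mapped into the first image
    return first_pts + second_offset           # updated first vertex coordinates
```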
4. The method of claim 3, wherein mapping the first offset distance into the first image data to obtain a second offset distance comprises:
calculating a size ratio between the first face data and the second face data;
calculating a product between the first offset distance and the size ratio as a second offset distance for the first face keypoints.
5. The method of claim 4, wherein the first face keypoints comprise first eye keypoints and the second face keypoints comprise second eye keypoints;
the calculating the size ratio between the first face data and the second face data comprises:
determining a first eye distance between the first eye keypoints;
determining a second inter-ocular distance between the second eye keypoints;
and calculating the distance ratio between the second eye distance and the first eye distance as the size ratio between the first face data and the second face data.
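As an illustrative reading of claims 4 and 5 (function and argument names assumed, and the ratio taken exactly as the claims state it), the size ratio and the offset mapping could be sketched as:

```python
import numpy as np

def size_ratio(first_eyes, second_eyes):
    """Each argument: (2, 2) array holding two eye key points. Per claim 5,
    the ratio of the second eye distance to the first eye distance."""
    first_dist = np.linalg.norm(first_eyes[0] - first_eyes[1])
    second_dist = np.linalg.norm(second_eyes[0] - second_eyes[1])
    return second_dist / first_dist

def second_offset(first_offset, ratio):
    """Per claim 4, the second offset distance is the product of the
    first offset distance and the size ratio."""
    return first_offset * ratio
```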
6. The method according to any one of claims 1 to 5,
the dividing the first image data into a plurality of first meshes includes:
determining first face key points adjacent in position as first target key points;
connecting the first target key points in the first image data by taking the first target key points as first vertexes to obtain a first mesh;
the dividing of the second image data into a plurality of second meshes per frame includes:
determining second face key points with adjacent positions as second target key points;
and connecting the second target key points in the second image data by taking the second target key points as second vertexes to obtain a second mesh.
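One standard way to connect positionally adjacent key points into triangular meshes is Delaunay triangulation, which is the dual of the Voronoi adjacency test of claim 7 (see the sketch after that claim). A minimal sketch assuming SciPy, with toy points standing in for detected key points:

```python
import numpy as np
from scipy.spatial import Delaunay

# Five toy key points standing in for detected face key points.
points = np.array([[10.0, 10.0], [100.0, 12.0], [55.0, 60.0],
                   [12.0, 110.0], [105.0, 108.0]])
tri = Delaunay(points)
# tri.simplices: (M, 3) indices of adjacent key points forming each mesh.
for a, b, c in tri.simplices:
    print("mesh:", points[a], points[b], points[c])
```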
7. The method of claim 6,
the determining of the first face key points adjacent to the position as the first target key points includes:
converting the first image data into a first Voronoi diagram, the first Voronoi diagram comprising a plurality of first cells, each of the first cells containing one first face key point, the first cells having a plurality of first edges;
determining that first face key points located on two sides of the same first edge are adjacent in position;
the determining of the second face key points with adjacent positions as second target key points comprises:
converting the second image data into a second Voronoi diagram, wherein the second Voronoi diagram comprises a plurality of second cells, each second cell contains one second face key point, and the second cells have a plurality of second edges;
and determining that second face key points located on two sides of the same second edge are adjacent in position.
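This adjacency is exactly what a computational-geometry library reports as Voronoi ridges: two key points lie on the two sides of the same edge when they appear together in a ridge. A minimal sketch assuming SciPy, offered as an illustration rather than the patent's code:

```python
import numpy as np
from scipy.spatial import Voronoi

points = np.array([[10.0, 10.0], [100.0, 12.0], [55.0, 60.0],
                   [12.0, 110.0], [105.0, 108.0]])
vor = Voronoi(points)
# vor.ridge_points: (R, 2) index pairs of key points whose cells share
# an edge, i.e. key points on the two sides of the same edge.
adjacent = {tuple(sorted(pair)) for pair in vor.ridge_points}
print(adjacent)
```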
8. The method of any of claims 1-5, wherein the dividing the first image data into a plurality of first meshes comprises:
determining a point located on an edge of the first image data as a first edge point;
determining a first face key point adjacent to the first edge point position as a third target key point;
connecting the first edge point and the third target key point in the first image data by taking the first edge point and the third target key point as a first vertex to obtain a first mesh;
the dividing of the second image data into a plurality of second meshes per frame includes:
determining points located on the edge of the second image data as second edge points;
determining a second face key point adjacent to the second edge point position as a fourth target key point;
and connecting the second edge point and the fourth target key point in the second image data by taking the second edge point and the fourth target key point as a second vertex to obtain a second mesh.
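To illustrate claim 8 only: adding points on the image border lets the meshes tile the whole image rather than just the face region. A sketch under assumed names, again using SciPy:

```python
import numpy as np
from scipy.spatial import Delaunay

def border_points(width, height, per_side=4):
    """Evenly spaced edge points along the four image edges."""
    xs = np.linspace(0.0, width - 1.0, per_side)
    ys = np.linspace(0.0, height - 1.0, per_side)
    top = np.stack([xs, np.zeros(per_side)], axis=1)
    bottom = np.stack([xs, np.full(per_side, height - 1.0)], axis=1)
    left = np.stack([np.zeros(per_side), ys], axis=1)
    right = np.stack([np.full(per_side, width - 1.0), ys], axis=1)
    return np.unique(np.concatenate([top, bottom, left, right]), axis=0)

face_pts = np.array([[200.0, 180.0], [280.0, 180.0], [240.0, 240.0]])
all_pts = np.concatenate([border_points(480, 640), face_pts])
meshes = Delaunay(all_pts).simplices   # meshes covering the whole image
print(len(meshes), "meshes")
```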
9. The method according to any one of claims 1 to 5,
the first vertices of the first mesh further comprise first edge points located on the first image data edge, the first edge points and the first face keypoints have first numbers;
the second vertex of the second mesh further comprises a second edge point located on the second image data edge, and the second edge point and the second face key point have a second number;
the dividing the first image data into a plurality of first meshes includes:
querying preset mesh variables having an order, wherein each vertex in each mesh variable is marked with a third number;
if the first number is the same as the third number, taking the first edge point or first face key point to which the first number belongs as a first vertex and connecting it in the first image data to obtain a first mesh;
the dividing of the second image data into a plurality of second meshes per frame includes:
querying preset mesh variables having an order, wherein each vertex in each mesh variable is marked with a third number;
and if the second number is the same as the third number, taking the second edge point or second face key point to which the second number belongs as a second vertex and connecting it in the second image data to obtain a second mesh.
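An illustrative sketch of claim 9; the numbering scheme, the MESH_VARIABLES contents and all names are assumptions chosen for the example, not values fixed by this disclosure:

```python
# Assumed numbering: 0-4 are third numbers recorded in ordered mesh
# variables; the same scheme numbers edge points and face key points.
MESH_VARIABLES = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]

def build_meshes(numbered_points):
    """numbered_points: dict mapping a point's number (first or second
    number) to its (x, y) position in the corresponding image data."""
    meshes = []
    for tri in MESH_VARIABLES:
        # Connect the points whose numbers match the mesh variable's
        # third numbers, in the preset order.
        if all(n in numbered_points for n in tri):
            meshes.append([numbered_points[n] for n in tri])
    return meshes

# Because both images share the numbering, the first and second meshes
# built this way correspond one-to-one.
print(build_meshes({0: (10, 10), 1: (100, 12), 2: (55, 60),
                    3: (12, 110), 4: (105, 108)}))
```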
10. The method of any of claims 1-5, wherein said sequentially drawing the adjusted first grids to display the first image data comprises:
for each adjusted first mesh, sequentially determining texture coordinates of each first vertex in the first mesh;
for each adjusted first mesh, sequentially determining first vertex coordinates of each first vertex in the first mesh;
and drawing the first mesh according to the texture coordinates and the first vertex coordinates in sequence so as to display the first image data.
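One plausible reading of claim 10, offered only as an illustration with assumed names: each first vertex keeps, as its texture coordinate, its original position normalized to [0, 1], while its first vertex coordinate is the adjusted position; a renderer such as the triangle warp sketched earlier then samples the source image at the texture coordinate and draws at the vertex coordinate.

```python
import numpy as np

def texture_coords(original_pts, width, height):
    """original_pts: (N, 2) first vertex positions before adjustment,
    normalized to [0, 1] so a renderer can sample the source image."""
    return original_pts / np.array([width - 1.0, height - 1.0])

orig = np.array([[200.0, 180.0], [280.0, 180.0], [240.0, 240.0]])
uv = texture_coords(orig, 480, 640)   # texture coordinate per first vertex
print(uv)
```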
11. A driving display device for human face, comprising:
the data acquisition module is used for acquiring first image data and video data, wherein the video data comprises a plurality of frames of second image data, the first image data comprises first face data, and the second image data comprises second face data;
a first mesh dividing module, configured to divide the first image data into multiple first meshes, where a first vertex of each first mesh at least includes a first face key point of the first face data;
the second mesh dividing module is used for dividing each frame of the second image data into a plurality of second meshes, and a second vertex of each second mesh at least comprises a second face key point of the second face data;
the face key point adjusting module is used for adjusting the first face key points according to the second face key points in sequence so as to adjust a first grid where the first face key points are located;
and the grid drawing module is used for drawing the adjusted first grids in sequence so as to drive and display the first image data.
12. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for driving and displaying a human face according to any one of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a method of driving a display of a human face according to any one of claims 1 to 10.
CN201910562988.3A 2019-06-26 2019-06-26 Face driving display method and device, electronic equipment and storage medium Active CN111651033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910562988.3A CN111651033B (en) 2019-06-26 2019-06-26 Face driving display method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111651033A (en) 2020-09-11
CN111651033B (en) 2024-03-05



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant