CN117523136B - Face point position corresponding relation processing method, face reconstruction method, device and medium - Google Patents


Info

Publication number
CN117523136B
Authority
CN
China
Prior art keywords
face
dimensional
target
sample
corresponding relation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311511524.2A
Other languages
Chinese (zh)
Other versions
CN117523136A (en)
Inventor
李乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shuhang Technology Beijing Co ltd
Original Assignee
Shuhang Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shuhang Technology Beijing Co ltd filed Critical Shuhang Technology Beijing Co ltd
Priority to CN202311511524.2A priority Critical patent/CN117523136B/en
Publication of CN117523136A publication Critical patent/CN117523136A/en
Application granted granted Critical
Publication of CN117523136B publication Critical patent/CN117523136B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Architecture (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a face point position corresponding relation processing method, a face reconstruction method, a device and a medium. The face point position corresponding relation processing method comprises the following steps: acquiring face image samples and an initial point position corresponding relation; carrying out three-dimensional face reconstruction on a standard face three-dimensional model to obtain a sample face three-dimensional model for each sample, and carrying out image extraction to obtain sample two-dimensional images; carrying out first face key point identification on the sample two-dimensional images to obtain target two-dimensional key points, wherein the number of target two-dimensional key points identified in one image is greater than the number of initial two-dimensional key points; acquiring a second corresponding relation between the target two-dimensional key points and grid vertex indexes of the sample face three-dimensional models; and determining the target grid vertex index of each target two-dimensional key point based on the second corresponding relations of the target two-dimensional key points with the same point position in the sample two-dimensional images, so as to obtain a target point position corresponding relation. The requirement of flexibly adjusting the standard face three-dimensional model can thus be met, and the three-dimensional face reconstruction effect improved.

Description

Face point position corresponding relation processing method, face reconstruction method, device and medium
Technical Field
The application relates to the technical field of image processing, and in particular to a face point position corresponding relation processing method, a face reconstruction method, a device and a medium.
Background
At present, three-dimensional face reconstruction technology is applied more and more widely, and users' requirements on the three-dimensional face reconstruction effect are increasingly high. When three-dimensional face reconstruction is performed based on a two-dimensional face image and a standard face three-dimensional model, the grid vertex corresponding to each key point is generally determined directly according to the point position corresponding relation between the key points in the two-dimensional face image and the vertexes in the face three-dimensional model, and the face three-dimensional model is then adjusted according to the corresponding vertexes to realize three-dimensional face reconstruction.
However, the point position corresponding relation currently used with a standard face three-dimensional model is preset empirically, for a fixed number of key points, when the constructor of the face three-dimensional model builds the model. The key points available when this point position corresponding relation is used for three-dimensional face reconstruction can hardly meet the requirement of flexibly adjusting the standard face three-dimensional model, which is not conducive to improving the three-dimensional face reconstruction effect.
Disclosure of Invention
The embodiment of the application provides a face point position corresponding relation processing method, a face reconstruction method, a device and a medium. Three-dimensional face reconstruction can thereby be performed based on a larger number of target two-dimensional key points, so that the requirement of flexibly adjusting the standard face three-dimensional model can be met and the three-dimensional face reconstruction effect can be improved.
The first aspect of the embodiment of the application provides a face point location corresponding relation processing method, which comprises the following steps:
acquiring face image samples and an initial point position corresponding relation, wherein the initial point position corresponding relation is used for describing a first corresponding relation between initial two-dimensional key points identified in the face image samples and grid vertexes in a standard face three-dimensional model, and the grid vertexes in the standard face three-dimensional model are provided with grid vertex indexes;
carrying out three-dimensional face reconstruction according to the face image samples, the initial point position corresponding relation and the standard face three-dimensional model to obtain a sample face three-dimensional model corresponding to each face image sample;
carrying out image extraction on each sample face three-dimensional model to obtain a sample two-dimensional image corresponding to each sample face three-dimensional model;
carrying out first face key point identification on each sample two-dimensional image to obtain target two-dimensional key points, wherein the number of target two-dimensional key points identified in one image is greater than the number of initial two-dimensional key points;
acquiring a second corresponding relation between each target two-dimensional key point and a grid vertex index of the sample face three-dimensional model;
and determining, based on the second corresponding relations of the target two-dimensional key points with the same point position in the sample two-dimensional images, the target grid vertex index corresponding to the target two-dimensional key points with that point position, so as to obtain a target point position corresponding relation between the target two-dimensional key points of each point position obtained by the first face key point identification and the grid vertex indexes in the standard face three-dimensional model.
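The six steps above can be sketched as a data-flow outline. This is a hypothetical illustration: every function below is a stub standing in for a real component (a key point detector, a 3DMM fitter, a renderer), and the point counts (68 initial, 106 target) are example values; only the flow of data follows the method.

```python
# Hypothetical sketch of the claimed pipeline; all components are stubs.

def detect_initial_keypoints(image):          # initial identification, e.g. 68 points
    return [(i, i) for i in range(68)]

def fit_3dmm(image, init_kps, init_corr):     # reconstruct a sample face 3D model
    return {"mesh": "fitted"}

def render_to_2d(model, angle=0):             # extract a sample 2D image
    return f"render@{angle}"

def detect_target_keypoints(image):           # first identification, denser, e.g. 106 points
    return [(i, i) for i in range(106)]

def match_keypoints_to_vertices(kps, model):  # the "second corresponding relation"
    return {i: i * 3 for i, _ in enumerate(kps)}

init_corr = {i: i for i in range(68)}         # the initial point position corresponding relation
per_image_matches = []
for image in ["sample_a", "sample_b"]:
    model = fit_3dmm(image, detect_initial_keypoints(image), init_corr)
    rendered = render_to_2d(model)
    target_kps = detect_target_keypoints(rendered)
    per_image_matches.append(match_keypoints_to_vertices(target_kps, model))

# The final step (a per-point vote over per_image_matches) yields the target
# point position corresponding relation, covering more points than init_corr.
assert len(per_image_matches[0]) > len(init_corr)
```

The target correspondence is derived once, offline, from the sample images; at reconstruction time only the denser detector and the stored correspondence are needed.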
A second aspect of an embodiment of the present application provides a face reconstruction method, where the method includes:
acquiring a face image to be reconstructed and a target point position corresponding relation, wherein the target point position corresponding relation is used for describing a corresponding relation between a target two-dimensional key point obtained by carrying out first face key point identification on the image and grid vertexes in a standard face three-dimensional model;
Performing first face key point recognition on the face image to be reconstructed to obtain target two-dimensional key points in the face image to be reconstructed;
And carrying out three-dimensional face reconstruction on the standard face three-dimensional model according to the target two-dimensional key points and the corresponding relation of the target point positions so as to obtain a target reconstructed face three-dimensional model matched with the face image to be reconstructed.
A third aspect of the embodiment of the present application provides a face point location correspondence processing device, where the device includes:
the data acquisition module is used for acquiring face image samples and an initial point position corresponding relation, wherein the initial point position corresponding relation is used for describing a first corresponding relation between initial two-dimensional key points identified in the face image samples and grid vertexes in a standard face three-dimensional model, and the grid vertexes in the standard face three-dimensional model are provided with grid vertex indexes;
the sample reconstruction module is used for reconstructing a three-dimensional face according to the face image sample, the initial point position corresponding relation and the standard face three-dimensional model so as to obtain a sample face three-dimensional model corresponding to each face image sample;
The image extraction module is used for extracting images of the three-dimensional model of each sample face to obtain a sample two-dimensional image corresponding to the three-dimensional model of each sample face;
the key point identification module is used for carrying out first face key point identification on each sample two-dimensional image to obtain target two-dimensional key points, wherein the number of the target two-dimensional key points identified by one image is greater than that of the initial two-dimensional key points;
the corresponding relation acquisition module is used for acquiring a second corresponding relation between each target two-dimensional key point and the grid vertex index of the sample face three-dimensional model;
and the face point position corresponding relation processing module is used for determining, based on the second corresponding relations of the target two-dimensional key points with the same point position in the sample two-dimensional images, the target grid vertex index corresponding to the target two-dimensional key points with that point position, so as to obtain a target point position corresponding relation between the target two-dimensional key points of each point position obtained by the first face key point identification and the grid vertex indexes in the standard face three-dimensional model.
In some alternative embodiments, the data acquisition module is specifically configured to:
acquiring a plurality of face feature influence factors for describing different face features;
and acquiring a plurality of face image samples according to the face feature influence factors, wherein at least one face image sample is matched with the face features described by the face feature influence factors aiming at each face feature influence factor.
In some alternative embodiments, the sample reconstruction module is specifically configured to:
Performing second face key point recognition on each face image sample to obtain the position of an initial two-dimensional key point in each face image sample;
And respectively reconstructing the three-dimensional face of the standard face three-dimensional model according to the positions of the initial two-dimensional key points in the face image samples and the corresponding relation of the initial point positions to obtain a sample face three-dimensional model corresponding to each face image sample.
In some optional embodiments, the image extraction module is specifically configured to:
Respectively carrying out face angle adjustment on each sample face three-dimensional model;
And aiming at each sample face three-dimensional model, performing image conversion processing after each face angle adjustment so as to obtain sample two-dimensional images of different angles of each sample face three-dimensional model.
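The angle adjustment and image conversion described above can be illustrated numerically: rotate the model's vertices about the vertical axis and project them onto the image plane. This is a minimal sketch under stated assumptions: the vertex array is synthetic, the projection is orthographic, and a real implementation would render with texture, lighting and a camera model.

```python
import numpy as np

# Sketch: sample 2D images of one face model at several face angles.
def yaw_matrix(deg):
    """Rotation about the vertical (y) axis by deg degrees."""
    a = np.deg2rad(deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

def project_at_angles(vertices, angles=(-30, 0, 30)):
    """Return one (N, 2) array of image-plane coordinates per face angle
    (orthographic projection: drop the depth coordinate)."""
    return [(vertices @ yaw_matrix(a).T)[:, :2] for a in angles]

verts = np.random.default_rng(0).normal(size=(10, 3))  # placeholder mesh vertices
views = project_at_angles(verts)
print(len(views), views[0].shape)   # 3 (10, 2)
```

Extracting views at several angles gives each grid vertex more chances to be matched against a detected key point, which is what the later per-point vote relies on.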
In some optional embodiments, the face point location correspondence processing module is specifically configured to:
Counting the second corresponding relation of the target two-dimensional key points with the same point position in each sample two-dimensional image to obtain grid vertex index counting results corresponding to the target two-dimensional key points with the same point position;
And aiming at the target two-dimensional key points with the same point positions, taking the grid vertex index with the largest occurrence frequency in the grid vertex index statistical result as the target grid vertex index of the target two-dimensional key points.
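The statistic described above is a per-point majority vote. A minimal sketch (the vote lists are illustrative values, not data from the application):

```python
from collections import Counter

def target_vertex_index(matched_indices):
    """matched_indices: the grid vertex index that one target two-dimensional
    key point matched in each sample two-dimensional image. Return the index
    with the largest occurrence frequency (the mode)."""
    return Counter(matched_indices).most_common(1)[0][0]

# One key point matched vertex 1203 in three of five sample images:
print(target_vertex_index([1203, 1203, 1198, 1203, 1210]))  # 1203
```

Taking the mode rather than, say, the match from a single image makes the resulting target point position corresponding relation robust to occasional mismatches in individual sample images.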
A fourth aspect of an embodiment of the present application provides a face reconstruction device, where the device includes:
The face image acquisition module is used for acquiring a face image to be reconstructed and a target point position corresponding relation, wherein the target point position corresponding relation is used for describing the corresponding relation between a target two-dimensional key point obtained by carrying out first face key point identification on the image and grid vertexes in the standard face three-dimensional model;
The target two-dimensional key point recognition module is used for carrying out first face key point recognition on the face image to be reconstructed to obtain target two-dimensional key points in the face image to be reconstructed;
and the face reconstruction module is used for reconstructing the three-dimensional face of the standard face three-dimensional model according to the target two-dimensional key points and the corresponding relation of the target point positions so as to obtain a target reconstructed face three-dimensional model matched with the face image to be reconstructed.
A fifth aspect of an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a plurality of instructions; the processor loads instructions from the memory to execute steps in the face point location correspondence processing method provided in the first aspect of the embodiment of the present application or execute steps in the face reconstruction method provided in the second aspect of the embodiment of the present application.
A sixth aspect of the present application provides a computer readable storage medium, where the computer readable storage medium stores a plurality of instructions, where the instructions are adapted to be loaded by a processor to perform steps in a face point location correspondence processing method provided in the first aspect of the present application or perform steps in a face reconstruction method provided in the second aspect of the present application.
By adopting the scheme of the embodiment of the application, face image samples and an initial point position corresponding relation can be acquired, and sample face three-dimensional models can be reconstructed from them together with the standard face three-dimensional model. Sample two-dimensional images can then be extracted from the sample face three-dimensional models, and first face key point identification can be carried out on them to obtain target two-dimensional key points, the number of which in one image is greater than the number of initial two-dimensional key points. A second corresponding relation between each target two-dimensional key point and the grid vertex indexes of the sample face three-dimensional models can then be acquired, and the target grid vertex index of the target two-dimensional key points with the same point position can be determined from these second corresponding relations, so as to obtain the target point position corresponding relation between the target two-dimensional key points of each point position and the grid vertex indexes in the standard face three-dimensional model.
Thus, the target point position corresponding relation of the target two-dimensional key points can be obtained by processing the face image samples, and the number of target two-dimensional key points is greater than the number of initial two-dimensional key points provided for the standard face three-dimensional model. Therefore, three-dimensional face reconstruction can be performed based on a larger number of target two-dimensional key points, the requirement of flexibly adjusting the standard face three-dimensional model can be met, and the three-dimensional face reconstruction effect can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a face point location correspondence processing method provided by an embodiment of the present application;
Fig. 2 is a specific flow diagram of a face point location correspondence processing method according to an embodiment of the present application;
Fig. 3 is a schematic view of a two-dimensional image of a sample at one angle provided by an embodiment of the present application;
Fig. 4 is a schematic view of a two-dimensional image of a sample at another angle provided by an embodiment of the present application;
fig. 5 is a schematic flow chart of a face reconstruction method according to an embodiment of the present application;
Fig. 6 is a block diagram of a face point location correspondence processing device according to an embodiment of the present application;
fig. 7 is a block diagram of a face reconstruction device according to an embodiment of the present application;
fig. 8 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiment of the application provides a face point position corresponding relation processing method, a face reconstruction method, a device and a medium. Specifically, the face point position corresponding relation processing method and/or the face reconstruction method in the embodiment of the present application may be executed by a computer device, where the computer device may be a device such as a terminal or a server. The terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen device, a personal computer (PC), a personal digital assistant (PDA), etc. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data and artificial intelligence platforms.
The following will describe in detail. The following description of the embodiments is not intended to limit the preferred embodiments.
Referring to fig. 1, fig. 1 is a schematic flow chart of a face point location correspondence processing method according to an embodiment of the present application. The specific flow of the face point position corresponding relation processing method can be as follows:
101. The method comprises the steps of obtaining a face image sample and an initial point position corresponding relation, wherein the initial point position corresponding relation is used for describing a first corresponding relation between initial two-dimensional key points obtained by identification in the face image sample and grid vertexes in a standard face three-dimensional model, and the grid vertexes in the standard face three-dimensional model are provided with grid vertex indexes.
The face image sample is a sample image containing a face; in the embodiment of the application, the optimization of the initial point position corresponding relation is realized according to the face image samples.
The standard face three-dimensional model is a face three-dimensional model used in three-dimensional face reconstruction, and the initial point position corresponding relation of the standard face three-dimensional model is a point position corresponding relation preset in the standard face three-dimensional model. Specifically, the initial point position corresponding relation comprises a preset fixed number of initial two-dimensional key points and grid vertex indexes of grid vertices corresponding to the initial two-dimensional key points in the standard face three-dimensional model, so that the grid vertices of the face three-dimensional model corresponding to any one of the initial two-dimensional key points can be determined according to the initial point position corresponding relation in the three-dimensional face reconstruction process.
For example, in one application scenario, the standard face three-dimensional model is a preset 3DMM (3D Morphable Model), and the initial point position corresponding relation includes the point position corresponding relations of 68 preset initial two-dimensional key points. Since the corresponding relation covers only 68 points, the reconstruction effect of the three-dimensional face is limited. In addition, if a key point identification model that identifies more two-dimensional key points is used, some of the two-dimensional key points cannot find corresponding grid vertexes, so the flexibility of use is poor.
It should be noted that the facial shape and other features may be different for different persons. In order to enable the optimized point position corresponding relation to obtain a better matching corresponding effect of key points and model vertexes for two-dimensional face images with different facial features, in the embodiment of the application, face point position corresponding relation processing is performed on face image samples with different facial features, so that the optimized point position corresponding relation obtains a better face reconstruction effect on different two-dimensional face images.
In some embodiments of the present application, the acquiring a face image sample includes:
acquiring a plurality of face feature influence factors for describing different face features;
and acquiring a plurality of face image samples according to the face feature influence factors, wherein at least one face image sample is matched with the face features described by the face feature influence factors aiming at each face feature influence factor.
The face feature influencing factors can be preset or adjusted according to actual requirements, and are not particularly limited herein. Specifically, the above-mentioned face feature influencing factor is a factor that may influence the face feature (i.e., facial feature) in the face image sample, thereby influencing the position of the identified two-dimensional key point.
Fig. 2 is a specific flow chart of a face point location correspondence processing method according to an embodiment of the present application. As shown in fig. 2, in an application scenario, the above-mentioned facial feature influencing factors may include at least one of factors describing features of a face shape, an age, an expression, and the like, such as a round face, a square face, a young age group, an old age group, a smile, a laugh, and the like. And acquiring face image samples of different facial forms, different ages and different expressions from a preset data source by combining the face characteristic influence factors, so that the point position corresponding relation of the face with different characteristics is optimized, and the adaptability of the obtained target point position corresponding relation under different scenes is improved.
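The coverage requirement above (at least one sample matching each influencing factor) can be sketched with a small script. The factor names and values are examples taken from the text; how samples are actually drawn from the data source is not specified by the embodiment.

```python
from itertools import product

# Illustrative check: enumerate combinations of face feature influencing
# factors and verify every factor value is covered by at least one sample.
factors = {
    "face_shape": ["round", "square"],
    "age_group": ["young", "old"],
    "expression": ["smile", "laugh"],
}

# Here we simply take the full cross product as the sample set.
samples = [dict(zip(factors, combo)) for combo in product(*factors.values())]

for name, values in factors.items():
    for v in values:
        assert any(s[name] == v for s in samples), f"uncovered: {name}={v}"
print(len(samples))  # 8
```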
It should be noted that, in the embodiment of the present application, the acquired face image samples have consistent image quality (such as sharpness, image size, etc.), and the image quality can meet the requirements of the subsequent three-dimensional face reconstruction.
102. And carrying out three-dimensional face reconstruction according to the face image sample, the initial point position corresponding relation and the standard face three-dimensional model to obtain a sample face three-dimensional model corresponding to each face image sample.
Specifically, in the embodiment of the application, for each face image sample, three-dimensional face reconstruction is performed through the initial point position corresponding relation and the standard three-dimensional face model to obtain a sample face three-dimensional model matched with the face in the face image sample.
Specifically, the reconstructing a three-dimensional face according to the face image sample, the initial point location correspondence and the standard face three-dimensional model to obtain a sample face three-dimensional model corresponding to each face image sample includes:
Performing second face key point recognition on each face image sample to obtain the position of an initial two-dimensional key point in each face image sample;
And respectively reconstructing the three-dimensional face of the standard face three-dimensional model according to the positions of the initial two-dimensional key points in the face image samples and the corresponding relation of the initial point positions to obtain a sample face three-dimensional model corresponding to each face image sample.
The second face key point identification can be realized through a preset second key point identification model, which carries out face key point identification on an input image to determine the position of the initial two-dimensional key point at each point position in the image. For each face image sample, second face key point identification is carried out on the face image sample to obtain the positions of the initial two-dimensional key points in it; the grid vertex indexes corresponding to the initial two-dimensional key points in the standard face three-dimensional model are then determined according to the initial point position corresponding relation, and the standard face three-dimensional model is optimized and adjusted by combining this corresponding relation with the face image sample, until a sample face three-dimensional model matched with the face image sample is obtained through three-dimensional face reconstruction.
It should be noted that the above three-dimensional face reconstruction process may be performed at the server side. As shown in fig. 2, in one application scenario, the three-dimensional face reconstruction process may include reconstructing a mesh (i.e., a face mesh) and reconstructing a texture. Specifically, when reconstructing the mesh, the initial point position corresponding relation may be used; for example, it includes a corresponding relation between 68 grid vertexes of the face three-dimensional model and the 68 initial two-dimensional key points. When the mesh is rebuilt, the face mesh is reconstructed with this corresponding relation through an optimization method.
In some embodiments, the optimization method of three-dimensional face reconstruction is to solve the coefficients s, r, t, sp and ep, where r represents the rotation angle, in degrees; t represents the translation, in pixels; s represents the scaling factor, dimensionless; sp represents the shape coefficient, and ep represents the expression coefficient. Specifically, the calculation can be performed by the following formulas (1) and (2):
vertices = sp@shapeBase + ep@expBase + shape_mean (1)
pseudo_3d_landmarks = s * (vertices @ R) + T (2)
it should be noted that, the grid reconstruction loss function in the optimization process is shown in the following formula (3):
x=min(||pseudo_3d_landmarks - real_2d_landmarks||) (3)
The real_2d_landmarks are the identified 2d face key points (i.e., the initial two-dimensional key points). It should be noted that in an application scenario, the calculation may instead be performed for identified three-dimensional face key points, i.e., real_2d_landmarks is replaced by real_3d_landmarks. vertices represents the vertex coordinates; sp@shapeBase represents the shape offset vector, where shapeBase represents the shape base; ep@expBase represents the expression offset vector, where expBase represents the expression base; shape_mean represents the vertex coordinates of the average face; pseudo_3d_landmarks represents the reconstructed three-dimensional face key points; s * (vertices @ R) + T represents transforming the three-dimensional face from the world coordinate system into the image coordinate system; x represents the value of the mesh reconstruction loss function.
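The mesh fitting step described by formulas (1) to (3) can be sketched as a single loss evaluation. The sketch below is illustrative only: the array names, shapes (a flattened (3N,) vertex layout) and the landmark-index argument are assumptions, not details fixed by this description.

```python
import numpy as np

def mesh_reconstruction_loss(shape_base, exp_base, shape_mean, landmark_idx,
                             real_2d_landmarks, s, R, T, sp, ep):
    """One evaluation of formulas (1)-(3): build vertices from the shape
    and expression bases, transform them into the image coordinate
    system, and measure the distance to the identified 2D key points."""
    # (1) vertices = sp @ shapeBase + ep @ expBase + shape_mean
    vertices = sp @ shape_base + ep @ exp_base + shape_mean  # flattened (3N,)
    vertices = vertices.reshape(-1, 3)                       # (N, 3)
    # (2) pseudo_3d_landmarks = s * (vertices @ R) + T
    pseudo_3d = s * (vertices @ R) + T
    # keep only the vertices paired (via the point correspondence) with
    # the identified 2D key points, and drop the depth component
    pseudo_landmarks = pseudo_3d[landmark_idx, :2]
    # (3) x = || pseudo_3d_landmarks - real_2d_landmarks ||
    return np.linalg.norm(pseudo_landmarks - real_2d_landmarks)
```

An optimizer would minimize this value over s, R, T, sp and ep; the solver itself (e.g. least squares) is not prescribed here.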
Further, based on the picture information of the face image sample and the optimized mesh information, texture reconstruction is performed. In the texture reconstruction process, solving tp coefficients based on the following formulas (4), (5) and (6), wherein tp coefficients represent texture coefficients:
texture = tp@textureBase + texture_mean (4)
pseudo_texture = texture * light_fitting (5)
y=min(||pseudo_3d_texture - real_3d_texture||) (6)
Wherein texture represents the preliminarily reconstructed texture, without the illumination coefficient applied; tp@textureBase is the offset pixel value of the texture, obtained by multiplying the texture coefficient by the texture base; texture_mean represents the average texel value; pseudo_texture represents the reconstructed texture; light_fitting represents the reconstructed illumination coefficient; y represents the value of the texture reconstruction loss function; pseudo_3d_texture is equivalent to pseudo_texture, i.e., the reconstructed texture, and represents the picture pixel information under the reconstructed mesh.
Note that the symbol @ in the above formulas (1) to (6) represents matrix multiplication.
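The texture fitting step of formulas (4) to (6) follows the same pattern as the mesh step. The sketch below is a minimal illustration; the array names and shapes are assumptions, and a scalar illumination coefficient is used for simplicity.

```python
import numpy as np

def texture_reconstruction_loss(texture_base, texture_mean, tp,
                                light_fitting, real_texture):
    """One evaluation of formulas (4)-(6): the texture coefficient tp
    offsets the texture base, the average texel values are added, and
    the illumination coefficient modulates the result before comparing
    with the pixel information sampled under the reconstructed mesh."""
    # (4) texture = tp @ textureBase + texture_mean
    texture = tp @ texture_base + texture_mean
    # (5) pseudo_texture = texture * light_fitting
    pseudo_texture = texture * light_fitting
    # (6) y = || pseudo_texture - real_texture ||
    return np.linalg.norm(pseudo_texture - real_texture)
```

Minimizing this value over tp (and light_fitting) completes the texture reconstruction described above.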
103. And extracting the images of the three-dimensional model of each sample face to obtain a sample two-dimensional image corresponding to the three-dimensional model of each sample face.
The sample face three-dimensional model is a three-dimensional face model, and image extraction refers to obtaining a corresponding two-dimensional image from the three-dimensional face model. For example, the two-dimensional image may be extracted by using an image format conversion function provided by three-dimensional image processing software, or the three-dimensional face model may be directly captured (e.g., via screenshot) to obtain a corresponding two-dimensional image, or other image extraction methods may be used, which is not limited herein.
In some embodiments of the present application, in order to improve accuracy of identifying target keypoints for different positions of a face, a plurality of sample two-dimensional images of different angles may be obtained for each sample three-dimensional model of the face.
Because the collected two-dimensional image (for example, two-dimensional face image sample) can only represent a face under a certain angle, as shown in fig. 2, in some application scenarios, the sample two-dimensional image under different angles can be reconstructed according to the reconstructed sample face three-dimensional model.
Specifically, in some embodiments of the present application, the extracting the image of each of the three-dimensional model of the sample face to obtain a two-dimensional image of a sample corresponding to each of the three-dimensional model of the sample face includes:
Respectively carrying out face angle adjustment on each sample face three-dimensional model;
And aiming at each sample face three-dimensional model, performing image conversion processing after each face angle adjustment so as to obtain sample two-dimensional images of different angles of each sample face three-dimensional model.
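The angle-adjust-then-convert loop above can be sketched as follows. This is a simplified stand-in under stated assumptions: a real pipeline would rasterize the rotated mesh with a renderer, whereas this sketch rotates the vertices about the vertical axis by each preset face angle and keeps only the orthographic (x, y) projection as the "converted" two-dimensional sample.

```python
import numpy as np

def yaw_matrix(deg):
    """Rotation about the vertical (y) axis by the given face angle."""
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

def sample_views(vertices, angles=(-30, 0, 30)):
    """For each preset face angle, adjust the sample face mesh and
    perform a (stand-in) image conversion, yielding sample views of
    the same face at different angles."""
    views = []
    for deg in angles:
        rotated = vertices @ yaw_matrix(deg).T  # face angle adjustment
        views.append(rotated[:, :2])            # drop depth after rotation
    return views
```

The angle list, like the angles corresponding to the sample two-dimensional images in the text, would be preset according to actual requirements.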
Fig. 3 is a schematic view of a two-dimensional image of a sample at one angle provided by an embodiment of the present application, and fig. 4 is a schematic view of a two-dimensional image of a sample at another angle provided by an embodiment of the present application. As shown in fig. 3 and 4, two-dimensional images of samples at different angles are acquired, so that two-dimensional key points of targets at different positions can be accurately identified. It should be noted that, the angle corresponding to the two-dimensional image of the sample may be preset and adjusted according to the actual requirement, which is not limited herein.
In fig. 2 to 4, the face of the person is coded, but the present application is not limited thereto.
104. And carrying out first face key point recognition on each sample two-dimensional image to obtain target two-dimensional key points, wherein the number of the target two-dimensional key points recognized by one image is greater than that of the initial two-dimensional key points.
The first face key point recognition may be performed through a preset first face key point recognition model. It should be noted that the first face key point recognition model and the second key point recognition model may be the same model or may be different models.
In an application scenario, when the first face key point model and the second key point model are the same model, the same set of face key points is obtained after face key point recognition is performed on an input image through the model. A first number of target two-dimensional key points and a second number of initial two-dimensional key points are then selected from these key points according to actual requirements, the first number being larger than the second number.
In another application scenario, when the first face key point model and the second key point model are not the same model, more face key points can be identified through the first face key point model. The first face key point model may be a model obtained by modifying the second key point model, which is not particularly limited herein.
105. And acquiring a second corresponding relation between each target two-dimensional key point and the grid vertex index of the sample face three-dimensional model.
The second correspondence is used for describing which grid vertex index in the sample face three-dimensional model is matched and corresponds to the target two-dimensional key point.
In one application scenario, the target two-dimensional key points and the sample face three-dimensional model may be labeled by a user to determine the second correspondence.
In another application scenario, a second corresponding relation corresponding to each target two-dimensional key point can be determined according to the position relation between the sample two-dimensional image where the target two-dimensional key point is located and the corresponding sample face three-dimensional model. Specifically, each sample two-dimensional image is obtained by image extraction (e.g., screenshot) of a sample face three-dimensional model. And a determined position corresponding relation exists between each pixel point of the sample two-dimensional image and the grid vertex in the sample face three-dimensional model, and according to the position corresponding relation, the corresponding relation between the target two-dimensional key point in the sample two-dimensional image and the grid vertex in the sample face three-dimensional model can be determined, so that the index matching correspondence of the target two-dimensional key point and the grid vertex can be determined.
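One way to realize the position-based matching just described is a nearest-neighbor lookup between each target two-dimensional key point and the projected positions of the mesh vertices in the same sample image. The sketch below is an assumption about the matching rule, which the text itself does not prescribe.

```python
import numpy as np

def match_keypoints_to_vertices(keypoints_2d, projected_vertices):
    """Given target 2D key points in a sample two-dimensional image and
    the (x, y) position each mesh vertex of the sample face model
    projects to in that same image, return the grid vertex index
    nearest to each key point as its second correspondence."""
    indices = []
    for kp in keypoints_2d:
        d2 = np.sum((projected_vertices - kp) ** 2, axis=1)
        indices.append(int(np.argmin(d2)))
    return indices
```

Because each sample two-dimensional image is extracted from the model itself, these projected positions are known exactly, so no manual labeling is needed in this scenario.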
106. And determining target grid vertex indexes corresponding to the target two-dimensional key points with the same point positions based on the second corresponding relation of the target two-dimensional key points with the same point positions in the sample two-dimensional images so as to obtain target point position corresponding relation of the target two-dimensional key points of each point position in the target two-dimensional key points obtained by the first face key point identification and the grid vertex indexes in the standard face three-dimensional model.
Specifically, the determining, based on the second correspondence between the target two-dimensional key points with the same point location in each of the sample two-dimensional images, a target mesh vertex index corresponding to the target two-dimensional key points with the same point location includes:
Counting the second corresponding relation of the target two-dimensional key points with the same point position in each sample two-dimensional image to obtain grid vertex index counting results corresponding to the target two-dimensional key points with the same point position;
And aiming at the target two-dimensional key points with the same point positions, taking the grid vertex index with the largest occurrence frequency in the grid vertex index statistical result as the target grid vertex index of the target two-dimensional key points.
It should be noted that, each identified target two-dimensional key point has its own point location, for example, the target two-dimensional key point of the point location 1 represents the center point of the left eye, the target two-dimensional key point of the point location 2 represents the center point of the right eye, and so on. For different images, the meaning represented by the target two-dimensional key points with the same identified point positions is the same. For example, the target two-dimensional keypoints of point number 1 in the first image and the second image both represent the center point of the left eye.
In the process of reconstructing the three-dimensional face, the grid vertex index of each grid vertex in the standard face three-dimensional model is unchanged, namely the grid vertex indexes corresponding to the same grid vertex in the sample face three-dimensional model in the standard face three-dimensional model are the same.
Therefore, the target mesh vertex index corresponding to the target two-dimensional key point with the same point position (i.e., the target two-dimensional key point representing the same meaning) can be determined based on the second correspondence relationship of the target two-dimensional key points with the same point position in each of the sample two-dimensional images.
Specifically, as shown in fig. 2, based on images under different angles and different influencing factors (or different data sources), a grid vertex index statistical result corresponding to a target two-dimensional key point with the same point position can be automatically determined, the grid vertex index statistical result comprises grid vertex indexes which are different and are determined according to different sample two-dimensional images and are matched with the target two-dimensional key point, and the grid vertex index with the largest occurrence frequency is used as a target grid vertex index.
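The statistics-then-select step above amounts to a majority vote over the grid vertex indexes observed for one point position across the sample two-dimensional images. A minimal sketch (function name is illustrative):

```python
from collections import Counter

def vote_target_vertex_index(index_observations):
    """Majority vote: given the grid vertex indexes matched to the same
    point position across different sample two-dimensional images, the
    index with the largest occurrence frequency becomes the target grid
    vertex index for that point position."""
    counts = Counter(index_observations)   # grid vertex index statistics
    index, _ = counts.most_common(1)[0]
    return index
```

Running this vote once per point position yields the full target point position corresponding relation.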
Therefore, the target grid vertex index corresponding to the target two-dimensional key point of each point can be determined more accurately, and the corresponding relation of the target point positions for describing the corresponding relation of the target two-dimensional key point and the grid vertices in the standard face three-dimensional model is determined.
Therefore, in the embodiment of the application, the target point position corresponding relation is constructed according to influence factors such as different facial forms and different expressions, so that it can be adapted to the requirements of different scenes. Moreover, even when the number of target two-dimensional key points is greater than that of the initial two-dimensional key points, the user is not required to manually select points, which improves three-dimensional face reconstruction efficiency. The first face key point recognition model can also be used in combination with a customized face key point recognition model, giving stronger adaptability, and the approach can solve the problem of face reconstruction under different facial forms and different expressions.
In the embodiment of the application, a face image sample and an initial point position corresponding relation can be obtained, wherein the initial point position corresponding relation is used for describing a first corresponding relation between an initial two-dimensional key point obtained by identification in the face image sample and grid vertexes in a standard face three-dimensional model, and the grid vertexes in the standard face three-dimensional model are provided with grid vertex indexes; carrying out three-dimensional face reconstruction according to the face image sample, the initial point position corresponding relation and the standard face three-dimensional model to obtain a sample face three-dimensional model corresponding to each face image sample; extracting images of the three-dimensional model of each sample face to obtain a sample two-dimensional image corresponding to the three-dimensional model of each sample face; performing first face key point recognition on each sample two-dimensional image to obtain target two-dimensional key points, wherein the number of the target two-dimensional key points recognized by one image is greater than that of the initial two-dimensional key points; acquiring a second corresponding relation between each target two-dimensional key point and a grid vertex index of a sample face three-dimensional model; and determining target grid vertex indexes corresponding to the target two-dimensional key points with the same point positions based on the second corresponding relation of the target two-dimensional key points with the same point positions in the sample two-dimensional images so as to obtain target point position corresponding relation of the target two-dimensional key points of each point position in the target two-dimensional key points obtained by the first face key point identification and the grid vertex indexes in the standard face three-dimensional model.
Thus, the target point position corresponding relation corresponding to the target two-dimensional key points can be obtained by processing the face image samples, and the number of the target two-dimensional key points is greater than that of the initial two-dimensional key points provided by the standard face three-dimensional model. Therefore, three-dimensional face reconstruction can be performed based on a larger number of target two-dimensional key points, the requirements for flexible adjustment of the standard face three-dimensional model can be met, and the three-dimensional face reconstruction effect can be improved.
The second aspect of the embodiment of the application provides a face reconstruction method. Referring to fig. 5, fig. 5 is a flowchart of a face reconstruction method according to an embodiment of the present application. The specific flow of the face reconstruction method can be as follows:
501. And acquiring a face image to be reconstructed and a target point position corresponding relation, wherein the target point position corresponding relation is used for describing the corresponding relation between a target two-dimensional key point obtained by carrying out first face key point identification on the image and grid vertexes in a standard face three-dimensional model.
The face image to be reconstructed is a two-dimensional image for which three-dimensional face reconstruction is required.
It should be noted that, in the face reconstruction method provided by the embodiment of the present application, the target point location correspondence used is obtained by the face point location correspondence processing method provided by the first aspect of the embodiment of the present application. In terms of the nouns mentioned in the face reconstruction method in the embodiment of the present application, the same nouns as those mentioned in the face point location correspondence processing method in the embodiment of the present application represent the same meanings, and are not described herein again.
502. And carrying out first face key point recognition on the face image to be reconstructed to obtain target two-dimensional key points in the face image to be reconstructed.
Specifically, the target two-dimensional key points in the face image to be reconstructed can be obtained through the preset first face key point recognition model, which is not described herein again.
503. And carrying out three-dimensional face reconstruction on the standard face three-dimensional model according to the target two-dimensional key points and the corresponding relation of the target point positions so as to obtain a target reconstructed face three-dimensional model matched with the face image to be reconstructed.
It should be noted that the specific process of three-dimensional face reconstruction here may refer to the process described above of performing three-dimensional face reconstruction according to the face image sample, the initial point location corresponding relation and the standard face three-dimensional model to obtain a sample face three-dimensional model corresponding to each face image sample, which is not repeated herein; the difference is that the optimized target point location corresponding relation is used instead of the initial point location corresponding relation.
In the embodiment of the application, the three-dimensional face reconstruction is performed by using the optimized target point position corresponding relation, and the number of the target two-dimensional key points in the target point position corresponding relation is more than the number of the initial two-dimensional key points provided by the standard face three-dimensional model. Therefore, three-dimensional face reconstruction can be performed based on a larger number of target two-dimensional key points, the requirements for flexible adjustment of the standard face three-dimensional model can be met, and the three-dimensional face reconstruction effect can be improved.
With reference to fig. 6, fig. 6 is a structural block diagram of a face point location correspondence processing device provided by an embodiment of the present application, where the face point location correspondence processing device includes:
The data acquisition module 601 is configured to acquire a face image sample and an initial point location correspondence, where the initial point location correspondence is used to describe a first correspondence between an initial two-dimensional key point identified in the face image sample and a grid vertex in a standard face three-dimensional model, where the grid vertex in the standard face three-dimensional model is provided with a grid vertex index;
the sample reconstruction module 602 is configured to perform three-dimensional face reconstruction according to the face image samples, the initial point location correspondence, and the standard face three-dimensional model to obtain a sample face three-dimensional model corresponding to each face image sample;
the image extraction module 603 is configured to perform image extraction on each of the sample face three-dimensional models, so as to obtain a sample two-dimensional image corresponding to each of the sample face three-dimensional models;
A key point identifying module 604, configured to identify first face key points of each of the sample two-dimensional images to obtain target two-dimensional key points, where the number of the target two-dimensional key points identified by one image is greater than the number of the initial two-dimensional key points;
The correspondence acquiring module 605 is configured to acquire a second correspondence between each of the target two-dimensional key points and a grid vertex index of the sample face three-dimensional model;
the face point location correspondence processing module 606 is configured to determine, based on the second correspondence between the target two-dimensional key points with the same point location in each of the sample two-dimensional images, a target mesh vertex index corresponding to the target two-dimensional key points with the same point location, so as to obtain a target point location correspondence between the target two-dimensional key point of each of the target two-dimensional key points obtained by the first face key point identification and a mesh vertex index in a standard face three-dimensional model.
In some alternative embodiments, the data acquisition module 601 is specifically configured to:
acquiring a plurality of face feature influence factors for describing different face features;
and acquiring a plurality of face image samples according to the face feature influence factors, wherein at least one face image sample is matched with the face features described by the face feature influence factors aiming at each face feature influence factor.
In some alternative embodiments, the sample reconstruction module 602 is specifically configured to:
Performing second face key point recognition on each face image sample to obtain the position of an initial two-dimensional key point in each face image sample;
And respectively reconstructing the three-dimensional face of the standard face three-dimensional model according to the positions of the initial two-dimensional key points in the face image samples and the corresponding relation of the initial point positions to obtain a sample face three-dimensional model corresponding to each face image sample.
In some alternative embodiments, the image extraction module 603 is specifically configured to:
Respectively carrying out face angle adjustment on each sample face three-dimensional model;
And aiming at each sample face three-dimensional model, performing image conversion processing after each face angle adjustment so as to obtain sample two-dimensional images of different angles of each sample face three-dimensional model.
In some optional embodiments, the face point location correspondence processing module 606 is specifically configured to:
Counting the second corresponding relation of the target two-dimensional key points with the same point position in each sample two-dimensional image to obtain grid vertex index counting results corresponding to the target two-dimensional key points with the same point position;
And aiming at the target two-dimensional key points with the same point positions, taking the grid vertex index with the largest occurrence frequency in the grid vertex index statistical result as the target grid vertex index of the target two-dimensional key points.
The embodiment of the application discloses a face point position corresponding relation processing device, which is used for acquiring a face image sample and an initial point position corresponding relation through a data acquisition module 601, wherein the initial point position corresponding relation is used for describing a first corresponding relation between an initial two-dimensional key point obtained by identification in the face image sample and grid vertexes in a standard face three-dimensional model, and the grid vertexes in the standard face three-dimensional model are provided with grid vertex indexes; carrying out three-dimensional face reconstruction by a sample reconstruction module 602 according to the face image samples, the initial point position corresponding relation and the standard face three-dimensional model to obtain sample face three-dimensional models corresponding to the face image samples; carrying out image extraction on each sample face three-dimensional model through an image extraction module 603 to obtain a sample two-dimensional image corresponding to each sample face three-dimensional model; performing first face key point recognition on each sample two-dimensional image through a key point recognition module 604 to obtain target two-dimensional key points, wherein the number of the target two-dimensional key points recognized by one image is greater than that of the initial two-dimensional key points; acquiring a second corresponding relation between each target two-dimensional key point and a grid vertex index of the sample face three-dimensional model through a corresponding relation acquisition module 605; and determining, by the face point location correspondence processing module 606, a target grid vertex index corresponding to the target two-dimensional key point with the same point location based on the second correspondence between the target two-dimensional key points with the same point location in each of the sample two-dimensional images, so as to obtain a target point location correspondence between the target two-dimensional key point of each of the target two-dimensional key points obtained by the first face key point identification and the grid vertex index in the standard face three-dimensional model.
Thus, the target point position corresponding relation corresponding to the target two-dimensional key points can be obtained by processing the face image samples, and the number of the target two-dimensional key points is greater than that of the initial two-dimensional key points provided by the standard face three-dimensional model. Therefore, three-dimensional face reconstruction can be performed based on a larger number of target two-dimensional key points, the requirements for flexible adjustment of the standard face three-dimensional model can be met, and the three-dimensional face reconstruction effect can be improved.
Referring to fig. 7, fig. 7 is a block diagram of a face reconstruction device according to an embodiment of the present application, where the face reconstruction device includes:
The face image obtaining module 701 is configured to obtain a face image to be reconstructed, and a target point location correspondence, where the target point location correspondence is used to describe a correspondence between a target two-dimensional key point obtained by performing first face key point recognition on the image and a grid vertex in a standard face three-dimensional model;
the target two-dimensional key point recognition module 702 is configured to perform first face key point recognition on the face image to be reconstructed, so as to obtain a target two-dimensional key point in the face image to be reconstructed;
The face reconstruction module 703 is configured to reconstruct a three-dimensional face of the standard face three-dimensional model according to the target two-dimensional key point and the target point location correspondence, so as to obtain a target reconstructed face three-dimensional model that matches the face image to be reconstructed.
In this way, three-dimensional face reconstruction is performed by using the optimized target point position corresponding relation, and the number of target two-dimensional key points in the target point position corresponding relation is more than the number of initial two-dimensional key points provided by the standard face three-dimensional model. Therefore, three-dimensional face reconstruction can be performed based on a larger number of target two-dimensional key points, the requirements for flexible adjustment of the standard face three-dimensional model can be met, and the three-dimensional face reconstruction effect can be improved.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
Correspondingly, the embodiment of the application also provides an electronic device, which may be a terminal, such as a smart phone, a tablet personal computer, a notebook computer, a touch screen, a game machine, a personal computer (PC, Personal Computer), a personal digital assistant (PDA, Personal Digital Assistant) or other terminal equipment. As shown in fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 800 includes a processor 801 having one or more processing cores, a memory 802 having one or more computer-readable storage media, and a computer program stored on the memory 802 and executable on the processor. The processor 801 is electrically connected to the memory 802. Those skilled in the art will appreciate that the electronic device structure shown in the figures does not limit the electronic device, which may include more or fewer components than shown in fig. 8, may combine certain components, or may have a different arrangement of components.
The processor 801 is the control center of the electronic device 800, connects various parts of the entire electronic device 800 using various interfaces and lines, and performs various functions of the electronic device 800 and processes data by running or loading software programs and/or modules stored in the memory 802 and calling data stored in the memory 802, thereby performing overall monitoring of the electronic device 800. The processor 801 may be a central processing unit (CPU), a graphics processing unit (GPU), a network processor (NP, Network Processor), etc., and may implement or perform the methods, steps and logic blocks disclosed in embodiments of the present application.
In the embodiment of the present application, the processor 801 in the electronic device 800 loads the instructions corresponding to the processes of one or more application programs into the memory 802 according to the following steps, and the processor 801 executes the application programs stored in the memory 802, so as to implement various functions, for example:
acquiring a face image sample and an initial point position corresponding relation, wherein the initial point position corresponding relation is used for describing a first corresponding relation between initial two-dimensional key points identified in the face image sample and grid vertices in a standard face three-dimensional model, and the grid vertices in the standard face three-dimensional model each have a grid vertex index;
performing three-dimensional face reconstruction according to the face image sample, the initial point position corresponding relation, and the standard face three-dimensional model to obtain a sample face three-dimensional model corresponding to each face image sample;
performing image extraction on each sample face three-dimensional model to obtain a sample two-dimensional image corresponding to each sample face three-dimensional model;
performing first face key point recognition on each sample two-dimensional image to obtain target two-dimensional key points, wherein the number of target two-dimensional key points recognized in one image is greater than the number of initial two-dimensional key points;
acquiring a second corresponding relation between each target two-dimensional key point and a grid vertex index of the sample face three-dimensional model;
and determining, based on the second corresponding relations of the target two-dimensional key points having the same point position in the sample two-dimensional images, the target grid vertex index corresponding to the target two-dimensional key points having that point position, so as to obtain a target point position corresponding relation between the target two-dimensional key point of each point position obtained by the first face key point recognition and the grid vertex indexes in the standard face three-dimensional model.
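The final determination step above — for each keypoint position, keeping the grid vertex index that occurs most often across all sample two-dimensional images — can be sketched as a simple majority vote. This is an illustrative sketch rather than the patented implementation; the function name and the dictionary-of-Counters data layout are assumptions.

```python
from collections import Counter

def vote_target_vertex_indices(per_image_matches):
    """Aggregate the second corresponding relations across sample images.

    per_image_matches: list of dicts, one per sample two-dimensional image,
        mapping keypoint position id -> matched grid-vertex index.
    Returns a dict mapping keypoint position id -> target grid-vertex index
    (the most frequently matched index for that point position).
    """
    tallies = {}
    for matches in per_image_matches:
        for point_id, vertex_index in matches.items():
            tallies.setdefault(point_id, Counter())[vertex_index] += 1
    # For each keypoint position, the index with the highest count wins.
    return {point_id: counter.most_common(1)[0][0]
            for point_id, counter in tallies.items()}
```

With three sample images where point 0 is matched to vertex 101 twice and vertex 99 once, the vote selects 101 for point 0.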
Or for example:
acquiring a face image to be reconstructed and a target point position corresponding relation, wherein the target point position corresponding relation is used for describing a corresponding relation between target two-dimensional key points obtained by performing first face key point recognition on an image and grid vertices in a standard face three-dimensional model;
performing first face key point recognition on the face image to be reconstructed to obtain target two-dimensional key points in the face image to be reconstructed;
and performing three-dimensional face reconstruction on the standard face three-dimensional model according to the target two-dimensional key points and the target point position corresponding relation to obtain a target reconstructed face three-dimensional model matched with the face image to be reconstructed.
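The embodiment does not specify how the target two-dimensional key points and the target point position corresponding relation drive the reconstruction of the standard model. One minimal, hypothetical building block is a weak-perspective alignment that least-squares fits a scale and 2D translation mapping the matched standard-model vertices onto the detected keypoints; the camera model, function name, and data layout here are all assumptions for illustration.

```python
import numpy as np

def fit_weak_perspective(keypoints_2d, mesh_vertices, point_to_vertex):
    """Fit pts2d ~= s * verts_xy + t in the least-squares sense.

    keypoints_2d: dict keypoint position id -> (x, y) in the image
    mesh_vertices: (N, 3) array of standard-model grid vertices
    point_to_vertex: dict keypoint position id -> grid-vertex index
                     (the target point position corresponding relation)
    Returns (scale s, translation t) of a weak-perspective alignment.
    """
    ids = sorted(keypoints_2d)
    pts2d = np.array([keypoints_2d[p] for p in ids], dtype=float)
    verts = np.array([mesh_vertices[point_to_vertex[p]][:2] for p in ids])
    # Center both point sets, then solve for a single uniform scale.
    v_mean, p_mean = verts.mean(axis=0), pts2d.mean(axis=0)
    vc, pc = verts - v_mean, pts2d - p_mean
    s = (vc * pc).sum() / (vc ** 2).sum()
    t = p_mean - s * v_mean
    return s, t
```

In a full pipeline this pose estimate would typically be followed by fitting the model's shape and expression parameters; that stage is omitted here.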
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated here.
Optionally, as shown in fig. 8, the electronic device 800 further includes: a touch display 803, a radio frequency circuit 804, an audio circuit 805, an input unit 806, and a power supply 807. The processor 801 is electrically connected to the touch display 803, the radio frequency circuit 804, the audio circuit 805, the input unit 806, and the power supply 807, respectively. Those skilled in the art will appreciate that the electronic device structure shown in fig. 8 does not limit the electronic device, which may include more or fewer components than shown in fig. 8, combine certain components, or have a different arrangement of components.
The touch display 803 may be used to display a graphical user interface and receive operation instructions generated by the user acting on the graphical user interface. The touch display 803 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the electronic device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and generate corresponding operation instructions, which execute the corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 801, and can also receive and execute commands sent by the processor 801. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 801 to determine the type of touch event, and the processor 801 then provides a corresponding visual output on the display panel based on the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display 803 to implement the input and output functions.
In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to provide the input and output functions respectively. That is, the touch display 803 may also implement an input function as part of the input unit 806.
The radio frequency circuit 804 may be configured to transmit and receive radio frequency signals, so as to establish a wireless communication connection with a network device or another electronic device and exchange signals with it.
The audio circuit 805 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone. On one hand, the audio circuit 805 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 805 and converted into audio data; the audio data is then processed by the processor 801 and sent, for example, to another electronic device via the radio frequency circuit 804, or output to the memory 802 for further processing. The audio circuit 805 may also include an earbud jack to allow peripheral headphones to communicate with the electronic device.
The input unit 806 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 807 is used to power the various components of the electronic device 800. Alternatively, the power supply 807 may be logically connected to the processor 801 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system. The power supply 807 may also include any one or more of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 8, the electronic device 800 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods in the above embodiments may be completed by instructions, or by instructions controlling the relevant hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium storing a plurality of computer programs that can be loaded by a processor to execute the steps in any face point location corresponding relation processing method or any face reconstruction method provided by the embodiments of the present application. For example, the computer program may perform the following steps:
acquiring a face image sample and an initial point position corresponding relation, wherein the initial point position corresponding relation is used for describing a first corresponding relation between initial two-dimensional key points identified in the face image sample and grid vertices in a standard face three-dimensional model, and the grid vertices in the standard face three-dimensional model each have a grid vertex index;
performing three-dimensional face reconstruction according to the face image sample, the initial point position corresponding relation, and the standard face three-dimensional model to obtain a sample face three-dimensional model corresponding to each face image sample;
performing image extraction on each sample face three-dimensional model to obtain a sample two-dimensional image corresponding to each sample face three-dimensional model;
performing first face key point recognition on each sample two-dimensional image to obtain target two-dimensional key points, wherein the number of target two-dimensional key points recognized in one image is greater than the number of initial two-dimensional key points;
acquiring a second corresponding relation between each target two-dimensional key point and a grid vertex index of the sample face three-dimensional model;
and determining, based on the second corresponding relations of the target two-dimensional key points having the same point position in the sample two-dimensional images, the target grid vertex index corresponding to the target two-dimensional key points having that point position, so as to obtain a target point position corresponding relation between the target two-dimensional key point of each point position obtained by the first face key point recognition and the grid vertex indexes in the standard face three-dimensional model.
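The image-extraction step above yields sample two-dimensional images of the sample face three-dimensional model at different face angles, as claim 4 elaborates. The geometric core of that step — rotating the mesh about the vertical axis and projecting it to 2D — might be sketched as follows; the orthographic projection, the function name, and the omission of actual rasterization and texturing are assumptions made for illustration.

```python
import numpy as np

def extract_views(vertices, yaw_angles_deg):
    """Rotate a face mesh to several yaw angles and project each view to 2D.

    vertices: (N, 3) array of mesh vertex coordinates
    yaw_angles_deg: iterable of face angles (degrees) to render from
    Returns a list of (N, 2) arrays of projected vertex positions, one
    per angle, from which each sample two-dimensional image would be drawn.
    """
    views = []
    for deg in yaw_angles_deg:
        a = np.radians(deg)
        # Rotation about the vertical (y) axis by the yaw angle.
        r = np.array([[np.cos(a), 0.0, np.sin(a)],
                      [0.0, 1.0, 0.0],
                      [-np.sin(a), 0.0, np.cos(a)]])
        rotated = vertices @ r.T
        views.append(rotated[:, :2])  # drop depth: orthographic projection
    return views
```

A real renderer would additionally handle occlusion and shading; only the per-angle vertex geometry is shown here.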
Or for example:
acquiring a face image to be reconstructed and a target point position corresponding relation, wherein the target point position corresponding relation is used for describing a corresponding relation between target two-dimensional key points obtained by performing first face key point recognition on an image and grid vertices in a standard face three-dimensional model;
performing first face key point recognition on the face image to be reconstructed to obtain target two-dimensional key points in the face image to be reconstructed;
and performing three-dimensional face reconstruction on the standard face three-dimensional model according to the target two-dimensional key points and the target point position corresponding relation to obtain a target reconstructed face three-dimensional model matched with the face image to be reconstructed.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated here.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The computer program stored in the storage medium can execute the steps in any face point location corresponding relation processing method or any face reconstruction method provided by the embodiments of the present application, and can therefore implement any such method, as detailed in the previous embodiments and not repeated here.
According to one aspect of the present application, there is also provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the various alternative implementations of the above embodiments.
The face point location corresponding relation processing method, face reconstruction method, device, and medium provided by the embodiments of the present application have been described in detail above. Specific examples have been applied herein to explain the principles and implementations of the present application, and the description of the above embodiments is intended only to help in understanding the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in light of the ideas of the present application; in view of the foregoing, the content of this description should not be construed as limiting the present application.

Claims (9)

1. A face point location corresponding relation processing method, characterized by comprising the following steps:
acquiring a face image sample and an initial point position corresponding relation, wherein the initial point position corresponding relation is used for describing a first corresponding relation between an initial two-dimensional key point obtained by identification in the face image sample and grid vertexes in a standard face three-dimensional model, and the grid vertexes in the standard face three-dimensional model are provided with grid vertex indexes;
Carrying out three-dimensional face reconstruction according to the face image sample, the initial point position corresponding relation and the standard face three-dimensional model to obtain a sample face three-dimensional model corresponding to each face image sample;
extracting images of the three-dimensional model of each sample face to obtain a sample two-dimensional image corresponding to the three-dimensional model of each sample face;
Performing first face key point recognition on each sample two-dimensional image to obtain target two-dimensional key points, wherein the number of the target two-dimensional key points recognized by one sample two-dimensional image is greater than that of the initial two-dimensional key points;
Acquiring a second corresponding relation between each target two-dimensional key point and a grid vertex index of the sample face three-dimensional model;
Determining target grid vertex indexes corresponding to target two-dimensional key points with the same point positions based on the second corresponding relation of the target two-dimensional key points with the same point positions in the sample two-dimensional images, so as to obtain target point position corresponding relation between the target two-dimensional key points of each point position in the target two-dimensional key points obtained by the first face key point identification and the grid vertex indexes in the standard face three-dimensional model;
determining the target grid vertex index corresponding to the target two-dimensional key points with the same point positions based on the second corresponding relation of the target two-dimensional key points with the same point positions in the sample two-dimensional images comprises the following steps:
Counting the second corresponding relation of the target two-dimensional key points with the same point position in each sample two-dimensional image to obtain a grid vertex index counting result corresponding to the target two-dimensional key points with the same point position;
And aiming at the target two-dimensional key points with the same point positions, taking the grid vertex index with the largest occurrence frequency in the grid vertex index statistical result as the target grid vertex index of the target two-dimensional key points.
2. The face point location correspondence processing method according to claim 1, wherein the obtaining a face image sample includes:
acquiring a plurality of face feature influence factors for describing different face features;
and acquiring a plurality of face image samples according to the face feature influence factors, wherein at least one face image sample is matched with the face features described by the face feature influence factors aiming at each face feature influence factor.
3. The face point location correspondence processing method according to claim 1, wherein the performing three-dimensional face reconstruction according to the face image sample, the initial point location correspondence, and the standard face three-dimensional model to obtain a sample face three-dimensional model corresponding to each face image sample includes:
Performing second face key point recognition on each face image sample to obtain the position of an initial two-dimensional key point in each face image sample;
And respectively carrying out three-dimensional face reconstruction on the standard face three-dimensional model according to the positions of the initial two-dimensional key points in each face image sample and the initial point position corresponding relation to obtain a sample face three-dimensional model corresponding to each face image sample.
4. The face point location correspondence processing method according to claim 1, wherein the performing image extraction on each of the sample face three-dimensional models to obtain a sample two-dimensional image corresponding to each of the sample face three-dimensional models includes:
Respectively carrying out face angle adjustment on each sample face three-dimensional model;
And aiming at each sample face three-dimensional model, performing image conversion processing after each face angle adjustment so as to obtain sample two-dimensional images of different angles of each sample face three-dimensional model.
5. A method of face reconstruction, the method comprising:
acquiring a face image to be reconstructed and a target point position corresponding relation, wherein the target point position corresponding relation is used for describing the corresponding relation between a target two-dimensional key point obtained by carrying out first face key point identification on the image and grid vertexes in a standard face three-dimensional model;
Performing first face key point recognition on the face image to be reconstructed to obtain target two-dimensional key points in the face image to be reconstructed;
and reconstructing the three-dimensional face of the standard face three-dimensional model according to the target two-dimensional key points and the target point position corresponding relation to obtain a target reconstructed face three-dimensional model matched with the face image to be reconstructed, wherein the target point position corresponding relation is determined according to the face point position corresponding relation processing method provided by claim 1.
6. A face point location correspondence processing apparatus, the apparatus comprising:
The system comprises a data acquisition module, a data acquisition module and a data processing module, wherein the data acquisition module is used for acquiring a face image sample and an initial point position corresponding relation, the initial point position corresponding relation is used for describing a first corresponding relation between an initial two-dimensional key point obtained by recognition in the face image sample and grid vertexes in a standard face three-dimensional model, and the grid vertexes in the standard face three-dimensional model are provided with grid vertex indexes;
The sample reconstruction module is used for carrying out three-dimensional face reconstruction according to the face image samples, the initial point position corresponding relation and the standard face three-dimensional model so as to obtain sample face three-dimensional models corresponding to the face image samples;
the image extraction module is used for extracting images of the three-dimensional model of each sample face to obtain a sample two-dimensional image corresponding to the three-dimensional model of each sample face;
The key point identification module is used for carrying out first face key point identification on each sample two-dimensional image to obtain target two-dimensional key points, wherein the number of the target two-dimensional key points identified by one sample two-dimensional image is greater than that of the initial two-dimensional key points;
The corresponding relation acquisition module is used for acquiring a second corresponding relation between each target two-dimensional key point and the grid vertex index of the sample face three-dimensional model;
the face point position corresponding relation processing module is used for determining target grid vertex indexes corresponding to target two-dimensional key points with the same point positions based on the second corresponding relation of the target two-dimensional key points with the same point positions in the sample two-dimensional images so as to obtain target point position corresponding relation between the target two-dimensional key points of each point position in the target two-dimensional key points obtained by the first face key point identification and the grid vertex indexes in the standard face three-dimensional model;
The face point position corresponding relation processing module is further used for counting the second corresponding relation of the target two-dimensional key points with the same point position in each sample two-dimensional image to obtain grid vertex index counting results corresponding to the target two-dimensional key points with the same point position;
The face point position corresponding relation processing module is further used for regarding the grid vertex index with the largest occurrence number in the grid vertex index statistical result as a target grid vertex index of the target two-dimensional key points aiming at the target two-dimensional key points with the same point position.
7. A face reconstruction apparatus, the apparatus comprising:
The face image acquisition module is used for acquiring a face image to be reconstructed and a target point position corresponding relation, wherein the target point position corresponding relation is used for describing the corresponding relation between a target two-dimensional key point obtained by carrying out first face key point identification on the image and grid vertexes in the standard face three-dimensional model;
The target two-dimensional key point identification module is used for carrying out first face key point identification on the face image to be reconstructed to obtain target two-dimensional key points in the face image to be reconstructed;
The face reconstruction module is used for reconstructing the three-dimensional face of the standard face three-dimensional model according to the target two-dimensional key points and the target point position corresponding relation to obtain a target reconstructed face three-dimensional model matched with the face image to be reconstructed, wherein the target point position corresponding relation is determined according to the face point position corresponding relation processing method provided by claim 1.
8. An electronic device comprising a memory and a processor; the memory stores an application program, and the processor is configured to run the application program in the memory to execute the steps in the face point location correspondence processing method according to any one of claims 1 to 4 or the steps in the face reconstruction method according to claim 5.
9. A computer readable storage medium, characterized in that the computer readable storage medium stores a plurality of instructions adapted to be loaded by a processor to perform the steps of the face point correspondence processing method of any one of claims 1 to 4 or the steps of the face reconstruction method of claim 5.
CN202311511524.2A 2023-11-13 2023-11-13 Face point position corresponding relation processing method, face reconstruction method, device and medium Active CN117523136B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311511524.2A CN117523136B (en) 2023-11-13 2023-11-13 Face point position corresponding relation processing method, face reconstruction method, device and medium


Publications (2)

Publication Number Publication Date
CN117523136A CN117523136A (en) 2024-02-06
CN117523136B true CN117523136B (en) 2024-05-14

Family

ID=89752578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311511524.2A Active CN117523136B (en) 2023-11-13 2023-11-13 Face point position corresponding relation processing method, face reconstruction method, device and medium

Country Status (1)

Country Link
CN (1) CN117523136B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870420A (en) * 2021-10-11 2021-12-31 Oppo广东移动通信有限公司 Three-dimensional face model reconstruction method and device, storage medium and computer equipment
CN113902869A (en) * 2020-06-22 2022-01-07 北京达佳互联信息技术有限公司 Three-dimensional head grid generation method and device, electronic equipment and storage medium
CN114255308A (en) * 2021-12-14 2022-03-29 重庆邮电大学 Face blend shape generation method based on single-view three-dimensional reconstruction
CN115294301A (en) * 2022-08-11 2022-11-04 广州沃佳科技有限公司 Head model construction method, device, equipment and medium based on face image
CN115375835A (en) * 2022-07-13 2022-11-22 平安科技(深圳)有限公司 Three-dimensional model establishing method based on two-dimensional key points, computer and storage medium
CN116012550A (en) * 2023-01-30 2023-04-25 百果园技术(新加坡)有限公司 Face deformation target correction method and device, equipment, medium and product thereof
EP4217974A1 (en) * 2021-03-15 2023-08-02 Tencent America LLC Methods and systems for personalized 3d head model deformation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Image-Based Automatic 3D Face Generation and Editing Algorithms; Situ Hengge; China Master's Theses Full-text Database, Information Science and Technology Series; 2018-12-15; full text *

Also Published As

Publication number Publication date
CN117523136A (en) 2024-02-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant