CN112561784A - Image synthesis method, image synthesis device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112561784A
CN112561784A (application CN202011497181.5A; granted as CN112561784B)
Authority
CN
China
Prior art keywords
model
face
deformation
standard head
point set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011497181.5A
Other languages
Chinese (zh)
Other versions
CN112561784B (en)
Inventor
张宏龙
郭旭峰
吴闯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202011497181.5A priority Critical patent/CN112561784B/en
Publication of CN112561784A publication Critical patent/CN112561784A/en
Application granted granted Critical
Publication of CN112561784B publication Critical patent/CN112561784B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T3/04
    • G06T15/205 Image-based rendering
    • G06T17/205 Re-meshing
    • G06T3/4084 Transform-based scaling, e.g. FFT domain scaling
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention provides an image synthesis method, an image synthesis device, an electronic device and a storage medium, wherein the method comprises the following steps: acquiring a face model and a standard head model, and performing alignment transformation on the face model and the standard head model; deforming the standard head model based on the face model to obtain a first deformation model; deforming the face model based on the first deformation model to obtain a second deformation model; and performing boundary stitching on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model. With the image synthesis method, device, electronic device and storage medium, the face model is stitched onto the standard head model through a series of transformations and deformations between the two models, yielding a synthesized three-dimensional model that meets the display requirements of different virtual scenes.

Description

Image synthesis method, image synthesis device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image synthesis method and apparatus, an electronic device, and a storage medium.
Background
In recent years, with the development of related technologies, face-swapping technology has been widely used in many fields. For example, in the film and television industry, character A's face can be swapped onto character B to preview how character A would perform in a scene, or character A's face can be swapped onto a stylized photograph of character B to preview character A rendered in that style.
Existing face-swapping approaches based on traditional image processing algorithms work as follows: first, face segmentation and facial key-point alignment are performed on the source and target images; then a 2D mesh deformation algorithm warps the face region of the source image onto the corresponding face region of the target image; finally, edge texture fusion completes the swap. However, these approaches mainly handle two-dimensional scenes (such as pictures or videos) and cannot be used in a three-dimensional virtual scene.
Disclosure of Invention
The invention provides an image synthesis method, an image synthesis device, an electronic device and a storage medium to address the above problems in the prior art.
The invention provides an image synthesis method, which comprises the following steps:
acquiring a face model and a standard head model, and performing alignment transformation on the face model and the standard head model;
deforming the standard head model based on the face model to obtain a first deformation model;
deforming the face model based on the first deformation model to obtain a second deformation model;
and performing boundary stitching on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
According to the image synthesis method provided by the invention, the alignment transformation of the face model and the standard head model comprises:
performing rotation transformation on the face model based on the standard head model;
determining a scaling factor between the face model and the standard head model, and performing scaling transformation on the face model according to the scaling factor;
and determining a translation vector between the face model and the standard head model, and performing translation transformation on the face model according to the translation vector.
According to the image synthesis method provided by the invention, determining the scaling factor between the face model and the standard head model and performing scaling transformation on the face model according to the scaling factor comprises:
acquiring two first position points located in the face area of the standard head model, and obtaining a first characteristic distance between the two first position points;
acquiring two second position points in the face model, and obtaining a second characteristic distance between the two second position points;
determining the scaling factor according to the first characteristic distance and the second characteristic distance, and performing scaling transformation on the face model according to the scaling factor;
wherein the first position points and the second position points are determined with reference to facial features.
According to the image synthesis method provided by the invention, determining the translation vector between the face model and the standard head model and performing translation transformation on the face model according to the translation vector comprises:
acquiring a third position point located in the face area of the standard head model, and acquiring a fourth position point in the face model;
and obtaining a translation vector between the third position point and the fourth position point, and performing translation transformation on the face model according to the translation vector;
wherein the third position point and the fourth position point are determined with reference to facial features.
According to the image synthesis method provided by the invention, deforming the standard head model based on the face model to obtain the first deformation model comprises:
obtaining a head model with a face-cutting area based on the face proportions of the face model;
collecting edge points of the face-cutting area to form a first moving point set;
collecting edge points of the face model to form an edge point set, and obtaining a first target point set according to the first moving point set and the edge point set;
collecting position points in the region near the neck of the head model with the face-cutting area to form a first fixed point set;
and performing mesh deformation on the head model with the face-cutting area using a mesh deformation algorithm according to the coordinate information of the first moving point set, the first target point set and the first fixed point set, to obtain the first deformation model.
According to the image synthesis method provided by the invention, deforming the face model based on the first deformation model to obtain the second deformation model comprises:
collecting edge points of the face model to form a second moving point set, and collecting edge lines of the face-cutting area in the first deformation model to form an edge line set;
obtaining a second target point set according to the second moving point set and the edge line set;
taking the facial features in the face model as a reference region, and collecting position points in the reference region to form a second fixed point set;
and performing mesh deformation on the face model using a mesh deformation algorithm according to the coordinate information of the second moving point set, the second target point set and the second fixed point set, to obtain the second deformation model.
According to the image synthesis method provided by the invention, performing boundary stitching on the first deformation model and the second deformation model to obtain the synthesized three-dimensional model comprises:
obtaining an edge mesh set according to the edge lines of the face-cutting area in the first deformation model;
determining a sub-point set corresponding to each edge mesh from the second moving point set;
obtaining a new edge mesh set according to the non-edge points in each edge mesh and the corresponding sub-point set;
and performing boundary stitching according to the new edge mesh set to obtain the synthesized three-dimensional model.
According to the image synthesis method provided by the invention, after the three-dimensional model is obtained, the method further comprises:
performing texture fusion and rendering on the stitched boundary of the three-dimensional model to obtain a three-dimensional model suitable for viewing from different angles.
The present invention also provides an image synthesis device, comprising:
a transformation module, configured to acquire a face model and a standard head model and perform alignment transformation on the face model and the standard head model;
a first deformation module, configured to deform the standard head model based on the face model to obtain a first deformation model;
a second deformation module, configured to deform the face model based on the first deformation model to obtain a second deformation model;
and a synthesis module, configured to perform boundary stitching on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of any of the image synthesis methods described above.
The invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the image synthesis methods described above.
With the image synthesis method, device, electronic device and storage medium provided by the invention, the face model is stitched onto the standard head model through a series of transformations and deformations between the two models, yielding a synthesized three-dimensional model that meets the display requirements of different virtual scenes.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of an image synthesis method provided by the present invention;
FIG. 2 is a schematic structural diagram of an image synthesis apparatus provided in the present invention;
fig. 3 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The image synthesis method, apparatus, electronic device and storage medium provided by the present invention are described below with reference to fig. 1 to 3.
Fig. 1 shows a schematic flow chart of the image synthesis method provided by the invention. Referring to Fig. 1, the method comprises the following steps:
S11, acquiring a face model and a standard head model, and performing alignment transformation on the face model and the standard head model;
S12, deforming the standard head model based on the face model to obtain a first deformation model;
S13, deforming the face model based on the first deformation model to obtain a second deformation model;
and S14, performing boundary stitching on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
With respect to steps S11-S14, it should be noted that the image synthesis method of the invention is used to synthesize a face taken from a picture, or from a frame of a video, onto a standard head model. The standard head model is a three-dimensional model obtained by processing real head scan data, and therefore reflects real human characteristics.
In the invention, the standard head model can be replaced by a complete human body model with a face area; in that case the face is synthesized onto the body model, which is mainly suitable for viewing the overall effect in a virtual environment. The body model is likewise obtained by processing real body scan data and reflects real human characteristics.
In the invention, the face in a picture or video frame carries head pose information and expression information. Based on this information, face reconstruction is performed with a deep learning method built on the 3DMM parameterized face model, yielding a three-dimensional face model.
In the invention, the initial pose and scale of the face model and of the face on the standard head model (or body model) are usually inconsistent. The face model therefore needs to undergo rigid transformation (translation and rotation) and scaling, taking the standard head model as reference, so as to roughly align it with the face of the standard head model.
Different face models have different proportions, such as aspect ratio and fullness. There may therefore be differences between corresponding features on the face model and on the face of the standard head model. To ensure that the face region of the model after face swapping has the same aspect ratio, fullness and other characteristics as the face model, the standard head model needs to be deformed so as to adapt to different face models.
Since it is the standard head model that is adapted to the face model, the standard head model should be deformed based on feature information such as the aspect ratio and fullness of the face model, yielding a deformed standard head model, i.e. the first deformation model mentioned in the above step.
In the invention, the density of points on the face model is greater than the density of points on the standard head model; if the two models were combined directly, the result at the synthesis boundary would inevitably be poor. Therefore, when synthesizing the face model into the first deformation model, the coordinates of each point on the synthesis boundary of the face model are deformed based on the coordinates of each point on the synthesis boundary of the first deformation model, yielding a deformed face model, i.e. the second deformation model mentioned in the above step.
After the face model and the standard head model have been deformed for image synthesis, boundary stitching can be performed on the first deformation model and the second deformation model to obtain the synthesized three-dimensional model. Before rendering, a model is a mesh; stitching the boundary of the first and second deformation models therefore amounts to establishing a new mesh relation between the two models along the boundary, which realizes the boundary stitching and yields the synthesized three-dimensional model.
Here, the synthesized three-dimensional model is the model obtained after replacing the face of the standard head model with the face from the picture or video frame. Views of the synthesized three-dimensional model can then be captured and displayed at different angles according to specific requirements.
With the image synthesis method provided by the invention, the face model is stitched onto the standard head model through a series of transformations and deformations between the two models, yielding a synthesized three-dimensional model that meets the display requirements of different virtual scenes.
In a further explanation of the above method, the process of performing alignment transformation on the face model and the standard head model is described, as follows:
performing rotation transformation on the face model based on the standard head model;
determining a scaling factor between the face model and the standard head model, and performing scaling transformation on the face model according to the scaling factor;
and determining a translation vector between the face model and the standard head model, and performing translation transformation on the face model according to the translation vector.
In this regard, since the initial pose and scale of the face model and the standard head model (or body model) are not necessarily consistent, a rotation transformation, a scaling transformation and a translation transformation are required to align the face model with the standard head model.
In the rotation transformation, the coordinate system of the standard head model is taken as the world coordinate system. The face model is transformed into the world coordinate system based on its pose and expression parameters, so that the lengthwise direction of the face on the transformed face model is consistent with that of the face on the standard head model; this completes the rotation transformation of the face model.
In the scaling transformation, both the face model and the standard head model consist of many points; scaling the face model requires changing the positions of all of its points, while the points of the standard head model stay fixed. Therefore, a scaling factor between the two models is computed from the positions of selected points on each model, and all points of the face model are then repositioned according to this factor.
In the translation transformation, similarly, a translation vector between the face model and the standard head model is computed from the positions of selected points on each model; the translation vector represents the direction and distance by which each point moves. All points of the face model then change position according to the translation vector.
By establishing reference quantities (namely the scaling factor and the translation vector) between the face model and the standard head model, this further method can quickly accomplish the alignment between the two models.
In a further explanation of the above method, the process of determining the scaling factor between the face model and the standard head model and performing scaling transformation on the face model according to it is described, as follows:
acquiring two first position points located in the face area of the standard head model, and obtaining a first characteristic distance between the two first position points;
acquiring two second position points in the face model, and obtaining a second characteristic distance between the two second position points;
and determining the scaling factor according to the first characteristic distance and the second characteristic distance, and performing scaling transformation on the face model according to the scaling factor.
In this regard, both the face model and the standard head model have facial features: eyes, ears, nose, eyebrows and mouth. The points used as reference on the two models should therefore be chosen according to these features.
For example, the two first position points may be the two pupil centers, the two inner canthi, the two outer canthi, or the two eyebrow centers. Likewise, the two second position points are chosen from the same candidates.
The first characteristic distance is then the distance between the two pupil centers, the two inner canthi, the two outer canthi, or the two eyebrow centers on the standard head model; similarly, the second characteristic distance is the corresponding distance on the face model.
The ratio of the first characteristic distance to the second characteristic distance is computed; this ratio is the scaling factor. All points of the face model are then repositioned according to the scaling factor.
By constraining the scaling to a distance condition and determining the scaling factor from these distances, this further method can quickly determine the scale relation between the two models, achieving simple and accurate scaling.
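As a minimal sketch of the scaling step (the NumPy point-array representation and landmark choice are illustrative assumptions, not part of the patent text):

```python
import numpy as np

def scale_face_to_head(face_pts, face_lmk, head_lmk):
    """Scale all face-model points by the ratio of feature distances.

    face_pts: (N, 3) array of face-model vertices.
    face_lmk, head_lmk: pairs of landmark coordinates (e.g. the two pupil
    centers) on the face model and the standard head model, respectively.
    """
    d_head = np.linalg.norm(head_lmk[0] - head_lmk[1])  # first characteristic distance
    d_face = np.linalg.norm(face_lmk[0] - face_lmk[1])  # second characteristic distance
    s = d_head / d_face                                 # scaling factor
    return face_pts * s, s
```

For example, if the pupil distance is 4 on the head model and 2 on the face model, every face-model point is scaled by a factor of 2.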
In a further explanation of the above method, the process of determining the translation vector between the face model and the standard head model and performing translation transformation on the face model according to it is described, as follows:
acquiring a third position point located in the face area of the standard head model, and acquiring a fourth position point in the face model;
and obtaining a translation vector between the third position point and the fourth position point, and performing translation transformation on the face model according to the translation vector.
In this regard, both the face model and the standard head model have facial features: eyes, ears, nose, eyebrows and mouth. The points used as reference on the two models should therefore be chosen according to these features.
For example, the nose tip on the standard head model is taken as the target point (i.e. the third position point), the nose tip on the face model as the origin point (i.e. the fourth position point), and the translation vector V from the origin point to the target point is computed. Adding the translation vector V to every point of the face model completes the translation transformation.
By constraining the translation to a direction vector, this further method can quickly determine the movement relation between the two models, achieving simple and accurate translation.
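The translation step can be sketched in the same style (again assuming NumPy point arrays; the nose-tip landmark is the example the text gives):

```python
import numpy as np

def translate_face_to_head(face_pts, face_nose_tip, head_nose_tip):
    """Translate all face-model points by the origin-to-target vector V."""
    v = head_nose_tip - face_nose_tip   # translation vector V
    return face_pts + v, v
```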
In a further explanation of the above method, the process of deforming the standard head model based on the face model to obtain the first deformation model is described, as follows:
obtaining a head model with a face-cutting area based on the face proportions of the face model;
collecting edge points of the face-cutting area to form a first moving point set;
collecting edge points of the face model to form an edge point set, and obtaining a first target point set according to the first moving point set and the edge point set;
collecting position points in the region near the neck of the head model with the face-cutting area to form a first fixed point set;
and performing mesh deformation on the head model with the face-cutting area using a mesh deformation algorithm according to the coordinate information of the first moving point set, the first target point set and the first fixed point set, to obtain the first deformation model.
In this regard, in the invention a face-cutting area is cut out of the standard head model with reference to the face proportions of the face model; in effect, the face originally carried by the standard head model is removed. For example, the standard head model may then show a black area where the face used to be; this black area is the face-cutting area.
Since a model is a mesh before rendering, and a mesh is typically composed of many triangles, there are many points on the edge of the face-cutting area; these points are collected to form the first moving point set. The first moving point set actually stores the coordinates of each point in the world coordinate system.
Similarly, the edge points of the face model are collected to form an edge point set, and the first target point set is then selected from the edge point set according to the coordinate information of the first moving point set. That is, each point in the first moving point set is matched to a corresponding edge point on the face model; these edge points are the target points, and they are collected to form the target point set.
In the invention, for example, for each point p1 in the moving point set P1, its nearest neighbor q1 in the edge point set Q1 is computed, and q1 is taken as the target point of p1.
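The nearest-neighbor matching between the moving point set and the edge point set can be sketched as follows (a brute-force search over NumPy arrays; a k-d tree would be the usual optimization, and neither representation is prescribed by the patent):

```python
import numpy as np

def nearest_targets(moving_pts, edge_pts):
    """For each point p1 in the moving set, return its nearest neighbor q1
    in the edge point set; the result is the target point set."""
    diffs = moving_pts[:, None, :] - edge_pts[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)      # pairwise distance matrix
    return edge_pts[dists.argmin(axis=1)]       # one target per moving point
```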
Typically, a standard head model may be configured on a body model according to certain requirements, i.e. placed on the neck of the body model. In the situation that the neck size is configured, the boundary connected with the neck on the standard head model needs to be fixed without deformation. To this end, a set of position points on a region near the neck on the head model having the face cutting region is formed as a first fixed point set.
When the standard head model includes a neck region, any one turn of closed points on the neck region may be selected as the set of fixation points.
In the scheme of replacing the standard head model by the manikin, any circle of closed points on the neck part can be selected as a fixed point set.
In the deformation process, the coordinate information of the first moving point set, the first target point set and the first fixed point set is processed by a grid deformation algorithm based on Laplace or a gradient field.
For example, a mesh deformation matrix and constraint coordinates are established from the coordinate information of the first moving point set, the first target point set and the first fixed point set, and the position of each vertex after deformation — i.e. the vertex coordinates — is determined from the mesh deformation matrix and the constraint coordinates. Deformation is then accomplished from these vertex coordinates.
For example, LV = δ, where L is the mesh deformation matrix, V is the vertex coordinates, and δ is the constraint coordinates.
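A minimal least-squares sketch of this system: the Laplacian rows encode LV = δ, and extra weighted rows pin the fixed points in place and drag the moving points to their targets. The uniform Laplacian, the weight w, and all names here are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def laplacian_deform(verts, neighbors, handles, w=10.0):
    """Least-squares Laplacian editing: solve [L; C] V = [delta; c],
    where L is the uniform graph Laplacian, delta the original
    differential coordinates, and C soft position constraints for the
    handle vertices (fixed points pinned, moving points pulled to targets)."""
    n = len(verts)
    L = np.zeros((n, n))
    for i, nbrs in neighbors.items():
        L[i, i] = 1.0
        for j in nbrs:
            L[i, j] = -1.0 / len(nbrs)
    delta = L @ verts                        # differential coordinates
    rows, rhs = [L], [delta]
    for idx, target in handles.items():      # one soft constraint row each
        c = np.zeros(n)
        c[idx] = w
        rows.append(c[None, :])
        rhs.append(w * np.asarray(target, dtype=float)[None, :])
    A, b = np.vstack(rows), np.vstack(rhs)
    V, *_ = np.linalg.lstsq(A, b, rcond=None)
    return V

# Three collinear vertices; pin vertex 0, drag vertex 2 to x = 3.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
nbrs = {0: [1], 1: [0, 2], 2: [1]}
V = laplacian_deform(verts, nbrs, handles={0: [0, 0, 0], 2: [3, 0, 0]})
```

The interior vertex lands near x = 1.5, i.e. the Laplacian term spreads the stretch evenly between the two handles.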
By adopting a mesh deformation algorithm driven by the coordinate information of edge points in the model, this further method can deform the model quickly to obtain the required deformed model.
In further explanation of the above method, the process of deforming the face model based on the first deformation model to obtain the second deformation model is mainly explained as follows:
collecting edge points of the face model to form a second moving point set, and collecting edge lines of a face cutting area in the first deformation model to form an edge line set;
obtaining a second target point set according to the second moving point set and the edge line set;
taking facial features as a reference region in the face model, and collecting position points on the reference region to form a second fixed point set;
and carrying out mesh deformation on the face model by adopting a mesh deformation algorithm according to the coordinate information of the second moving point set, the second target point set and the second fixed point set to obtain a second deformation model.
In this regard, in the present invention, since the face model is to be deformed, its edge points are collected to form the second moving point set. Meanwhile, the edge lines of the face cutting region in the first deformation model are collected to form an edge line set. It should be noted here that the face cutting region has many edge points, and the connection line between two adjacent edge points is an edge line; all edge lines together form the edge line set.
As stated above, the density of points on the face model is greater than the density of points on the standard head model. Therefore, for each moving point, the corresponding target point is to be found on an edge line, and the target points found on the edge lines are collected to form the target point set.
For example, the edge points of the face model form the moving point set P2, and the edge lines of the face cutting region form the edge line set L2. For each point p2 in the moving point set P2, its closest projection point q2 on an edge line l2 is computed, and the position of q2 is taken as the target point of p2.
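This closest-projection step can be sketched as projecting p2 onto each edge line segment, clamped to the segment's endpoints, and keeping the nearest result (all names are illustrative):

```python
import numpy as np

def project_to_segment(p, a, b):
    """Project point p onto segment ab, clamped to the endpoints."""
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)
    return a + np.clip(t, 0.0, 1.0) * ab

def closest_on_edges(p, segments):
    """Closest projection q2 of p2 over all edge lines in the set."""
    candidates = [project_to_segment(p, a, b) for a, b in segments]
    dists = [np.linalg.norm(p - q) for q in candidates]
    return candidates[int(np.argmin(dists))]

p2 = np.array([0.5, 1.0])
segs = [(np.array([0.0, 0.0]), np.array([1.0, 0.0])),
        (np.array([0.0, 5.0]), np.array([1.0, 5.0]))]
print(closest_on_edges(p2, segs))  # [0.5 0. ]
```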
Facial features are taken as reference regions in the face model, and position points on the reference regions are collected to form the second fixed point set. For example, feature points on the eyes, nose, mouth and eyebrows may be selected as the fixed points.
In the deformation process, the coordinate information of the second moving point set, the second target point set and the second fixed point set is processed by a Laplacian- or gradient-field-based mesh deformation algorithm.
For example, a mesh deformation matrix and constraint coordinates are established from the coordinate information of the second moving point set, the second target point set and the second fixed point set, and the position of each vertex after deformation — i.e. the vertex coordinates — is determined from the mesh deformation matrix and the constraint coordinates. Deformation is then accomplished from these vertex coordinates.
For example, LV = δ, where L is the mesh deformation matrix, V is the vertex coordinates, and δ is the constraint coordinates.
By adopting a mesh deformation algorithm driven by the coordinate information of edge points in the model, this further method can deform the model quickly to obtain the required deformed model.
In further explanation of the above method, the process of performing boundary stitching on the first deformation model and the second deformation model to obtain the synthesized three-dimensional model is mainly explained as follows:
obtaining an edge grid set according to edge lines of the face cutting area in the first deformation model;
determining a set of sub-points corresponding to each edge mesh from the second set of moving points;
obtaining a new edge grid set according to the non-edge points and the sub-point sets in the edge grid;
and performing boundary stitching processing according to the new edge mesh set to obtain a synthesized three-dimensional model.
In this regard, in the present invention, the model is a mesh composed of points forming triangles. The corresponding edge mesh can therefore be obtained from the edge lines of the face cutting region, i.e. one side of such a triangle is an edge line. The edge meshes corresponding to all the edge lines together form the edge mesh set.
Among all the edge points of the face model, the edge points that can be projected onto a given edge line are taken as the sub-point set corresponding to that edge line; that is, one sub-point set corresponds to one edge mesh.
For example, with 100 edge points, if the edge points numbered 1-6 can be projected onto the edge line numbered 1, then the edge points numbered 1-6 form the sub-point set corresponding to the edge line numbered 1.
Each edge mesh has one non-edge point. The edge line is deleted, and the non-edge point is connected with each point in the corresponding sub-point set to form new edge meshes. After all non-edge points have been connected to the points in their sub-point sets, a new set of meshes is formed along the boundary between the two models. Boundary stitching is then performed according to the new edge mesh set to obtain the synthesized three-dimensional model.
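The re-triangulation described above can be sketched as a fan: the non-edge (apex) vertex of each original edge triangle is connected to the consecutive sub-points projected onto the deleted edge line (the vertex indices here are illustrative):

```python
def fan_triangulate(apex, sub_points):
    """Replace an edge triangle: delete its edge line and connect the
    apex (the non-edge vertex) to each consecutive pair of sub-points
    lying on that edge, producing a fan of new triangles."""
    return [(apex, sub_points[i], sub_points[i + 1])
            for i in range(len(sub_points) - 1)]

# Apex vertex 7, sub-points 1..4 projected onto the deleted edge line:
print(fan_triangulate(7, [1, 2, 3, 4]))
# [(7, 1, 2), (7, 2, 3), (7, 3, 4)]
```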
By dividing the edge points of the face model into sub-point sets corresponding to the edge lines, this method establishes corresponding new edge meshes within the original edge meshes, which facilitates stitching of the boundary.
In further explanation of the above method, texture fusion and rendering are performed on the stitched boundary of the three-dimensional model, so as to obtain a three-dimensional model that can be displayed from different viewing angles.
The image synthesizing apparatus provided by the present invention is described below, and the image synthesizing apparatus described below and the image synthesizing method described above may be referred to in correspondence with each other.
Fig. 2 shows a schematic structural diagram of an image synthesis apparatus provided by the present invention, referring to fig. 2, the apparatus includes a transformation module 21, a first deformation module 22, a second deformation module 23, and a synthesis module 24, wherein:
the transformation module 21 is configured to obtain a face model and a standard head model, and perform alignment transformation on the face model and the standard head model;
the first deformation module 22 is configured to deform the standard head model based on the face model to obtain a first deformation model;
the second deformation module 23 is configured to deform the face model based on the first deformation model to obtain a second deformation model;
and the synthesis module 24 is configured to perform boundary stitching on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
In a further description of the above apparatus, the transformation module, during the process of performing alignment transformation on the face model and the standard head model, is specifically configured to:
carrying out rotation transformation on the face model based on the standard head model;
determining a scaling factor between the human face model and the standard head model, and carrying out scaling transformation on the human face model according to the scaling factor;
and determining a translation vector between the human face model and the standard head model, and performing translation transformation on the human face model according to the translation vector.
In a further description of the above apparatus, the transformation module is specifically configured to, in a process of determining a scaling factor between the face model and the standard head model and performing scaling transformation on the face model according to the scaling factor:
acquiring two first position points in the face area of the standard head model, and acquiring a first characteristic distance between the two first position points;
acquiring two second position points in the face model, and acquiring a second characteristic distance between the two second position points;
determining a scaling factor according to the first characteristic distance and the second characteristic distance, and carrying out scaling transformation on the human face model according to the scaling factor;
the first position point and the second position point are determined by taking facial features as reference.
In a further description of the above apparatus, the transformation module is specifically configured to, in a process of determining a translation vector between the face model and the standard head model and performing translation transformation on the face model according to the translation vector:
acquiring a third position point in the standard head model, which is positioned in the face area, and acquiring a fourth position point in the face model;
and obtaining a translation vector between the third position point and the fourth position point, and performing translation transformation on the face model according to the translation vector.
And the third position point and the fourth position point are determined by taking facial features as a reference.
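These scaling and translation steps can be sketched together: the scaling factor is the ratio of the two characteristic distances, and the translation vector moves a reference feature point of the scaled face model onto its counterpart on the standard head model. This is a hypothetical sketch, with rotation assumed already applied:

```python
import numpy as np

def scale_and_translate(face_pts, d_head, d_face, p_head, p_face):
    """Align the face model to the standard head model.

    d_head / d_face : first / second characteristic distance
    p_head / p_face : third / fourth position point (a facial feature)."""
    s = d_head / d_face              # scaling factor
    t = p_head - p_face * s          # translation vector
    return face_pts * s + t

face = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
out = scale_and_translate(face, d_head=1.0, d_face=2.0,
                          p_head=np.array([5.0, 5.0, 5.0]),
                          p_face=np.array([0.0, 0.0, 0.0]))
print(out)  # [[5. 5. 5.] [6. 5. 5.]]
```

After the transform, the fourth position point of the face model coincides with the third position point on the standard head model, and distances are rescaled by the factor s.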
In a further description of the above apparatus, the first deformation module is specifically configured to deform the standard head model based on the face model to obtain a first deformation model in a processing process of the first deformation model:
obtaining a head model with a face cutting area based on the face proportion of the face model;
collecting edge points of the face cutting area to form a first moving point set;
collecting edge points of the face model to form an edge point set, and obtaining a first target point set according to the first moving point set and the edge point set;
collecting position points on a region close to the neck on a head model with a face cutting region to form a first fixed point set;
and carrying out mesh deformation on the head model with the face cutting area by adopting a mesh deformation algorithm according to the coordinate information of the first moving point set, the first target point set and the first fixed point set to obtain a first deformation model.
In a further description of the above apparatus, the second deformation module, in a process of deforming the face model based on the first deformation model to obtain a second deformation model, is specifically configured to:
collecting edge points of the face model to form a second moving point set, and collecting edge lines of a face cutting area in the first deformation model to form an edge line set;
obtaining a second target point set according to the second moving point set and the edge line set;
taking facial features as a reference region in the face model, and collecting position points on the reference region to form a second fixed point set;
and carrying out mesh deformation on the face model by adopting a mesh deformation algorithm according to the coordinate information of the second moving point set, the second target point set and the second fixed point set to obtain a second deformation model.
In a further description of the above apparatus, the synthesizing module is specifically configured to, in a process of performing a boundary stitching process on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model:
obtaining an edge grid set according to edge lines of the face cutting area in the first deformation model;
determining a set of sub-points corresponding to each edge mesh from the second set of moving points;
obtaining a new edge grid set according to the non-edge points and the sub-point sets in the edge grid;
and performing boundary stitching processing according to the new edge mesh set to obtain a synthesized three-dimensional model.
In the further description of the above apparatus, the apparatus further includes an adjusting module, configured to perform texture fusion and rendering processing on the stitched boundary of the three-dimensional model after obtaining the three-dimensional model, so as to obtain a three-dimensional model suitable for different angle transformations.
Since the principle of the apparatus according to the embodiment of the present invention is the same as that of the method according to the above embodiment, further details are not repeated here.
It should be noted that, in the embodiment of the present invention, the relevant functional module may be implemented by a hardware processor (hardware processor).
According to the image synthesis method provided by the invention, through various transformations and deformations between the face model and the standard head model, the face model is stitched onto the standard head model to obtain a synthesized three-dimensional model, meeting the display requirements of different virtual scenes.
Fig. 3 is a schematic physical structure diagram of an electronic device, which may include, as shown in fig. 3: a processor (processor) 31, a communication interface (Communications Interface) 32, a memory (memory) 33 and a communication bus 34, wherein the processor 31, the communication interface 32 and the memory 33 communicate with each other via the communication bus 34. The processor 31 may call logic instructions in the memory 33 to perform an image synthesis method comprising: acquiring a human face model and a standard head model, and carrying out alignment transformation on the human face model and the standard head model; based on the face model, deforming the standard head model to obtain a first deformation model; deforming the face model based on the first deformation model to obtain a second deformation model; and performing boundary stitching processing on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
In addition, the logic instructions in the memory 33 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the image synthesis method provided by the above methods, the method comprising: acquiring a human face model and a standard head model, and carrying out alignment transformation on the human face model and the standard head model; based on the face model, deforming the standard head model to obtain a first deformation model; deforming the face model based on the first deformation model to obtain a second deformation model; and performing boundary stitching processing on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the image synthesis methods provided above, the method comprising: acquiring a human face model and a standard head model, and carrying out alignment transformation on the human face model and the standard head model; based on the face model, deforming the standard head model to obtain a first deformation model; deforming the face model based on the first deformation model to obtain a second deformation model; and performing boundary stitching processing on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. An image synthesis method, comprising:
acquiring a human face model and a standard head model, and carrying out alignment transformation on the human face model and the standard head model;
based on the face model, deforming the standard head model to obtain a first deformation model;
deforming the face model based on the first deformation model to obtain a second deformation model;
and performing boundary stitching processing on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
2. The image synthesis method of claim 1, wherein the performing the alignment transformation of the face model with the standard head model comprises:
carrying out rotation transformation on the face model based on the standard head model;
determining a scaling factor between the human face model and the standard head model, and carrying out scaling transformation on the human face model according to the scaling factor;
and determining a translation vector between the human face model and the standard head model, and performing translation transformation on the human face model according to the translation vector.
3. The image synthesis method of claim 2, wherein determining a scaling factor between the face model and the standard head model, and performing scaling transformation on the face model according to the scaling factor comprises:
acquiring two first position points in the face area of the standard head model, and acquiring a first characteristic distance between the two first position points;
acquiring two second position points in the face model, and acquiring a second characteristic distance between the two second position points;
determining a scaling factor according to the first characteristic distance and the second characteristic distance, and carrying out scaling transformation on the human face model according to the scaling factor;
the first position point and the second position point are determined by taking facial features as reference.
4. The image synthesis method according to claim 2, wherein the determining a translation vector between the face model and the standard head model, and performing translation transformation on the face model according to the translation vector comprises:
acquiring a third position point in the standard head model, which is positioned in the face area, and acquiring a fourth position point in the face model;
and obtaining a translation vector between the third position point and the fourth position point, and performing translation transformation on the face model according to the translation vector.
And the third position point and the fourth position point are determined by taking facial features as a reference.
5. The image synthesis method according to claim 1, wherein the deforming the standard head model based on the face model to obtain a first deformation model comprises:
obtaining a head model with a face cutting area based on the face proportion of the face model;
collecting edge points of the face cutting area to form a first moving point set;
collecting edge points of the face model to form an edge point set, and obtaining a first target point set according to the first moving point set and the edge point set;
collecting position points on a region close to the neck on a head model with a face cutting region to form a first fixed point set;
and carrying out mesh deformation on the head model with the face cutting area by adopting a mesh deformation algorithm according to the coordinate information of the first moving point set, the first target point set and the first fixed point set to obtain a first deformation model.
6. The image synthesis method according to claim 5, wherein the deforming the face model based on the first deformation model to obtain a second deformation model comprises:
collecting edge points of the face model to form a second moving point set, and collecting edge lines of a face cutting area in the first deformation model to form an edge line set;
obtaining a second target point set according to the second moving point set and the edge line set;
taking facial features as a reference region in the face model, and collecting position points on the reference region to form a second fixed point set;
and carrying out mesh deformation on the face model by adopting a mesh deformation algorithm according to the coordinate information of the second moving point set, the second target point set and the second fixed point set to obtain a second deformation model.
7. The image synthesis method according to claim 6, wherein the performing a boundary stitching process on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model includes:
obtaining an edge grid set according to edge lines of the face cutting area in the first deformation model;
determining a set of sub-points corresponding to each edge mesh from the second set of moving points;
obtaining a new edge grid set according to the non-edge points and the sub-point sets in the edge grid;
and performing boundary stitching processing according to the new edge mesh set to obtain a synthesized three-dimensional model.
8. The image synthesis method according to claim 1 or 6, further comprising, after obtaining the three-dimensional model:
and performing texture fusion and rendering processing on the stitched boundary of the three-dimensional model to obtain the three-dimensional model suitable for different angle conversion.
9. An image synthesizing apparatus, comprising:
the transformation module is used for acquiring a human face model and a standard head model and carrying out alignment transformation on the human face model and the standard head model;
the first deformation module is used for deforming the standard head model based on the human face model to obtain a first deformation model;
the second deformation module is used for deforming the face model based on the first deformation model to obtain a second deformation model;
and the synthesis module is used for performing boundary stitching processing on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the image synthesis method according to any one of claims 1 to 8 are implemented when the program is executed by the processor.
11. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the image synthesis method according to any one of claims 1 to 8.
CN202011497181.5A 2020-12-17 2020-12-17 Image synthesis method, device, electronic equipment and storage medium Active CN112561784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011497181.5A CN112561784B (en) 2020-12-17 2020-12-17 Image synthesis method, device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112561784A true CN112561784A (en) 2021-03-26
CN112561784B CN112561784B (en) 2024-04-09

Family

ID=75063124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011497181.5A Active CN112561784B (en) 2020-12-17 2020-12-17 Image synthesis method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112561784B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114241124A (en) * 2021-11-17 2022-03-25 埃洛克航空科技(北京)有限公司 Method, device and equipment for determining stitching edge in three-dimensional model


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
CN104376599A (en) * 2014-12-11 2015-02-25 苏州丽多网络科技有限公司 Handy three-dimensional head model generation system
CN110136243A (en) * 2019-04-09 2019-08-16 五邑大学 A kind of three-dimensional facial reconstruction method and its system, device, storage medium
WO2020207270A1 (en) * 2019-04-09 2020-10-15 五邑大学 Three-dimensional face reconstruction method, system and apparatus, and storage medium
CN110363858A (en) * 2019-06-18 2019-10-22 新拓三维技术(深圳)有限公司 A kind of three-dimensional facial reconstruction method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GAI; SUN, Yanfeng; YIN, Baocai; TANG, Hengliang: "3D face sample augmentation based on resampling", Journal of Beijing University of Technology, no. 05 *


Also Published As

Publication number Publication date
CN112561784B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN110807836B (en) Three-dimensional face model generation method, device, equipment and medium
US10540817B2 (en) System and method for creating a full head 3D morphable model
JP7456670B2 (en) 3D face model construction method, 3D face model construction device, computer equipment, and computer program
EP1424655B1 (en) A method of creating 3-D facial models starting from facial images
EP2043049B1 (en) Facial animation using motion capture data
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
US10467793B2 (en) Computer implemented method and device
US20100328307A1 (en) Image processing apparatus and method
WO2002013144A1 (en) 3d facial modeling system and modeling method
CN114359453A (en) Three-dimensional special effect rendering method and device, storage medium and equipment
US11443473B2 (en) Systems and methods for generating a skull surface for computer animation
CN112561784B (en) Image synthesis method, device, electronic equipment and storage medium
CN114429518A (en) Face model reconstruction method, device, equipment and storage medium
JPH06118349A (en) Spectacles fitting simulation device
EP3980975B1 (en) Method of inferring microdetail on skin animation
CN114742954A (en) Method for constructing large-scale diversified human face image and model data pairs
JP7394566B2 (en) Image processing device, image processing method, and image processing program
CN109360270B (en) 3D face pose alignment method and device based on artificial intelligence
JP2003030684A (en) Face three-dimensional computer graphic generation method and device, face three-dimensional computer graphic generation program and storage medium storing face three-dimensional computer graphic generation program
JP2003216978A (en) Device and method for generating triangle patch for expressing face feature, and computer-readable recording medium with program for generation- processing triangle patch for expressing face feature recorded thereon
CN109472860B (en) Depth map balance optimization method and device based on artificial intelligence
JPH0935082A (en) Image processing method and image processor
JP2001222725A (en) Image processor
CN117078827A (en) Method, device and equipment for generating texture map
CN117635838A (en) Three-dimensional face reconstruction method, device, storage medium and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant