CN112561784B - Image synthesis method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112561784B
CN112561784B (application CN202011497181.5A)
Authority
CN
China
Prior art keywords
model
face
deformation
point set
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011497181.5A
Other languages
Chinese (zh)
Other versions
CN112561784A (en)
Inventor
张宏龙
郭旭峰
吴闯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and MIGU Culture Technology Co Ltd
Priority to CN202011497181.5A
Publication of CN112561784A
Application granted
Publication of CN112561784B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205 Re-meshing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4084 Scaling of whole images or parts thereof, e.g. expanding or contracting in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an image synthesis method, an image synthesis device, an electronic device and a storage medium. The method comprises the following steps: acquiring a face model and a standard head model, and performing an alignment transformation between the face model and the standard head model; deforming the standard head model based on the face model to obtain a first deformation model; deforming the face model based on the first deformation model to obtain a second deformation model; and performing boundary stitching on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model. Through a series of transformations and deformations between the face model and the standard head model, the face model is stitched onto the standard head model to obtain a synthesized three-dimensional model, meeting the display requirements of different virtual scenes.

Description

Image synthesis method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image synthesis method, an image synthesis device, an electronic device, and a storage medium.
Background
In recent years, with the development of related technologies, face-swapping technology has been widely used in many fields. For example, in the film industry, replacing the face of character A with the face of character B makes it possible to see how character A performs in the plot; replacing the face of character A with a stylized photograph of character B shows character A with the corresponding style.
Existing face-swapping approaches are based on traditional image processing algorithms. They first perform face segmentation and facial key-point alignment between the source face image and the target face image, then warp the face region of the source image onto the corresponding face region of the target image with a 2D mesh deformation algorithm, and finally fuse the edge textures to complete the swap. However, these approaches mainly target two-dimensional scenes (such as pictures or videos) and cannot be used in a three-dimensional virtual scene.
Disclosure of Invention
To address the problems in the prior art, the present invention provides an image synthesis method, an image synthesis device, an electronic device and a storage medium.
The invention provides an image synthesis method, which comprises the following steps:
acquiring a face model and a standard head model, and carrying out alignment transformation on the face model and the standard head model;
based on the face model, deforming the standard head model to obtain a first deformation model;
based on the first deformation model, deforming the face model to obtain a second deformation model;
and performing boundary stitching treatment on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
According to the image synthesis method provided by the invention, the alignment transformation is carried out on the face model and the standard head model, and the method comprises the following steps:
performing rotation transformation on the face model based on the standard head model;
determining a scaling factor between the face model and the standard head model, and scaling and transforming the face model according to the scaling factor;
and determining a translation vector between the face model and the standard head model, and carrying out translation transformation on the face model according to the translation vector.
According to the image synthesis method provided by the invention, the scaling factor between the face model and the standard head model is determined, and the face model is scaled and transformed according to the scaling factor, which comprises the following steps:
acquiring two first position points in the face area of the standard head model, and acquiring a first characteristic distance between the two first position points;
acquiring two second position points in the face model, and acquiring a second characteristic distance between the two second position points;
determining a scaling factor according to the first characteristic distance and the second characteristic distance, and scaling and transforming the face model according to the scaling factor;
the first position points and the second position points are determined with reference to the facial features.
According to the image synthesis method provided by the invention, the translation vector between the face model and the standard head model is determined, and the face model is subjected to translation transformation according to the translation vector, and the method comprises the following steps:
acquiring a third position point in the face area in the standard head model and a fourth position point in the face model;
and obtaining a translation vector between the third position point and the fourth position point, and carrying out translation transformation on the face model according to the translation vector.
Wherein the third position point and the fourth position point are determined with reference to the facial features.
According to the image synthesis method provided by the invention, the standard head model is deformed based on the face model to obtain a first deformation model, which comprises the following steps:
obtaining a head model with a face cutting area based on the face proportion of the face model;
forming a first moving point set from the edge points of the face cutting area;
forming an edge point set from the edge points of the face model, and obtaining a first target point set according to the first moving point set and the edge point set;
forming a first fixed point set from the position points in the region of the head model with the face cutting area that is close to the neck;
and carrying out grid deformation on the head model with the face cutting area by adopting a grid deformation algorithm according to the coordinate information of the first moving point set, the first target point set and the first fixed point set to obtain a first deformation model.
According to the image synthesis method provided by the invention, the face model is deformed based on the first deformation model to obtain the second deformation model, which comprises the following steps:
forming a second moving point set from the edge points of the face model, and forming an edge line set from the edge lines of the face cutting area in the first deformation model;
obtaining a second target point set according to the second moving point set and the edge line set;
taking the facial features of the face model as reference areas, and collecting position points in the reference areas to form a second fixed point set;
and carrying out grid deformation on the face model by adopting a grid deformation algorithm according to the coordinate information of the second moving point set, the second target point set and the second fixed point set to obtain a second deformation model.
According to the image synthesis method provided by the invention, the first deformation model and the second deformation model are subjected to boundary stitching treatment to obtain a synthesized three-dimensional model, and the method comprises the following steps:
obtaining an edge grid set according to edge lines of the face cutting area in the first deformation model;
determining a set of sub-points corresponding to each edge grid from the second set of moving points;
obtaining a new edge grid set according to the non-edge points and the sub-point sets in the edge grid;
and performing boundary stitching treatment according to the new edge grid set to obtain a synthesized three-dimensional model.
According to the image synthesis method provided by the invention, after a three-dimensional model is obtained, the method further comprises the following steps:
and performing texture fusion and rendering on the stitched boundary of the three-dimensional model to obtain a three-dimensional model suitable for viewing from different angles.
The present invention also provides an image synthesizing apparatus including:
the transformation module is used for acquiring a face model and a standard head model and carrying out alignment transformation on the face model and the standard head model;
the first deformation module is used for deforming the standard head model based on the face model to obtain a first deformation model;
the second deformation module is used for deforming the face model based on the first deformation model to obtain a second deformation model;
and the synthesis module is used for carrying out boundary stitching treatment on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the image synthesis method as described in any of the above when executing the program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image synthesis method as described in any of the above.
According to the image synthesis method, device, electronic device and storage medium provided by the present invention, the face model is stitched onto the standard head model through a series of transformations and deformations between the face model and the standard head model, yielding a synthesized three-dimensional model and meeting the display requirements of different virtual scenes.
Drawings
In order to illustrate the invention and the technical solutions of the prior art more clearly, the drawings used in the embodiments are briefly described below. It is apparent that the drawings in the following description show some embodiments of the invention, and that a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of an image synthesis method provided by the invention;
fig. 2 is a schematic structural view of an image synthesizing apparatus provided by the present invention;
fig. 3 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The image synthesis method, the device, the electronic equipment and the storage medium provided by the invention are described below with reference to fig. 1-3.
Fig. 1 is a flow diagram of the image synthesis method provided by the present invention. Referring to fig. 1, the method comprises the following steps:
s11, acquiring a face model and a standard head model, and carrying out alignment transformation on the face model and the standard head model;
s12, deforming the standard head model based on the face model to obtain a first deformation model;
s13, deforming the face model based on the first deformation model to obtain a second deformation model;
s14, performing boundary stitching treatment on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
In steps S11 to S14, it should be noted that the image synthesis method of the present invention is used to synthesize a face from a picture or a video frame onto a standard head model. The standard head model is a three-dimensional model obtained by processing real head scan data, and reflects real human characteristics.
In the present invention, a complete human body model with a face area can also be used in place of the standard head model; in that case the face is synthesized onto the body model, which is mainly suitable for checking the overall effect in a virtual environment. The body model is likewise obtained by processing real body scan data and reflects real human characteristics.
In the present invention, a face in a picture or a video frame carries head pose information and expression information. Based on the head pose information and the expression information, face reconstruction is performed with a deep learning method based on the 3DMM parameterized face model, yielding a three-dimensional face model.
In the present invention, the initial pose and scale of the face model and the standard head model (or body model) are generally inconsistent. Taking the standard head model as the reference, the face model must undergo a rigid transformation (translation and rotation) and scale adjustment so that it is roughly aligned to the face of the standard head model.
Different face models differ in aspect ratio, facial fullness and other characteristics, i.e. in face proportion. The corresponding features on the faces of the face model and the standard head model may therefore differ. To ensure that the face area of the model after the face swap has the same aspect ratio and fullness as the face model, the standard head model must be deformed so that it adapts to different face models.
Because the standard head model is adapted to the face model, the features characterizing the aspect ratio, fullness and so on of the standard head model are deformed based on the corresponding features of the face model, yielding the deformed standard head model, i.e. the first deformation model mentioned in the above step.
In the present invention, the point density of the face model is greater than that of the standard head model; if the two models were synthesized directly, the result at the synthesis boundary would be poor. Therefore, when the face model is synthesized onto the first deformation model, the coordinates of the points on the synthesis boundary of the face model are deformed based on the coordinates of the points on the synthesis boundary of the first deformation model, yielding the deformed face model, i.e. the second deformation model mentioned in the above step.
After the face model and the standard head model have been deformed for image synthesis, boundary stitching can be performed on the first deformation model and the second deformation model to obtain the synthesized three-dimensional model. Because a model is a mesh before rendering, the boundary stitching of the first and second deformation models in fact establishes a new mesh relation between the two models along the boundary, thereby realizing the stitching and producing the synthesized three-dimensional model.
Here, the synthesized three-dimensional model is a model in which the face on the standard head model has been replaced by the face from the picture or video frame. Images of the synthesized three-dimensional model can be captured and displayed from different angles according to specific requirements.
According to the image synthesis method provided by the present invention, the face model is stitched onto the standard head model through a series of transformations and deformations between the face model and the standard head model, yielding a synthesized three-dimensional model and meeting the display requirements of different virtual scenes.
In the further description of the above method, the process of performing the alignment transformation between the face model and the standard head model is mainly explained, as follows:
performing rotation transformation on the face model based on the standard head model;
determining a scaling factor between the face model and the standard head model, and scaling and transforming the face model according to the scaling factor;
and determining a translation vector between the face model and the standard head model, and carrying out translation transformation on the face model according to the translation vector.
In this regard, since the pose of the face model is generally inconsistent with the face of the standard head model (or body model), rotation, scaling and translation transformations are required to align the face model with the standard head model.
In the rotation transformation, the coordinate system of the standard head model is taken as the world coordinate system. Based on the pose parameters and expression parameters of the face model, the face model is transformed into the world coordinate system so that the length direction of the face on the transformed face model coincides with the length direction of the face on the standard head model, completing the rotation transformation of the face model.
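As a concrete sketch of this rotation step — the patent prescribes no particular formula, so the direction vectors and the Rodrigues-style construction below are assumptions for illustration — a rotation aligning the face model's length direction with the head model's can be built as follows:

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix taking unit vector a onto unit vector b
    (Rodrigues-style construction; undefined when a == -b)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)                     # rotation axis (unnormalized)
    c = float(np.dot(a, b))                # cosine of the rotation angle
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])     # cross-product matrix of v
    return np.eye(3) + K + K @ K / (1.0 + c)

# Assumed "length" directions of the two faces (illustrative values only).
face_dir = np.array([0.0, 0.0, 1.0])
head_dir = np.array([0.0, 1.0, 0.0])
R = rotation_aligning(face_dir, head_dir)

# Apply the rotation to every vertex of the face model.
face_vertices = np.array([[0.0, 0.0, 1.0],
                          [0.0, 0.0, 2.0]])
rotated = face_vertices @ R.T
```

After this step, the face model's length direction matches the head model's, and the subsequent scaling and translation can be computed in the shared world frame.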
In the scaling transformation, both the face model and the standard head model consist of many points; scaling the face model changes the positions of its points, while the positions of the points of the standard head model remain unchanged. For this purpose, a scaling factor between the face model and the standard head model is computed from the positions of points selected on both models, and all points of the face model are then repositioned according to this scaling factor.
In the translation transformation, similarly to the above process, a translation vector between the face model and the standard head model, characterizing the direction and distance of movement of each point, is computed from the positions of points selected on both models. All points of the face model are then repositioned according to this translation vector.
The above method establishes reference quantities (namely the scaling factor and the translation vector) between the face model and the standard head model, so that the alignment between the two models can be carried out quickly.
In the further description of the above method, the process of determining the scaling factor between the face model and the standard head model and scaling the face model according to the scaling factor is mainly explained, as follows:
acquiring two first position points in the face area of the standard head model, and acquiring a first characteristic distance between the two first position points;
acquiring two second position points in the face model, and acquiring a second characteristic distance between the two second position points;
and determining a scaling factor according to the first characteristic distance and the second characteristic distance, and scaling the face model according to the scaling factor.
In this regard, both the face model and the standard head model have facial features: eyes, ears, nose, eyebrows and mouth. The points on the two models that serve as reference standards should therefore be chosen from these facial features.
For example, the two first position points may be the pupil centers of the two eyes, the two inner eye corners, the two outer eye corners, the two eyebrows, and so on. Similarly, the two second position points are chosen from the same landmarks.
The first characteristic distance is then the distance between the two pupil centers, the two inner eye corners, the two outer eye corners or the two eyebrows on the standard head model; similarly, the second characteristic distance is the corresponding distance on the face model.
The ratio between the first characteristic distance and the second characteristic distance is computed; this ratio is the scaling factor. All points of the face model are then repositioned according to this scaling factor.
This method ties the scaling to a distance condition: determining the scaling factor from these distances makes the scaling relation between the two models quick to establish, achieving a simple and accurate scaling effect.
In the further description of the above method, the process of determining the translation vector between the face model and the standard head model and translating the face model according to the translation vector is mainly explained, as follows:
acquiring a third position point in the face area in the standard head model and a fourth position point in the face model;
and obtaining a translation vector between the third position point and the fourth position point, and carrying out translation transformation on the face model according to the translation vector.
In this regard, both the face model and the standard head model have facial features: eyes, ears, nose, eyebrows and mouth. The points on the two models that serve as reference standards should therefore be chosen from these facial features.
For example, take the nose point on the standard head model as the target point (i.e. the third position point) and the nose point on the face model as the origin point (i.e. the fourth position point), and compute the translation vector V from the origin point to the target point. Superimposing the translation vector V on all points of the face model completes the translation transformation of the face model.
This method ties the translation to a direction vector, so that the movement relation between the two models can be determined quickly, achieving a simple and accurate translation effect.
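The translation step reduces to one vector subtraction and a broadcast addition; the nose-point coordinates below are illustrative assumptions:

```python
import numpy as np

# Nose-point landmarks (illustrative coordinates, not from the patent).
target = np.array([0.0, 1.5, 9.0])   # third position point, on the head model
origin = np.array([0.5, 1.0, 8.0])   # fourth position point, on the face model
V = target - origin                   # translation vector from origin to target

face_vertices = np.array([[0.5, 1.0, 8.0],
                          [1.5, 2.0, 8.0]])
translated = face_vertices + V        # superimpose V on every face-model point
```

After the addition, the face model's nose point coincides with the head model's, completing the coarse alignment.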
In the further description of the above method, the process of deforming the standard head model based on the face model to obtain the first deformation model is mainly explained, as follows:
obtaining a head model with a face cutting area based on the face proportion of the face model;
forming a first moving point set from the edge points of the face cutting area;
forming an edge point set from the edge points of the face model, and obtaining a first target point set according to the first moving point set and the edge point set;
forming a first fixed point set from the position points in the region of the head model with the face cutting area that is close to the neck;
and carrying out grid deformation on the head model with the face cutting area by adopting a grid deformation algorithm according to the coordinate information of the first moving point set, the first target point set and the first fixed point set to obtain a first deformation model.
In the present invention, a face cutting area is cut out of the standard head model with reference to the face proportion of the face model; in effect, the face of the standard head model is cut off. For example, a black region shown on the standard head model may be the face cutting area.
Since a model is a mesh before rendering, and the mesh is typically made up of many triangles, there are many points on the edge of the face cutting area; gathered together, these points form the first moving point set. The first moving point set in fact contains the coordinates of these points in the world coordinate system.
Similarly, the edge points of the face model form an edge point set, from which the first target point set is then selected according to the coordinate information of the first moving point set. That is, each point in the first moving point set finds its corresponding edge point on the face model; these edge points are the target points, and together they form the target point set.
In the present invention, for example, for each point p1 in the moving point set P1, its nearest neighbor q1 in the edge point set Q1 is computed, and q1 is taken as the target point of p1.
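This nearest-neighbour pairing can be sketched as follows; the coordinates are made up, and a brute-force search stands in for the spatial index (e.g. a k-d tree) a production system might use:

```python
import numpy as np

# Moving point set P1: edge points of the face cutting area (illustrative).
P1 = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0]])
# Edge point set Q1: edge points of the face model (illustrative).
Q1 = np.array([[0.1, 0.0, 0.0],
               [0.9, 0.1, 0.0],
               [5.0, 5.0, 5.0]])

# Brute-force nearest neighbour: squared distance from every p1 to every q1.
d2 = ((P1[:, None, :] - Q1[None, :, :]) ** 2).sum(axis=-1)
idx = d2.argmin(axis=1)       # index of the nearest q1 for each p1
targets = Q1[idx]             # first target point set
```

Brute force is adequate here because edge loops contain at most a few hundred points; the pairing gives each moving point the position it should be deformed toward.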
Typically, a standard head model may be mounted on a body model, i.e. placed on the neck of the body model, according to requirements. When the neck size is already configured, the boundary between the standard head model and the neck must remain fixed and must not be deformed. For this purpose, the position points in the region of the head model with the face cutting area that is close to the neck are gathered into the first fixed point set.
When the standard head model includes a neck portion, a closed loop of points on the neck may be selected as the fixed point set.
In the scheme in which a body model replaces the standard head model, a closed loop of points on the neck may likewise be selected as the fixed point set.
In the deformation process, a mesh deformation algorithm based on the Laplacian, a gradient field, or the like is applied to the coordinate information of the first moving point set, the first target point set and the first fixed point set.
For example, a mesh deformation matrix and constraint coordinates are established from the coordinate information of the first moving point set, the first target point set and the first fixed point set, and the coordinates of each vertex of the model after deformation are determined from the mesh deformation matrix and the constraint coordinates. The deformation can then be completed based on these vertex coordinates.
For example, LV = δ, where L is the mesh deformation matrix, V is the vector of vertex coordinates, and δ is the constraint coordinates.
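To illustrate the relation LV = δ, the following is a hedged sketch of uniform-weight Laplacian deformation on a toy three-vertex chain. The function name, the soft constraint weight, and the least-squares formulation are illustrative assumptions; the patent names only the general class of Laplacian/gradient-field algorithms:

```python
import numpy as np

def laplacian_deform(V, edges, handles, weight=10.0):
    """Minimal Laplacian mesh deformation sketch (uniform weights).
    V: (n,3) rest vertex coords; edges: list of (i,j); handles: {vertex: new coord}.
    Solves the stacked system [L; C] v = [delta; c] in least squares, where
    delta = L @ V preserves local detail and C softly pins the handle vertices."""
    n = len(V)
    L = np.zeros((n, n))
    for i, j in edges:                    # uniform graph Laplacian
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    delta = L @ V                         # differential (Laplacian) coordinates
    rows, rhs = [L], [delta]
    for v, pos in handles.items():        # soft positional constraints
        c = np.zeros((1, n)); c[0, v] = weight
        rows.append(c); rhs.append(weight * np.asarray(pos, float)[None, :])
    A = np.vstack(rows); b = np.vstack(rhs)
    Vnew, *_ = np.linalg.lstsq(A, b, rcond=None)
    return Vnew

# A 3-vertex chain: fix vertex 0, move vertex 2 upward; vertex 1 follows smoothly.
V = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
Vnew = laplacian_deform(V, [(0, 1), (1, 2)], {0: V[0], 2: [2., 1., 0.]})
```

The fixed points and target points of the description would enter `handles`, while all remaining vertices are solved for; a production solver would use sparse matrices and cotangent weights instead of this dense uniform Laplacian.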
According to this method, based on the coordinate information of the points on the model edges, the mesh deformation algorithm allows the model to be deformed rapidly to obtain the required deformed model.
In a further description of the above method, the process of deforming the face model based on the first deformation model to obtain the second deformation model is mainly explained, and specifically includes the following steps:
collecting the edge points of the face model to form a second moving point set, and collecting the edge lines of the face cutting area in the first deformation model to form an edge line set;
obtaining a second target point set according to the second moving point set and the edge line set;
taking the facial features in the face model as reference areas, and collecting the position points on the reference areas to form a second fixed point set;
and carrying out grid deformation on the face model by adopting a grid deformation algorithm according to the coordinate information of the second moving point set, the second target point set and the second fixed point set to obtain a second deformation model.
In the present invention, the face model is to be deformed, so the edge points of the face model are collected to form the second moving point set. Meanwhile, the edge lines of the face cutting area in the first deformation model are collected to form the edge line set. It should be noted here that there are many edge points on the face cutting area, and the line between two adjacent edge points is an edge line; all the edge lines together form the edge line set.
As explained above, the point density on the face model is greater than that on the standard head model, so a corresponding target point on an edge line can be found for each moving point. The target points on the edge lines are collected to form the target point set.
For example, the edge points on the face model form a moving point set P2, and the edge lines on the face cutting area form an edge line set L2. The closest projection point q2 of each point p2 of the moving point set P2 onto an edge line l2 is calculated, and the position of q2 is taken as the target point of p2.
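The closest projection of a moving point onto an edge line can be computed with standard point-to-segment projection; the sketch below (function names are assumptions) clamps the projection to the segment endpoints and picks the nearest projection over the edge line set:

```python
import numpy as np

def project_to_segment(p, a, b):
    """Project point p onto segment a-b, clamped to the segment ends."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + t * ab

def closest_edge_projection(p, segments):
    """Target point of moving point p2: its nearest projection over all
    edge lines in the set L2 (each given as a pair of endpoints)."""
    candidates = [project_to_segment(p, a, b) for a, b in segments]
    d = [np.linalg.norm(p - q) for q in candidates]
    return candidates[int(np.argmin(d))]

# Toy example: the point is closer to its projection on the second edge line.
p2 = np.array([0.5, 1.0, 0.0])
L2 = [(np.array([0., 0., 0.]), np.array([1., 0., 0.])),
      (np.array([1., 0., 0.]), np.array([1., 5., 0.]))]
q2 = closest_edge_projection(p2, L2)
```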
The facial features in the face model are taken as reference areas, and the position points on the reference areas are collected to form the second fixed point set. For example, feature points of the eyes, nose, mouth, eyebrows, etc. may be selected as fixed points.
In the deformation process, a mesh deformation algorithm based on the Laplacian, a gradient field, or the like is applied to the coordinate information of the second moving point set, the second target point set and the second fixed point set.
For example, a mesh deformation matrix and constraint coordinates are established from the coordinate information of the second moving point set, the second target point set and the second fixed point set, and the coordinates of each vertex of the model after deformation are determined from the mesh deformation matrix and the constraint coordinates. The deformation can then be completed based on these vertex coordinates.
For example, LV = δ, where L is the mesh deformation matrix, V is the vector of vertex coordinates, and δ is the constraint coordinates.
According to this method, based on the coordinate information of the points on the model edges, the mesh deformation algorithm allows the model to be deformed rapidly to obtain the required deformed model.
In a further description of the above method, the process of performing boundary stitching on the first deformation model and the second deformation model to obtain the synthesized three-dimensional model is mainly explained, and specifically includes the following steps:
obtaining an edge grid set according to edge lines of the face cutting area in the first deformation model;
determining a set of sub-points corresponding to each edge grid from the second set of moving points;
obtaining a new edge grid set according to the non-edge points and the sub-point sets in the edge grid;
and performing boundary stitching treatment according to the new edge grid set to obtain a synthesized three-dimensional model.
In the present invention, the model is a mesh, and triangles are constructed from its points. A corresponding edge grid can therefore be obtained from each edge line of the face cutting area, i.e., one side of the triangle is the edge line. The edge grids corresponding to all the edge lines are collected to form an edge grid set.
Among all the edge points of the face model, the edge points that can be projected onto a given edge line are taken as the sub-point set corresponding to that edge line; that is, one sub-point set corresponds to one edge grid.
For example, given 100 edge points, if the edge points numbered 1-6 can be projected onto the edge line numbered 1, then the edge points numbered 1-6 form the sub-point set corresponding to edge line 1.
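As a hedged sketch of this grouping step (the assignment-by-nearest-projection rule and all names are assumptions; the patent states only that points projectable onto an edge line form its sub-point set):

```python
import numpy as np

def group_sub_points(edge_points, segments):
    """Assign each face-model edge point to the edge line it projects onto
    most closely; the points assigned to one edge line are its sub-point set."""
    def dist_to_segment(p, a, b):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))
    groups = {k: [] for k in range(len(segments))}
    for i, p in enumerate(edge_points):
        k = int(np.argmin([dist_to_segment(p, a, b) for a, b in segments]))
        groups[k].append(i)          # store the point's index in its group
    return groups

# Two edge lines along the x-axis; the first two points fall on line 0.
pts = np.array([[0.2, 0.1, 0.0], [0.8, 0.1, 0.0], [1.5, 0.1, 0.0]])
segs = [(np.array([0., 0., 0.]), np.array([1., 0., 0.])),
        (np.array([1., 0., 0.]), np.array([2., 0., 0.]))]
groups = group_sub_points(pts, segs)
```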
Each edge grid has one non-edge point. The edge line is deleted, and the non-edge point is connected to each point in the sub-point set to form new edge grids. After all the non-edge points have been connected to the points in their sub-point sets, a new grid set is formed at the boundary between the models. Boundary stitching is then performed according to this new edge grid set to obtain the synthesized three-dimensional model.
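A minimal sketch of this re-triangulation step, assuming the sub-points are ordered along the deleted edge line (the function name and the index-based triangle representation are illustrative):

```python
def fan_retriangulate(non_edge_pt, edge_line, sub_points):
    """Replace one boundary triangle: drop the edge line (edge_line[0] to
    edge_line[1]) and fan-connect the triangle's non-edge vertex to the
    ordered sub-point chain lying along that edge line.
    Vertices are referenced by index; sub_points is ordered along the edge."""
    chain = [edge_line[0]] + list(sub_points) + [edge_line[1]]
    # each pair of consecutive chain points plus the apex forms a new edge grid
    return [(non_edge_pt, chain[k], chain[k + 1]) for k in range(len(chain) - 1)]

# Apex vertex 10, old edge line (0, 1), sub-points 5 and 6 lying along it:
new_grids = fan_retriangulate(10, (0, 1), [5, 6])
# new_grids == [(10, 0, 5), (10, 5, 6), (10, 6, 1)]
```

One original boundary triangle thus becomes a fan of small triangles whose outer edges coincide with the denser face-model boundary, which is what makes the seam watertight.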
According to this method, the edge points of the face model are divided into sub-point sets corresponding to the edge lines, so that corresponding new edge grids are established within the original edge grids, which facilitates the stitching of the boundary.
In a further description of the above method, texture fusion and rendering are performed on the stitched boundary of the three-dimensional model to obtain a three-dimensional model suitable for viewing under different angle transformations.
The image synthesizing apparatus provided by the present invention is described below; the image synthesizing apparatus described below and the image synthesizing method described above may refer to each other correspondingly.
Fig. 2 shows a schematic structural diagram of an image synthesizing apparatus provided by the present invention, referring to fig. 2, the apparatus includes a transformation module 21, a first deformation module 22, a second deformation module 23, and a synthesizing module 24, wherein:
the transformation module 21 is used for acquiring a face model and a standard head model, and performing alignment transformation on the face model and the standard head model;
a first deformation module 22, configured to deform the standard head model based on the face model, to obtain a first deformation model;
a second deformation module 23, configured to deform the face model based on the first deformation model, to obtain a second deformation model;
and the synthesis module 24 is configured to perform boundary stitching on the first deformation model and the second deformation model, so as to obtain a synthesized three-dimensional model.
In a further description of the above apparatus, the transformation module is specifically configured to, during a process of performing alignment transformation on the face model and the standard head model:
performing rotation transformation on the face model based on the standard head model;
determining a scaling factor between the face model and the standard head model, and scaling and transforming the face model according to the scaling factor;
and determining a translation vector between the face model and the standard head model, and carrying out translation transformation on the face model according to the translation vector.
In a further description of the above apparatus, the transformation module is specifically configured to, in determining a scaling factor between the face model and the standard head model, perform scaling transformation on the face model according to the scaling factor:
acquiring two first position points in the face area of the standard head model, and acquiring a first characteristic distance between the two first position points;
acquiring two second position points in the face model, and acquiring a second characteristic distance between the two second position points;
determining a scaling factor according to the first characteristic distance and the second characteristic distance, and scaling and transforming the face model according to the scaling factor;
the first position points and the second position points are determined with reference to the facial features.
In a further description of the above apparatus, the transformation module is specifically configured to, in a process of determining a translation vector between the face model and the standard head model and performing translation transformation on the face model according to the translation vector:
acquiring a third position point in the face area in the standard head model and a fourth position point in the face model;
and obtaining a translation vector between the third position point and the fourth position point, and carrying out translation transformation on the face model according to the translation vector.
Wherein the third position point and the fourth position point are determined with reference to the facial features.
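The scaling and translation steps described for the transformation module can be sketched as a similarity alignment; the helper below is an illustrative assumption (the patent does not fix which feature points are used, e.g. eye corners, nor the exact order of operations):

```python
import numpy as np

def align_face_to_head(face_pts, f1, f2, h1, h2):
    """Similarity alignment sketch: scale the face model so the distance
    between two facial feature points matches the corresponding distance
    on the standard head model, then translate a chosen face point onto
    its counterpart on the head model.
    f1/f2: feature points on the face model; h1/h2: counterparts on the head."""
    s = np.linalg.norm(h2 - h1) / np.linalg.norm(f2 - f1)  # scaling factor
    scaled = face_pts * s
    t = h1 - f1 * s                                        # translation vector
    return scaled + t

# Toy face model whose feature distance (2) is twice the head's (1):
face = np.array([[0., 0., 0.], [2., 0., 0.], [1., 1., 0.]])
head_p1, head_p2 = np.array([10., 0., 0.]), np.array([11., 0., 0.])
aligned = align_face_to_head(face, face[0], face[1], head_p1, head_p2)
```

Rotation, handled separately in the description, would be composed with this scale-and-translate step to complete the alignment transformation.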
In a further description of the above apparatus, the first deformation module, in the process of deforming the standard head model based on the face model to obtain the first deformation model, is specifically configured to:
obtain a head model with a face cutting area based on the face proportion of the face model;
forming a first moving point set by integrating edge points of the face cutting area;
collect the edge points of the face model to form an edge point set, and obtain a first target point set according to the first moving point set and the edge point set;
forming a first fixed point set by integrating the position points on the area, close to the neck, of the head model with the face cutting area;
and carrying out grid deformation on the head model with the face cutting area by adopting a grid deformation algorithm according to the coordinate information of the first moving point set, the first target point set and the first fixed point set to obtain a first deformation model.
In a further description of the above apparatus, the second deformation module, in the process of deforming the face model based on the first deformation model to obtain the second deformation model, is specifically configured to:
collect the edge points of the face model to form a second moving point set, and collect the edge lines of the face cutting area in the first deformation model to form an edge line set;
obtaining a second target point set according to the second moving point set and the edge line set;
take the facial features in the face model as reference areas, and collect the position points on the reference areas to form a second fixed point set;
and carrying out grid deformation on the face model by adopting a grid deformation algorithm according to the coordinate information of the second moving point set, the second target point set and the second fixed point set to obtain a second deformation model.
In a further description of the above apparatus, the synthesis module, in the process of performing boundary stitching on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model, is specifically configured to:
obtaining an edge grid set according to edge lines of the face cutting area in the first deformation model;
determining a set of sub-points corresponding to each edge grid from the second set of moving points;
obtaining a new edge grid set according to the non-edge points and the sub-point sets in the edge grid;
and performing boundary stitching treatment according to the new edge grid set to obtain a synthesized three-dimensional model.
In a further illustration of the above apparatus, the apparatus further comprises an adjustment module for performing texture fusion and rendering processing on stitched boundaries of the three-dimensional model after the three-dimensional model is obtained, to obtain a three-dimensional model suitable for different angular transformations.
Since the apparatus according to the embodiment of the present invention corresponds to the method according to the above embodiment, the details will not be repeated here.
It should be noted that, in the embodiment of the present invention, the related functional modules may be implemented by a hardware processor (hardware processor).
According to the image synthesis method provided by the invention, the human face model is stitched and butted on the standard head model through various transformations and various deformations between the human face model and the standard head model, so that a synthesized three-dimensional model is obtained, and the display requirements applied to different virtual scenes are realized.
Fig. 3 shows a schematic physical structure of an electronic device, as shown in fig. 3, where the electronic device may include: a processor (processor) 31, a communication interface (Communications Interface) 32, a memory (memory) 33 and a communication bus 34, wherein the processor 31, the communication interface 32 and the memory 33 communicate with each other through the communication bus 34. The processor 31 may invoke logic instructions in the memory 33 to perform an image composition method comprising: acquiring a face model and a standard head model, and carrying out alignment transformation on the face model and the standard head model; based on the face model, deforming the standard head model to obtain a first deformation model; based on the first deformation model, deforming the face model to obtain a second deformation model; and performing boundary stitching treatment on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
Further, the logic instructions in the memory 33 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a standalone product. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the image synthesis method provided by the above methods, the method comprising: acquiring a face model and a standard head model, and carrying out alignment transformation on the face model and the standard head model; based on the face model, deforming the standard head model to obtain a first deformation model; based on the first deformation model, deforming the face model to obtain a second deformation model; and performing boundary stitching treatment on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the image synthesis methods provided above, the method comprising: acquiring a face model and a standard head model, and carrying out alignment transformation on the face model and the standard head model; based on the face model, deforming the standard head model to obtain a first deformation model; based on the first deformation model, deforming the face model to obtain a second deformation model; and performing boundary stitching treatment on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. An image synthesizing method, characterized by comprising:
acquiring a face model and a standard head model, and carrying out alignment transformation on the face model and the standard head model;
based on the face model, deforming the standard head model to obtain a first deformation model;
based on the first deformation model, deforming the face model to obtain a second deformation model;
performing boundary stitching treatment on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model;
the face model-based deformation method comprises the steps of deforming a standard head model to obtain a first deformation model, and comprises the following steps:
obtaining a head model with a face cutting area based on the face proportion of the face model;
forming a first moving point set by integrating edge points of the face cutting area;
collecting the edge points of the face model to form an edge point set, and obtaining a first target point set according to the first moving point set and the edge point set;
forming a first fixed point set by integrating the position points on the area, close to the neck, of the head model with the face cutting area;
grid deformation is carried out on the head model with the face cutting area by adopting a grid deformation algorithm according to the coordinate information of the first moving point set, the first target point set and the first fixed point set, so as to obtain a first deformation model;
the method for deforming the face model based on the first deformation model to obtain a second deformation model comprises the following steps:
collecting the edge points of the face model to form a second moving point set, and collecting the edge lines of the face cutting area in the first deformation model to form an edge line set;
obtaining a second target point set according to the second moving point set and the edge line set;
taking the facial features in the face model as reference areas, and collecting the position points on the reference areas to form a second fixed point set;
and carrying out grid deformation on the face model by adopting a grid deformation algorithm according to the coordinate information of the second moving point set, the second target point set and the second fixed point set to obtain a second deformation model.
2. The image synthesis method according to claim 1, wherein the performing an alignment transformation on the face model and the standard head model includes:
performing rotation transformation on the face model based on the standard head model;
determining a scaling factor between the face model and the standard head model, and scaling and transforming the face model according to the scaling factor;
and determining a translation vector between the face model and the standard head model, and carrying out translation transformation on the face model according to the translation vector.
3. The image synthesis method according to claim 2, wherein determining a scaling factor between the face model and the standard head model, scaling the face model according to the scaling factor, comprises:
acquiring two first position points in the face area of the standard head model, and acquiring a first characteristic distance between the two first position points;
acquiring two second position points in the face model, and acquiring a second characteristic distance between the two second position points;
determining a scaling factor according to the first characteristic distance and the second characteristic distance, and scaling and transforming the face model according to the scaling factor;
the first position points and the second position points are determined with reference to the facial features.
4. The method of image synthesis according to claim 2, wherein determining a translation vector between the face model and the standard head model, and performing a translation transformation on the face model according to the translation vector, comprises:
acquiring a third position point in the face area in the standard head model and a fourth position point in the face model;
obtaining a translation vector between the third position point and the fourth position point, and carrying out translation transformation on the face model according to the translation vector;
wherein the third position point and the fourth position point are determined with reference to the facial features.
5. The image synthesis method according to claim 1, wherein the performing boundary stitching on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model includes:
obtaining an edge grid set according to edge lines of the face cutting area in the first deformation model;
determining a set of sub-points corresponding to each edge grid from the second set of moving points;
obtaining a new edge grid set according to the non-edge points and the sub-point sets in the edge grid;
and performing boundary stitching treatment according to the new edge grid set to obtain a synthesized three-dimensional model.
6. The image synthesizing method according to claim 1, characterized by further comprising, after obtaining the three-dimensional model:
performing texture fusion and rendering on the stitched boundary of the three-dimensional model to obtain a three-dimensional model suitable for different angle transformations.
7. An image synthesizing apparatus, comprising:
the transformation module is used for acquiring a face model and a standard head model and carrying out alignment transformation on the face model and the standard head model;
the first deformation module is used for deforming the standard head model based on the face model to obtain a first deformation model;
the second deformation module is used for deforming the face model based on the first deformation model to obtain a second deformation model;
the synthesis module is used for carrying out boundary stitching treatment on the first deformation model and the second deformation model to obtain a synthesized three-dimensional model;
the first deformation module, in the process of deforming the standard head model based on the face model to obtain the first deformation model, is specifically configured to:
obtain a head model with a face cutting area based on the face proportion of the face model;
forming a first moving point set by integrating edge points of the face cutting area;
collect the edge points of the face model to form an edge point set, and obtain a first target point set according to the first moving point set and the edge point set;
forming a first fixed point set by integrating the position points on the area, close to the neck, of the head model with the face cutting area;
grid deformation is carried out on the head model with the face cutting area by adopting a grid deformation algorithm according to the coordinate information of the first moving point set, the first target point set and the first fixed point set, so as to obtain a first deformation model;
the second deformation module, in the process of deforming the face model based on the first deformation model to obtain the second deformation model, is specifically configured to:
collect the edge points of the face model to form a second moving point set, and collect the edge lines of the face cutting area in the first deformation model to form an edge line set;
obtaining a second target point set according to the second moving point set and the edge line set;
take the facial features in the face model as reference areas, and collect the position points on the reference areas to form a second fixed point set;
and carrying out grid deformation on the face model by adopting a grid deformation algorithm according to the coordinate information of the second moving point set, the second target point set and the second fixed point set to obtain a second deformation model.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the image synthesis method according to any one of claims 1 to 6 when the program is executed.
9. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the image synthesis method according to any one of claims 1 to 6.
CN202011497181.5A 2020-12-17 2020-12-17 Image synthesis method, device, electronic equipment and storage medium Active CN112561784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011497181.5A CN112561784B (en) 2020-12-17 2020-12-17 Image synthesis method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011497181.5A CN112561784B (en) 2020-12-17 2020-12-17 Image synthesis method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112561784A CN112561784A (en) 2021-03-26
CN112561784B true CN112561784B (en) 2024-04-09

Family

ID=75063124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011497181.5A Active CN112561784B (en) 2020-12-17 2020-12-17 Image synthesis method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112561784B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114241124B (en) * 2021-11-17 2022-10-18 埃洛克航空科技(北京)有限公司 Method, device and equipment for determining stitching edge in three-dimensional model

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
CN104376599A (en) * 2014-12-11 2015-02-25 苏州丽多网络科技有限公司 Handy three-dimensional head model generation system
CN110136243A (en) * 2019-04-09 2019-08-16 五邑大学 A kind of three-dimensional facial reconstruction method and its system, device, storage medium
CN110363858A (en) * 2019-06-18 2019-10-22 新拓三维技术(深圳)有限公司 A kind of three-dimensional facial reconstruction method and system

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
CN104376599A (en) * 2014-12-11 2015-02-25 苏州丽多网络科技有限公司 Handy three-dimensional head model generation system
CN110136243A (en) * 2019-04-09 2019-08-16 五邑大学 A kind of three-dimensional facial reconstruction method and its system, device, storage medium
WO2020207270A1 (en) * 2019-04-09 2020-10-15 五邑大学 Three-dimensional face reconstruction method, system and apparatus, and storage medium
CN110363858A (en) * 2019-06-18 2019-10-22 新拓三维技术(深圳)有限公司 A kind of three-dimensional facial reconstruction method and system

Non-Patent Citations (1)

Title
Three-dimensional face sample augmentation based on resampling; Gai; Sun Yanfeng; Yin Baocai; Tang Hengliang; Journal of Beijing University of Technology (Issue 05); full text *

Also Published As

Publication number Publication date
CN112561784A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN110807836B (en) Three-dimensional face model generation method, device, equipment and medium
US6532011B1 (en) Method of creating 3-D facial models starting from face images
EP2043049B1 (en) Facial animation using motion capture data
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
US6453052B1 (en) Automated method and image processing system for hair style simulation
JP7456670B2 (en) 3D face model construction method, 3D face model construction device, computer equipment, and computer program
CN110675489B (en) Image processing method, device, electronic equipment and storage medium
US10467793B2 (en) Computer implemented method and device
US11508107B2 (en) Additional developments to the automatic rig creation process
CN113628327B (en) Head three-dimensional reconstruction method and device
WO2001099048A2 (en) Non-linear morphing of faces and their dynamics
WO2002013144A1 (en) 3d facial modeling system and modeling method
CN115668300A (en) Object reconstruction with texture resolution
CN114359453A (en) Three-dimensional special effect rendering method and device, storage medium and equipment
CN112561784B (en) Image synthesis method, device, electronic equipment and storage medium
CN113808272B (en) Texture mapping method in three-dimensional virtual human head and face modeling
CN114429518A (en) Face model reconstruction method, device, equipment and storage medium
JPH06118349A (en) Spectacles fitting simulation device
Jeong et al. Automatic generation of subdivision surface head models from point cloud data
CN116863044A (en) Face model generation method and device, electronic equipment and readable storage medium
CN114742954A (en) Method for constructing large-scale diversified human face image and model data pairs
EP3980975B1 (en) Method of inferring microdetail on skin animation
WO2022165463A1 (en) Object reconstruction using media data
WO2020129660A1 (en) Three-dimensional model editing device, three-dimensional model editing method, and program
GB2342026A (en) Graphics and image processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant