CN117315154A - Quantifiable face model reconstruction method and system - Google Patents
- Publication number: CN117315154A
- Application number: CN202311321603.7A
- Authority
- CN
- China
- Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The application discloses a quantifiable face model reconstruction method and system in the technical field of face recognition, comprising the following steps: acquiring a face image and dividing it to generate face pictures of a preset size; preprocessing each face picture and identifying its key points to generate an initial texture image and the facial features associated with those key points; acquiring a stylized processing template and fusing the initial texture image with it to obtain a style-converted sample image, then determining, from the facial features of the sample image, its facial parameters and a first parameter set for each face state; and training the sample image on a deformable template to optimize its texture details and obtain an improved texture image. The method improves the efficiency and accuracy of face reconstruction, so that the reconstruction can support more accurate facial analysis and expression recognition.
Description
Technical Field
The invention relates to the technical field of face recognition, and in particular to a quantifiable face model reconstruction method and system.
Background
To enhance the realism of three-dimensional virtual scenes, it is often desirable to populate them with interactive, highly realistic virtual digital human figures, so that human activities in the real world, and the environment in which they take place, are reproduced as faithfully as possible in the three-dimensional virtual world. Generating high-fidelity virtual digital personas is a key driving force for applications in virtual reality, virtual educational simulation, and the game and film industries. For these application scenarios, acquiring a three-dimensional face texture with a strong sense of realism is of great significance. As a key step in building a virtual digital human, high-fidelity texture generation lets a face model closely approximate a real human face and realistically reproduce its various detail characteristics. Generating realistic facial texture details is likewise an important component of facial expression animation.
Methods based on three-dimensional morphable models use statistical analysis to extract an average face (a shape average and a texture average) and face coefficients (shape coefficients and texture coefficients) from actual three-dimensional faces captured in a laboratory setting. The two-dimensional face is then iteratively optimized under constraints such as facial key points to obtain the required shape and texture coefficients, finally yielding the reconstructed face.
However, a face photograph shows the texture of the face from only one angle, so different texture synthesis methods and texture mapping methods affect the realism of the model. When texture information from several views is combined by matching, textures easily overlap, causing visual confusion, and gaps and cracks can appear in the overlap regions when textures from multiple angles are merged.
Disclosure of Invention
The quantifiable face model reconstruction method and system provided herein solve the prior-art problem of disordered textures when overlapping images are stitched, and improve the accuracy and efficiency of face image processing.
An embodiment of the application provides a quantifiable face model reconstruction method and system, wherein the method comprises the following steps:
acquiring a face image, and dividing the face image to generate a face picture with a preset size;
acquiring a face picture, preprocessing the face picture, and identifying key points of the face picture to generate an initial texture image and the facial features associated with the key points;
acquiring a stylized processing template, fusing an initial texture image with the stylized processing template to obtain a sample image after style conversion, and determining facial parameters and a first parameter set of the sample image under each facial state based on facial features of the sample image;
training the sample image based on the deformable template, and optimizing texture details of the sample image to obtain an improved texture image;
selecting the largest improved texture image, determining whether an overlapping area exists in the adjacent improved texture images, and outputting an overlapping image corresponding to the overlapping area when the overlapping area exists;
and for the overlapped image, acquiring a face parameter corresponding to the overlapped image and a second parameter set, and obtaining a face reconstruction image based on the face parameter of the overlapped image.
One or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
Face states corresponding to different facial features are extracted from the sample image and the overlapping image, so that the facial parameters of each facial feature in each face state can be determined; the first parameter set and the second parameter set are then processed to determine the resulting facial parameters.
The first parameter set and the second parameter set, and hence the facial parameters, are controlled in segments, which improves the efficiency of selecting facial parameters during stylized face processing. When overlapping images exist among the face images, their interference can be reduced and the accuracy of the images improved.
Drawings
FIG. 1 is a flowchart of a quantifiable face model reconstruction method;
FIG. 2 is a schematic flowchart of the initial texture image style conversion in a quantifiable face model reconstruction method;
FIG. 3 is a flowchart of obtaining the facial parameters of a sample image in a quantifiable face model reconstruction method;
FIG. 4 is a flowchart of obtaining an improved texture image in a quantifiable face model reconstruction method;
FIG. 5 is a schematic flowchart of overlapping image processing in a quantifiable face model reconstruction method;
FIG. 6 is a flowchart of another overlapping image processing in a quantifiable face model reconstruction method;
FIG. 7 is a system diagram of a quantifiable face model reconstruction system.
Detailed Description
So that the invention may be readily understood, a more complete description of the invention is given below with reference to the appended drawings, in which preferred embodiments are illustrated. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that this disclosure will be thorough and complete.
It should be noted that the terms "vertical", "horizontal", "upper", "lower", "left", "right", and the like are used herein for illustrative purposes only and do not represent the only embodiment.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs; the terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention; the term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
As shown in fig. 1, a quantifiable face model reconstruction method includes:
s101, acquiring a face image, and performing segmentation processing on the face image to generate a face picture with a preset size.
To obtain the face image, an image uploaded by the user may be collected; the obtained image is compared against a preset image library and divided into a plurality of face pictures of the same size.
Face images are also captured according to the light reaching the camera, acquiring images under different lighting conditions at each angle, so that a neural network for recognizing a single face image can be trained.
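The patent does not spell out the division procedure of step S101; purely as an illustration, a face image can be split into equally sized pictures with a few lines of numpy (the function name `split_into_tiles` is hypothetical, not from the patent):

```python
import numpy as np

def split_into_tiles(image, tile_size):
    """Split an image array into equally sized square pictures.

    Tiles that would run past the border are discarded, so every
    returned picture has exactly the preset size.
    """
    h, w = image.shape[:2]
    return [image[y:y + tile_size, x:x + tile_size]
            for y in range(0, h - tile_size + 1, tile_size)
            for x in range(0, w - tile_size + 1, tile_size)]

# A 4x6 toy image divided into 2x2 pictures gives 2 * 3 = 6 of them.
img = np.arange(24).reshape(4, 6)
tiles = split_into_tiles(img, 2)
```

A production system would more likely crop around a detected face box before tiling; this sketch only fixes the "preset size" behavior.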
S102, acquiring a face picture, preprocessing the face picture, and identifying key points of the face picture to generate an initial texture image and the facial features related to the key points.
The obtained face pictures are recognized according to the common basic characteristics of human faces; the parts of the face picture that differ most from a common picture style, such as the nose and the mouth, are selected and their key points identified, and the face image is tiled by planar projection in UV space so that it is unwrapped onto a two-dimensional plane.
The UV unwrapping makes face recognition and expression recognition more effective and displays the facial textures in the image more fully, so that the texture features are more complete and specific.
The face picture is then processed according to the key points used in face recognition, and a subset of facial features with distinctive characteristics is selected, which makes it easier to obtain a more accurate face reconstruction image.
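The planar projection into UV space mentioned above can be sketched, under the simplifying assumption of an orthographic unwrap, as follows (`planar_uv_unwrap` is a hypothetical name; the patent does not specify its projection):

```python
import numpy as np

def planar_uv_unwrap(vertices):
    """Orthographic UV unwrap: drop depth (z) and rescale the remaining
    x, y coordinates so the face fills the unit UV square."""
    xy = vertices[:, :2]
    mins = xy.min(axis=0)
    return (xy - mins) / (xy.max(axis=0) - mins)

# Four 3D key points; their z values are discarded by the unwrap.
verts = np.array([[0.0, 0.0, 1.0],
                  [2.0, 0.0, 0.5],
                  [2.0, 4.0, 0.2],
                  [0.0, 4.0, 0.9]])
uv = planar_uv_unwrap(verts)
```

Real face unwraps (e.g. cylindrical or template-based) preserve more surface area around the sides of the head; the orthographic version is only the simplest instance of "tiling in UV space."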
S103, acquiring a stylized processing template, fusing the initial texture image with the stylized processing template to obtain a sample image after style conversion, and determining face parameters and a first parameter set of each face state of the sample image based on the facial features of the sample image.
Specifically, in this step the stylization is performed according to the obtained key points and facial parameters: the preset three-dimensional-style face image is processed, and the input initial texture image is processed to obtain a sample image. The generated sample image can be preprocessed and the rendering parameters adjusted so that enough of the sample image's facial parameters are retained.
S104, training the sample image based on the deformable template, and optimizing texture details of the sample image to obtain an improved texture image.
The facial features and facial parameters of each point of the stylized sample image are compared across different angles, in order to adjust the three-dimensional face reconstruction and the texture combination.
Each facial feature is mapped to corresponding position information, and the corresponding facial parameters are acquired from the mapped positions, yielding the corresponding stylized face image.
S105, selecting the largest improved texture image, determining whether an overlapping area exists in the adjacent improved texture images, and outputting an overlapping image corresponding to the overlapping area when the overlapping area exists.
Specifically, the facial parameters of the texture images are determined and compared for overlapping parts, and the overlapping parts are superimposed to obtain the corresponding overlapping image. The output overlapping image is the image information after the corresponding parts have been superimposed; it is used to resolve texture anomalies that arise when the images are combined and optimized.
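Determining whether adjacent improved texture images share an overlapping area reduces, in the simplest reading, to intersecting their placement rectangles. The patent does not specify the test; a minimal sketch with a hypothetical `overlap_region` helper:

```python
def overlap_region(box_a, box_b):
    """Intersection of two axis-aligned boxes given as (x0, y0, x1, y1);
    returns the overlap box, or None when the boxes do not overlap."""
    x0, y0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x1, y1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None
```

When a non-None region is returned, the pixels inside it would be cropped out as the "overlapping image" for the subsequent processing of step S106.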
S106, for the overlapped image, acquiring the face parameter and the second parameter set corresponding to the overlapped image, and obtaining a face reconstruction image based on the face parameter of the overlapped image.
In one embodiment of the present invention, as shown in fig. 2, the style conversion of the initial texture image is specifically implemented in the following manner:
and acquiring a stylized processing template, and fusing the initial texture image with the stylized processing template to obtain a sample image after style conversion.
S201, defining a style network structure.
The defined style network structure is adjusted according to sample images preset by the user; the style that occurs most often can be selected to define the style network structure.
Either forward propagation or backward propagation can be adopted for the configured style network structure. Forward propagation is selected here, and a sample image is constructed by measuring the content of the initial texture image and the loss of its features.
S202, converting an initial texture image into a preset size, and acquiring facial features and facial parameters of the initial texture image;
at this time, the initial texture image is converted into an image size of 400 pixels, and the specific setting may adjust the image size according to the need.
The facial features of the initial texture image represent prominent features in the face picture, the facial parameters are general feature information in the face picture, and the specific content of the picture is represented.
S203, generating a sample image of style conversion according to the facial features and the facial parameters of the initial texture image based on the style network structure.
Specifically, the required facial parameters can be selected according to the obtained facial features, and the color values of the pixels in the facial features are assigned to the corresponding points of the three-dimensional image, yielding the converted sample image. Equivalently, the initial texture image is rendered in three dimensions according to the obtained coordinate information, and the unwrapped UV map is converted back onto a three-dimensional model to obtain the converted sample image.
S204, comparing the sample images generated before and after conversion, checking the facial features in the sample images against the facial parameters to determine the current state of the image conversion, and outputting the compared sample image.
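The patent does not give the loss used by the style network of S201 to S204. Neural style transfer commonly measures style with Gram matrices of feature maps; a numpy sketch under that assumption (names hypothetical) is:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map, the usual
    style representation in neural style transfer."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(feats_a, feats_b):
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(feats_a) - gram_matrix(feats_b)
    return float(np.mean(diff ** 2))

# Toy feature map standing in for a network activation.
feats = np.random.default_rng(0).normal(size=(3, 4, 4))
```

In a real pipeline `feats_a` and `feats_b` would be activations of a pretrained network for the sample image and the stylized template, and the loss would be minimized during the forward-propagation-driven image construction described above.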
As shown in fig. 3, a specific implementation of the facial parameters for acquiring the sample image is as follows:
S301, acquiring N facial parameters for each facial feature in N face states, where N is the number of clusters of the facial features and is a positive integer greater than 0.
A face state refers to a changing condition of the face, such as its expression or overall state.
S302, for the facial parameters corresponding to the sample image in the N face states, outputting the largest facial parameter as the first face parameter.
The first face parameter is the parameter used when extracting facial features and other feature information; it is also the parameter around which the facial parameters are clustered, the parameters being divided into several data sets according to the different face states.
S303, taking the first face parameters as initial calculation points, and acquiring a first parameter set corresponding to each face parameter.
The first parameter set takes the first facial parameter as a starting point, and screens out facial parameters related to the first facial parameter.
Further, the processing mode of the sample image further comprises:
s304, carrying out feature recognition on facial features of the sample image, and outputting a first facial feature state.
S305, performing central clustering by taking the first facial feature state as a central point, outputting N face states after clustering, and determining N face parameters based on the face parameters corresponding to the N face states.
The first facial feature state is used to centrally cluster the face states and screen out those related to it; each facial feature is determined from the characteristics of the N face states, thereby fixing the feature information list selected during style conversion. The N face states are the states selected after clustering across comparisons of different pictures, and serve to identify the different labels, or the different clustering approaches, chosen for different face states.
When stylization is performed, the face state corresponding to the sample image is first confirmed to be an identifiable state; during clustering, facial features in the same state are selected for the clustering process, which yields the parameters used for stylization.
When clustering around the centers of the first facial feature states, N face states are extracted as initial cluster centers and each face state is assigned to its nearest initial center by distance. The distance differences between the face states are then calculated, the square root of each difference is taken as the evaluation value of its cluster center, and the initial centers whose evaluation value falls below a preset threshold are retained as cluster centers, thereby determining the value of N. The N face states are then compared one by one and placed in one-to-one correspondence with the facial parameters.
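The center-selection rule above, keeping only initial centers whose evaluation value stays below a preset threshold, can be approximated as follows. This is a sketch of one plausible reading with hypothetical names; the patent's exact distance and evaluation formulas are not given, so root-mean-square distance is assumed as the evaluation value:

```python
import numpy as np

def select_cluster_centers(states, candidates, max_eval):
    """Keep candidate centers whose clusters are tight enough.

    Each face state is assigned to its nearest candidate center; a center's
    evaluation value is the root-mean-square distance of its assigned states,
    and only centers below the preset evaluation value are retained.
    """
    dists = np.linalg.norm(states[:, None, :] - candidates[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    kept = []
    for i, center in enumerate(candidates):
        member_d = dists[nearest == i, i]
        if member_d.size and np.sqrt(np.mean(member_d ** 2)) < max_eval:
            kept.append(center)
    return np.array(kept)

# Two tight groups of face states; the stray candidate attracts no members.
states = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                   [10.0, 10.0], [10.0, 10.1], [9.9, 10.0]])
candidates = np.array([[0.0, 0.0], [10.0, 10.0], [5.0, 5.0]])
kept = select_cluster_centers(states, candidates, max_eval=1.0)
```

The number of retained centers then plays the role of N in steps S301 to S305.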
After the face index is obtained, the image loss of the sample image is calculated from it, so that the loss present in the sample image is determined and the facial parameters are adjusted accordingly; the stylized sample image can then achieve a sufficient display effect. The parameter set corresponding to each facial parameter is used to compute the different facial parameters, so that the resulting face image achieves the best effect.
In one embodiment of the present invention, as shown in fig. 4, to make the sample image fit better when displayed in three dimensions, the generated sample image is reconstructed by point cloud fitting, so as to obtain a better-fitting face image.
Specifically, the specific implementation manner of obtaining the improved texture image is as follows:
s401, acquiring a sample image and an initial texture image corresponding to the sample image.
S402, mapping the initial texture image onto the three-dimensional model according to the three-dimensional coordinates to obtain the minimum rotation matrix and the minimum translation matrix among the point cloud fitting coefficients.
S403, determining a three-dimensional style face image corresponding to the initial texture image based on the face parameter corresponding to the initial texture image and the sample image.
S404, determining a three-dimensional style face image under three-dimensional coordinates based on the rotation matrix and the translation matrix.
Specifically, the angles and translation directions used when constructing the three-dimensional face image are determined from the configured rotation matrix and translation matrix, so that the contours and edges of the generated face image are better defined.
Further, the generated three-dimensional-style face image is point-cloud fitted according to the point cloud data: the image is divided into several reference planes, and these planes are fitted against one another so as to reduce erroneous collisions within the three-dimensional-style face image.
S405, dividing the three-dimensional style face image into N reference planes, and acquiring point cloud data corresponding to the N reference planes, wherein the number of the N reference planes is consistent with the number of the face parameters.
S406, acquiring a barycenter coordinate in each reference plane based on the point cloud data of each reference plane, and taking the barycenter coordinate as the barycenter coordinate of each reference plane; and sequentially iterating the barycentric coordinates until the reference plane is fitted, and taking the fitted three-dimensional style face image as an improved texture image.
The barycentric coordinate described here is the point cloud datum with the highest weight among the point cloud data of each reference plane; it is taken as the starting point of the iteration, and adjacent reference planes are iterated over until the final barycentric coordinates complete the fitting of the planes.
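The "minimum rotation matrix and minimum translation matrix" of a point-cloud fit are classically obtained with the Kabsch algorithm. The patent does not name its solver, so the following is an illustrative sketch only:

```python
import numpy as np

def rigid_fit(source, target):
    """Least-squares rotation and translation aligning source onto target
    (Kabsch algorithm): center both clouds, take the SVD of the
    cross-covariance, and correct the sign to get a proper rotation."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    h = (source - src_c).T @ (target - tgt_c)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T  # 3-D points assumed
    trans = tgt_c - rot @ src_c
    return rot, trans

# Recover a known 90-degree rotation about z plus a translation.
rng = np.random.default_rng(1)
src = rng.normal(size=(6, 3))
rot_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
rot_est, t_est = rigid_fit(src, src @ rot_true.T + t_true)
```

An iterative scheme such as ICP would alternate this rigid fit with nearest-neighbor correspondence, which matches the per-plane iteration toward the barycentric coordinates described above.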
In one embodiment of the present invention, as shown in fig. 5, to reduce the influence of the overlapping image on the face reconstruction image, the overlapping image is processed according to the obtained facial parameters, as follows:
s501, obtaining facial features of the overlapped images and facial parameters corresponding to the overlapped images.
S502, detecting facial features of the overlapped image and outputting second facial parameters.
S503, taking the second face parameter as a starting point, and acquiring a second parameter set corresponding to the second face parameter.
Further, detecting the facial features of the overlapping image means detecting inconsistent facial features present in it and comparing the corresponding features with the improved texture image, which further improves the accuracy of the improved texture image.
S504, comparing the face states corresponding to each second parameter set, performing feature recognition according to the second parameter sets, and outputting second face feature states.
S505, iterating the second facial feature state, outputting the clustered facial state, and determining facial parameters based on the clustered facial state.
The second facial feature state is a face state expressed in the overlapping image; the face states are filtered through it to obtain the relevant ones, thereby determining the facial parameters present, and the three-dimensional model can be optimized according to the facial parameters obtained.
Further, to make the facial parameters of the overlapping image more accurate, the first face parameter and the second face parameter are compared to obtain the related facial parameters, and the facial parameters are controlled in segments. Since different numbers of facial parameters are generated for users in different modes, the parameters are segmented and different facial parameters are selected for users in different modes; a stylized face can then be generated with as few facial parameters as possible during reconstruction, improving the efficiency of face reconstruction.
Specifically, as shown in fig. 6, the processing of the overlapping image further includes the following implementation manner:
s601, a first parameter set and a second parameter set corresponding to the overlapped image are obtained.
S602, carrying out sectional control on the first parameter set according to a preset standard value, identifying the facial parameters larger than a preset interval, and taking the facial parameters larger than the preset interval as a third parameter set.
The preset standard value is the parameter value that governs the segmentation of the first parameter set, which is divided into several segments according to the magnitude of the facial parameter values; the preset interval is the interval range corresponding to the preset standard value.
S603, performing secondary segmentation control on the third parameter set to obtain segmentation parameters.
During the secondary segmentation control, the third parameter set is screened and checked a second time against the configured facial parameter intervals, further dividing the facial parameters into several parameter sets of different segments. The segmentation parameters are the segments used to control the facial parameters; they reduce the number of facial parameter look-ups and improve processing efficiency.
S604, when the facial parameters in the third parameter set exist in the second parameter set, the second parameter set is controlled in a segmentation mode by segmentation parameters.
Here the second parameter set is controlled in segments according to the obtained segmentation parameters, so that the facial parameters related to the second parameter set correlate better and can be divided into more suitable interval sets.
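Segmented control of a parameter set by preset standard values can be sketched as a simple bucketing step (`segment_parameters` is a hypothetical name; the patent's actual standard values and intervals are not specified):

```python
def segment_parameters(params, boundaries):
    """Partition parameter values into len(boundaries) + 1 ordered segments.

    A value falls into segment i when it exceeds exactly i of the
    (ascending) boundary values, mirroring segmented interval control.
    """
    segments = [[] for _ in range(len(boundaries) + 1)]
    for p in params:
        segments[sum(p > b for b in boundaries)].append(p)
    return segments

# Two boundary values produce three interval sets.
segs = segment_parameters([0.1, 0.5, 0.9, 1.5], boundaries=[0.3, 1.0])
```

Applying the same boundaries to the first and second parameter sets keeps their segments aligned, which is what lets the "segmentation parameters" of the third set control the second set.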
The technical scheme in the embodiment of the application at least has the following technical effects or advantages:
Face states corresponding to different facial features are extracted from the sample image and the overlapping image, so that the facial parameters of each facial feature in each face state can be determined; the first parameter set and the second parameter set are then processed to determine the resulting facial parameters.
The first parameter set and the second parameter set, and hence the facial parameters, are controlled in segments, which improves the efficiency of selecting facial parameters during stylized face processing. When overlapping images exist among the face images, their interference can be reduced and the accuracy of the images improved.
As shown in fig. 7, a quantifiable face model reconstruction system includes:
the data acquisition module is used for acquiring face images, and outputting the face images to the feature extraction module when the number of the face images reaches the stylized processing condition.
The feature extraction module is used for extracting facial features and extracting the facial features according to key points of a preset face image.
The stylization processing module is used for performing stylization processing on the image according to the facial features to obtain a preliminarily converted sample image.
The point cloud reconstruction module is used for performing point cloud fitting on the sample image according to the three-dimensional coordinates to obtain an optimized improved texture image.
The edge detection module is used for performing edge detection on the improved texture images and detecting whether an overlapping area exists after the improved texture images from a plurality of angles are combined; if an overlapping area exists, edge detection is performed in the overlapping area to obtain the overlapped image.
The face reconstruction module is used for performing edge-scale fusion on the overlapped image based on the facial features and facial parameters of the overlapped image to obtain a face reconstruction image.
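The edge detection module's overlap test can be illustrated with a simplified sketch. Treating each combined texture image's placement as an axis-aligned box `(x0, y0, x1, y1)` is an assumption for illustration only; the patent's module operates on edge-detected image content, not bounding boxes.

```python
# Simplified sketch of the overlap-area check: after multi-angle improved
# texture images are combined, test whether two placements overlap and
# return the overlap rectangle to hand to the face reconstruction module.
# The axis-aligned box representation is an illustrative assumption.

def overlap_region(box_a, box_b):
    """Return the intersection rectangle of two image placements,
    or None when they do not overlap."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    x0, y0 = max(ax0, bx0), max(ay0, by0)   # intersection lower corner
    x1, y1 = min(ax1, bx1), min(ay1, by1)   # intersection upper corner
    if x0 >= x1 or y0 >= y1:                # empty intersection
        return None
    return (x0, y0, x1, y1)

# Two adjacent texture placements that overlap along their shared edge.
region = overlap_region((0, 0, 100, 80), (90, 10, 180, 80))
```

Here `region` is the overlap strip that would be cut out as the overlapped image and passed on for edge-scale fusion.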
In one embodiment of the present invention, a quantifiable face model reconstruction system further includes a first face parameter unit and a second face parameter unit;
the first facial parameter unit is used for acquiring a facial state corresponding to the sample image and determining facial parameters according to the facial state of the sample image;
the second face parameter unit is used for acquiring the face state corresponding to the overlapped image, and determining the face parameter according to the face state of the overlapped image.
In one embodiment of the present invention, a quantifiable face model reconstruction system further includes a segment control unit;
the segmentation control unit is used for obtaining segmentation parameters according to the facial parameters in the first parameter set, which are larger than the preset interval, and carrying out segmentation control on the second parameter set based on the segmentation parameters.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A method for reconstructing a quantifiable face model, comprising:
acquiring a face image, and dividing the face image to generate a face picture with a preset size;
acquiring a face picture, preprocessing the face picture, and identifying key points of the face picture to generate an initial texture image and the facial features related to the key points;
acquiring a stylized processing template, fusing an initial texture image with the stylized processing template to obtain a sample image after style conversion, and determining facial parameters and a first parameter set of the sample image under each facial state based on facial features of the sample image;
training the sample image based on the deformable template, and optimizing texture details of the sample image to obtain an improved texture image;
selecting the largest improved texture image, determining whether an overlapping area exists in the adjacent improved texture images, and outputting an overlapping image corresponding to the overlapping area when the overlapping area exists;
and for the overlapped image, acquiring a face parameter corresponding to the overlapped image and a second parameter set, and obtaining a face reconstruction image based on the face parameter of the overlapped image.
2. A method of quantifiable face model reconstruction as defined in claim 1 wherein a style network structure is defined;
converting the initial texture image into a preset size, and acquiring facial features and facial parameters of the initial texture image;
based on the style network structure, generating a style-converted sample image according to the facial features and the facial parameters of the initial texture image;
and comparing the sample images generated before and after the conversion, comparing differences in their facial features and facial parameters to determine the current image-conversion status, and outputting the compared sample image.
3. A quantifiable face model reconstruction method according to claim 1 wherein N face parameters are obtained for each of the face features in N face states;
outputting the largest face parameter as a first face parameter for the face parameters corresponding to the sample images in the N face states;
taking the first face parameters as initial calculation points, and acquiring a first parameter set corresponding to each face parameter;
performing feature recognition on facial features of the sample image, and outputting a first facial feature state;
and carrying out central clustering by taking the first facial feature state as a central point, outputting N face states after clustering, and determining N face parameters based on the face parameters corresponding to the N face states.
4. A method of reconstructing a quantifiable face model as claimed in claim 1, wherein a sample image and an initial texture image corresponding to the sample image are obtained;
mapping the initial texture image onto a three-dimensional model according to the three-dimensional coordinates to obtain a minimum rotation matrix and a minimum translation matrix in the point cloud fitting coefficients;
and determining a three-dimensional style face image corresponding to the initial texture image based on the face parameter corresponding to the initial texture image and the sample image.
5. The method of reconstructing a quantifiable face model of claim 4 wherein a three-dimensional style face image in three-dimensional coordinates is determined based on the rotation matrix and the translation matrix;
dividing a three-dimensional style face image into N reference planes, and acquiring point cloud data corresponding to the N reference planes, wherein the number of the N reference planes is consistent with the number of face parameters;
acquiring barycenter coordinates of each reference plane based on the point cloud data of that reference plane; and iterating the barycenter coordinates in sequence until the reference planes are fitted, and taking the fitted three-dimensional style face image as the improved texture image.
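The barycenter step of claim 5 can be sketched minimally. The per-axis mean as the barycenter of a plane's point cloud is standard; the pure-Python tuple representation and the choice of N = 2 planes are illustrative assumptions, and the iterative fitting loop itself is not shown since the patent does not specify its update rule.

```python
# Sketch of the barycenter computation per reference plane (claim 5):
# the barycenter of a point cloud is the per-axis mean of its points.
# Data layout and N = 2 planes are illustrative assumptions.

def barycenter(points):
    """Barycenter (centroid) of one reference plane's point cloud."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

# Point clouds for N = 2 reference planes of the three-dimensional
# style face image (one (x, y, z) tuple per point).
planes = [
    [(0.0, 0.0, 1.0), (2.0, 0.0, 1.0), (1.0, 3.0, 1.0)],
    [(0.0, 0.0, 2.0), (4.0, 0.0, 2.0)],
]
centers = [barycenter(p) for p in planes]
```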
6. A method of reconstructing a quantifiable face model as claimed in claim 1, wherein facial features of the superimposed image and corresponding facial parameters of the superimposed image are obtained;
detecting facial features of the overlapped images and outputting second facial parameters;
taking the second facial parameter as a starting point to obtain a second parameter set corresponding to the second facial parameter;
comparing the face states corresponding to each second parameter set, performing feature recognition according to the second parameter sets, and outputting second face feature states;
and iterating the second facial feature state, outputting the clustered facial state, and determining facial parameters based on the clustered facial state.
7. The method for reconstructing a quantifiable face model of claim 1 in which a first parameter set and a second parameter set corresponding to overlapping images are obtained;
the first parameter set is controlled in segments according to a preset standard value, face parameters larger than a preset interval are identified, and those face parameters are taken as a third parameter set;
performing secondary segmentation control on the third parameter set to obtain segmentation parameters;
when facial parameters of the third parameter set also exist in the second parameter set, the second parameter set is controlled in segments through the segmentation parameters.
8. A quantifiable face model reconstruction system comprising: the data acquisition module is used for acquiring face images, and outputting the face images to the feature extraction module when the number of the face images reaches the stylized processing condition;
the feature extraction module is used for extracting facial features, and extracting the facial features according to key points of a preset face image;
the stylization processing module is used for performing stylization processing on the image according to the facial features to obtain a sample image after preliminary conversion;
the point cloud reconstruction module is used for carrying out point cloud fitting on the sample image according to the three-dimensional coordinates to obtain an optimized improved texture image;
the edge detection module is used for carrying out edge detection on the improved texture images and detecting whether an overlapping area exists after the improved texture images with a plurality of angles are combined; if the overlapping area exists, carrying out edge detection on the overlapping image in the overlapping area to obtain an overlapping image;
and the face reconstruction module is used for carrying out edge scale fusion on the overlapped image based on the facial features and the facial parameters of the overlapped image to obtain a face reconstruction image.
9. A quantifiable face model reconstruction system according to claim 8 further comprising a first facial parameter unit and a second facial parameter unit;
the first facial parameter unit is used for acquiring a facial state corresponding to the sample image and determining facial parameters according to the facial state of the sample image;
the second face parameter unit is used for acquiring the face state corresponding to the overlapped image, and determining the face parameter according to the face state of the overlapped image.
10. A quantifiable face model reconstruction system according to claim 8 further comprising a segmentation control unit;
the segmentation control unit is used for obtaining segmentation parameters according to the facial parameters in the first parameter set, which are larger than the preset interval, and carrying out segmentation control on the second parameter set based on the segmentation parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311321603.7A CN117315154A (en) | 2023-10-12 | 2023-10-12 | Quantifiable face model reconstruction method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117315154A true CN117315154A (en) | 2023-12-29 |
Family
ID=89286322
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311321603.7A Pending CN117315154A (en) | 2023-10-12 | 2023-10-12 | Quantifiable face model reconstruction method and system |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||