CN116311474A - Face image face filling method, system and storage medium - Google Patents


Info

Publication number
CN116311474A
CN116311474A (application CN202310415974.5A)
Authority
CN
China
Prior art keywords
face
dimensional
dimensional face
model
filling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310415974.5A
Other languages
Chinese (zh)
Inventor
林继亮
李明悦
仇中宝
刘洛麒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Meitu Technology Co Ltd
Original Assignee
Xiamen Meitu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meitu Technology Co Ltd filed Critical Xiamen Meitu Technology Co Ltd
Priority to CN202310415974.5A
Publication of CN116311474A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a face image face filling method, system and storage medium. The method comprises the following steps: acquiring an input image and extracting face features to obtain two-dimensional face features; obtaining two-dimensional face texture feature information according to the two-dimensional face features, and calculating an illumination coefficient of the input image; performing three-dimensional face reconstruction based on the two-dimensional face features and the two-dimensional face texture feature information, in combination with a preset three-dimensional face model, to obtain a first three-dimensional face model; performing deformation filling processing on the first three-dimensional face model to obtain a second three-dimensional face model, and adding the illumination coefficient to obtain a third three-dimensional face model; and mapping the third three-dimensional face model onto a two-dimensional plane to obtain a face filling effect diagram of the face image. By performing three-dimensional face reconstruction and deformation filling processing in combination with a preset three-dimensional face model, the invention makes the filling effect diagram more vivid and natural with a stronger stereoscopic impression, so that the risk of medical filling can be comprehensively evaluated for the user.

Description

Face image face filling method, system and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a face filling method, a face filling system and a storage medium for face images.
Background
The face is a complex and precisely organized tissue structure: its surface can be divided into dozens of aesthetic subunits, and its interior is supported mainly by dozens of fat compartments, which together form the overall appearance of the face. As age increases, the fat in these compartments changes continuously and a person's appearance changes with it; where a compartment holds less fat, the corresponding area appears more sunken, making the face look older and less attractive.
The depressed areas of the face mainly comprise the forehead, brow ridge, eye sockets, temples, tear troughs, apple muscles (cheeks), nasal base and chin. Filling the apple muscles, forehead, brow ridge, chin and similar areas therefore improves the face markedly: if these areas are relatively flat, the face appears sunken and older, and even well-proportioned facial features can hardly look beautiful. Filling the temples so that they are full likewise improves the appearance immediately. In addition, aging leads to atrophy of facial fat, which readily occurs at the nasal base and around the eye sockets, so filling fat into these positions can also noticeably reduce the appearance of aging and make a person look younger and more energetic.
The depressed areas of a user's face can be filled by medical means to restore a youthful appearance, and facial filling processing of an image can serve as an effective medical simulation, allowing the user to evaluate the risks of cosmetic medicine more comprehensively. In addition, in the field of picture beautification, photography enthusiasts also want to fill the faces in face images by retouching, so that their faces appear fuller and younger on camera.
However, existing face image face filling methods mainly perform face filling on a two-dimensional image by filtering and similar means; the result has no stereoscopic impression, the final presentation effect is poor, and the risk of medical filling cannot be comprehensively evaluated for the user.
Disclosure of Invention
The invention mainly aims to provide a face image face filling method, system and storage medium, so as to solve the technical problems that the existing face image face filling method lacks stereoscopic impression, resulting in a poor final presentation effect and an inability to comprehensively evaluate the risk of medical filling for users.
In order to achieve the above object, the present invention provides a face image face filling method comprising the steps of: acquiring an input image and extracting face features to obtain two-dimensional face features; obtaining two-dimensional face texture feature information according to the two-dimensional face features; calculating an illumination coefficient of the input image according to the two-dimensional face features; performing three-dimensional face reconstruction based on the two-dimensional face features and the two-dimensional face texture feature information, in combination with a preset three-dimensional face model, to obtain a first three-dimensional face model; performing deformation filling processing on the first three-dimensional face model to obtain a second three-dimensional face model; adding the illumination coefficient to the second three-dimensional face model to obtain a third three-dimensional face model; and mapping the third three-dimensional face model onto a two-dimensional plane to obtain a face filling effect diagram of the face image.
Optionally, based on two-dimensional face features and two-dimensional face texture feature information, a preset three-dimensional face model is combined to reconstruct a three-dimensional face to obtain a first three-dimensional face model, and the method specifically comprises the following steps: based on the two-dimensional face features and the two-dimensional face texture feature information, performing alignment processing and registration processing, and performing three-dimensional face reconstruction by combining a preset three-dimensional face model to obtain an initialized three-dimensional face model; the preset three-dimensional face model is obtained through face feature extraction or is a standard three-dimensional face model; and performing shape optimization and texture optimization processing on the initialized three-dimensional face model to obtain a first three-dimensional face model.
Optionally, an input image is acquired, face feature extraction is performed to obtain two-dimensional face features, and the method specifically comprises the following steps: acquiring an input image, and performing image processing on the input image to obtain a processed image; and inputting the processed image into a deep learning model for face feature extraction to obtain two-dimensional face features, wherein the two-dimensional face features comprise a face boundary box, face key points and confidence degrees of face areas.
Optionally, the image processing includes an image conversion process, specifically converting the input image into a grayscale image, and an image adjustment process, specifically adjusting the brightness and contrast of the grayscale image.
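As an illustrative sketch of the image conversion and adjustment steps (the BT.601 luminance weights and the alpha/beta values below are assumptions for demonstration, not values specified by the invention):

```python
import numpy as np

def to_grayscale(img_rgb):
    # Weighted grayscale conversion (ITU-R BT.601 luminance weights)
    return img_rgb @ np.array([0.299, 0.587, 0.114])

def adjust(gray, alpha=1.2, beta=10.0):
    # Linear image adjustment: alpha scales contrast, beta shifts brightness
    return np.clip(alpha * gray + beta, 0.0, 255.0)

img = np.full((4, 4, 3), 100.0)   # dummy uniform RGB image
gray = to_grayscale(img)          # grayscale image, shape (4, 4)
out = adjust(gray)                # brightness/contrast-adjusted image
```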
Optionally, the two-dimensional face texture feature information is obtained according to the two-dimensional face features, specifically: according to the face bounding box and the face key points, the pixels within the face bounding box are sampled and interpolated through an image interpolation algorithm to obtain the two-dimensional face texture feature information. The illumination coefficient of the input image is calculated as follows: according to the face key points, a minimum bounding rectangle of the face is calculated, the extracted face region is projected onto a unit hemisphere, spherical harmonics are generated, and the order of the spherical harmonics is determined according to the required precision; the extracted face region is divided into a plurality of blocks, the RGB values of the pixels in each block are calculated and converted into function values in a spherical coordinate system, and the spherical harmonic coefficients are fitted by the least squares method to obtain the illumination coefficient beta.
Optionally, after the illumination coefficient of the input image is calculated according to the two-dimensional face features, illumination filtering is performed on the input image before three-dimensional face reconstruction. The illumination filtering is specifically: calculating the illumination intensities in different light source directions according to the spherical harmonics and the illumination coefficient beta, namely calculating the function value of each pixel point in the spherical coordinate system and summing it with the spherical harmonics and the illumination coefficient beta to obtain the illumination intensity; calculating the difference between the illumination intensity of each pixel point in the input image and the average illumination intensity of its surrounding pixel points to obtain the shadow value of that pixel point; and adding the shadow value back to the RGB values of the pixel point to realize illumination filtering.
Optionally, performing deformation filling processing on the first three-dimensional face model to obtain a second three-dimensional face model specifically comprises the following steps. The first three-dimensional face model Face3D is expressed as Face3D = M + S×A + T×B, where M is the average shape of the three-dimensional face model, specifically the average value obtained by performing principal component analysis on the preset three-dimensional face model; S is the shape vector; A is the coefficient of the shape vector; T is the three-dimensional face texture vector; and B is the coefficient of the three-dimensional face texture vector. Based on the three-dimensional feature points of the first three-dimensional face model, the three-dimensional coordinates of each feature point are calculated and arranged into vector form to obtain the shape vector S; based on the same three-dimensional feature points, the RGB values of each feature point are obtained and arranged into vector form to obtain the texture vector T. A standard shape vector Shape_Model of a standard three-dimensional face model is obtained, and the difference between the shape vector S and the standard shape vector Shape_Model is calculated and taken as the user's face filling degree value ΔS. The face region obtained by face feature extraction is further divided to obtain preset face feature point regions; based on the preset face feature point regions, a first subdivision is performed on the user's face filling degree value ΔS to obtain the filling degree value corresponding to each preset face feature point region, and a second subdivision is performed on the shape vector S to obtain the first shape vector corresponding to each preset face feature point region. The filling degree value corresponding to the region to be filled is looked up in the user's face filling degree value ΔS and added to the shape vector of that region to obtain a set S_reshape of deformed feature shape vectors; the second shape vector of the corresponding region, with the filling degree value added, is set as the constraint feature shape vector of the deformed feature shape vector set S_reshape, and interpolation fitting is performed on the constraint feature shape vector based on a deformation method using third-order Laplacian coordinates to obtain the deformed shape vector. According to the feature shape vector S_reshape obtained by interpolation fitting, the second three-dimensional face model is expressed as Face3D_reshape = M + S_reshape×A + T×B.
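The linear face model and the region-wise filling step can be sketched with NumPy as below; the vector dimension, the standard model (all zeros), and the region index set are illustrative assumptions, not the patent's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 9                          # illustrative: 3 feature points x 3 coordinates

M = np.zeros(n)                # average shape of the 3-D face model
S = rng.normal(size=n)         # shape vector of the reconstructed face
T = rng.normal(size=n)         # three-dimensional face texture vector
A, B = 1.0, 1.0                # shape / texture coefficients

face3d = M + S * A + T * B     # Face3D = M + S*A + T*B

S_model = np.zeros(n)          # assumed standard shape vector Shape_Model
delta_S = S - S_model          # face filling degree value per coordinate

region = np.array([0, 1, 2])   # assumed indices of the region to be filled
S_reshape = S.copy()
S_reshape[region] += delta_S[region]   # add the region's filling degree value

face3d_reshape = M + S_reshape * A + T * B   # deformed (filled) model
```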
Optionally, the preset face feature point regions at least comprise a forehead region, a brow region, an apple muscle region, a chin region, an eye socket region, a temple contour region, a tear trough region, a nose base region and an other region. The filling degree values corresponding to the preset face feature point regions at least comprise a forehead region filling degree value ΔS_foreHead, a brow region filling degree value ΔS_browArch, an apple muscle region filling degree value ΔS_plumpCheeks, a chin region filling degree value ΔS_chin, an eye socket region filling degree value ΔS_eyeSocket, a temple contour region filling degree value ΔS_temple, a tear trough region filling degree value ΔS_tearTrough, a nose base region filling degree value ΔS_smileLine and an other region filling degree value ΔS_other. The first shape vector at least comprises a first forehead feature shape vector S_foreHead, a first brow feature shape vector S_browArch, a first apple muscle feature shape vector S_plumpCheeks, a first chin feature shape vector S_chin, a first eye socket feature shape vector S_eyeSocket, a first temple feature shape vector S_temple, a first tear trough feature shape vector S_tearTrough, a first nose base feature shape vector S_smileLine and a first other shape vector S_other. The second shape vector at least comprises a second forehead feature shape vector S'_foreHead, a second brow feature shape vector S'_browArch, a second apple muscle feature shape vector S'_plumpCheeks, a second chin feature shape vector S'_chin, a second eye socket feature shape vector S'_eyeSocket, a second temple feature shape vector S'_temple, a second tear trough feature shape vector S'_tearTrough, a second nose base feature shape vector S'_smileLine and a second other shape vector S'_other.
Corresponding to the face image face filling method, the invention provides a face image face filling system, comprising: a face feature extraction module for acquiring an input image and extracting face features to obtain two-dimensional face features; a two-dimensional face texture feature information acquisition module for obtaining two-dimensional face texture feature information according to the two-dimensional face features; an illumination coefficient calculation module for calculating an illumination coefficient of the input image according to the two-dimensional face features; a three-dimensional face reconstruction module for performing three-dimensional face reconstruction based on the two-dimensional face features and the two-dimensional face texture feature information, in combination with a preset three-dimensional face model, to obtain a first three-dimensional face model; a deformation filling processing module for performing deformation filling processing on the first three-dimensional face model to obtain a second three-dimensional face model, and adding the illumination coefficient to the second three-dimensional face model to obtain a third three-dimensional face model; and a mapping module for mapping the third three-dimensional face model onto a two-dimensional plane to obtain a face image face filling effect diagram.
In addition, in order to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a face image face filling program which, when executed by a processor, implements the steps of the face image face filling method as described above.
The beneficial effects of the invention are as follows:
(1) Compared with the prior art, the three-dimensional face reconstruction is performed in combination with a preset three-dimensional face model, so that the reconstruction result is more accurate; combined with the deformation filling processing, the filling effect diagram is more vivid and natural with a stronger stereoscopic impression, so that the risk of medical filling can be comprehensively evaluated for the user and the user is helped to assess the expected medical filling effect;
(2) Compared with the prior art, through the alignment and registration processing, the input image is made to correspond to the preset three-dimensional face model before three-dimensional face reconstruction, so that the actual face shape of the input image is matched more accurately;
(3) Compared with the prior art, the robustness and accuracy of face feature extraction are improved through the image conversion and image adjustment processing;
(4) Compared with the prior art, the accuracy and precision of three-dimensional face reconstruction are improved by calculating the illumination coefficient beta and performing illumination filtering on the two-dimensional image;
(5) Compared with the prior art, the region-by-region deformation filling processing makes the filling effect more natural and realistic, a full and three-dimensional facial image can be obtained, the risk of medical filling can be comprehensively evaluated for the user, and the operation process is simple, convenient and efficient.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
fig. 1 is a flowchart of the face image face filling method of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, the face image face filling method of the present invention comprises the following steps: acquiring an input image and extracting face features to obtain two-dimensional face features; obtaining two-dimensional face texture feature information according to the two-dimensional face features; calculating an illumination coefficient of the input image according to the two-dimensional face features; performing three-dimensional face reconstruction based on the two-dimensional face features and the two-dimensional face texture feature information, in combination with a preset three-dimensional face model, to obtain a first three-dimensional face model; performing deformation filling processing on the first three-dimensional face model to obtain a second three-dimensional face model; adding the illumination coefficient to the second three-dimensional face model to obtain a third three-dimensional face model; and mapping the third three-dimensional face model onto a two-dimensional plane to obtain a face filling effect diagram of the face image.
According to the invention, three-dimensional face reconstruction is performed in combination with a preset three-dimensional face model, so that the reconstruction result is more accurate; combined with the deformation filling processing, the filling effect diagram is more vivid and natural with a stronger stereoscopic impression, so that the risk of medical filling can be comprehensively evaluated for the user.
In this embodiment, based on two-dimensional face features and two-dimensional face texture feature information, a three-dimensional face reconstruction is performed in combination with a preset three-dimensional face model to obtain a first three-dimensional face model, which specifically includes the following steps: based on the two-dimensional face features and the two-dimensional face texture feature information, performing alignment processing and registration processing, and performing three-dimensional face reconstruction by combining a preset three-dimensional face model to obtain an initialized three-dimensional face model; the preset three-dimensional face model is obtained through face feature extraction or is a standard three-dimensional face model; and performing shape optimization and texture optimization processing on the initialized three-dimensional face model to obtain a first three-dimensional face model.
Preferably, the three-dimensional model initialization in the invention is based on a 3D Morphable Model (3DMM). The 3DMM is a statistical model describing the shape and texture of a face, and can be learned from a large amount of three-dimensional face scan data. Based on the two-dimensional face features, the input image is made to correspond to the 3DMM through alignment and registration processing, and an initialized three-dimensional face model is generated.
Preferably, the shape optimization optimizes the shape of the three-dimensional model using a method based on the Iterative Closest Point (ICP) algorithm, which achieves the optimization by minimizing the distance between the three-dimensional model and the input image. Specifically, shape regularization, weight balancing and other means can be used to balance shape change and maintain stability, so that the shape matches the actual face shape of the user more accurately.
Preferably, texture optimization refers to optimizing the texture information of the three-dimensional model so that it matches the actual face texture of the user more accurately. In this embodiment, a nonlinear optimization method based on gradient descent is used: texture optimization is achieved by minimizing the difference between the three-dimensional model and the input image, and a local optimization strategy is combined to improve the optimization effect.
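A gradient-descent texture fit of this kind can be sketched as follows, under the simplifying assumption of a plain quadratic photometric loss; the target RGB values, step size, and iteration count are arbitrary illustrative choices:

```python
import numpy as np

target = np.array([120.0, 80.0, 200.0])   # assumed sampled image texture (RGB)
tex = np.zeros_like(target)               # model texture, initialised to zero

lr = 0.1
for _ in range(200):
    grad = 2.0 * (tex - target)           # gradient of ||tex - target||^2
    tex -= lr * grad                      # gradient-descent update
```

After 200 steps the model texture has converged to the sampled image texture, which is the behavior the minimization aims for.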
According to the invention, through the alignment and registration processing, the input image is made to correspond to the preset three-dimensional face model before three-dimensional face reconstruction, so that the actual face shape of the input image can be matched more accurately.
In this embodiment, an input image is acquired, and face feature extraction is performed to obtain two-dimensional face features, which specifically includes the following steps: acquiring an input image, and performing image processing on the input image to obtain a processed image; and inputting the processed image into a deep learning model for face feature extraction to obtain two-dimensional face features, wherein the two-dimensional face features comprise a face boundary box, face key points and confidence degrees of face areas.
Preferably, the deep learning model is RetinaFace. RetinaFace is a face detection algorithm based on a single-stage object detector; its main idea is to use a fully convolutional neural network to directly regress the face bounding box, the face key points and the confidence of the face region.
For each input image, the RetinaFace algorithm generates a set of predictions comprising the bounding box of each detected face, the face key points and the confidence of the face region. Preferably, face detection results with higher confidence are selected and retained according to the confidence.
The model structure of RetinaFace mainly comprises a base network and three sub-networks. The base network adopts a ResNet architecture and is used to extract features from the input image; the three sub-networks are used to predict the face bounding box, the key points and the confidence of the face region respectively.
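Retaining only high-confidence detections can be sketched as below; the detection dictionaries and the 0.8 threshold are illustrative assumptions, not RetinaFace's actual output format or a value specified by the invention:

```python
# Each detection: bounding box, five face key points, confidence (assumed format)
detections = [
    {"box": (10, 10, 90, 90), "keypoints": [(30, 40)] * 5, "conf": 0.97},
    {"box": (5, 5, 20, 20),   "keypoints": [(8, 9)] * 5,   "conf": 0.42},
]

THRESH = 0.8                                          # assumed confidence threshold
faces = [d for d in detections if d["conf"] >= THRESH]  # keep confident faces only
```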
In the present embodiment, the image processing includes an image conversion process, which is specifically to convert an input image into a grayscale image, and an image adjustment process, which is specifically to adjust the brightness and contrast of the grayscale image.
The invention can improve the robustness and accuracy of the face feature extraction through the image conversion processing and the image adjustment processing.
In this embodiment, the two-dimensional face texture feature information is obtained according to the two-dimensional face features, specifically: according to the face bounding box and the face key points, the pixels within the face bounding box are sampled and interpolated through an image interpolation algorithm to obtain the two-dimensional face texture feature information. The illumination coefficient of the input image is calculated as follows: according to the face key points, a minimum bounding rectangle faceRect of the face is calculated, and the extracted face region is projected onto a unit hemisphere for spherical harmonic illumination estimation; spherical harmonics, a set of basis functions describing the distribution of a function over a sphere, are generated and can represent the illumination intensities in different light source directions using predefined spherical harmonic coefficients; the order of the spherical harmonics is determined according to the required precision; the extracted face region is divided into a plurality of blocks, the RGB values of the pixels in each block are calculated and converted into function values in a spherical coordinate system, and the spherical harmonic coefficients are fitted by the least squares method to obtain the illumination coefficient beta.
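The least-squares fit of the spherical harmonic coefficients can be sketched with the order-0/1 real spherical harmonic basis (4 terms); the sample directions and intensities below are synthetic assumptions used only to demonstrate the fitting step:

```python
import numpy as np

def sh_basis(dirs):
    # Real spherical harmonics up to order 1: [Y00, Y1-1, Y10, Y11]
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0 = 0.282095                      # sqrt(1 / (4*pi))
    c1 = 0.488603                      # sqrt(3 / (4*pi))
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

rng = np.random.default_rng(1)
dirs = rng.normal(size=(50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
dirs[:, 2] = np.abs(dirs[:, 2])                       # restrict to hemisphere

beta_true = np.array([1.0, 0.2, 0.5, -0.1])           # synthetic coefficients
intensity = sh_basis(dirs) @ beta_true                # observed block intensities

# Fit the illumination coefficient beta by least squares
beta, *_ = np.linalg.lstsq(sh_basis(dirs), intensity, rcond=None)
```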
Spherical harmonics are mathematical functions describing phenomena such as illumination, reflection and radiation in three dimensions; higher orders represent finer illumination or radiation distributions but require more computational resources and memory. Thus, the required accuracy typically needs to be weighed and selected according to the needs of the specific application and the availability of computing resources.
Generally, lower spherical harmonic orders (e.g., orders 1 to 4) can be used for rough approximation or fast computation and are suitable for real-time or time-critical scenes, while higher orders (e.g., order 5 and above) can be used for finer illumination or radiation simulation and are suitable for scenes with higher requirements on illumination or radiation distribution.
In specific applications, experiments and tuning can be performed as required: by observing the behavior of the generated spherical harmonics on the sphere, and considering the availability of computing resources and the performance requirements, an appropriate spherical harmonic order is selected to achieve the required degree of refinement.
Since the present invention requires relatively fine illumination while keeping processing time low, the order of the spherical harmonics is preferably 5.
In this embodiment, according to the detected face bounding box and key point positions, the pixels within the face bounding box are sampled and interpolated using an image interpolation algorithm, so that high-resolution face texture feature information can be obtained. Meanwhile, the 2D feature point information can be used to segment and recognize key face regions, the segmentation and recognition result including an eye region, a mouth region, a nose region, a forehead region, a cheek region, a chin region and the like.
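Bilinear sampling inside the face bounding box is one common form of the interpolation step described above; the 2x2 patch below is a toy example, and bilinear interpolation itself is an illustrative choice, since the invention does not name a specific interpolation algorithm:

```python
import numpy as np

def bilinear(img, x, y):
    # Bilinear interpolation of img (H x W) at a fractional position (x, y)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

patch = np.array([[0.0, 10.0],
                  [20.0, 30.0]])
center = bilinear(patch, 0.5, 0.5)   # average of the four corner pixels
```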
In the embodiment, according to the two-dimensional face characteristics, after the illumination coefficient of an input image is calculated, illumination filtering is carried out on the input image, and then three-dimensional face reconstruction is carried out; the illumination filtering of the input image is specifically: according to the spherical harmonic function and the illumination coefficient beta, the illumination intensities in different light source directions are calculated, specifically: and calculating the function value of each pixel point under the spherical coordinate system, and summing the spherical harmonic function and the illumination coefficient beta to obtain the illumination intensity.
Preferably, computing, for each pixel, the function values in the spherical coordinate system and combining them with the spherical harmonic functions and the illumination coefficient beta to obtain the illumination intensity comprises the following steps:
S1, for each pixel, calculate the function value of each spherical harmonic function in the spherical coordinate system;
spherical harmonics are typically represented by SH (Spherical Harmonics) coefficients, which can be pre-computed or computed in real time. The order of the spherical harmonics is determined by the required precision; generally, the higher the order, the higher the precision, but also the higher the computational cost;
S2, multiply the function value of each spherical harmonic function by the corresponding illumination coefficient beta;
the illumination coefficient beta generally encodes information such as the intensity and color of the light source, and is set according to the specific scene and requirements;
S3, at each pixel, accumulate and sum the products obtained in step S2; this accounts for the contributions of different light-source directions to the illumination and yields the final illumination intensity.
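Steps S1–S3 can be sketched as follows with a first-order real spherical-harmonic basis (the constants are the standard normalization factors; the basis is truncated far below the patent's preferred order 5 purely to keep the example short):

```python
import numpy as np

def real_sh_basis_order1(theta: float, phi: float) -> np.ndarray:
    """S1: evaluate the real SH basis functions (orders 0-1) at the
    spherical-coordinate direction (theta, phi)."""
    return np.array([
        0.282095,                                  # Y_0^0
        0.488603 * np.sin(theta) * np.sin(phi),    # Y_1^-1
        0.488603 * np.cos(theta),                  # Y_1^0
        0.488603 * np.sin(theta) * np.cos(phi),    # Y_1^1
    ])

def illumination_intensity(theta: float, phi: float, beta: np.ndarray) -> float:
    """S2 + S3: weight each basis value by its illumination coefficient and
    accumulate the products, yielding the illumination intensity."""
    return float(np.dot(real_sh_basis_order1(theta, phi), beta))
```

With an order-5 basis the same dot product simply runs over 36 basis values per pixel instead of 4.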
The shadow value of each pixel in the input image is obtained by calculating the difference between that pixel's illumination intensity and the average illumination intensity of the surrounding pixels; adding the shadow value back onto the pixel's RGB values removes the shadow and realizes illumination filtering.
Preferably, the surrounding pixels are the other pixels within a circle of radius 7 pixels centered on the pixel in question; that is, the shadow value of a pixel is the difference between its illumination intensity and the average illumination intensity of the pixels within a 7-pixel radius.
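A direct (unoptimized) sketch of this shadow-removal rule, assuming a per-pixel illumination-intensity map has already been computed as above:

```python
import numpy as np

def remove_shadows(intensity: np.ndarray, rgb: np.ndarray, radius: int = 7) -> np.ndarray:
    """shadow(p) = intensity(p) - mean intensity within `radius` pixels of p;
    the shadow value is then added back onto every RGB channel of p."""
    h, w = intensity.shape
    out = rgb.astype(np.float64).copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (ys ** 2 + xs ** 2) <= radius ** 2           # circular neighbourhood
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            sub = disk[y0 - y + radius:y1 - y + radius,
                       x0 - x + radius:x1 - x + radius]
            shadow = intensity[y, x] - intensity[y0:y1, x0:x1][sub].mean()
            out[y, x] += shadow
    return np.clip(out, 0.0, 255.0)
```

A production implementation would replace the per-pixel loop with a convolution against the disk kernel; the loop form is kept only for clarity.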
According to the invention, calculating the illumination coefficient beta and applying illumination filtering to the two-dimensional image improve the accuracy and precision of the three-dimensional face reconstruction.
In this embodiment, the deformation filling process performed on the first three-dimensional face model to obtain the second three-dimensional face model specifically comprises the following steps:
the first three-dimensional Face model Face3D is expressed as: face3 d=m+s+a+t+b, where M is the average shape of the three-dimensional Face model, S is a shape vector (shape vector), a is a coefficient of the shape vector, T is a three-dimensional Face texture vector (TextureVector), and B is a coefficient of the three-dimensional Face texture vector, specifically an average value obtained by performing principal component analysis (Principle Component Analysis, PCA) on a preset three-dimensional Face model;
note that, face3 d=m+s×a+t×b, where M, A is an existing parameter of the preset three-dimensional Face model (or the standard Face model), it can be obtained by the prior art.
For example, M typically calculates the average shape vector of all faces in a representative set of face data by collecting and aligning and registering the face data.
Preferably, the three-dimensional face model data set of the known faces is used, representative faces are selected as training sets, the face data are aligned into a reference coordinate system, and the aligned face shape vectors are averaged. A is a coefficient obtained by calculating according to an average shape vector, and B is a three-dimensional texture vector coefficient obtained by calculating according to a preset three-dimensional face model.
Based on the three-dimensional feature points of the first three-dimensional model, the three-dimensional coordinates of each feature point are calculated and arranged in vector form to obtain the shape vector S. Specifically, assuming there are m facial feature points and the three-dimensional coordinates of feature point i are (X_i, Y_i, Z_i) with 1 ≤ i ≤ m, the shape vector S may be represented as a 3m-dimensional vector: S = [X_1, Y_1, Z_1, X_2, Y_2, Z_2, ..., X_m, Y_m, Z_m];
based on the three-dimensional feature points of the first three-dimensional model, the RGB values of each feature point are obtained and arranged in vector form to obtain the texture vector T. Specifically, assuming there are m facial feature points and the color values of feature point i are (R_i, G_i, B_i) with 1 ≤ i ≤ m, the texture vector T may be represented as a 3m-dimensional vector: T = [R_1, G_1, B_1, R_2, G_2, B_2, ..., R_m, G_m, B_m];
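The vector layout above can be sketched directly, treating A and B as scalar coefficients as in the patent's formulation Face3D = M + S×A + T×B (the function names are illustrative):

```python
import numpy as np

def build_shape_vector(points_xyz: np.ndarray) -> np.ndarray:
    """Flatten m feature points of shape (m, 3) into the 3m-dimensional
    S = [X_1, Y_1, Z_1, ..., X_m, Y_m, Z_m]."""
    return np.asarray(points_xyz, dtype=float).reshape(-1)

def build_texture_vector(colors_rgb: np.ndarray) -> np.ndarray:
    """Flatten m per-point colours of shape (m, 3) into the 3m-dimensional
    T = [R_1, G_1, B_1, ..., R_m, G_m, B_m]."""
    return np.asarray(colors_rgb, dtype=float).reshape(-1)

def face3d(M: np.ndarray, S: np.ndarray, A: float, T: np.ndarray, B: float) -> np.ndarray:
    """Face3D = M + S*A + T*B."""
    return M + A * S + B * T
```

Note that full 3D morphable models usually use a matrix of PCA basis vectors with a coefficient vector rather than a single scalar per vector; the scalar form above follows the patent's notation.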
A standard shape vector shape_model of a standard three-dimensional face model is obtained; the difference between the shape vector S and the standard shape vector shape_model is calculated and taken as the user's face filling degree value ΔS. Preferably, the standard three-dimensional face model is the three-dimensional face model of a model with a full face;
according to the face regions obtained by face feature extraction, further division yields the preset face feature point regions; based on the preset face feature point regions, a first subdivision process is applied to the user's face filling degree value ΔS to obtain the filling degree values corresponding to the preset face feature point regions. Specifically: the preset face feature point regions are gridded or divided into several sub-regions, and the user's face filling degree value ΔS is then distributed to the corresponding sub-regions according to features such as the position and shape of each preset face feature point region, which yields the filling degree value corresponding to each preset face feature point region.
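A minimal sketch of the fill-degree computation and its distribution to sub-regions (the region index map and the sign convention of ΔS are assumptions made for illustration, not specified by the patent):

```python
import numpy as np

def fill_degree(S_user: np.ndarray, shape_model: np.ndarray) -> np.ndarray:
    """Per-coordinate fill degree Delta_S between the user's shape vector and
    the standard full-face model's shape vector (sign convention assumed:
    positive where the standard model is fuller than the user's face)."""
    return shape_model - S_user

def subdivide(delta_s: np.ndarray, region_indices: dict) -> dict:
    """First subdivision: distribute Delta_S to named feature-point regions,
    given (hypothetical) per-region index lists into the 3m-dim vector."""
    return {name: delta_s[idx] for name, idx in region_indices.items()}
```

For example, `subdivide(delta_s, {"foreHead": [0, 1, 2], "chin": [3, 4, 5]})` would return the forehead and chin slices of ΔS.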
If the same offset were added to the user's entire three-dimensional face shape vector, the user's face contour would end up differing greatly from the initial contour; therefore, based on the preset face feature point regions, a second subdivision process is applied to the shape vector S to obtain the first shape vectors corresponding to the preset face feature point regions;
the filling degree value corresponding to each region requiring filling is looked up in the user's face filling degree value ΔS and added into the shape vector of that region, yielding the set S_reshape of deformed feature shape vectors;
the second shape vectors of the regions to which the filling degree values were added are set as the constraint feature shape vectors of the deformed feature shape vector set S_reshape, and interpolation fitting is applied to the constraint feature shape vectors with a deformation method based on third-order Laplacian coordinates to obtain the deformed shape vector; according to the feature shape vector S_reshape obtained by the interpolation fitting, the second three-dimensional face model is expressed as: Face3D_reshape = M + S_reshape×A + T×B, which gives the second three-dimensional face model.
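As a simplified stand-in for the Laplacian-coordinate deformation (the patent's method operates on third-order Laplacian coordinates of the mesh; this 1-D sketch only illustrates how pinned constraint vertices hold the filled positions while the remaining vertices are smoothed to follow, keeping the contour continuous):

```python
import numpy as np

def laplacian_deform(positions: np.ndarray, pinned: dict,
                     iters: int = 500, lam: float = 0.5) -> np.ndarray:
    """Iteratively smooth a 1-D chain of vertex positions while holding the
    constraint vertices (index -> target position) fixed, so the offsets
    introduced by the filling degree values propagate smoothly."""
    p = positions.astype(np.float64).copy()
    idx = np.fromiter(pinned.keys(), dtype=int)
    tgt = np.fromiter(pinned.values(), dtype=float)
    p[idx] = tgt
    for _ in range(iters):
        lap = np.zeros_like(p)
        lap[1:-1] = 0.5 * (p[:-2] + p[2:]) - p[1:-1]   # discrete Laplacian
        p += lam * lap
        p[idx] = tgt                                   # re-impose constraints
    return p
```

On a real mesh the same idea becomes a sparse linear solve against the cotangent Laplacian with the constraint vertices as boundary conditions, rather than explicit iteration.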
Preferably, the preset face feature point regions at least comprise a forehead region, a brow arch region, an apple muscle region, a chin region, an eye socket region, a temple contour region, a tear trough region, a nasal base region and other regions; the other regions are the remaining regions.
Preferably, the filling degree values corresponding to the preset face feature point regions at least comprise a forehead region filling degree value ΔS_foreHead, a brow arch region filling degree value ΔS_browArch, an apple muscle region filling degree value ΔS_plumpCheeks, a chin region filling degree value ΔS_chin, an eye socket region filling degree value ΔS_eyeSocket, a temple contour region filling degree value ΔS_temple, a tear trough region filling degree value ΔS_tearTrough, a nasal base region filling degree value ΔS_smileLine, and other region filling degree values ΔS_other.
Preferably, the first shape vectors at least include a first forehead feature shape vector S_foreHead, a first brow arch feature shape vector S_browArch, a first apple muscle feature shape vector S_plumpCheeks, a first chin feature shape vector S_chin, a first eye socket feature shape vector S_eyeSocket, a first temple feature shape vector S_temple, a first tear trough feature shape vector S_tearTrough, a first nasal base feature shape vector S_smileLine, and a first other shape vector S_other; the second shape vectors at least include a second forehead feature shape vector S'_foreHead, a second brow arch feature shape vector S'_browArch, a second apple muscle feature shape vector S'_plumpCheeks, a second chin feature shape vector S'_chin, a second eye socket feature shape vector S'_eyeSocket, a second temple feature shape vector S'_temple, a second tear trough feature shape vector S'_tearTrough, a second nasal base feature shape vector S'_smileLine, and a second other shape vector S'_other.
According to the invention, deformation filling is performed region by region on the forehead, brow arch, eye sockets, nasal base, apple muscles, mouth corners, chin and other regions, taking a standard three-dimensional face model as the reference and adapting to the different sunken states of the face, so that the overall face contour remains natural and the filling effect is more natural and realistic. After the face is modified in three-dimensional space, the illumination coefficient is added to obtain the third three-dimensional face model, giving a fully three-dimensional effect and allowing the user to comprehensively evaluate the risk of medical filling in advance. Moreover, the operation is simple and convenient: with one tap on a mobile terminal and without tedious operations, the user obtains a clear, three-dimensional, full and younger-looking face free of artificiality and edema, previewing the effect of medical face filling and repair with high efficiency.
Corresponding to the face image face filling method, the invention provides a face image face filling system, which comprises: the face feature extraction module is used for acquiring an input image and extracting face features to obtain two-dimensional face features; the two-dimensional face texture feature information acquisition module is used for acquiring two-dimensional face texture feature information according to the two-dimensional face features; the illumination coefficient calculation module is used for calculating the illumination coefficient of the input image according to the two-dimensional face characteristics; the three-dimensional face reconstruction module is used for carrying out three-dimensional face reconstruction by combining a preset three-dimensional face model based on the two-dimensional face characteristics and the two-dimensional face texture characteristic information to obtain a first three-dimensional face model; the deformation filling processing module is used for performing deformation filling processing on the first three-dimensional model to obtain a second three-dimensional face model; adding an illumination coefficient to the second three-dimensional model to obtain a third three-dimensional face model; and the mapping module is used for mapping the third three-dimensional face model to a two-dimensional plane to obtain a face image face filling effect diagram.
Preferably, the system further comprises an image processing module, which is used for performing image processing on the input image to obtain a processed image. The image processing includes image conversion processing, which is specifically to convert an input image into a grayscale image, and image adjustment processing, which is specifically to adjust the brightness and contrast of the grayscale image.
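The image processing module's conversion and adjustment steps can be sketched as follows (BT.601 luma weights and a linear contrast law are common choices; the patent does not specify which are used, so treat these as assumptions):

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Weighted grayscale conversion using ITU-R BT.601 luma weights."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def adjust(gray: np.ndarray, brightness: float = 0.0, contrast: float = 1.0) -> np.ndarray:
    """Linear brightness/contrast: scale around mid-grey, then shift."""
    return np.clip(contrast * (gray - 128.0) + 128.0 + brightness, 0.0, 255.0)
```

For example, `adjust(to_gray(img), brightness=10.0, contrast=1.2)` would produce a slightly brighter, higher-contrast grayscale image for the downstream feature-extraction step.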
And the illumination filtering module is used for carrying out illumination filtering on the input image.
In addition, the present invention also provides a computer-readable storage medium having stored thereon a face image face filling program which, when executed by a processor, implements the steps of the face image face filling method as described above. The computer readable storage medium may be a read-only memory, a magnetic disk or optical disk, etc.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the others; for identical and similar parts among the embodiments, reference may be made from one to another. For the system, apparatus and storage-medium embodiments, since they are substantially similar to the method embodiments, the description is relatively brief; for relevant details, refer to the description of the method embodiments.
Also, herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the foregoing description illustrates and describes preferred embodiments of the present invention, it should be understood that the invention is not limited to the forms disclosed herein; the above embodiments are not to be construed as excluding other embodiments, and the invention is capable of use in various other combinations, modifications and environments, and of changes within the scope of the inventive concept described herein, whether guided by the above teachings or by the skill or knowledge of the relevant art. All modifications and variations that do not depart from the spirit and scope of the invention are intended to fall within the scope of the appended claims.

Claims (10)

1. A face image face filling method, comprising the steps of:
acquiring an input image, and extracting face features to obtain two-dimensional face features;
obtaining two-dimensional face texture feature information according to the two-dimensional face features; calculating the illumination coefficient of the input image according to the two-dimensional face characteristics;
based on the two-dimensional face features and the two-dimensional face texture feature information, carrying out three-dimensional face reconstruction by combining a preset three-dimensional face model to obtain a first three-dimensional face model;
performing deformation filling treatment on the first three-dimensional model to obtain a second three-dimensional face model;
adding an illumination coefficient to the second three-dimensional model to obtain a third three-dimensional face model;
and mapping the third three-dimensional face model to a two-dimensional plane to obtain a face filling effect diagram of the face image.
2. The face image face filling method according to claim 1, wherein: based on two-dimensional face features and two-dimensional face texture feature information, carrying out three-dimensional face reconstruction by combining a preset three-dimensional face model to obtain a first three-dimensional face model, and specifically comprising the following steps:
based on the two-dimensional face features and the two-dimensional face texture feature information, performing alignment processing and registration processing, and performing three-dimensional face reconstruction by combining a preset three-dimensional face model to obtain an initialized three-dimensional face model; the preset three-dimensional face model is obtained through face feature extraction or is a standard three-dimensional face model;
and performing shape optimization and texture optimization processing on the initialized three-dimensional face model to obtain a first three-dimensional face model.
3. The face image face filling method according to claim 1, wherein: the method comprises the steps of obtaining an input image, extracting face features to obtain two-dimensional face features, and specifically comprises the following steps:
acquiring an input image, and performing image processing on the input image to obtain a processed image;
and inputting the processed image into a deep learning model for face feature extraction to obtain two-dimensional face features, wherein the two-dimensional face features comprise a face boundary box, face key points and confidence degrees of face areas.
4. A face image face filling method according to claim 3, wherein: the image processing includes image conversion processing, which is specifically to convert an input image into a grayscale image, and image adjustment processing, which is specifically to adjust the brightness and contrast of the grayscale image.
5. A face image face filling method according to claim 3, wherein: according to the two-dimensional face feature points, two-dimensional face texture feature information is obtained, specifically: according to the face boundary frame and the face key points, sampling interpolation is carried out on pixels in the face boundary frame through an image interpolation algorithm, and two-dimensional face texture characteristic information is obtained;
the illumination coefficient calculation process of the input image specifically comprises the following steps:
according to the key points of the human face, calculating a minimum bounding rectangle of the human face, projecting the extracted human face area onto a unit hemispherical surface, generating spherical harmonic functions, and determining the order of the spherical harmonic functions according to the required precision;
the extracted face area is divided into a plurality of blocks, RGB values of pixels in each block are calculated and converted into function values under a spherical coordinate system, and spherical harmonic coefficients are fitted through a least square method to obtain illumination coefficients beta.
6. The face image face filling method of claim 5, wherein: according to the two-dimensional face characteristics, calculating an illumination coefficient of an input image, carrying out illumination filtering on the input image, and then carrying out three-dimensional face reconstruction;
the illumination filtering of the input image is specifically:
according to the spherical harmonic function and the illumination coefficient beta, the illumination intensities in different light source directions are calculated, specifically: for each pixel point, calculating the function value of the pixel point under a spherical coordinate system, and summing the function value by using spherical harmonic function and illumination coefficient beta to obtain illumination intensity;
calculating the difference value between the illumination intensity of each pixel point in the input image and the average illumination intensity value of surrounding pixel points to obtain the shadow value of the pixel point; and adding the shadow value back to the RGB value of the pixel point to realize illumination filtering.
7. A face image face filling method according to claim 3, wherein: performing deformation filling processing on the first three-dimensional model to obtain a second three-dimensional face model, and specifically comprising the following steps:
the first three-dimensional face model Face3D is expressed as: Face3D = M + S×A + T×B, where M is the average shape of the three-dimensional face model, S is the shape vector, A is the coefficient of the shape vector, T is the three-dimensional face texture vector, and B is the coefficient of the three-dimensional face texture vector, specifically an average value obtained by performing principal component analysis on a preset three-dimensional face model;
calculating three-dimensional coordinates of each feature point based on the three-dimensional feature points of the first three-dimensional model, and arranging the three-dimensional coordinates into a vector form to obtain a shape vector S; based on the three-dimensional feature points of the first three-dimensional model, RGB values of each feature point are obtained and are arranged into a vector form to obtain a texture vector T;
obtaining a standard shape vector shape_model of a standard three-dimensional face model, calculating the difference between the shape vector S and the standard shape vector shape_model, and taking the difference as the user's face filling degree value ΔS;
According to the face region obtained by face feature extraction, further dividing to obtain a preset face feature point region;
based on the preset face feature point regions, performing a first subdivision process on the user's face filling degree value ΔS to obtain the filling degree values corresponding to the preset face feature point regions;
performing second subdivision processing on the shape vector S based on the preset face feature point area to obtain a first shape vector corresponding to the preset face feature point area;
searching the user's face filling degree value ΔS for the filling degree value corresponding to a region requiring filling, and adding the filling degree value into the shape vector of that region to obtain the set S_reshape of deformed feature shape vectors;
setting the second shape vectors of the regions to which the filling degree values were added as the constraint feature shape vectors of the deformed feature shape vector set S_reshape, and performing interpolation fitting on the constraint feature shape vectors with a deformation method based on third-order Laplacian coordinates to obtain the deformed shape vector;
and according to the feature shape vector S_reshape obtained by the interpolation fitting, expressing the second three-dimensional face model as: Face3D_reshape = M + S_reshape×A + T×B, thereby obtaining the second three-dimensional face model.
8. The face image face filling method of claim 7, wherein: the preset face characteristic point area at least comprises a forehead area, a brow area, an apple muscle area, a chin area, an eye socket area, a temple outline area, a lacrimal passage area, a nose base area and other areas;
the filling degree values corresponding to the preset face feature point regions at least comprise a forehead region filling degree value ΔS_foreHead, a brow arch region filling degree value ΔS_browArch, an apple muscle region filling degree value ΔS_plumpCheeks, a chin region filling degree value ΔS_chin, an eye socket region filling degree value ΔS_eyeSocket, a temple contour region filling degree value ΔS_temple, a tear trough region filling degree value ΔS_tearTrough, a nasal base region filling degree value ΔS_smileLine, and other region filling degree values ΔS_other;
the first shape vectors at least comprise a first forehead feature shape vector S_foreHead, a first brow arch feature shape vector S_browArch, a first apple muscle feature shape vector S_plumpCheeks, a first chin feature shape vector S_chin, a first eye socket feature shape vector S_eyeSocket, a first temple feature shape vector S_temple, a first tear trough feature shape vector S_tearTrough, a first nasal base feature shape vector S_smileLine, and a first other shape vector S_other;
the second shape vectors at least comprise a second forehead feature shape vector S'_foreHead, a second brow arch feature shape vector S'_browArch, a second apple muscle feature shape vector S'_plumpCheeks, a second chin feature shape vector S'_chin, a second eye socket feature shape vector S'_eyeSocket, a second temple feature shape vector S'_temple, a second tear trough feature shape vector S'_tearTrough, a second nasal base feature shape vector S'_smileLine, and a second other shape vector S'_other.
9. A face image face filling system, comprising:
the face feature extraction module is used for acquiring an input image and extracting face features to obtain two-dimensional face features;
the two-dimensional face texture feature information acquisition module is used for acquiring two-dimensional face texture feature information according to the two-dimensional face features;
the illumination coefficient calculation module is used for calculating the illumination coefficient of the input image according to the two-dimensional face characteristics;
the three-dimensional face reconstruction module is used for carrying out three-dimensional face reconstruction by combining a preset three-dimensional face model based on the two-dimensional face characteristics and the two-dimensional face texture characteristic information to obtain a first three-dimensional face model;
the deformation filling processing module is used for performing deformation filling processing on the first three-dimensional model to obtain a second three-dimensional face model; adding an illumination coefficient to the second three-dimensional model to obtain a third three-dimensional face model;
and the mapping module is used for mapping the third three-dimensional face model to a two-dimensional plane to obtain a face image face filling effect diagram.
10. A computer readable storage medium, wherein a face image face filling program is stored on the computer readable storage medium, which when executed by a processor, implements the steps of the face image face filling method according to any one of claims 1 to 8.
CN202310415974.5A 2023-04-18 2023-04-18 Face image face filling method, system and storage medium Pending CN116311474A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310415974.5A CN116311474A (en) 2023-04-18 2023-04-18 Face image face filling method, system and storage medium


Publications (1)

Publication Number Publication Date
CN116311474A true CN116311474A (en) 2023-06-23

Family

ID=86832583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310415974.5A Pending CN116311474A (en) 2023-04-18 2023-04-18 Face image face filling method, system and storage medium

Country Status (1)

Country Link
CN (1) CN116311474A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116665284A (en) * 2023-08-02 2023-08-29 深圳宇石科技有限公司 Face modeling and mask model partition matching method, device, terminal and medium
CN116665284B (en) * 2023-08-02 2023-11-28 深圳宇石科技有限公司 Face modeling and mask model partition matching method, device, terminal and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination