CN109035380B - Face modification method, device and equipment based on three-dimensional reconstruction and storage medium - Google Patents

Face modification method, device and equipment based on three-dimensional reconstruction and storage medium

Info

Publication number
CN109035380B
CN109035380B (application CN201811060479.2A)
Authority
CN
China
Prior art keywords
dimensional
target modification
target
region
face
Prior art date
Legal status
Active
Application number
CN201811060479.2A
Other languages
Chinese (zh)
Other versions
CN109035380A (en)
Inventor
廖声洋
Current Assignee
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201811060479.2A
Publication of CN109035380A
Application granted
Publication of CN109035380B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a face modification method, device, equipment and storage medium based on three-dimensional reconstruction, belonging to the technical field of image processing. The method comprises the following steps: determining a target modification region from an original face image; performing three-dimensional reconstruction on the target modification region to obtain the corresponding three-dimensional space information; and coloring the target modification region according to the three-dimensional space information to obtain a modified face image. Because the target modification region is colored according to three-dimensional spatial information, its texture is stronger and finer and its details are effectively restored; lighting the region along the normal direction further enhances the stereoscopic effect, thereby remarkably improving the user experience.

Description

Face modification method, device and equipment based on three-dimensional reconstruction and storage medium
Technical Field
The invention relates to the field of image processing, in particular to a method, a device, equipment and a storage medium for face modification based on three-dimensional reconstruction.
Background
With the development of science and technology and the industrialization of technical applications, mobile phone performance keeps improving and hardware configurations have become comprehensive. At the same time, as market competition intensifies, hardware configuration alone no longer attracts consumers, so most manufacturers pursue differentiated function planning, design and marketing for their products. Consider, for example, the lipstick effect in face modification. The existing approach performs face recognition on an original image; when the image contains a face region, it locates the lip region within it, computes average lip color data from that region, and then processes the pixels in the lip region according to the average lip color data and the color temperature of the original image. Because all processing is performed on a two-dimensional lip region, the prior art depends on average lip color data, produces results that are not fine, lack texture and have no real stereoscopic effect, and therefore gives a poor user experience.
Disclosure of Invention
The face modification method, device, equipment and storage medium based on three-dimensional reconstruction provided by the embodiments of the invention solve the technical problems of the prior art: dependence on average lip color data, coarse processing results, weak texture, lack of a real stereoscopic effect, and poor user experience.
In order to achieve the above object, the embodiments of the present invention adopt the following technical solutions:
in a first aspect, a face modification method based on three-dimensional reconstruction provided in an embodiment of the present invention includes: determining a target modification region from an original face image; performing three-dimensional reconstruction on the target modification region to obtain three-dimensional space information corresponding to the target modification region; and coloring the target modification region according to the three-dimensional space information to obtain a modified face image.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the determining a target modified region from an original face image includes: determining face key point information from an original face image according to a preset face key point detection model; and separating a region to be modified from the original face image according to the face key point information, wherein the region to be modified is the target modification region.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the three-dimensional reconstruction of the target modified region to obtain three-dimensional spatial information corresponding to the target modified region includes: and inputting the target modification area and the face key point information into a preset three-dimensional basic model, and outputting three-dimensional space information corresponding to the target modification area.
With reference to the second possible implementation manner of the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the inputting the target modification region and the face key point information into a preset three-dimensional base model, and outputting three-dimensional space information corresponding to the target modification region includes: acquiring the face key point information, basic parameters corresponding to the target modification area and weight coefficients corresponding to the basic parameters; determining a preset three-dimensional basic model matched with the basic parameters according to the basic parameters; and carrying out weighting processing on the preset three-dimensional basic model through the weighting coefficient to obtain three-dimensional space information corresponding to the target modification area.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, further including: acquiring image sample data marked with key points of the face; and carrying out neural network training on the initial face key point detection model through the image sample data to obtain a preset face key point detection model.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the coloring the target modified region according to the three-dimensional spatial information to obtain a modified face image includes: coloring the target modification area according to the three-dimensional space information to obtain a three-dimensional texture coloring result; and replacing the three-dimensional texture coloring result on the target modification area of the original face image to obtain a modified face image.
With reference to the fifth possible implementation manner of the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the performing a coloring process on the target modification region according to the three-dimensional space information to obtain a three-dimensional texture coloring result includes: determining a three-dimensional subdivision grid corresponding to the target modification area according to the three-dimensional space information; and transferring the color of the coordinate corresponding to the texture to the coordinate position corresponding to the three-dimensional subdivision grid to obtain a three-dimensional texture coloring result.
With reference to the sixth possible implementation manner of the first aspect, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, where the determining a three-dimensional split mesh corresponding to the target modification region according to the three-dimensional spatial information includes: and triangulating the three-dimensional space information to obtain a three-dimensional subdivision grid corresponding to the target modification area.
With reference to the seventh possible implementation manner of the first aspect, an embodiment of the present invention provides an eighth possible implementation manner of the first aspect, where triangulating the three-dimensional spatial information to obtain a three-dimensional subdivision grid corresponding to the target modification region includes: and performing nearest non-cross triangulation on the three-dimensional space information to obtain a three-dimensional mesh corresponding to the target modification area.
With reference to the fifth possible implementation manner of the first aspect, an embodiment of the present invention provides a ninth possible implementation manner of the first aspect, where the replacing the three-dimensional texture rendering result to the target modified region of the original face image to obtain a modified face image includes: and performing feathering treatment on the edge area of the target modification area to obtain a modified face image.
In a second aspect, a face modification apparatus based on three-dimensional reconstruction provided in an embodiment of the present invention includes: a modified target determining unit for determining a target modified region from the original face image; the first processing unit is used for carrying out three-dimensional reconstruction on the target modification area to obtain three-dimensional space information corresponding to the target modification area; and the second processing unit is used for performing coloring processing on the target modification area according to the three-dimensional space information to obtain a modified face image.
In a third aspect, a terminal device provided in an embodiment of the present invention includes: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method for face embellishment based on three-dimensional reconstruction according to any one of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a storage medium, where the storage medium has instructions stored thereon, and when the instructions are executed on a computer, the instructions cause the computer to execute the method for face modification based on three-dimensional reconstruction according to any one of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
according to the face modification method, the face modification device, the face modification equipment and the storage medium based on three-dimensional reconstruction, the target modification area is determined from the original face image, the three-dimensional reconstruction is carried out on the target modification area, the three-dimensional space information corresponding to the target modification area is obtained, accordingly, the details of the target modification area can be effectively restored, the stereoscopic impression of the target modification area can be enhanced, the target modification area is colored according to the three-dimensional space information, the texture sense of the target modification area can be stronger, the texture is finer, the stereoscopic impression of the target modification area is further enhanced, and the user experience is remarkably improved.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part may be learned by practice of the disclosure.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a flowchart of a face modification method based on three-dimensional reconstruction according to a first embodiment of the present invention;
fig. 2 is an original face image in the face modification method based on three-dimensional reconstruction shown in fig. 1;
fig. 3 is a schematic diagram of a face detection result in the face modification method based on three-dimensional reconstruction shown in fig. 1;
fig. 4 is a schematic diagram of a triangulation principle in the face modification method based on three-dimensional reconstruction shown in fig. 1;
fig. 5 is a lip region image in the face modification method based on three-dimensional reconstruction shown in fig. 1;
fig. 6 is a schematic diagram of lip region triangulation results in the face modification method based on three-dimensional reconstruction shown in fig. 1;
fig. 7 is a functional module schematic diagram of a face modification apparatus based on three-dimensional reconstruction according to a second embodiment of the present invention;
fig. 8 is a schematic diagram of a terminal device according to a third embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments; it is obvious that the described embodiments are some, but not all, embodiments of the present invention. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
First embodiment
The existing face modification method depends on average lip color data and yields results that are not fine, lack texture and have no real stereoscopic effect, giving a poor user experience. To solve these technical problems, this embodiment provides a face modification method based on three-dimensional reconstruction. It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system as a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps may be executed in an order different from the one shown or described here. The present embodiment is described in detail below.
Fig. 1 is a flowchart of a face modification method based on three-dimensional reconstruction according to an embodiment of the present invention. The specific process shown in FIG. 1 will be described in detail below.
Step S101, determining a target modification area from an original face image.
Optionally, the original face image may be a face image stored by the user on a terminal device (such as a mobile phone or tablet), a preview data frame acquired by opening a preview video stream through an image acquisition device (such as a camera), or a face image downloaded by the user over a network.
The target modification region is one or more regions in the original face image; for example, the target modification region may be, but is not limited to, a lip region, a cheekbone region, an eyebrow region, an eyelid region, or the like.
As an embodiment, step S101 includes: carrying out face detection on an original face image to obtain face key point information; and acquiring a request instruction of a user and extracting a target modification area from the face key point information according to the request instruction.
The request instruction may be to request to obtain a lip region, an eye region, an eyebrow region, or the like.
Optionally, the request instruction of the user may be monitored through a click event of the terminal device or the request instruction input by the user through the terminal device may be acquired.
For example, the original face image shown in fig. 2 is subjected to face recognition by a face recognition method (e.g., a recognition algorithm based on face feature points or a neural-network-based recognition algorithm) to obtain the point set (i.e., the face key point information) shown in fig. 3, and a target modification region, such as the lip region, is determined from the point set according to the user's request instruction. A minimal code sketch of this extraction follows.
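For illustration only, the following Python sketch cuts such a target modification region out of the image with OpenCV; detect_face_keypoints() in the usage comment is a hypothetical stand-in for the preset face key point detection model, not part of the claimed method.

    import cv2
    import numpy as np

    def extract_target_region(image, lip_points):
        """lip_points: Nx2 int array of lip contour keypoints, ordered along the contour."""
        pts = lip_points.astype(np.int32)
        mask = np.zeros(image.shape[:2], dtype=np.uint8)
        cv2.fillPoly(mask, [pts], 255)                      # binary mask of the region
        x, y, w, h = cv2.boundingRect(pts)
        region = cv2.bitwise_and(image, image, mask=mask)[y:y + h, x:x + w]
        return region, mask, (x, y, w, h)

    # keypoints = detect_face_keypoints(image)              # hypothetical detector
    # lips, mask, bbox = extract_target_region(image, keypoints["lips"])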
As another embodiment, step S101 includes: determining face key point information from an original face image according to a preset face key point detection model; and separating a region to be modified from the original face image according to the face key point information, wherein the region to be modified is the target modification region.
Optionally, the target modification region may be separated from the original face image (e.g., the target modification region is copied) to perform image processing on the target modification region separately, so that the processed image data is smaller, and thus, the data processing pressure may be effectively reduced, and the data processing efficiency may be improved.
Alternatively, the target modification region may be directly processed, that is, the modification of the target modification region is directly completed on the original face image.
Optionally, the method further comprises: acquiring image sample data marked with key points of the face; and carrying out neural network training on the initial face key point detection model through the image sample data to obtain a preset face key point detection model.
The face key point information may be, but is not limited to, a face contour point, an eye contour point, a nose contour point, an eyebrow contour point, a forehead contour point, an upper lip contour point, a lower lip contour point, and the like.
Optionally, images with face key points already marked may be obtained as the image sample data; alternatively, a preset number of face images are collected (for example, the preset number may be 100,000) and the face key points are then marked on each face image to obtain the marked data, the marked images being used as the image sample data.
Optionally, the performing neural network training on the initial face key point detection model through the image sample data to obtain a preset face key point detection model, including: dividing the image sample data into a training set, a verification set and a test set according to a preset proportion; carrying out neural network training on the initial face key point detection model through the training set, and verifying an intermediate result in the training process by using the verification set (adjusting training parameters in real time); when the training precision and the verification precision both reach preset thresholds, stopping the training process to obtain a trained face key point detection model; testing the trained face key point detection model according to the test set to obtain a test result; and if the test result meets a preset rule (such as performance or capability), taking the trained face key point detection model as a preset face key point detection model.
Wherein, the preset proportion can be set according to actual requirements. Generally, the training set accounts for a greater proportion than the validation set and the test set. For example, the ratio of training set, validation set, and test set may be 8:1:1.
The training precision refers to the error range obtained when the face key point detection model is trained on the training set; generally a preset threshold is set for it in advance, and the size of this threshold can be set according to actual needs and is not specifically limited here. Similarly, the verification precision refers to the value obtained when the trained face key point detection model is verified; a preset threshold is likewise set for it in advance, and its size can be set according to actual requirements and is not specifically limited here. A sketch of the split-and-train procedure follows.
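For illustration only, a minimal Python sketch of the 8:1:1 split and threshold-based stopping described above; train_one_epoch() and evaluate() are hypothetical helpers, and the threshold values are placeholders, not values given by the patent.

    import random

    def split_samples(samples, ratios=(8, 1, 1)):
        """Shuffle the labeled image sample data in place and split it 8:1:1."""
        random.shuffle(samples)
        n, total = len(samples), sum(ratios)
        n_train = n * ratios[0] // total
        n_val = n * ratios[1] // total
        return (samples[:n_train],
                samples[n_train:n_train + n_val],
                samples[n_train + n_val:])

    def train_keypoint_model(model, samples, train_thresh, val_thresh, max_epochs=100):
        train_set, val_set, test_set = split_samples(samples)
        for _ in range(max_epochs):
            train_acc = train_one_epoch(model, train_set)  # hypothetical helper
            val_acc = evaluate(model, val_set)             # hypothetical helper
            # Stop once training and verification precision reach their thresholds.
            if train_acc >= train_thresh and val_acc >= val_thresh:
                break
        return model, evaluate(model, test_set)            # final test result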
As an application scenario, assume that the target modification region is the lips and lipstick is to be applied to them. The user first runs the program implementing the face modification method based on three-dimensional reconstruction provided by the embodiment of the invention; the program loads the default lipstick parameter mapping table (different lip shapes, such as petal lips, small round lips or cherry lips, have different three-dimensional reconstruction parameters, three-dimensional mesh generation parameters and the like), and the user may also adjust the corresponding parameter values manually. The face modification method based on three-dimensional reconstruction provided by the embodiment of the invention is then executed to modify the target modification region (that is, lipstick is applied to the lips).
Step S102, performing three-dimensional reconstruction on the target modification area to obtain three-dimensional space information corresponding to the target modification area.
The three-dimensional space information comprises a target point set corresponding to the target modification area, and three-dimensional coordinates and Euler angles of the target point set in a three-dimensional space system.
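For illustration, the three-dimensional space information could be carried in a structure like the following Python sketch; the field names are assumptions of this sketch, not terms of the patent.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class SpatialInfo:
        points: np.ndarray        # Nx3 target point set in the 3D coordinate system
        euler_angles: np.ndarray  # (yaw, pitch, roll) Euler angles of the point set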
As an embodiment, step S102 includes: and inputting the target modification area and the face key point information into a preset three-dimensional basic model, and outputting three-dimensional space information corresponding to the target modification area.
Optionally, the inputting the target modification region and the face key point information into a preset three-dimensional basic model, and outputting three-dimensional space information corresponding to the target modification region includes: acquiring the face key point information, basic parameters corresponding to the target modification area and weight coefficients corresponding to the basic parameters; determining a preset three-dimensional basic model matched with the basic parameters according to the basic parameters; and carrying out weighting processing on the preset three-dimensional basic model according to the weighting coefficient to obtain three-dimensional space information corresponding to the target modification area.
The basic parameters refer to coordinates of all points in a target point set, namely a three-dimensional space coordinate point set, and the three-dimensional space coordinate point set comprises three-dimensional coordinates of all points and Euler angles of the three-dimensional coordinates.
The weight coefficient is the weight of each base parameter.
Alternatively, the weight coefficients may be generated in real time.
Optionally, performing weighting processing on the preset three-dimensional base model according to the weighting coefficient includes: and multiplying each preset three-dimensional basic model by the corresponding weight coefficient to obtain a product, and adding all the products.
For example, assuming that there are 3 preset three-dimensional basis models G1, G2, G3 with corresponding weight coefficients Q1, Q2, Q3, the weighting process yields the result G1×Q1 + G2×Q2 + G3×Q3.
In this embodiment, a preset number of preset three-dimensional base models (for example, 100) are pre-established; the preset three-dimensional base models include bases such as an open mouth, a closed mouth, a raised left mouth corner and a raised right mouth corner. A sketch of the weighting over such base models follows.
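As a non-authoritative numpy sketch of this weighting step, assuming each preset three-dimensional base model is represented as an Nx3 vertex array (an illustrative representation, not mandated by the patent):

    import numpy as np

    def weighted_reconstruction(base_models, weights):
        """Compute G1*Q1 + G2*Q2 + ... over the preset 3D base models."""
        assert len(base_models) == len(weights)
        result = np.zeros_like(np.asarray(base_models[0], dtype=np.float64))
        for G, Q in zip(base_models, weights):
            result += np.asarray(G, dtype=np.float64) * Q
        return result  # vertex positions of the reconstructed target region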
Optionally, the obtaining the face key point information and the basic parameters corresponding to the target modification region includes: and detecting the target modification area according to the face key point information, obtaining a shape parameter (for example, the shape of a lip when the mouth is opened) and an expression parameter (for example, smile) corresponding to the target modification area, and decomposing the shape parameter and the expression parameter to obtain a basic parameter.
Both the shape parameter and the expression parameter are computed from the coordinates of points in a specific region (such as the lip or face region) relative to a preset three-dimensional model (for example, an expressionless base model). For instance, if the mouth corners in the mouth-corner region are raised (their vertical extent reduced) relative to the expressionless base model, the expression can be extracted as a smile; if the distance between the upper and lower lips is larger than in the expressionless base model, the expression can be classified as an open mouth. A sketch of this extraction follows.
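A minimal sketch of deriving such expression parameters from keypoint coordinates; the keypoint names and the use of raw pixel distances are illustrative assumptions.

    import numpy as np

    def expression_parameters(pts, neutral_pts):
        """pts / neutral_pts: dicts mapping keypoint names to (x, y) pairs."""
        lip_gap = np.linalg.norm(np.subtract(pts["upper_lip"], pts["lower_lip"]))
        neutral_gap = np.linalg.norm(np.subtract(neutral_pts["upper_lip"],
                                                 neutral_pts["lower_lip"]))
        # A larger lip distance than in the expressionless base model -> mouth open.
        mouth_open = max(lip_gap - neutral_gap, 0.0)
        # A mouth corner higher (smaller y) than in the neutral model -> smile.
        smile = max(neutral_pts["mouth_corner"][1] - pts["mouth_corner"][1], 0.0)
        return {"mouth_open": mouth_open, "smile": smile}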
Optionally, decomposing the shape parameter and the expression parameter to obtain a basic parameter, including: and decomposing according to the key points corresponding to the shape parameters and the expression parameters to obtain basic parameters.
Of course, in practical use, the decomposition may be performed in other ways; for example, the shape parameter and the expression parameter may be decomposed into a plurality of basic parameters according to coordinates, or decomposed into a plurality of basic parameters through a preset area.
Step S103, coloring the target modification area according to the three-dimensional space information to obtain a modified face image.
As an embodiment, step S103 includes: coloring the target modification area according to the three-dimensional space information to obtain a three-dimensional texture coloring result; and replacing the three-dimensional texture coloring result on the target modification area of the original face image to obtain a modified face image.
Optionally, the coloring the target modified region according to the three-dimensional spatial information to obtain a three-dimensional texture coloring result includes: determining a three-dimensional subdivision grid corresponding to the target modification area according to the three-dimensional space information; and performing texture coloring treatment on the three-dimensional subdivision grid to obtain a three-dimensional texture coloring result.
Optionally, performing texture coloring processing on the three-dimensional subdivision grid, including: and transferring the color of the coordinate corresponding to the texture to the coordinate position corresponding to the three-dimensional subdivision grid so as to realize coloring treatment.
Optionally, determining a three-dimensional split mesh corresponding to the target modification region according to the three-dimensional spatial information includes: triangulating the three-dimensional space information to obtain a three-dimensional subdivision grid corresponding to the target modification area. Specifically, a target point set in the three-dimensional space information and the three-dimensional information corresponding to the target point set are obtained, and the target point set is triangulated according to the three-dimensional information using a triangulation algorithm (Delaunay) to obtain the three-dimensional subdivision grid corresponding to the target modification area.
Optionally, the triangulating of the three-dimensional spatial information (i.e., triangulating the target point set according to its three-dimensional information by using a triangulation algorithm (Delaunay)) to obtain a three-dimensional mesh corresponding to the target modified region includes: performing nearest non-cross triangulation on the three-dimensional space information through a triangulation algorithm to obtain a three-dimensional mesh corresponding to the target modification area.
In this embodiment, non-cross triangulation ensures that the resulting triangles do not overlap one another, so that during coloring the coordinates corresponding to a given triangle are not colored repeatedly; the coloring is therefore more even and the coloring effect better.
Of course, in practical use, the nearest non-cross triangulation of the three-dimensional spatial information may be performed in other ways; for example, it may be achieved by algorithms such as the divide-and-conquer method, the point-by-point insertion method (Lawson algorithm), or the Bowyer-Watson algorithm.
For example, as shown in fig. 4, the target point set is triangulated according to the triangulation principle to obtain a three-dimensional mesh composed of a plurality of triangles.
For example, taking the lip region as the target modification region, triangulation is performed on the set of points in the lips shown in fig. 5 to obtain the three-dimensional mesh corresponding to the target modification region (i.e., the lip region), giving the schematic diagram shown in fig. 6. A code sketch of this subdivision follows.
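For illustration only, the subdivision step could be realized with scipy's Delaunay triangulation, which produces a non-crossing mesh; projecting the points onto the x-y plane before triangulating is an assumption of this sketch.

    import numpy as np
    from scipy.spatial import Delaunay

    def triangulate_region(points_3d):
        """points_3d: Nx3 array (target point set with reconstructed depth)."""
        tri = Delaunay(points_3d[:, :2])  # triangulate the x-y projection
        return tri.simplices              # Mx3 vertex indices; triangles do not cross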
Optionally, a three-dimensional split mesh corresponding to the target modification region may also be obtained by performing a rectangular subdivision (for example, into quadrilaterals or other polygons) of the three-dimensional spatial information.
In this embodiment, coloring the texture onto the three-dimensional subdivision grid makes the texture stronger and finer and effectively restores the details of the target modification area, and lighting the target modification area along the normal direction effectively enhances the stereoscopic effect. A shading sketch follows.
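As a hedged sketch of such normal-direction lighting, per-triangle Lambert shading could be applied to the sampled texture colors; the light direction and the ambient/diffuse split below are illustrative choices, not values taken from the patent.

    import numpy as np

    def shade_triangles(vertices, triangles, base_colors,
                        light_dir=np.array([0.0, 0.0, 1.0])):
        """vertices: Nx3; triangles: Mx3 indices; base_colors: Mx3 RGB in [0, 1]."""
        v0, v1, v2 = (vertices[triangles[:, i]] for i in range(3))
        normals = np.cross(v1 - v0, v2 - v0)                  # face normals
        normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-9
        intensity = np.clip(normals @ light_dir, 0.0, 1.0)[:, None]
        return base_colors * (0.4 + 0.6 * intensity)          # ambient + diffuse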
Optionally, replacing the three-dimensional texture coloring result onto the target modified region of the original face image, including: and performing feathering processing on the edge area of the target modification area to obtain a modified face image.
In this embodiment, after the target modification region separated from the original face image is independently modified, the modified three-dimensional texture rendering result is mapped (or replaced) onto the target modification region in the original face image.
In a possible embodiment, after performing feathering processing on the edge area of the target modified area to obtain a modified face image, the method further includes: and displaying the face image on terminal equipment.
For example, assuming that the target modification region is the lips, the lip region is separated from the original face image and colored to obtain a three-dimensional texture coloring result, and this result is then attached to the lip region of the original face image to modify the face image. On one hand, separating and modifying the target modification area effectively reduces data processing pressure and improves data processing efficiency; on the other hand, directly pasting the modified image onto the corresponding area of the original face image completes the face modification with a better modification effect. A sketch of this feather-and-paste step follows.
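A minimal sketch of the feather-and-replace step, assuming OpenCV: the region mask from step S101 is blurred to feather the edge and used as an alpha map when pasting the colored result back onto the original image.

    import cv2
    import numpy as np

    def paste_with_feather(original, colored, mask, feather_px=7):
        """`colored` is the shaded region image, already aligned with `original`."""
        k = feather_px | 1                                  # Gaussian kernel must be odd
        alpha = cv2.GaussianBlur(mask, (k, k), 0)
        alpha = (alpha.astype(np.float32) / 255.0)[..., None]
        out = original.astype(np.float32) * (1.0 - alpha) \
            + colored.astype(np.float32) * alpha
        return out.astype(np.uint8)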
According to the face modification method based on three-dimensional reconstruction provided by the embodiment of the invention, a target modification region is determined from the original face image and three-dimensionally reconstructed to obtain its corresponding three-dimensional spatial information, so that the details of the target modification region can be effectively restored and its stereoscopic effect enhanced. Coloring the target modification region according to this three-dimensional spatial information makes its texture stronger and finer, further enhances the stereoscopic effect, and thus remarkably improves the user experience.
Second embodiment
Corresponding to the face modification method based on three-dimensional reconstruction of the first embodiment, fig. 7 shows a face modification apparatus based on three-dimensional reconstruction that applies that method one-to-one. As shown in fig. 7, the face modification apparatus 400 based on three-dimensional reconstruction includes a modification target determining unit 410, a first processing unit 420 and a second processing unit 430. The implementation functions of these units correspond one-to-one to the corresponding steps in the first embodiment; to avoid redundancy, they are not described in detail again in this embodiment.
And a modified target determining unit 410, configured to determine a target modified region from the original face image.
Optionally, the original face image is a preview image.
Optionally, the modification target determining unit 410 is configured to determine face key point information from an original face image according to a preset face key point detection model; and separating a region to be modified from the original face image according to the face key point information, wherein the region to be modified is the target modification region.
Optionally, acquiring image sample data marked with the key points of the human face; and carrying out neural network training on the initial face key point detection model through the image sample data to obtain a preset face key point detection model.
The first processing unit 420 is configured to perform three-dimensional reconstruction on the target modified region, so as to obtain three-dimensional spatial information corresponding to the target modified region.
Optionally, the first processing unit 420 is further configured to input the target modification region and the face key point information into a preset three-dimensional basic model, and output three-dimensional space information corresponding to the target modification region.
Optionally, the inputting the target modification region and the face key point information into a preset three-dimensional basic model, and outputting three-dimensional space information corresponding to the target modification region includes: acquiring the face key point information, basic parameters corresponding to the target modification area and weight coefficients corresponding to the basic parameters; determining a preset three-dimensional basic model matched with the basic parameters according to the basic parameters; and carrying out weighting processing on the preset three-dimensional basic model through the weighting coefficient to obtain three-dimensional space information corresponding to the target modification area.
And the second processing unit 430 is configured to perform coloring processing on the target modification region according to the three-dimensional space information to obtain a modified face image.
Optionally, the second processing unit 430 is further configured to perform a coloring process on the target modified region according to the three-dimensional space information, so as to obtain a three-dimensional texture coloring result; and replacing the three-dimensional texture coloring result to the target modification area of the original face image to obtain a modified face image.
Optionally, the coloring the target modified region according to the three-dimensional spatial information to obtain a three-dimensional texture coloring result includes: determining a three-dimensional subdivision grid corresponding to the target modification area according to the three-dimensional space information; and performing texture coloring treatment on the three-dimensional subdivision grid to obtain a three-dimensional texture coloring result.
Optionally, the determining a three-dimensional split mesh corresponding to the target modification region according to the three-dimensional spatial information includes: and triangulating the three-dimensional space information to obtain a three-dimensional subdivision grid corresponding to the target modification area.
Optionally, the triangulating the three-dimensional spatial information to obtain a three-dimensional subdivision grid corresponding to the target modification region includes: and performing nearest non-cross triangulation on the three-dimensional space information to obtain a three-dimensional mesh corresponding to the target modification area.
Optionally, replacing the three-dimensional texture coloring result onto the target modified region of the original face image, including: and performing feathering treatment on the edge area of the target modification area to obtain a modified face image.
Optionally, after the second processing unit 430, the apparatus 400 for face retouching based on three-dimensional reconstruction further includes a fourth processing unit, configured to display the retouched face image on a display terminal.
Third embodiment
As shown in fig. 8, the terminal device 300 includes a memory 302, a processor 304, a computer program 303 stored in the memory 302 and executable on the processor 304, and a display 305 for displaying the modified face image. When the computer program 303 is executed by the processor 304, it implements the face modification method based on three-dimensional reconstruction of the first embodiment; details are not repeated here to avoid repetition. Alternatively, when executed by the processor 304, the computer program 303 implements the functions of each module/unit in the face modification apparatus based on three-dimensional reconstruction of the second embodiment, which are likewise not described again.
Illustratively, the computer program 303 may be partitioned into one or more modules/units, which are stored in the memory 302 and executed by the processor 304 to implement the present invention. One or more of the modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 303 in the terminal device 300. For example, the computer program 303 may be divided into the modification target determining unit 410, the first processing unit 420 and the second processing unit 430 in the second embodiment, and specific functions of each unit are as described in the first embodiment or the second embodiment, which are not described herein again.
The terminal device 300 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices.
The memory 302 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 302 is used for storing a program; the processor 304 executes the program after receiving an execution instruction, and the method defined by the flow disclosed in any of the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 304.
The processor 304 may be an integrated circuit chip having signal processing capabilities. The processor 304 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logical blocks disclosed in the embodiments of the present invention may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The display 305 may be an LCD display screen or an LED display screen. Such as a display screen on a cell phone.
It is understood that the structure shown in fig. 8 is only a schematic diagram of the terminal device 300, and the terminal device 300 may further include more or less components than those shown in fig. 8. The components shown in fig. 8 may be implemented in hardware, software, or a combination thereof.
Fourth embodiment
An embodiment of the present invention further provides a storage medium having instructions stored thereon. When the instructions are run on a computer and executed by a processor, they implement the face modification method based on three-dimensional reconstruction of the first embodiment; details are not repeated here to avoid repetition. Alternatively, when executed by the processor, they implement the functions of each module/unit in the face modification apparatus based on three-dimensional reconstruction of the second embodiment, which are likewise not described again.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention may be implemented by hardware, or by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product stored in a non-volatile storage medium (such as a CD-ROM, a USB disk or a removable hard disk) that includes several instructions enabling a computer device (such as a personal computer, a server or a network device) to execute the methods of the various implementation scenarios of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.

Claims (10)

1. A face modification method based on three-dimensional reconstruction is characterized by comprising the following steps:
determining a target modification area from an original face image;
performing three-dimensional reconstruction on the target modification region to obtain three-dimensional space information corresponding to the target modification region;
coloring the target modification area according to the three-dimensional space information to obtain a modified human face image;
wherein, the determining the target modification region from the original face image comprises: determining face key point information from an original face image according to a preset face key point detection model; separating a region to be modified from the original face image according to the face key point information, wherein the region to be modified is the target modification region;
the three-dimensional reconstruction of the target modification region to obtain three-dimensional space information corresponding to the target modification region includes: acquiring the human face key point information, basic parameters corresponding to the target modification area and used for representing a three-dimensional space coordinate point set of the target modification area, and weight coefficients corresponding to the basic parameters; determining a preset three-dimensional basic model matched with the basic parameters according to the basic parameters; and carrying out weighting processing on the preset three-dimensional basic model through the weighting coefficient to obtain three-dimensional space information corresponding to the target modification area.
2. The method of claim 1, further comprising:
acquiring image sample data marked with key points of the face;
and carrying out neural network training on the initial face key point detection model through the image sample data to obtain a preset face key point detection model.
3. The method according to claim 1, wherein the coloring the target modified region according to the three-dimensional spatial information to obtain a modified face image comprises:
coloring the target modification area according to the three-dimensional space information to obtain a three-dimensional texture coloring result;
and replacing the three-dimensional texture coloring result on the target modification area of the original face image to obtain a modified face image.
4. The method according to claim 3, wherein the rendering the target modified region according to the three-dimensional spatial information to obtain a three-dimensional texture rendering result comprises:
determining a three-dimensional subdivision grid corresponding to the target modification area according to the three-dimensional space information;
and transferring the color of the coordinate corresponding to the texture to the coordinate position corresponding to the three-dimensional subdivision grid to obtain a three-dimensional texture coloring result.
5. The method according to claim 4, wherein the determining the three-dimensional split mesh corresponding to the target modification region according to the three-dimensional spatial information comprises:
and triangulating the three-dimensional space information to obtain a three-dimensional subdivision grid corresponding to the target modification area.
6. The method according to claim 5, wherein the triangulating the three-dimensional spatial information to obtain a three-dimensional mesh corresponding to the target modification region comprises:
and performing nearest non-cross triangulation on the three-dimensional space information to obtain a three-dimensional mesh corresponding to the target modification area.
7. The method according to claim 3, wherein said replacing the three-dimensional texture coloring result onto the target modified region of the original face image to obtain a modified face image, comprises:
and performing feathering processing on the edge area of the target modification area to obtain a modified face image.
8. A face modification device based on three-dimensional reconstruction is characterized by comprising:
a modified target determining unit for determining a target modified region from the original face image;
the first processing unit is used for carrying out three-dimensional reconstruction on the target modification area to obtain three-dimensional space information corresponding to the target modification area;
the second processing unit is used for carrying out coloring processing on the target modification area according to the three-dimensional space information to obtain a modified human face image;
the modification target determining unit is further specifically configured to determine face key point information from an original face image according to a preset face key point detection model; separating a region to be modified from the original face image according to the face key point information, wherein the region to be modified is the target modification region;
the first processing unit is further specifically configured to obtain the face key point information, a basic parameter corresponding to the target modification region and used for representing a three-dimensional space coordinate point set of the target modification region, and a weight coefficient corresponding to the basic parameter; determining a preset three-dimensional basic model matched with the basic parameters according to the basic parameters; and carrying out weighting processing on the preset three-dimensional basic model through the weighting coefficient to obtain three-dimensional space information corresponding to the target modification area.
9. A terminal device, comprising: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the three-dimensional reconstruction based face embellishment method according to any one of claims 1 to 7 when executing the computer program.
10. A storage medium having stored thereon instructions which, when run on a computer, cause the computer to execute the method of face embellishment based on three-dimensional reconstruction of any of claims 1 to 7.
CN201811060479.2A 2018-09-11 2018-09-11 Face modification method, device and equipment based on three-dimensional reconstruction and storage medium Active CN109035380B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811060479.2A CN109035380B (en) 2018-09-11 2018-09-11 Face modification method, device and equipment based on three-dimensional reconstruction and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811060479.2A CN109035380B (en) 2018-09-11 2018-09-11 Face modification method, device and equipment based on three-dimensional reconstruction and storage medium

Publications (2)

Publication Number Publication Date
CN109035380A CN109035380A (en) 2018-12-18
CN109035380B true CN109035380B (en) 2023-03-10

Family

ID=64621664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811060479.2A Active CN109035380B (en) 2018-09-11 2018-09-11 Face modification method, device and equipment based on three-dimensional reconstruction and storage medium

Country Status (1)

Country Link
CN (1) CN109035380B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882666B (en) * 2020-07-20 2022-06-21 浙江商汤科技开发有限公司 Method, device and equipment for reconstructing three-dimensional grid model and storage medium
CN112529808A (en) * 2020-12-15 2021-03-19 北京映客芝士网络科技有限公司 Image color adjusting method, device, equipment and medium
CN113538639B (en) * 2021-07-02 2024-05-21 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106327571A (en) * 2016-08-23 2017-01-11 北京的卢深视科技有限公司 Three-dimensional face modeling method and three-dimensional face modeling device
CN106920274A (en) * 2017-01-20 2017-07-04 南京开为网络科技有限公司 Mobile terminal 2D key points rapid translating is the human face model building of 3D fusion deformations
CN106952221A (en) * 2017-03-15 2017-07-14 中山大学 A kind of three-dimensional automatic Beijing Opera facial mask making-up method
CN107274493A (en) * 2017-06-28 2017-10-20 河海大学常州校区 A kind of three-dimensional examination hair style facial reconstruction method based on mobile platform
CN107480613A (en) * 2017-07-31 2017-12-15 广东欧珀移动通信有限公司 Face identification method, device, mobile terminal and computer-readable recording medium

Also Published As

Publication number Publication date
CN109035380A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN109325437B (en) Image processing method, device and system
KR102523512B1 (en) Creation of a face model
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN108305312B (en) Method and device for generating 3D virtual image
CN109859305B (en) Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
US10121273B2 (en) Real-time reconstruction of the human body and automated avatar synthesis
WO2020119458A1 (en) Facial landmark detection method and apparatus, computer device and storage medium
CN111598998A (en) Three-dimensional virtual model reconstruction method and device, computer equipment and storage medium
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
WO2022095721A1 (en) Parameter estimation model training method and apparatus, and device and storage medium
CN113838176B (en) Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
CN109035380B (en) Face modification method, device and equipment based on three-dimensional reconstruction and storage medium
CN111833236B (en) Method and device for generating three-dimensional face model for simulating user
CN111008935B (en) Face image enhancement method, device, system and storage medium
JP2024501986A (en) 3D face reconstruction method, 3D face reconstruction apparatus, device, and storage medium
TWI780919B (en) Method and apparatus for processing face image, electronic device and storage medium
CN111369428A (en) Virtual head portrait generation method and device
US10991154B1 (en) Method for generating model of sculpture of face with high meticulous, computing device, and non-transitory storage medium
JP2020177615A (en) Method of generating 3d facial model for avatar and related device
RU2697627C1 (en) Method of correcting illumination of an object on an image in a sequence of images and a user's computing device which implements said method
KR20230015430A (en) Method and apparatus for processing face information, electronic device and storage medium
CN110647859B (en) Face image decomposition method and device, electronic equipment and storage medium
US10803677B2 (en) Method and system of automated facial morphing for eyebrow hair and face color detection
CN111275610B (en) Face aging image processing method and system
US10861174B2 (en) Selective 3D registration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant