CN108765273B - Virtual face-lifting method and device for face photographing


Info

Publication number
CN108765273B
Authority
CN
China
Prior art keywords
face
dimensional
dimensional model
user
original
Prior art date
Legal status
Active
Application number
CN201810551058.3A
Other languages
Chinese (zh)
Other versions
CN108765273A (en
Inventor
黄杰文
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810551058.3A priority Critical patent/CN108765273B/en
Publication of CN108765273A publication Critical patent/CN108765273A/en
Priority to PCT/CN2019/089348 priority patent/WO2019228473A1/en
Application granted granted Critical
Publication of CN108765273B publication Critical patent/CN108765273B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/04: Context-preserving transformations, e.g. by using an importance map
    • G06T 3/06: Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The application provides a virtual face-lifting method and device for face photographing, wherein the method comprises the following steps: acquiring a current original two-dimensional face image of a user and depth information corresponding to the original two-dimensional face image; performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model; querying pre-registered face information and judging whether the user is registered; if it is determined that the user is registered, acquiring face three-dimensional model shaping parameters corresponding to the user, and adjusting key points on the original face three-dimensional model according to the shaping parameters to obtain a target face three-dimensional model after virtual face-lifting; and mapping the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image. The registered user is thus beautified based on the face three-dimensional model, which optimizes the beautification effect and improves the user's satisfaction with the effect and the stickiness of the product.

Description

Virtual face-lifting method and device for face photographing
Technical Field
The application relates to the technical field of human face processing, in particular to a virtual face-lifting method and device for human face photographing.
Background
With the popularization of terminal devices, more and more users are accustomed to taking pictures with them, and the shooting functions of terminal devices have accordingly diversified; for example, related shooting applications provide a beautification function for users.
In the related art, beautification processing is performed on a two-dimensional face image; the processing effect is poor, and the processed image lacks realism.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
In order to achieve the above object, an embodiment of a first aspect of the present application provides a virtual face-lifting method for face photographing, including: acquiring a current original two-dimensional face image of a user and depth information corresponding to the original two-dimensional face image; performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model; querying pre-registered face information and judging whether the user is registered; if it is determined that the user is registered, obtaining face three-dimensional model shaping parameters corresponding to the user, and adjusting key points on the original face three-dimensional model according to the shaping parameters to obtain a target face three-dimensional model after virtual face-lifting; and mapping the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
In order to achieve the above object, an embodiment of a second aspect of the present application provides a virtual face-lifting device for face photographing, including: an acquisition module for acquiring a current original two-dimensional face image of a user and depth information corresponding to the original two-dimensional face image; a reconstruction module for performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model; a query module for querying pre-registered face information and judging whether the user is registered; an adjusting module for obtaining, when the user is registered, face three-dimensional model shaping parameters corresponding to the user, and adjusting key points on the original face three-dimensional model according to the shaping parameters to obtain a target face three-dimensional model after virtual face-lifting; and a mapping module for mapping the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
In order to achieve the above object, an embodiment of a third aspect of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the processor implements the virtual face-lift method for photographing a human face as described in the foregoing embodiment of the first aspect.
To achieve the above object, a fourth aspect of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the virtual face-lift method for photographing a human face as described in the foregoing first aspect of the present application.
To achieve the above object, a fifth aspect of the present application provides an image processing circuit. The image processing circuit includes: an image unit, a depth information unit and a processing unit;
the image unit is used for outputting the current original two-dimensional face image of the user;
the depth information unit is used for outputting depth information corresponding to the original two-dimensional face image;
the processing unit is electrically connected with the image unit and the depth information unit respectively, and is used for: performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model; querying pre-registered face information and judging whether the user is registered; if it is determined that the user is registered, obtaining face three-dimensional model shaping parameters corresponding to the user, and adjusting key points on the original face three-dimensional model according to the shaping parameters to obtain a target face three-dimensional model after virtual face-lifting; and mapping the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image. The technical scheme provided by the application has at least the following beneficial effects:
the registered user is beautified based on the face three-dimensional model, which optimizes the beautification effect and improves the user's satisfaction with the effect and the stickiness of the product.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a virtual face-lift method for face photographing according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a virtual face-lift method for photographing a human face according to another embodiment of the present application;
FIG. 3 is a schematic structural diagram of a depth image acquisition assembly according to an embodiment of the present disclosure;
fig. 4(a) is a schematic technical flow chart of a virtual face-lift method for photographing a human face according to an embodiment of the present application;
fig. 4(b) is a schematic technical flowchart of a virtual face-lift method for photographing a human face according to another embodiment of the present application;
FIG. 5 is a schematic structural diagram of a virtual face-lift apparatus for photographing human faces according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a virtual face-lift apparatus for photographing a human face according to another embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an image processing circuit in one embodiment; and
FIG. 9 is a schematic diagram of an image processing circuit in another embodiment.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
In the embodiments of the application, a two-dimensional face image and the depth information corresponding to the face image are acquired, three-dimensional reconstruction is performed according to the depth information and the face image to obtain a face three-dimensional model, and the face is beautified based on the face three-dimensional model.
The following describes a virtual face-lifting method and apparatus for face photographing according to an embodiment of the present application with reference to the drawings.
Fig. 1 is a schematic flow chart of a virtual face-lift method for face photographing according to an embodiment of the present application.
The virtual face-lifting method in the embodiments of the application can be applied to computer equipment provided with a device for acquiring depth information and color information (two-dimensional information), such as a dual-camera system. The computer equipment can be hardware equipment with an operating system, a touch screen, and/or a display screen, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
Step 101, acquiring a current original two-dimensional face image of a user and depth information corresponding to the original two-dimensional face image.
It should be noted that, according to different application scenarios, the hardware devices used in the embodiments of the present application to acquire the depth information and the original two-dimensional face image differ:
as a possible implementation manner, the hardware device for acquiring the original two-dimensional face information is a visible light RGB image sensor, and the original two-dimensional face may be acquired based on the RGB visible light image sensor in the computer device. Specifically, the visible light RGB image sensor may include a visible light camera, and the visible light camera may capture visible light reflected by an imaging object to perform imaging, so as to obtain an original two-dimensional face corresponding to the imaging object.
As a possible implementation manner, the depth information is acquired by a structured light sensor. Specifically, as shown in fig. 2, acquiring the depth information corresponding to each face image includes the following steps:
step 201, projecting structured light to the face of the current user.
Step 202, shooting a structured light image modulated by the face of the current user.
Step 203, demodulating phase information corresponding to each pixel of the structured light image to obtain depth information corresponding to the face image.
In the present example, referring to fig. 3, where the computer device is a smartphone 1000, the depth image acquisition assembly 12 includes a structured light projector 121 and a structured light camera 122. Step 201 may be implemented by the structured light projector 121, and steps 202 and 203 may be implemented by the structured light camera 122.
That is, the structured light projector 121 may be used to project structured light toward the face of the current user; the structured light camera 122 may be configured to capture a structured light image modulated by a face of a current user, and demodulate phase information corresponding to each pixel of the structured light image to obtain depth information.
Specifically, after the structured light projector 121 projects a certain pattern of structured light onto the face of the current user, a structured light image modulated by the face is formed on the face surface. The structured light camera 122 captures the modulated structured light image and demodulates it to obtain the depth information. The pattern of the structured light may be laser stripes, Gray codes, sinusoidal stripes, non-uniform speckles, etc.
The structured light camera 122 may be further configured to demodulate phase information corresponding to each pixel in the structured light image, convert the phase information into depth information, and generate a depth image according to the depth information.
Specifically, the phase information of the modulated structured light changes relative to that of the unmodulated structured light, so the structured light shown in the structured light image is distorted, and the phase change can represent the depth information of the object. Therefore, the structured light camera 122 first demodulates the phase information corresponding to each pixel in the structured light image and then calculates the depth information from the phase information.
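For illustration only, the demodulation step can be sketched as follows for a four-step phase-shifted fringe pattern; the pi/2 phase shifts, the function names, and the linear phase-to-depth scale are assumptions made for this sketch and are not details disclosed by the embodiment.

```python
import numpy as np

def demodulate_phase(i0, i1, i2, i3):
    """Recover the wrapped phase from four fringe images shifted by pi/2 each."""
    # Standard four-step phase-shifting formula; inputs are 2-D float arrays
    # holding the captured structured-light intensities.
    return np.arctan2(i3 - i1, i0 - i2)

def phase_to_depth(phase_obj, phase_ref, scale=1.0):
    """Convert the phase change caused by the face surface into depth."""
    # Re-wrap the difference into (-pi, pi], then scale it linearly
    # (a small-angle triangulation assumption, not the embodiment's formula).
    delta = np.angle(np.exp(1j * (phase_obj - phase_ref)))
    return scale * delta
```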
Step 102, performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model.
Specifically, three-dimensional reconstruction is carried out according to the depth information and the original two-dimensional face image: given the depth information and two-dimensional information of the relevant points, an original face three-dimensional model is reconstructed. Compared with a two-dimensional face image, the three-dimensional model can fully restore the face and contains information such as the three-dimensional angles of the facial features.
According to different application scenes, the method for obtaining the original human face three-dimensional model by three-dimensional reconstruction according to the depth information and the human face image includes but is not limited to the following methods:
as a possible implementation manner, the method includes performing key point identification on each two-dimensional sample face image to obtain positioning key points, determining the relative position of each positioning key point in a three-dimensional space according to the depth information of each positioning key point and the distance of each positioning key point on the two-dimensional sample face image, including the x-axis distance and the y-axis distance on the two-dimensional space, and connecting adjacent positioning key points according to the relative position of each positioning key point in the three-dimensional space to generate an original sample face three-dimensional model. The key points are characteristic points on the face, and can include points on the canthus, the tip of the nose, the corners of the mouth, and the like.
As another possible implementation mode, original two-dimensional face images at multiple angles are obtained, and the face images with high definition are screened out as raw data. Feature point positioning is performed, and the face angle is roughly estimated using the positioning result. A rough face three-dimensional deformation model is established according to the face angle and contour, the face feature points are adjusted to the same scale as the deformation model through translation and scaling operations, and the coordinate information of the points corresponding to the face feature points is extracted to form a sparse face three-dimensional deformation model.
Then, particle swarm optimization is iterated for face three-dimensional reconstruction according to the rough estimate of the face angle and the sparse face three-dimensional deformation model, to obtain a face three-dimensional geometric model; face texture information in the input two-dimensional image is mapped onto the geometric model by texture pasting to obtain a complete original face three-dimensional model.
In an embodiment of the application, in order to improve the beautification effect, the original face three-dimensional model can be constructed based on the beautified original two-dimensional face image, so that the constructed model is more attractive and the beautified result is guaranteed.
Specifically, the user attribute features of the user are extracted, where the user attribute features may include gender, age, race, and skin color. These features may be obtained from the personal information entered during user registration, or from two-dimensional face image information collected at registration time. The original two-dimensional face image is then beautified according to the user attribute features to obtain a beautified original two-dimensional face image. As one way of doing this, a correspondence between user attribute features and beautification parameters may be established in advance; for example, the beautification parameters for a female user are acne removal, buffing, and whitening, while those for a male user are acne removal. After the user attribute features are obtained, the correspondence is queried to obtain the matching beautification parameters, and the original two-dimensional face image is beautified according to the queried parameters.
Of course, besides beautification, the processing of the original two-dimensional face image may also include brightness optimization, definition enhancement, denoising, and obstruction handling, so as to ensure that the original face three-dimensional model is relatively accurate.
Step 103, querying pre-registered face information and judging whether the user is registered.
It can be understood that, in this embodiment, an optimized beautification service is provided for registered users. On the one hand, a registered user obtains the best beautification effect when taking pictures, especially group pictures, which improves the registered user's satisfaction; on the other hand, this helps popularize the related products. In practical applications, in order to further improve the photographing experience of a registered user, the registered user may be marked with a distinctive indicator once identified, for example, highlighted with a face focusing frame of a different color or a focusing frame of a different shape.
In different application scenarios, the pre-registered face information is queried and whether the user is registered is judged in ways that include but are not limited to the following:
as a possible implementation manner, facial features of a registered user, such as special mark features such as birthmarks and the like, and shape and position features of five sense organs such as a nose and eyes and the like, are obtained in advance, an original two-dimensional face image is analyzed, for example, the facial features of the user are extracted by adopting an image recognition technology, a pre-registered face database is inquired, whether the facial features exist or not is judged, and if the facial features exist, the user is determined to be registered; and if not, determining that the user is not registered.
Step 104, if it is determined that the user is registered, obtaining the face three-dimensional model shaping parameters corresponding to the user, and adjusting key points on the original face three-dimensional model according to the shaping parameters to obtain a target face three-dimensional model after virtual face-lifting.
The face three-dimensional model shaping parameters include, but are not limited to, the adjustment positions and distances of the target key points to be adjusted in the face three-dimensional model.
Specifically, if the user is known to be a registered user, in order to provide an optimized beautification service for that user, the face three-dimensional model shaping parameters corresponding to the user are obtained, and the key points on the original face three-dimensional model are adjusted according to those parameters to obtain the target face three-dimensional model after virtual face-lifting. It can be understood that the original face three-dimensional model is in fact constructed as a triangular mesh formed by connecting key points; therefore, when the key points of the part to be face-lifted on the original model are adjusted, the corresponding face three-dimensional model changes accordingly, yielding the target face model after virtual face-lifting.
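A minimal sketch of this adjustment follows, assuming the shaping parameters are stored as per-key-point displacement vectors; that storage layout and the name apply_shaping are assumptions of the sketch.

```python
import numpy as np

def apply_shaping(vertices, shaping_params):
    """vertices: (N, 3) key points of the original model; returns an adjusted copy.

    shaping_params is assumed to be a list of dicts such as
    {"index": 17, "offset": (0.0, -1.5, 0.4)}, naming a target key point
    and its adjustment position/distance in model coordinates.
    """
    out = vertices.copy()
    for p in shaping_params:
        out[p["index"]] += np.asarray(p["offset"], dtype=float)
    # The triangles connecting the key points are left unchanged, so moving
    # a vertex deforms the surrounding surface, yielding the target model.
    return out
```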
The face three-dimensional model shaping parameters corresponding to the user may be actively registered by the user, or may be generated automatically by analyzing the user's original face three-dimensional model.
As a possible implementation manner, two-dimensional sample face images of the user at multiple angles and the depth information corresponding to each image are obtained, and three-dimensional reconstruction is performed according to the depth information and the two-dimensional sample face images to obtain an original sample face three-dimensional model. The key points of the part to be reshaped on the original sample model are adjusted to obtain a target sample face three-dimensional model after virtual face-lifting. The original sample model and the target sample model are then compared, and the face three-dimensional model shaping parameters corresponding to the user are extracted, for example by generating coordinate difference information from the coordinate differences of the key points corresponding to the same part.
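Under the same assumptions as the previous sketch, extracting the shaping parameters then reduces to differencing the key-point arrays of the two sample models; the threshold eps is an assumption used only to drop key points that did not move.

```python
import numpy as np

def extract_shaping_params(original_vertices, target_vertices, eps=1e-6):
    """Both inputs: (N, 3) arrays with the same key-point ordering."""
    diffs = target_vertices - original_vertices
    moved = np.flatnonzero(np.linalg.norm(diffs, axis=1) > eps)
    # One entry per key point that moved: its index and coordinate difference.
    return [{"index": int(i), "offset": tuple(diffs[i])} for i in moved]
```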
In this embodiment, in order to adjust the face three-dimensional model more conveniently, the key points of each face-lifting part are displayed on the original sample face three-dimensional model, for example in a highlighted manner. A shift operation performed by the user on the key points of the part to be face-lifted is detected, for example a dragging operation on a selected key point; the key points are adjusted according to the shift operation, and the target sample face three-dimensional model after virtual face-lifting is obtained from the adjusted key points and their connections with the other adjacent key points.
In the actual implementation process, the adjustment of the key points of the part to be reshaped on the original sample face three-dimensional model can be received in different implementation manners, as illustrated by the following examples:
the first example:
in this example, in order to facilitate the operation of the user, an adjustment control may be provided for the user to adjust the three-dimensional model of the human face in real time through the operation of the user on the control.
Specifically, in this embodiment, an adjustment control corresponding to the key points of each face-lifting part is generated. A touch operation performed by the user on the adjustment control corresponding to the key points of the part to be face-lifted is detected, and the corresponding adjustment parameters are obtained; the key points of the part to be face-lifted on the original sample face three-dimensional model are adjusted according to those parameters to obtain the target sample face three-dimensional model after virtual face-lifting, and the face-lifting parameters are derived from the difference between the target sample model and the original sample model. The adjustment parameters include the moving direction and the moving distance of each key point.
In this embodiment, the user may also be provided with face-lifting suggestion information, such as "lip plumping" or "apple-cheek filling", in text form, voice form, or the like. If the user confirms the suggestion, the key points of the parts to be face-lifted and the adjustment parameters are determined according to it; for example, if the user confirms the suggestion, the determined face-lifting parameters adjust the depth values of the mouth and the cheek. The magnitude of the depth-value change can be determined from the depth value of the corresponding part on the user's original sample face three-dimensional model; to keep the adjustment looking natural, the difference between the adjusted depth value and the initial depth value is kept within a certain range. The key points of the part to be face-lifted on the original sample face three-dimensional model are then adjusted according to the adjustment parameters to obtain the target sample face three-dimensional model after virtual face-lifting.
In order to further improve the aesthetic feeling of the face-lifting effect, before the key points of the part to be lifted on the original face three-dimensional model are adjusted, the skin texture map covering the surface of the original model can be beautified to obtain a beautified original face three-dimensional model.
It is understood that when there is acne in the face image, the color of the corresponding portion of the skin texture map may be red; when there are freckles, the corresponding portions may be coffee-colored or black; and when there are moles, the corresponding portions may be black.
Therefore, whether an abnormal range exists can be determined from the colors of the skin texture map of the original face three-dimensional model. When no abnormal range exists, no processing is needed; when one exists, it can be beautified with a corresponding beautification strategy according to the relative positions in three-dimensional space of the points within the abnormal range and the color information of the range.
In general, acne protrudes from the skin surface, a mole may also protrude, while a freckle does not. Therefore, in the embodiments of the present application, the abnormality type of the abnormal range can be determined from the height difference between the central point and the edge points of the range; for example, the type can be convex or non-convex. Once the abnormality type is determined, the corresponding beautification strategy can be chosen according to the type and the color information, and the abnormal range is then buffed using the filtering range and filtering strength indicated by that strategy, based on the matching skin color corresponding to the abnormal range.
For example, when the abnormality type is convex and the color information is red, the abnormal range may be acne, for which a strong degree of buffing is used; when the type is non-convex and the color is cyan, the abnormal range may be a tattoo, for which a weak degree of buffing is used.
Alternatively, the skin color within the abnormal range is filled in according to the matching skin color corresponding to the range.
For example, when the abnormality type is convex and the color information is red, the abnormal range may be acne, and the beautification strategy for removing it may be: buff the acne, and fill the skin color of the corresponding abnormal range according to the normal skin color near the acne, which is recorded in the embodiments of the present application as the matching skin color. Or, when the abnormality type is non-convex and the color is coffee, the abnormal range may be a freckle, and the strategy for removing it may be: fill the skin color of the corresponding abnormal range according to the normal skin color near the freckle, again taken as the matching skin color.
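The handling of an abnormal range can be sketched as below; the protrusion threshold, the red-dominance test, and the simple mean-color fill are illustrative stand-ins for the filtering ranges and strengths that the embodiment leaves to the beautification strategy (texture and depth are assumed to be float arrays).

```python
import numpy as np

def beautify_region(texture, depth, mask, surround_mask):
    """texture: (H, W, 3) float RGB skin texture; depth: (H, W) heights;
    mask: boolean abnormal range; surround_mask: nearby normal skin."""
    # Abnormality type: convex if the range protrudes above the nearby skin.
    convex = depth[mask].max() - depth[surround_mask].mean() > 0.2
    r, g, b = texture[mask].mean(axis=0)
    matching = texture[surround_mask].mean(axis=0)   # the "matching skin color"
    if convex and r > max(g, b):
        # Red and raised: treated as acne, filled with the matching skin color.
        texture[mask] = matching
    elif not convex:
        # Flat spot (freckle-like): a weaker correction toward the matching color.
        texture[mask] = 0.5 * texture[mask] + 0.5 * matching
    return texture
```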
In this method, the depth information within each closed region of the original face three-dimensional model whose vertices are key points is consistent, so when the skin texture map covering the model surface is beautified, each closed region can be beautified separately. This increases the reliability of the pixel values within each beautified region and improves the beautification effect.
As another possible implementation manner of the embodiments of the present application, a beautification strategy corresponding to each local face region may be preset, where the local regions may include facial parts such as the nose, lips, eyes, and cheeks. For example, for the nose, the corresponding strategy may be brightening the nose tip and shadowing the wings to increase the three-dimensional effect of the nose; for the cheek, the corresponding strategy may be adding blush and/or buffing.
Therefore, in the embodiments of the application, each local face region can be identified from the skin texture map according to its color information and its relative position in the original face three-dimensional model, and then beautified according to the beautification strategy corresponding to that region.
Optionally, when the local region is an eyebrow, it may be buffed according to the filtering strength indicated by the beautification strategy corresponding to the eyebrow.
When the local region is a cheek, it can be buffed according to the filtering strength indicated by the beautification strategy corresponding to the cheek. It should be noted that, to make the beautified effect more natural and more prominent, the filtering strength indicated by the cheek strategy may be greater than that indicated by the eyebrow strategy.
When the local region belongs to the nose, its shadow can be deepened according to the shadow intensity indicated by the beautification strategy corresponding to the nose.
By beautifying each local region based on its relative position in the original face three-dimensional model, the beautified skin texture map is more natural and the beautification effect more prominent. Moreover, the local regions can be beautified in a targeted manner, improving the imaging effect and the user's photographing experience.
Of course, in practical applications, a beautification service can also be provided when it is determined that the user is not registered.
In this embodiment, if it is determined that the user is not registered, the user attribute features of the user are extracted, where these may include gender, age, race, and skin color; for example, the user's hairstyle, jewelry, and make-up are identified based on an image analysis technique to determine the attribute features. Preset standard face three-dimensional model shaping parameters corresponding to those attribute features are then obtained, and the key points on the original face three-dimensional model are adjusted according to the standard shaping parameters to obtain the target face three-dimensional model after virtual face-lifting.
Step 105, mapping the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
Specifically, after the key points of the part to be face-lifted on the original face three-dimensional model are adjusted to obtain the target face three-dimensional model after virtual face-lifting, that target model can be mapped to the two-dimensional plane to obtain the face-lifted target two-dimensional face image.
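The mapping step amounts to projecting the adjusted model back onto the image plane; the sketch below uses the same hypothetical pinhole intrinsics as the reconstruction sketch and omits texture resampling and occlusion handling.

```python
import numpy as np

def project_to_plane(points_3d, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """points_3d: (N, 3) model vertices in camera coordinates; returns (N, 2) pixels."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = fx * x / z + cx    # perspective division onto the image plane
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)
```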
In the application, because the skin texture map conforms to the three-dimensional model, beautifying it yields a more natural result. The target face three-dimensional model generated after virtual face-lifting from the beautified model is then mapped to the two-dimensional plane, and the resulting beautified target two-dimensional face image is more realistic, with a more prominent beautification effect. Showing the user the post-face-lift beautification result further improves the user's face-lifting experience.
In order to make the flow of the virtual face-lifting method for face photographing clearer to those skilled in the art, its application in a specific scenario is illustrated below:
in this example, the calibration refers to calibrating a camera to determine a corresponding key point of a key point in a face image in a three-dimensional space.
In the registration stage, as shown in fig. 4(a), the face may be previewed and scanned by a camera module to obtain two-dimensional sample face images of the user at multiple angles together with the depth information corresponding to each image; for example, approximately 20 two-dimensional sample face images and depth maps at different angles are collected for subsequent three-dimensional face reconstruction, and missing angles and the scanning progress may be prompted during scanning. Three-dimensional reconstruction is then performed according to the depth information and the two-dimensional sample face images to obtain the original sample face three-dimensional model.
The facial features of the 3D face model, such as face shape, nose width, nose height, eye size, and lip thickness, are then analyzed, and face-lifting suggestion information is given. If the user confirms the suggestion, the key points and adjustment parameters of the part to be face-lifted are determined according to it, and the key points of that part on the original sample face three-dimensional model are adjusted according to the adjustment parameters to obtain the target sample face three-dimensional model after virtual face-lifting.
Further, as shown in fig. 4(b), in the recognition stage, the current original two-dimensional face image of the user and the depth information corresponding to it are obtained, and three-dimensional reconstruction is performed according to the depth information and the original two-dimensional face image to obtain the original face three-dimensional model. The pre-registered face information is queried to judge whether the user is registered; if so, the face three-dimensional model shaping parameters corresponding to the user are obtained, the key points on the original face three-dimensional model are adjusted according to those parameters to obtain the target face three-dimensional model after virtual face-lifting, and the target model is mapped to the two-dimensional plane to obtain the target two-dimensional face image.
To sum up, the virtual face-lifting method for face photographing of the embodiments of the present application obtains the current original two-dimensional face image of the user and the depth information corresponding to it, performs three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original face three-dimensional model, queries the pre-registered face information, and judges whether the user is registered. If the user is determined to be registered, the face three-dimensional model shaping parameters corresponding to the user are obtained, the key points on the original face three-dimensional model are adjusted according to those parameters to obtain the target face three-dimensional model after virtual face-lifting, and the target model is mapped to the two-dimensional plane to obtain the target two-dimensional face image. The registered user is thus beautified based on the face three-dimensional model, which optimizes the beautification effect and improves the user's satisfaction with the effect and the stickiness of the product.
In order to implement the above embodiments, the present application further provides a virtual face-lift apparatus for face photographing, and fig. 5 is a schematic structural diagram of the virtual face-lift apparatus for face photographing according to an embodiment of the present application. As shown in fig. 5, the virtual face-lift apparatus for photographing a human face includes an obtaining module 10, a reconstructing module 20, an inquiring module 30, an adjusting module 40 and a mapping module 50.
The acquiring module 10 is configured to acquire a current original two-dimensional face image of a user and depth information corresponding to the original two-dimensional face image.
And the reconstruction module 20 is configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image, and obtain an original face three-dimensional model.
And the query module 30 is configured to query face information registered in advance, and determine whether the user is registered.
In one embodiment of the present application, as shown in fig. 6, the query module 30 includes an extraction unit 31 and a determination unit 32.
Wherein, the extracting unit 31 is configured to analyze the original two-dimensional face image and extract facial features of the user.
A determining unit 32, configured to query a pre-registered face database, determine whether the facial features exist, determine that the user is registered if the facial features exist, and determine that the user is not registered if the facial features do not exist.
And the adjusting module 40 is configured to, when it is known that the user is registered, obtain a face three-dimensional model shaping parameter corresponding to the user, and adjust a key point on the original face three-dimensional model according to the face three-dimensional model shaping parameter to obtain a target face three-dimensional model after virtual face-lifting.
And the mapping module 50 is configured to map the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
It should be noted that the explanation of the embodiment of the virtual face-lift method for photographing a face is also applicable to the virtual face-lift device for photographing a face in the embodiment, and details are not repeated here.
To sum up, the virtual face-lifting device for face photographing of the embodiments of the present application obtains the current original two-dimensional face image of the user and the depth information corresponding to it, performs three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original face three-dimensional model, queries the pre-registered face information, and judges whether the user is registered. If the user is determined to be registered, the face three-dimensional model shaping parameters corresponding to the user are obtained, the key points on the original face three-dimensional model are adjusted according to those parameters to obtain the target face three-dimensional model after virtual face-lifting, and the target model is mapped to the two-dimensional plane to obtain the target two-dimensional face image. The registered user is thus beautified based on the face three-dimensional model, which optimizes the beautification effect and improves the user's satisfaction with the effect and the stickiness of the product.
In order to implement the above embodiments, the present application also proposes a computer-readable storage medium, on which a computer program is stored, which when executed by a processor of a mobile terminal implements the virtual face-lift method for photographing a human face as described in the foregoing embodiments.
In order to implement the above embodiments, the present application further provides an electronic device.
Fig. 7 is a schematic diagram of the internal structure of the electronic device 200 in one embodiment. The electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 connected by a system bus 210. Memory 230 of electronic device 200 stores, among other things, an operating system and computer-readable instructions. The computer readable instructions can be executed by the processor 220 to implement the face beautification method of the embodiment of the application. The processor 220 is used to provide computing and control capabilities that support the operation of the overall electronic device 200. The display 240 of the electronic device 200 may be a liquid crystal display or an electronic ink display, and the input device 250 may be a touch layer covered on the display 240, a button, a trackball or a touch pad arranged on a housing of the electronic device 200, or an external keyboard, a touch pad or a mouse. The electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (e.g., a smart bracelet, a smart watch, a smart helmet, smart glasses), etc.
It will be understood by those skilled in the art that the structure shown in fig. 7 is only a schematic diagram of a part of the structure related to the present application, and does not constitute a limitation to the electronic device 200 to which the present application is applied, and a specific electronic device 200 may include more or less components than those shown in the drawings, or combine some components, or have a different arrangement of components.
In order to implement the above embodiments, the present application also proposes an image processing circuit including an image unit 310, a depth information unit 320, and a processing unit 330, wherein:
and an image unit 310, configured to output a current original two-dimensional face image of the user.
And a depth information unit 320 for outputting depth information corresponding to the original two-dimensional face image.
The processing unit 330 is electrically connected to the image unit and the depth information unit, and is configured to: perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model; query pre-registered face information and judge whether the user is registered; if it is determined that the user is registered, obtain the face three-dimensional model shaping parameters corresponding to the user and adjust the key points on the original face three-dimensional model according to those parameters to obtain a target face three-dimensional model after virtual face-lifting; and map the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
In this embodiment, the image unit 310 may specifically include an image sensor 311 and an Image Signal Processing (ISP) processor 312 that are electrically connected, wherein:
and an image sensor 311 for outputting raw image data.
And the ISP processor 312 is configured to output the original two-dimensional face image according to the original image data.
In the embodiments of the present application, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311, and outputs a face image in YUV or RGB format. The image sensor 311 may include a color filter array (e.g., a Bayer filter) and corresponding photosites; it acquires the light intensity and wavelength information captured by each photosite and provides a set of raw image data that the ISP processor 312 can process. The ISP processor 312 processes the raw image data to obtain a face image in YUV or RGB format and sends it to the processing unit 330.
The ISP processor 312 may process the raw image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the raw image data and gather statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
As a possible implementation manner, the depth information unit 320 includes a structured light sensor 321 and a depth map generating chip 322 that are electrically connected, wherein:
a structured light sensor 321 for generating an infrared speckle pattern.
And the depth map generating chip 322 is used for outputting depth information corresponding to the original two-dimensional face image according to the infrared speckle pattern.
In the embodiments of the present application, the structured light sensor 321 projects speckle structured light onto the subject, obtains the structured light reflected by the subject, and images the reflected light to obtain an infrared speckle pattern. The structured light sensor 321 sends the infrared speckle pattern to the depth map generating chip 322, which determines from the pattern how the structured light has deformed and then determines the depth of the subject accordingly, obtaining a depth map that indicates the depth of each pixel in the infrared speckle pattern. The depth map generating chip 322 sends the depth map to the processing unit 330.
As a possible implementation manner, the processing unit 330 includes a CPU331 and a GPU (Graphics Processing Unit) 332 that are electrically connected, wherein:
the CPU331 is configured to align the face image and the depth map according to the calibration data, and output a three-dimensional face model according to the aligned face image and depth map.
The GPU332 is used for: if the user is registered, acquiring the face three-dimensional model shaping parameters corresponding to the user, adjusting the key points on the original face three-dimensional model according to those parameters to obtain a target face three-dimensional model after virtual face-lifting, and mapping the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
In the embodiment of the present application, the CPU331 acquires a face image from the ISP processor 312, acquires a depth map from the depth map generating chip 322, and aligns the face image with the depth map by combining with calibration data obtained in advance, thereby determining depth information corresponding to each pixel point in the face image. Further, the CPU331 performs three-dimensional reconstruction based on the depth information and the face image, to obtain a three-dimensional face model.
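A hedged sketch of this alignment step is given below: depth pixels are back-projected, moved into the color camera's frame with pre-calibrated extrinsics, and re-projected so that each face-image pixel gains a depth value; the matrix names and nearest-pixel rounding are assumptions of the sketch.

```python
import numpy as np

def align_depth_to_color(depth_map, k_depth, k_color, r, t):
    """k_depth, k_color: 3x3 intrinsics; r, t: depth-to-color rotation/translation."""
    h, w = depth_map.shape
    aligned = np.zeros((h, w), dtype=float)
    vs, us = np.nonzero(depth_map > 0)
    z = depth_map[vs, us].astype(float)
    # Back-project valid depth pixels, move them into the color camera frame,
    # and re-project them with the color intrinsics.
    pts = np.linalg.inv(k_depth) @ np.vstack([us * z, vs * z, z])
    pts = r @ pts + t.reshape(3, 1)
    uv = k_color @ pts
    u2 = np.round(uv[0] / uv[2]).astype(int)
    v2 = np.round(uv[1] / uv[2]).astype(int)
    ok = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
    aligned[v2[ok], u2[ok]] = uv[2][ok]   # depth expressed in the color frame
    return aligned
```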
The CPU331 sends the face three-dimensional model to the GPU332, so that the GPU332 executes the virtual face-lift method of face photographing as described in the foregoing embodiments according to the face three-dimensional model to obtain a target two-dimensional face image.
Further, the image processing circuit may further include: the first display unit 341.
The first display unit 341 is electrically connected to the processing unit 330, and is configured to display the adjustment controls corresponding to the key points of the part to be face-lifted.
Further, the image processing circuit may further include: and a second display unit 342.
The second display unit 342 is electrically connected to the processing unit 330, and is configured to display the target sample face three-dimensional model after virtual face-lifting.
Optionally, the image processing circuit may further include: an encoder 350 and a memory 360.
In the embodiment of the present application, the beautified face image obtained by the GPU332 may be further encoded by the encoder 350 and then stored in the memory 360, wherein the encoder 350 may be implemented by a coprocessor.
In one embodiment, the memory 360 may be multiple memories or be divided into multiple memory spaces; the image data processed by the GPU332 may be stored in a dedicated memory or a dedicated memory space, which may include a DMA (Direct Memory Access) feature. The memory 360 may be configured to implement one or more frame buffers.
The above process is described in detail below with reference to fig. 9.
It should be noted that fig. 9 is a schematic diagram of an image processing circuit as one possible implementation. For ease of illustration, only the various aspects associated with the embodiments of the present application are shown.
As shown in fig. 9, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311, and outputs a face image in YUV or RGB format. The image sensor 311 may include a color filter array (e.g., a Bayer filter) and corresponding photosites; it acquires the light intensity and wavelength information captured by each photosite and provides a set of raw image data that the ISP processor 312 can process. The ISP processor 312 processes the raw image data to obtain a face image in YUV or RGB format and sends it to the CPU331.
The ISP processor 312 may process the raw image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the raw image data and gather statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
As shown in fig. 9, the structured light sensor 321 projects speckle structured light onto the subject, acquires the structured light reflected by the subject, and images the reflected light to obtain an infrared speckle pattern. The structured light sensor 321 sends the infrared speckle pattern to the depth map generating chip 322, which determines from the pattern how the structured light has deformed and then determines the depth of the subject accordingly, obtaining a depth map that indicates the depth of each pixel in the infrared speckle pattern. The depth map generating chip 322 sends the depth map to the CPU331.
The CPU331 acquires the face image from the ISP processor 312 and the depth map from the depth map generating chip 322, and aligns the face image with the depth map using calibration data obtained in advance, thereby determining the depth information corresponding to each pixel point in the face image. Further, the CPU331 performs three-dimensional reconstruction according to the depth information and the face image to obtain a face three-dimensional model.
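A minimal sketch of this reconstruction step, assuming a pinhole camera model whose intrinsics (fx, fy, cx, cy) come from the calibration data mentioned above (the function name is hypothetical):

```python
import numpy as np

def backproject_to_pointcloud(depth, fx, fy, cx, cy):
    """Lift an aligned depth map to 3-D points with a pinhole camera model.

    x = (u - cx) * z / fx, y = (v - cy) * z / fy. The intrinsics are assumed
    to come from the calibration data; this is a sketch, not the CPU331's
    actual reconstruction routine.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)             # (h, w, 3) point map
```

Each pixel with depth z is lifted to a point (x, y, z) in camera coordinates; connecting neighbouring points then yields the face three-dimensional model.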
The CPU331 sends the face three-dimensional model to the GPU332, so that the GPU332 executes the method described in the foregoing embodiments on the face three-dimensional model to implement virtual face-lifting and obtain a face image after virtual face-lifting. The face image after virtual face-lifting processed by the GPU332 may be displayed by the display 340 (including the first display unit 341 and the second display unit 342), and/or encoded by the encoder 350 and stored in the memory 360, where the encoder 350 may be implemented by a coprocessor.
In one embodiment, there may be multiple memories 360, or the memory 360 may be divided into multiple memory spaces; the image data processed by the GPU332 may be stored in a dedicated memory, or a dedicated memory space, which may support DMA (Direct Memory Access). The memory 360 may be configured to implement one or more frame buffers.
For example, the control method may be implemented with the processor 220, or with the image processing circuit of fig. 9 (specifically, the CPU331 and the GPU332), by performing the following steps:
the CPU331 acquires a two-dimensional face image and the depth information corresponding to the face image; the CPU331 performs three-dimensional reconstruction according to the depth information and the face image to obtain a face three-dimensional model; the GPU332 acquires the face three-dimensional model shaping parameters corresponding to the user, and adjusts key points on the original face three-dimensional model according to the face three-dimensional model shaping parameters to obtain a target face three-dimensional model after virtual face-lifting; and the GPU332 maps the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
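For illustration, the two GPU332 steps can be sketched as plain array operations; treating each shaping parameter as a per-key-point 3-D offset and using a pinhole projection for the two-dimensional mapping are both assumptions made for this sketch (the embodiment leaves the parameter format open):

```python
import numpy as np

def adjust_keypoints(vertices, keypoint_ids, offsets):
    """Move selected key points of the face mesh by 3-D offset vectors.

    `vertices` is an (n, 3) array; `keypoint_ids` indexes the key points to
    move; `offsets` is a matching (k, 3) array. Treating shaping parameters
    as offsets is an illustrative assumption.
    """
    adjusted = vertices.copy()
    adjusted[keypoint_ids] += offsets        # move only the key points
    return adjusted

def project_to_plane(vertices, fx, fy, cx, cy):
    """Map the adjusted 3-D model back to the two-dimensional plane
    (standard pinhole projection; intrinsics are assumed known)."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    return np.stack([fx * x / z + cx, fy * y / z + cy], axis=-1)
```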
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method description in a flow chart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process. Alternate implementations are included within the scope of the preferred embodiments of the present application, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application; variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (19)

1. A virtual face-lifting method for face photographing is characterized by comprising the following steps:
acquiring original two-dimensional face images of a user at multiple current angles, projecting non-uniform speckle structured light to the face of the user, shooting a structured light image modulated by the face of the user, and demodulating phase information corresponding to each pixel of the structured light image to obtain depth information corresponding to the original two-dimensional face image;
performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model;
inquiring face information registered in advance, and judging whether the user is registered;
if it is known that the user is registered, inquiring registration information to obtain a face three-dimensional model shaping parameter corresponding to the user, and adjusting key points on the original face three-dimensional model according to the face three-dimensional model shaping parameter to obtain a target face three-dimensional model after virtual face-lifting, wherein before the inquiring registration information to obtain the face three-dimensional model shaping parameter corresponding to the user, the method comprises the following steps:
acquiring two-dimensional sample face images of the user at multiple angles and depth information corresponding to each two-dimensional sample face image, performing three-dimensional reconstruction according to the depth information and the two-dimensional sample face images to obtain an original sample face three-dimensional model, adjusting key points of a part to be face-lifted on the original sample face three-dimensional model to obtain a target sample face three-dimensional model after virtual face-lifting, comparing the original sample face three-dimensional model with the target sample face three-dimensional model to extract the face three-dimensional model shaping parameter corresponding to the user, and storing the face three-dimensional model shaping parameter in the registration information;
if the user is not registered, acquiring attribute features of the user, searching standard face three-dimensional model shaping parameters corresponding to the attribute features according to the attribute features, and adjusting key points on the original face three-dimensional model according to the standard face three-dimensional model shaping parameters to obtain a target face three-dimensional model after virtual face-lifting;
and mapping the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
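By way of illustration only (this sketch is not part of the claim), the registered/unregistered branch of claim 1 amounts to a parameter lookup; plain dictionaries stand in for the registration information and the standard-parameter table, and all names are hypothetical:

```python
def choose_shaping_params(user_id, registration_info, attributes, standard_params):
    # Registered user: use the per-user shaping parameters stored at
    # registration time. The dict layout is an illustrative assumption.
    if user_id is not None and user_id in registration_info:
        return registration_info[user_id]["shaping_params"]
    # Unregistered user: fall back to standard parameters looked up by
    # attribute features, e.g. (gender, age, race, skin color).
    return standard_params[attributes]
```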
2. The method of claim 1, wherein the querying pre-registered face information and determining whether the user is registered comprises:
analyzing the original two-dimensional face image and extracting facial features of the user;
inquiring a face database registered in advance, judging whether the face features exist, and if so, determining that the user is registered; and if not, determining that the user is not registered.
3. The method of claim 1, wherein the attribute characteristics of the user comprise:
gender, age, race, and skin color.
4. The method according to claim 1, wherein before the performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model, the method further comprises:
extracting attribute features of the user;
and beautifying the original two-dimensional face image according to the attribute characteristics to obtain a beautified original two-dimensional face image.
5. The method of claim 1, after said obtaining the original human face three-dimensional model, further comprising:
and beautifying the skin texture map covering the surface of the original human face three-dimensional model to obtain a beautified original human face three-dimensional model.
6. The method of claim 1, wherein the performing three-dimensional reconstruction based on the depth information and the two-dimensional sample face image to obtain an original sample face three-dimensional model comprises:
performing key point identification on each two-dimensional sample face image to obtain positioning key points;
determining the relative position of the positioning key point in a three-dimensional space according to the depth information of the positioning key point and the distance of the positioning key point on each two-dimensional sample face image;
and connecting adjacent positioning key points according to the relative positions of the positioning key points in the three-dimensional space to generate an original sample human face three-dimensional model.
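By way of illustration only (this sketch is not part of the claim), one way to "connect adjacent positioning key points" into a surface is to triangulate the key points over their image-plane coordinates; the sketch below assumes scipy is available and that a 2-D Delaunay triangulation is an acceptable connection rule (the claim does not prescribe one):

```python
import numpy as np
from scipy.spatial import Delaunay

def build_face_mesh(keypoints_3d):
    """Connect positioning key points into a triangle mesh.

    Triangulates the (x, y) coordinates of the 3-D key points; using a
    Delaunay triangulation as the 'adjacency' rule is an assumption made
    for this sketch.
    """
    pts = np.asarray(keypoints_3d, dtype=np.float64)  # (n, 3) points
    tri = Delaunay(pts[:, :2])                        # triangulate in the x-y plane
    return pts, tri.simplices                         # vertices + triangle indices
```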
7. The method according to claim 1, wherein the adjusting key points of the part to be face-lifted on the original sample human face three-dimensional model to obtain a target sample human face three-dimensional model after virtual face-lifting comprises:
generating an adjustment control corresponding to the key point of each part to be face-lifted;
detecting the user's touch operation on the adjustment control corresponding to a key point of the part to be face-lifted, and acquiring corresponding adjustment parameters;
and adjusting the key points of the part to be face-lifted on the original sample human face three-dimensional model according to the adjustment parameters to obtain the target sample human face three-dimensional model after virtual face-lifting.
8. The method according to claim 1, wherein the adjusting key points of the part to be face-lifted on the original sample human face three-dimensional model to obtain a target sample human face three-dimensional model after virtual face-lifting comprises:
displaying the key points of each part to be face-lifted on the original sample human face three-dimensional model;
and detecting the user's displacement operation on the key points of the part to be face-lifted, and adjusting the key points according to the displacement operation to obtain the target sample human face three-dimensional model after virtual face-lifting.
9. The method according to claim 1, wherein the adjusting key points of the part to be face-lifted on the original sample human face three-dimensional model to obtain a target sample human face three-dimensional model after virtual face-lifting comprises:
providing face-lifting suggestion information to the user;
if the user confirms the face-lifting suggestion information, determining the key points and adjustment parameters of the part to be face-lifted according to the face-lifting suggestion information;
and adjusting the key points of the part to be face-lifted on the original sample human face three-dimensional model according to the adjustment parameters to obtain the target sample human face three-dimensional model after virtual face-lifting.
10. A virtual face-lifting device for face photographing, comprising:
an acquisition module, used for acquiring original two-dimensional face images of a user at a plurality of current angles, projecting non-uniform speckle structured light to the face of the current user, shooting a structured light image modulated by the face of the current user, and demodulating phase information corresponding to each pixel of the structured light image to obtain depth information corresponding to the original two-dimensional face image;
the reconstruction module is used for carrying out three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model;
the query module is used for querying face information registered in advance and judging whether the user is registered;
the adjusting module is used for inquiring registration information to obtain a face three-dimensional model shaping parameter corresponding to the user when the user is registered, and adjusting key points on the original face three-dimensional model according to the face three-dimensional model shaping parameter to obtain a target face three-dimensional model after virtual face-lifting;
a registration module, configured to, before the registration information is queried to obtain the face three-dimensional model shaping parameters corresponding to the user, acquire two-dimensional sample face images of the user at multiple angles and depth information corresponding to each two-dimensional sample face image, perform three-dimensional reconstruction according to the depth information and the two-dimensional sample face images to obtain an original sample face three-dimensional model, adjust key points of a part to be face-lifted on the original sample face three-dimensional model to obtain a target sample face three-dimensional model after virtual face-lifting, compare the original sample face three-dimensional model with the target sample face three-dimensional model to extract the face three-dimensional model shaping parameters corresponding to the user, and store the face three-dimensional model shaping parameters in the registration information;
the adjusting module is further configured to, when the user is not registered, acquire attribute features of the user, search for standard face three-dimensional model shaping parameters corresponding to the attribute features according to the attribute features, and adjust key points on the original face three-dimensional model according to the standard face three-dimensional model shaping parameters to obtain a target face three-dimensional model after virtual face-lifting;
and the mapping module is used for mapping the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
11. The apparatus of claim 10, wherein the query module comprises:
the extracting unit is used for analyzing the original two-dimensional face image and extracting the facial features of the user;
and the determining unit is used for inquiring a face database registered in advance, judging whether the face features exist or not, if so, determining that the user is registered, and if not, determining that the user is not registered.
12. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the virtual face-lift method for photographing a human face as claimed in any one of claims 1 to 9 when executing the computer program.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a virtual face-lift method for photographing a human face according to any one of claims 1 to 9.
14. An image processing circuit, characterized in that the image processing circuit comprises: an image unit, a depth information unit and a processing unit;
the image unit is used for outputting original two-dimensional face images of a user at a plurality of current angles;
the depth information unit comprises a structured light sensor and a depth map generation chip which are electrically connected, wherein the depth information unit is used for projecting non-uniform speckle structured light to the face of a current user, the structured light sensor is used for generating a structured light image modulated by the face of the current user, and the depth map generation chip is used for demodulating phase information corresponding to each pixel of the structured light image to obtain depth information corresponding to the original two-dimensional face image;
the processing unit is respectively electrically connected with the image unit and the depth information unit and is used for carrying out three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original face three-dimensional model, inquiring pre-registered face information and judging whether the user is registered or not, inquiring registered information to obtain face three-dimensional model shaping parameters corresponding to the user if the user is registered, adjusting key points on the original face three-dimensional model according to the face three-dimensional model shaping parameters to obtain a target face three-dimensional model after virtual face-lifting,
wherein before the inquiring registration information to obtain the face three-dimensional model shaping parameters corresponding to the user, the method comprises the following steps:
acquiring two-dimensional sample face images of the user at a plurality of angles and depth information corresponding to each two-dimensional sample face image,
performing three-dimensional reconstruction according to the depth information and the two-dimensional sample face image to obtain an original sample face three-dimensional model,
adjusting key points of a part to be face-lifted on the original sample human face three-dimensional model to obtain a target sample human face three-dimensional model after virtual face-lifting,
comparing the original sample human face three-dimensional model with the target sample human face three-dimensional model, extracting human face three-dimensional model shaping parameters corresponding to the user, and storing the human face three-dimensional model shaping parameters in the registration information,
if the user is not registered, acquiring attribute features of the user, searching standard face three-dimensional model shaping parameters corresponding to the attribute features according to the attribute features, and adjusting key points on the original face three-dimensional model according to the standard face three-dimensional model shaping parameters to obtain a target face three-dimensional model after virtual face-lifting;
and mapping the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
15. The image processing circuit of claim 14, wherein the image unit comprises an image sensor and an image signal processing ISP processor electrically connected;
the image sensor is used for outputting original image data;
and the image signal processing ISP processor is used for outputting the original two-dimensional face image according to the original image data.
16. The image processing circuit of claim 15, wherein the processing unit comprises a CPU and a GPU electrically connected;
the CPU is used for carrying out three-dimensional reconstruction according to the depth information and the original two-dimensional face image, acquiring an original face three-dimensional model, inquiring face information registered in advance and judging whether the user is registered or not;
and the GPU is used for acquiring a face three-dimensional model shaping parameter corresponding to the user if the user is registered, adjusting key points on the original face three-dimensional model according to the face three-dimensional model shaping parameter to obtain a target face three-dimensional model after virtual face-lifting, and mapping the target face three-dimensional model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
17. The image processing circuit of claim 16, wherein the GPU is further configured to:
extracting attribute features of the user;
and beautifying the original two-dimensional face image according to the attribute characteristics to obtain a beautified original two-dimensional face image.
18. The image processing circuit according to any of claims 14-17, further comprising a first display unit;
the first display unit is electrically connected with the processing unit and is used for displaying the adjustment control corresponding to the key point of the part to be face-lifted.
19. The image processing circuit according to any of claims 14-17, further comprising a second display unit;
and the second display unit is electrically connected with the processing unit and is used for displaying the target sample human face three-dimensional model after virtual face-lifting.
CN201810551058.3A 2018-05-31 2018-05-31 Virtual face-lifting method and device for face photographing Active CN108765273B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810551058.3A CN108765273B (en) 2018-05-31 2018-05-31 Virtual face-lifting method and device for face photographing
PCT/CN2019/089348 WO2019228473A1 (en) 2018-05-31 2019-05-30 Method and apparatus for beautifying face image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810551058.3A CN108765273B (en) 2018-05-31 2018-05-31 Virtual face-lifting method and device for face photographing

Publications (2)

Publication Number Publication Date
CN108765273A CN108765273A (en) 2018-11-06
CN108765273B true CN108765273B (en) 2021-03-09

Family

ID=64001237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810551058.3A Active CN108765273B (en) 2018-05-31 2018-05-31 Virtual face-lifting method and device for face photographing

Country Status (2)

Country Link
CN (1) CN108765273B (en)
WO (1) WO2019228473A1 (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765273B (en) * 2018-05-31 2021-03-09 Oppo广东移动通信有限公司 Virtual face-lifting method and device for face photographing
CN111353931B (en) * 2018-12-24 2023-10-03 黄庆武整形医生集团(深圳)有限公司 Shaping simulation method, system, readable storage medium and apparatus
CN110020600B (en) * 2019-03-05 2021-04-16 厦门美图之家科技有限公司 Method for generating a data set for training a face alignment model
CN110189406B (en) * 2019-05-31 2023-11-28 创新先进技术有限公司 Image data labeling method and device
CN110278029B (en) * 2019-06-25 2020-12-22 Oppo广东移动通信有限公司 Data transmission control method and related product
CN110310318B (en) * 2019-07-03 2022-10-04 北京字节跳动网络技术有限公司 Special effect processing method and device, storage medium and terminal
CN110321849B (en) * 2019-07-05 2023-12-22 腾讯科技(深圳)有限公司 Image data processing method, device and computer readable storage medium
CN110473295B (en) * 2019-08-07 2023-04-25 重庆灵翎互娱科技有限公司 Method and equipment for carrying out beautifying treatment based on three-dimensional face model
CN110675489B (en) * 2019-09-25 2024-01-23 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
JP2022512262A (en) 2019-11-21 2022-02-03 ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド Image processing methods and equipment, image processing equipment and storage media
CN111031305A (en) * 2019-11-21 2020-04-17 北京市商汤科技开发有限公司 Image processing method and apparatus, image device, and storage medium
CN112927343B (en) * 2019-12-05 2023-09-05 杭州海康威视数字技术股份有限公司 Image generation method and device
CN111178337B (en) * 2020-01-07 2020-12-29 南京甄视智能科技有限公司 Human face key point data enhancement method, device and system and model training method
GB2591994B (en) * 2020-01-31 2024-05-22 Fuel 3D Tech Limited A method for generating a 3D model
CN111370100A (en) * 2020-03-11 2020-07-03 深圳小佳科技有限公司 Face-lifting recommendation method and system based on cloud server
CN111539882A (en) * 2020-04-17 2020-08-14 华为技术有限公司 Interactive method for assisting makeup, terminal and computer storage medium
CN111966852B (en) * 2020-06-28 2024-04-09 北京百度网讯科技有限公司 Face-based virtual face-lifting method and device
CN112150618B (en) * 2020-10-16 2022-11-29 四川大学 Processing method and device for virtual shaping of canthus
CN113177879A (en) * 2021-04-30 2021-07-27 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and storage medium
CN113724396A (en) * 2021-09-10 2021-11-30 广州帕克西软件开发有限公司 Virtual face-lifting method and device based on face mesh
CN113657357B (en) * 2021-10-20 2022-02-25 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114120414B (en) * 2021-11-29 2022-11-01 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN113902790B (en) * 2021-12-09 2022-03-25 北京的卢深视科技有限公司 Beauty guidance method, device, electronic equipment and computer readable storage medium
CN114998554A (en) * 2022-05-05 2022-09-02 清华大学 Three-dimensional cartoon face modeling method and device
CN115239888B (en) * 2022-08-31 2023-09-12 北京百度网讯科技有限公司 Method, device, electronic equipment and medium for reconstructing three-dimensional face image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101777195B (en) * 2010-01-29 2012-04-25 浙江大学 Three-dimensional face model adjusting method
CN106940880A (en) * 2016-01-04 2017-07-11 中兴通讯股份有限公司 A kind of U.S. face processing method, device and terminal device
CN107993209B (en) * 2017-11-30 2020-06-12 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108040208A (en) * 2017-12-18 2018-05-15 信利光电股份有限公司 A kind of depth U.S. face method, apparatus, equipment and computer-readable recording medium
CN108765273B (en) * 2018-05-31 2021-03-09 Oppo广东移动通信有限公司 Virtual face-lifting method and device for face photographing

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6283858B1 (en) * 1997-02-25 2001-09-04 Bgk International Incorporated Method for manipulating images
CN105938627A (en) * 2016-04-12 2016-09-14 湖南拓视觉信息技术有限公司 Processing method and system for virtual plastic processing on face
CN107705356A (en) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Image processing method and device
CN107730445A (en) * 2017-10-31 2018-02-23 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Key Technologies of Computer-Aided Three-Dimensional Plastic Surgery; Tian Wei; China Master's Theses Full-text Database, Information Science and Technology Series; 2007-10-15; main text, pp. 9-52 *

Also Published As

Publication number Publication date
WO2019228473A1 (en) 2019-12-05
CN108765273A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN108765273B (en) Virtual face-lifting method and device for face photographing
CN108447017B (en) Face virtual face-lifting method and device
CN109118569B (en) Rendering method and device based on three-dimensional model
KR102362544B1 (en) Method and apparatus for image processing, and computer readable storage medium
CN106909875B (en) Face type classification method and system
CN107852533B (en) Three-dimensional content generation device and three-dimensional content generation method thereof
CN109690617B (en) System and method for digital cosmetic mirror
CN105938627B (en) Processing method and system for virtual shaping of human face
EP2923306B1 (en) Method and apparatus for facial image processing
US9691136B2 (en) Eye beautification under inaccurate localization
KR101733512B1 (en) Virtual experience system based on facial feature and method therefore
CN108682050B (en) Three-dimensional model-based beautifying method and device
JP2020526809A (en) Virtual face makeup removal, fast face detection and landmark tracking
WO2021036314A1 (en) Facial image processing method and apparatus, image device, and storage medium
WO2015029392A1 (en) Makeup support device, makeup support method, and makeup support program
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
CN109191393B (en) Three-dimensional model-based beauty method
CN109191584B (en) Three-dimensional model processing method and device, electronic equipment and readable storage medium
CN102074040A (en) Image processing apparatus, image processing method, and program
CN109242760B (en) Face image processing method and device and electronic equipment
KR20170092533A (en) A face pose rectification method and apparatus
CN113128376B (en) Wrinkle identification method and device based on image processing and terminal equipment
KR20160110038A (en) Image processing apparatus and image processing method
KR20200100020A (en) Three dimensional content producing apparatus and three dimensional content producing method thereof
KR101827998B1 (en) Virtual experience system based on facial feature and method therefore

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant