WO2019228473A1 - Method and device for beautifying a face image - Google Patents

Method and device for beautifying a face image

Info

Publication number
WO2019228473A1
WO2019228473A1 · PCT/CN2019/089348 · CN2019089348W
Authority
WO
WIPO (PCT)
Prior art keywords
face
dimensional
original
user
image
Prior art date
Application number
PCT/CN2019/089348
Other languages
English (en)
French (fr)
Inventor
黄杰文
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2019228473A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces

Definitions

  • the present application relates to the field of portrait processing technology, and in particular, to a method and a device for beautifying a face image.
  • in the related art, beautification is performed on the basis of a two-dimensional face image; the processing effect is poor, and the processed image lacks realism.
  • the present application is intended to solve at least one of the technical problems in the related art.
  • an embodiment of the first aspect of the present application proposes a method for beautifying a face image, including: obtaining a current original two-dimensional face image of a user and the depth information corresponding to the original two-dimensional face image; performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; querying pre-registered face information to determine whether the user is registered; if it is learned that the user is already registered, obtaining the beautification parameters of the three-dimensional face model corresponding to the user, and adjusting key points on the original three-dimensional face model according to those parameters to obtain a virtually beautified target three-dimensional face model; and mapping the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain a target two-dimensional face image.
  • an embodiment of the second aspect of the present application proposes a device for beautifying a face image, including: an obtaining module, configured to obtain a current original two-dimensional face image of the user and the depth information corresponding to it; a reconstruction module, configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; a query module, configured to query pre-registered face information to determine whether the user is registered; an adjustment module, configured to obtain, when it is learned that the user is already registered, the beautification parameters of the three-dimensional face model corresponding to the user, and to adjust the key points on the original three-dimensional face model according to those parameters to obtain a virtually beautified target three-dimensional face model; and a mapping module, configured to map the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain a target two-dimensional face image.
  • an embodiment of the third aspect of the present application provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method for beautifying a face image according to the embodiment of the first aspect is implemented.
  • an embodiment of the fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method for beautifying a face image described in the embodiment of the first aspect is implemented.
  • an embodiment of the fifth aspect of the present application provides an image processing circuit.
  • the image processing circuit includes: an image unit, a depth information unit, and a processing unit;
  • the image unit is configured to output a current original two-dimensional face image of the user
  • the depth information unit is configured to output depth information corresponding to the original two-dimensional face image
  • the processing unit is electrically connected to the image unit and the depth information unit, respectively, and is configured to: perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; query pre-registered face information to determine whether the user is registered; if it is learned that the user is already registered, obtain the beautification parameters of the three-dimensional face model corresponding to the user and adjust the key points on the original three-dimensional face model according to those parameters to obtain a virtually beautified target three-dimensional face model; and map the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain a target two-dimensional face image.
  • FIG. 1 is a schematic flowchart of a method for beautifying a face image provided by an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of a method for beautifying a face image provided by another embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of a depth image acquisition component according to an embodiment of the present application;
  • FIG. 4 is a schematic technical flowchart of a method for beautifying a face image provided by an embodiment of the present application;
  • FIG. 5 is a schematic technical flowchart of a method for beautifying a face image provided by another embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of a device for beautifying a face image according to an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of a device for beautifying a face image according to another embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of an image processing circuit in an embodiment; and
  • FIG. 10 is a schematic diagram of an image processing circuit as a possible implementation manner.
  • in view of the technical problem in the related art that beautification based on a two-dimensional face image gives a poor processing effect and an unrealistic processed image, in the embodiments of the present application a two-dimensional face image and the depth information corresponding to the face image are obtained, and three-dimensional reconstruction is performed according to the depth information and the face image to obtain a three-dimensional face model.
  • the beautification is then based on the three-dimensional face model; compared with two-dimensional beautification, the depth information of the face is taken into account, so that different parts of the face can be distinguished and beautified separately, which improves the realism of the beautification.
  • for example, when smoothing the skin of the nose, the depth information helps to clearly distinguish the nose from other parts, so blurring of the face caused by mistakenly smoothing other parts can be avoided.
  • FIG. 1 is a schematic flowchart of a method for beautifying a face image provided by an embodiment of the present application.
  • the face virtual beautification method in the embodiments of the present application can be applied to a computer device having a depth information and color information acquisition apparatus.
  • the apparatus having the functions of acquiring depth information and color information (two-dimensional information) may be a dual-camera system or the like.
  • the computer device may be a hardware device having an operating system, a touch screen, and/or a display screen, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • Step 101 Obtain a current original two-dimensional face image of the user and depth information corresponding to the original two-dimensional face image.
  • as one possible implementation, the hardware for obtaining the original two-dimensional face information is a visible-light RGB image sensor, and the original two-dimensional face image can be obtained by the RGB visible-light image sensor in the computer device.
  • specifically, the visible-light RGB image sensor may include a visible-light camera, which captures the visible light reflected by the imaging object to form an image and obtain the original two-dimensional face image corresponding to the imaging object.
  • as one possible implementation, the depth information is obtained through a structured light sensor. Specifically, as shown in FIG. 2, obtaining the depth information corresponding to each face image includes the following steps:
  • Step 201 Project structured light onto the face of the current user.
  • Step 202 Take a structured light image modulated by the current user's face.
  • Step 203 Demodulate phase information corresponding to each pixel of the structured light image to obtain depth information corresponding to a face image.
  • in this example, referring to FIG. 3, when the computer device is a smartphone 1000, the depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122. Step 201 may be implemented by the structured light projector 121, and steps 202 and 203 may be implemented by the structured light camera 122.
  • that is, the structured light projector 121 can be used to project structured light onto the current user's face; the structured light camera 122 can be used to capture the structured light image modulated by the current user's face, and to demodulate the phase information corresponding to each pixel of the structured light image to obtain the depth information.
  • specifically, after the structured light projector 121 projects structured light of a certain pattern onto the current user's face, a structured light image modulated by the face is formed on the surface of the face. The structured light camera 122 captures the modulated structured light image and then demodulates it to obtain the depth information.
  • the pattern of the structured light may be laser stripes, Gray codes, sinusoidal stripes, non-uniform speckle, or the like.
  • the structured light camera 122 may further be used to demodulate phase information corresponding to each pixel in the structured light image, convert the phase information into depth information, and generate a depth image according to the depth information.
  • specifically, compared with unmodulated structured light, the phase information of the modulated structured light is changed: the structured light presented in the structured light image is distorted, and the changed phase information characterizes the depth information of the object. Therefore, the structured light camera 122 first demodulates the phase information corresponding to each pixel in the structured light image and then calculates the depth information from that phase information.
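  • as a concrete illustration of this step, below is a minimal Python sketch of four-step phase-shifting demodulation. It assumes four fringe images captured at phase offsets of 0, π/2, π, and 3π/2; the triangulation constants and the final depth formula are illustrative assumptions, not values or equations taken from this application.

```python
import numpy as np

def demodulate_depth(i0, i1, i2, i3, baseline=75.0, focal=580.0, fringe_freq=0.05):
    """Recover per-pixel depth from four phase-shifted structured-light images (h x w arrays)."""
    # Wrapped phase of the modulated fringe pattern at each pixel.
    phase = np.arctan2(i3 - i1, i0 - i2)
    # An unmodulated pattern would show a linear phase ramp across columns;
    # the deviation from that ramp is what the surface depth distorts.
    ramp = np.arange(i0.shape[1]) * 2.0 * np.pi * fringe_freq
    deviation = np.angle(np.exp(1j * (phase - ramp)))        # re-wrap to [-pi, pi]
    # Convert the phase deviation to a pixel disparity, then triangulate.
    disparity = deviation / (2.0 * np.pi * fringe_freq)
    return baseline * focal / (focal + disparity)            # toy pinhole triangulation
```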
  • Step 102 Perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain a three-dimensional model of the original face.
  • specifically, three-dimensional reconstruction is performed according to the depth information and the original two-dimensional face image: the relevant points are given both depth information and two-dimensional information, and the original three-dimensional face model is reconstructed.
  • the original three-dimensional face model is a solid model that can fully restore the face; compared with a two-dimensional face model, it also contains information such as the solid angles of the facial features.
  • depending on the application scenario, the methods of performing three-dimensional reconstruction according to the depth information and the face image to obtain the original three-dimensional face model include, but are not limited to, the following:
  • as one possible implementation, keypoint recognition is performed on each two-dimensional sample face image to obtain positioning keypoints. For each face image, the relative position of each positioning keypoint in three-dimensional space is determined from the keypoint's depth information and its distances on the face image, including the x-axis and y-axis distances in two-dimensional space; adjacent positioning keypoints are then connected according to those relative positions to generate the original sample three-dimensional face model. The keypoints are characteristic points on the face and may include points on the eye corners, the nose tip, and the mouth corners.
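  • a minimal sketch of this keypoint-based reconstruction is given below, assuming detected 2D keypoints and a depth map aligned with the face image; it connects adjacent keypoints by triangulating in the image plane. The function and variable names are illustrative, not from this application.

```python
import numpy as np
from scipy.spatial import Delaunay

def reconstruct_face_mesh(keypoints_2d, depth_map):
    """Lift 2D positioning keypoints to 3D using their depth, then connect neighbours."""
    pts3d = np.array([(x, y, depth_map[int(y), int(x)]) for x, y in keypoints_2d], dtype=float)
    # Delaunay triangulation over the (x, y) positions links each keypoint
    # to its adjacent keypoints, producing the triangle mesh of the model.
    tri = Delaunay(pts3d[:, :2])
    return pts3d, tri.simplices   # vertices (n, 3) and triangle index list (m, 3)
```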
  • as another possible implementation, original two-dimensional face images are obtained from multiple angles, and the sharper face images are selected as the raw data for feature-point localization; the feature-localization results are used to roughly estimate the face angle.
  • a rough three-dimensional deformable face model is built from the face angle and contour; the facial feature points are adjusted by translation and scaling operations to the same scale as the deformable model, and the coordinate information of the points corresponding to the facial feature points is extracted to form a sparse three-dimensional deformable face model.
  • then, based on the rough estimate of the face angle and the sparse deformable model, a particle swarm optimization algorithm is used to iterate the three-dimensional face reconstruction, yielding a three-dimensional geometric face model; after that, texture mapping is used to map the face texture information in the input two-dimensional image onto the geometric model, giving a complete original three-dimensional face model.
  • in one embodiment of the present application, in order to improve the beautification effect, the original three-dimensional face model can also be constructed from a beautified original two-dimensional face image; the model so constructed looks better, which assures the aesthetics of the beautification.
  • specifically, the user attribute characteristics of the user are extracted; these may include gender, age, ethnicity, and skin color. The attribute characteristics may be obtained from the personal information entered by the user during registration, or by collecting and analyzing the two-dimensional face image information at registration time; the original two-dimensional face image is then beautified according to the user attribute characteristics to obtain a beautified original two-dimensional face image.
  • one way of beautifying the two-dimensional face image according to the user attribute characteristics is to establish in advance a correspondence between attribute characteristics and beautification parameters; for example, the beautification parameters for women may be acne removal, skin smoothing, and whitening, while those for men may be acne removal. After the user attribute characteristics are obtained, this correspondence is queried to get the matching beautification parameters, and the original two-dimensional face image is beautified accordingly.
  • besides the above, the ways of beautifying the original two-dimensional face image may also include brightness optimization, sharpness improvement, denoising, and obstruction handling, to ensure that the original three-dimensional face model is accurate.
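  • a minimal sketch of the attribute-to-parameter lookup described above follows; the table contents and the attribute encoding are illustrative assumptions.

```python
# Pre-established correspondence between user attribute characteristics and
# beautification parameters (illustrative values, per the example above).
BEAUTIFY_PRESETS = {
    "female": ["acne_removal", "skin_smoothing", "whitening"],
    "male":   ["acne_removal"],
}

def presets_for(attributes):
    """Look up the beautification parameters matching a user's attribute characteristics."""
    return BEAUTIFY_PRESETS.get(attributes.get("gender"), [])

# Example: presets_for({"gender": "female", "age": 30})
# -> ["acne_removal", "skin_smoothing", "whitening"]
```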
  • Step 103 Query pre-registered face information to determine whether the user is registered.
  • it should be understood that in this embodiment an optimized beautification service is provided for registered users. On the one hand, registered users get the best beautification effect when taking photos, especially group photos, which improves their satisfaction; on the other hand, it helps promote the related products.
  • in practice, to further improve the photographing experience of registered users, when a registered user is recognized, the user may be marked with a distinctive symbol, for example a face focus frame of a different color, or a focus frame of a different shape, that highlights the registered user.
  • in different application scenarios, the ways of querying the pre-registered face information to determine whether a user is registered include, but are not limited to, the following:
  • as one possible implementation, the facial features of registered users, such as special mark features like birthmarks and the shape and position features of facial parts like the nose and eyes, are obtained in advance. The original two-dimensional face image is analyzed, for example using image recognition techniques, to extract the user's facial features, and a pre-registered face database is queried to determine whether those features exist in it. If they exist, the user is determined to be registered; if not, the user is determined to be unregistered.
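  • a minimal sketch of such a registration query is shown below, assuming the extracted facial features are fixed-length embedding vectors compared by cosine similarity; the threshold is an illustrative assumption.

```python
import numpy as np

def is_registered(feature, registered_db, threshold=0.8):
    """Return the matching user id if the facial feature exists in the database, else None."""
    feature = np.asarray(feature, dtype=float)
    for user_id, ref in registered_db.items():
        ref = np.asarray(ref, dtype=float)
        cos = float(feature @ ref / (np.linalg.norm(feature) * np.linalg.norm(ref)))
        if cos >= threshold:
            return user_id    # the user is already registered
    return None               # the user is not registered
```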
  • Step 104: If it is learned that the user is registered, obtain the beautification parameters of the three-dimensional face model corresponding to the user, and adjust the key points on the original three-dimensional face model according to those parameters to obtain a virtually beautified target three-dimensional face model.
  • the beautification parameters of the three-dimensional face model include, but are not limited to, the adjustment positions and distances of the target key points to be adjusted in the model.
  • specifically, if the user is known to be a registered user, then in order to provide an optimized beautification service for that user, the beautification parameters of the three-dimensional face model corresponding to the user are obtained, and the key points on the original three-dimensional face model are adjusted according to those parameters to obtain the virtually beautified target three-dimensional face model.
  • it should be understood that the original three-dimensional face model is actually built from key points and the triangle mesh formed by connecting them; therefore, when the key points of the parts to be beautified are adjusted on the original model, the corresponding three-dimensional face model changes, yielding the virtually beautified target face model.
  • the beautification parameters of the three-dimensional face model corresponding to the user may be actively registered by the user, or may be generated automatically after analyzing the user's original three-dimensional face model.
  • as one possible implementation, two-dimensional sample face images of the user from multiple angles, together with the depth information corresponding to each sample image, are obtained, and three-dimensional reconstruction is performed according to them to obtain an original sample three-dimensional face model. The key points of the parts to be beautified on that model are adjusted to obtain a virtually beautified target sample three-dimensional face model, and the original and target sample models are compared to extract the beautification parameters of the three-dimensional face model corresponding to the user, for example by generating coordinate-difference information from the coordinate differences of the key points of the same part.
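  • a minimal sketch of extracting and reusing such coordinate-difference parameters follows, assuming the two sample models share the same keypoint ordering; the function names are illustrative.

```python
import numpy as np

def extract_beautify_params(original_keypoints, target_keypoints):
    """Per-keypoint coordinate differences between the original and beautified sample models."""
    return np.asarray(target_keypoints, dtype=float) - np.asarray(original_keypoints, dtype=float)

def apply_beautify_params(keypoints, params):
    """Shift the keypoints of a newly reconstructed model by the stored offsets."""
    return np.asarray(keypoints, dtype=float) + params
```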
  • in this embodiment, to make adjusting the three-dimensional face model more convenient, the key points of each part to be beautified are displayed on the original sample model, for example in a highlighted manner. A shift operation performed by the user on the key points of a part to be beautified is detected, such as a drag operation on a selected key point; the key points are adjusted according to the shift operation, and the adjusted key points are connected with the other adjacent key points to obtain the virtually beautified target sample three-dimensional face model.
  • in order to facilitate the user's operation, adjustment controls may be provided so that the user can adjust the three-dimensional face model in real time by operating the controls.
  • specifically, an adjustment control corresponding to the key points of each part to be beautified is generated; a touch operation performed by the user on the adjustment control is detected, and the corresponding adjustment parameters are obtained. The key points of the parts to be beautified on the original sample model are adjusted according to those parameters to obtain the virtually beautified target sample model, and the beautification parameters are obtained from the gap between the virtually beautified target sample model and the original sample model.
  • the adjustment parameters include the moving direction and the moving distance of the key points.
  • in this embodiment, beautification suggestion information may also be provided to the user, for example suggestions such as "plump the lips, fill out the apple cheeks". The suggestion information may be in text form, voice form, or the like.
  • if the user confirms the beautification suggestion information, the key points of the parts to be beautified and the adjustment parameters are determined according to it; for example, when the user confirms the above suggestions, the determined beautification parameters adjust the depth value of the mouth and the depth value of the cheeks.
  • the magnitude of the change in depth value can be determined from the depth value of the corresponding part of the user's original sample three-dimensional face model; to keep the adjustment looking natural, the difference between the adjusted depth value and the initial depth value is kept within a certain range. The key points of the parts to be beautified on the model are then adjusted according to the adjustment parameters to obtain the virtually beautified target sample three-dimensional face model.
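  • below is a minimal sketch of applying one such adjustment while keeping the change in depth within a fixed range so the result stays natural; the 10% relative bound is an illustrative assumption, since this application only states that the difference stays within a certain range.

```python
import numpy as np

def adjust_keypoint(point, direction, distance, max_rel_depth_change=0.10):
    """Move a keypoint along `direction` by `distance`, clamping the depth (z) change."""
    point = np.asarray(point, dtype=float)
    step = np.asarray(direction, dtype=float)
    step = step / np.linalg.norm(step) * distance
    # Keep the adjusted depth value close to the initial depth value.
    limit = abs(point[2]) * max_rel_depth_change
    step[2] = np.clip(step[2], -limit, limit)
    return point + step
```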
  • in order to further improve the aesthetics of the result, before the key points of the parts to be beautified are adjusted on the original three-dimensional face model, the skin texture map covering the surface of the model may itself be beautified.
  • it should be understood that when there is acne in the face image, the color of the corresponding part of the skin texture map may be red; when there are freckles in the face image, the color of the corresponding part may be brown or black; and when there is a mole in the face image, the color of the corresponding part may be black.
  • therefore, whether an abnormal region exists can be determined from the colors of the skin texture map of the original three-dimensional face model. If no abnormal region exists, no processing is needed; if one exists, it can be beautified with a corresponding strategy chosen according to the relative positions in three-dimensional space of the points within the region and the region's color information.
  • in general, acne protrudes from the skin surface, a mole may also protrude, and freckles do not; therefore, the height difference between the center point and the edge points of the abnormal region can be used to determine the anomaly type to which the region belongs, for example raised or not raised.
  • after the anomaly type is determined, the corresponding beautification strategy can be determined from the anomaly type and the color information, and the abnormal region is then smoothed, according to the matching skin color corresponding to the region, using the filtering range and filtering strength indicated by that strategy.
  • for example, when the anomaly type is raised and the color information is red, the abnormal region may be acne, and the corresponding degree of skin smoothing is strong; when the anomaly type is not raised and the color is cyan, the region may be a tattoo, and the corresponding degree of smoothing is weak.
  • alternatively, the skin color within the abnormal region may be filled in according to the matching skin color corresponding to the region.
  • for example, when the anomaly type is raised and the color information is red, the abnormal region may be acne, and the acne-removal strategy may be: smooth the acne, and fill the skin color within the corresponding abnormal region according to the normal skin color near the acne, recorded in the embodiments of the present application as the matching skin color. Or, when the anomaly type is not raised and the color is brown, the abnormal region may be freckles, and the freckle-removal strategy may be: fill the skin color within the corresponding abnormal region according to the normal skin color near the freckles, likewise recorded as the matching skin color.
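  • a minimal sketch of choosing a strategy from the anomaly type and color, as described above, is given below; the height threshold, the color tests, and the strategy fields are illustrative assumptions.

```python
def classify_anomaly(center_height, edge_height, mean_rgb, raised_threshold=0.3):
    """Pick a beautification strategy from the centre/edge height difference and the colour."""
    raised = (center_height - edge_height) > raised_threshold
    r, g, b = mean_rgb
    if raised and r > max(g, b):        # raised and red: likely acne
        return {"smoothing": "strong", "fill_skin_color": True}
    if not raised and r > g > b:        # flat and brown: likely freckles
        return {"smoothing": "none", "fill_skin_color": True}
    if not raised and b > r:            # flat and cyan: likely a tattoo
        return {"smoothing": "weak", "fill_skin_color": False}
    return {"smoothing": "weak", "fill_skin_color": False}
```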
  • as another possible implementation of the embodiments of the present application, beautification strategies corresponding to local face regions may be set in advance, where the local face regions may include facial parts such as the nose, lips, eyes, and cheeks.
  • for example, for the nose, the corresponding strategy can be brightening the nose tip and shading the nose wings to increase the three-dimensional appearance of the nose; for the cheeks, the corresponding strategy can be adding blush and/or skin smoothing.
  • therefore, a local face region can be identified from the skin texture map according to the color information and the relative position in the original three-dimensional face model, and the region can then be beautified according to its corresponding strategy.
  • optionally, when the local face region is an eyebrow, it may be smoothed according to the filtering strength indicated by the beautification strategy corresponding to eyebrows.
  • when the local face region is a cheek, it may be smoothed according to the filtering strength indicated by the strategy corresponding to cheeks. It should be noted that, to make the result more natural and the effect more prominent, the filtering strength indicated for cheeks may be greater than that indicated for eyebrows.
  • when the local face region belongs to the nose, the shading of the region can be increased according to the shadow strength indicated by the strategy corresponding to the nose.
  • beautifying a local face region based on its relative position in the original three-dimensional face model makes the beautified skin texture map more natural and the effect more prominent, and enables targeted beautification of local regions, improving the imaging effect and the user's photographing experience.
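  • a minimal sketch of such region-dependent smoothing follows, assuming a BGR image and a binary mask per facial part; it uses OpenCV's bilateral filter, with the cheek strength deliberately set higher than the eyebrow strength as described above. The strength values are illustrative assumptions.

```python
import cv2
import numpy as np

REGION_SIGMA = {"eyebrow": 20, "cheek": 60}   # cheek filtering strength > eyebrow strength

def smooth_region(image, mask, region):
    """Bilateral-filter only the pixels of `image` covered by the region's binary mask."""
    sigma = REGION_SIGMA[region]
    smoothed = cv2.bilateralFilter(image, d=9, sigmaColor=sigma, sigmaSpace=sigma)
    keep = mask[..., None] > 0                # broadcast the mask over the colour channels
    return np.where(keep, smoothed, image)
```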
  • of course, in practice, if it is learned that the user is not registered, a relatively good beautification service can still be provided.
  • in this embodiment, if the user is not registered, the user attribute characteristics are extracted; these may include gender, age, ethnicity, and skin color, determined for example by recognizing the user's hairstyle, jewelry, and whether make-up is worn. Preset standard three-dimensional face model beautification parameters corresponding to those attribute characteristics are then obtained, and the key points on the original three-dimensional face model are adjusted according to the standard parameters to obtain the virtually beautified target three-dimensional face model.
  • Step 105: Map the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain the target two-dimensional face image.
  • specifically, after the key points of the parts to be beautified have been adjusted on the original model and the virtually beautified target three-dimensional face model has been obtained, that model can be mapped onto a two-dimensional plane to obtain the target face image, and the target two-dimensional face image can be further beautified.
  • since the skin texture map is three-dimensional, beautifying it makes the beautified texture more natural; the target three-dimensional face model generated from the beautified model is then mapped onto the two-dimensional plane to obtain a beautified target two-dimensional face image. Further beautifying the target two-dimensional face image makes it more realistic and the effect more prominent, provides the user with a display of the result after beautification, and further improves the user's beautification experience.
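  • a minimal sketch of the final mapping step is shown below, projecting the model's vertices onto the image plane with a pinhole camera model; the intrinsic parameters are illustrative assumptions.

```python
import numpy as np

def project_to_plane(vertices, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Map 3D vertices (x, y, z) of the target face model to 2D pixel coordinates (u, v)."""
    v = np.asarray(vertices, dtype=float)
    z = np.clip(v[:, 2], 1e-6, None)      # guard against division by zero
    u = fx * v[:, 0] / z + cx
    w = fy * v[:, 1] / z + cy
    return np.stack([u, w], axis=1)
```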
  • in order to make the flow of the face image beautification method clearer to those skilled in the art, its application in a concrete scenario is illustrated below. In this example, calibration refers to calibrating the camera to determine the key points in three-dimensional space that correspond to the key points in the face image.
  • in the registration stage, as shown in FIG. 4, the face can be scanned through the camera-module preview to obtain two-dimensional sample face images of the user from multiple angles, for example nearly twenty sample images and depth maps from different angles collected for the subsequent three-dimensional face reconstruction; missing angles and the scanning progress can be prompted during scanning. Together with the depth information corresponding to each two-dimensional sample face image, three-dimensional reconstruction is performed to obtain the original sample three-dimensional face model.
  • in summary, the face image beautification method of the embodiments of the present application obtains the user's current original two-dimensional face image and the corresponding depth information, performs three-dimensional reconstruction according to them to obtain the original three-dimensional face model, and queries the pre-registered face information to determine whether the user is registered. If the user is registered, the beautification parameters of the three-dimensional face model corresponding to the user are obtained, the key points on the original model are adjusted according to those parameters to obtain the virtually beautified target three-dimensional face model, and that model is mapped onto a two-dimensional plane to obtain the target two-dimensional face image.
  • FIG. 6 is a schematic structural diagram of a face image beautification device according to an embodiment of the present application.
  • the face image beautification device includes an obtaining module 10, a reconstruction module 20, a query module 30, an adjustment module 40, and a mapping module 50.
  • the obtaining module 10 is configured to obtain a current original two-dimensional face image of the user and depth information corresponding to the original two-dimensional face image.
  • a reconstruction module 20 is configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain a three-dimensional model of the original face.
  • the query module 30 is configured to query pre-registered face information and determine whether the user is registered.
  • the query module 30 includes an extraction unit 31 and a determination unit 32.
  • the extraction unit 31 is configured to analyze the original two-dimensional face image and extract facial features of the user.
  • the determining unit 32 is configured to query a pre-registered face database to determine whether the facial feature exists, and if it exists, determine that the user is already registered, and if it does not exist, determine that the user is not registered.
  • the adjustment module 40 is configured to obtain, when it is learned that the user is registered, the beautification parameters of the three-dimensional face model corresponding to the user, and to adjust the key points on the original three-dimensional face model according to those parameters to obtain a virtually beautified target three-dimensional face model.
  • a mapping module 50 is configured to map the three-dimensional model of the target face after the virtual beautification to a two-dimensional plane to obtain a target two-dimensional face image.
  • the face image beautification device of the embodiments of the present application obtains the user's current original two-dimensional face image and the corresponding depth information, performs three-dimensional reconstruction according to them to obtain the original three-dimensional face model, and queries the pre-registered face information to determine whether the user is registered. If the user is registered, the beautification parameters of the three-dimensional face model corresponding to the user are obtained, the key points on the original model are adjusted according to those parameters to obtain the virtually beautified target three-dimensional face model, and that model is mapped onto a two-dimensional plane to obtain the target two-dimensional face image.
  • in this way, registered users are beautified on the basis of the three-dimensional face model, the beautification effect is optimized, and the target user's satisfaction with the effect and stickiness with the product are improved.
  • in order to implement the above embodiments, the present application also proposes a computer-readable storage medium having stored thereon a computer program that, when executed by a processor of a mobile terminal, implements the method for beautifying a face image as described in the foregoing embodiments.
  • the present application also proposes an electronic device.
  • FIG. 8 is a schematic diagram of the internal structure of the electronic device 200 in an embodiment.
  • the electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 connected through a system bus 210.
  • the memory 230 of the electronic device 200 stores an operating system and computer-readable instructions.
  • the computer-readable instructions can be executed by the processor 220 to implement the face beautification method in the embodiment of the present application.
  • the processor 220 is used to provide computing and control capabilities to support the operation of the entire electronic device 200.
  • the display 240 of the electronic device 200 may be a liquid crystal display or an electronic ink display, and the input device 250 may be a touch layer covered on the display 240, or may be a button, a trackball, or a touchpad provided on the housing of the electronic device 200. It can also be an external keyboard, trackpad, or mouse.
  • the electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (such as a smart bracelet, a smart watch, a smart helmet, or smart glasses), and the like.
  • FIG. 8 is only a schematic diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the electronic device 200 to which the solution of the present application is applied.
  • the specific electronic device 200 may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
  • based on the above embodiments, the present application further provides an image processing circuit, which includes an image unit 310, a depth information unit 320, and a processing unit 330, wherein:
  • the image unit 310 is configured to output a current original two-dimensional face image of the user.
  • the depth information unit 320 is configured to output depth information corresponding to the original two-dimensional face image.
  • the processing unit 330 is electrically connected to the image unit and the depth information unit, respectively, and is configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model, query the pre-registered face information to determine whether the user is registered, and, if it is learned that the user is already registered, obtain the beautification parameters of the three-dimensional face model corresponding to the user, adjust the key points on the original model according to those parameters to obtain the virtually beautified target three-dimensional face model, and map that model onto a two-dimensional plane to obtain the target two-dimensional face image.
  • in one embodiment, the image unit 310 may specifically include an image sensor 311 and an image signal processing (ISP) processor 312 that are electrically connected, wherein:
  • the image sensor 311 is configured to output raw image data.
  • An ISP processor 312 is configured to output the original two-dimensional face image according to the original image data.
  • specifically, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311, and outputs the face image in YUV or RGB format.
  • the image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units; it may obtain the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 312.
  • after processing the raw image data, the ISP processor 312 obtains a face image in YUV or RGB format and sends it to the processing unit 330.
  • when processing the raw image data, the ISP processor 312 can process it pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the raw image data and collect statistical information about the image data; the image processing operations may be performed with the same or different bit-depth precision.
  • in one embodiment, the depth information unit 320 includes a structured light sensor 321 and a depth map generation chip 322 that are electrically connected, wherein:
  • the structured light sensor 321 is configured to generate an infrared speckle pattern.
  • the depth map generation chip 322 is configured to output depth information corresponding to the original two-dimensional face image according to the infrared speckle map.
  • the structured light sensor 321 projects speckle structured light onto a subject, obtains the structured light reflected by the subject, and forms an infrared speckle pattern by imaging the reflected structured light.
  • the structured light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the chip can determine the morphological change of the structured light from the pattern and then determine the depth of the subject to obtain a depth map, which indicates the depth of each pixel in the infrared speckle pattern.
  • the depth map generation chip 322 sends the depth map to the processing unit 330.
  • in one embodiment, the processing unit 330 includes a CPU 331 and a graphics processing unit (GPU) 332 that are electrically connected, wherein:
  • the CPU 331 is configured to align the face image and the depth map according to the calibration data, and output a three-dimensional model of the face according to the aligned face image and the depth map.
  • the GPU 332 is configured to obtain, if it is learned that the user is registered, the beautification parameters of the three-dimensional face model corresponding to the user, adjust the key points on the original three-dimensional face model according to those parameters to obtain a virtually beautified target three-dimensional face model, and map that model onto a two-dimensional plane to obtain a target two-dimensional face image.
  • specifically, the CPU 331 obtains the face image from the ISP processor 312 and the depth map from the depth map generation chip 322, and, combining them with the calibration data obtained in advance, aligns the face image with the depth map, thereby determining the depth information corresponding to each pixel in the face image. The CPU 331 then performs three-dimensional reconstruction based on the depth information and the face image to obtain the three-dimensional face model.
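  • a minimal sketch of such an alignment follows, assuming the calibration data consists of both cameras' intrinsic matrices plus a rotation and translation from the depth camera to the color camera, and that the two images share a resolution; all names are illustrative assumptions.

```python
import numpy as np

def align_depth_to_color(depth, k_depth, k_color, rot, trans):
    """Reproject each depth pixel into the color camera so color pixels gain depth values."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel().astype(float)
    # Back-project depth pixels to 3D points in the depth camera frame.
    pts = np.linalg.inv(k_depth) @ np.vstack([us.ravel() * z, vs.ravel() * z, z])
    # Move the points into the color camera frame and project them.
    pts_c = rot @ pts + trans.reshape(3, 1)
    uvw = k_color @ pts_c
    uv = (uvw[:2] / np.clip(uvw[2], 1e-6, None)).round().astype(int)
    aligned = np.zeros_like(depth, dtype=float)
    ok = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h) & (z > 0)
    aligned[uv[1, ok], uv[0, ok]] = z[ok]
    return aligned
```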
  • the CPU 331 sends the three-dimensional face model to the GPU 332, so that the GPU 332 performs the face image beautification method described in the foregoing embodiments on it to obtain the target two-dimensional face image.
  • the image processing circuit may further include a first display unit 341.
  • the first display unit 341 is electrically connected to the processing unit 330 and is configured to display adjustment controls corresponding to key points of a part to be beautified.
  • the image processing circuit may further include a second display unit 342.
  • the second display unit 342 is electrically connected to the processing unit 330 and is configured to display the virtually beautified target sample three-dimensional face model.
  • the image processing circuit may further include: an encoder 350 and a memory 360.
  • the beautified face image processed by the GPU 332 may also be encoded by the encoder 350 and stored in the memory 360, where the encoder 350 may be implemented by a coprocessor.
  • there may be more than one memory 360, or the memory 360 may be divided into multiple storage spaces; the image data processed by the GPU 332 may be stored in a dedicated memory or dedicated storage space, and the memory may have a direct memory access (DMA) feature.
  • the memory 360 may be configured to implement one or more frame buffers.
  • FIG. 10 is a schematic diagram of an image processing circuit as a possible implementation manner. For ease of description, only aspects related to the embodiments of the present application are shown.
  • specifically, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311, and outputs the face image in YUV or RGB format.
  • the image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units; it may obtain the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 312.
  • after processing the raw image data, the ISP processor 312 obtains a face image in YUV or RGB format and sends it to the CPU 331.
  • when processing the raw image data, the ISP processor 312 can process it pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the raw image data and collect statistical information about the image data; the image processing operations may be performed with the same or different bit-depth precision.
  • the structured light sensor 321 projects speckle structured light onto a subject, acquires the structured light reflected by the subject, and forms an infrared speckle pattern based on the reflected structured light.
  • the structured light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the chip can determine the morphological change of the structured light from the pattern and then determine the depth of the subject to obtain a depth map, which indicates the depth of each pixel in the infrared speckle pattern.
  • the depth map generation chip 322 sends the depth map to the CPU 331.
  • the CPU 331 obtains the face image from the ISP processor 312 and the depth map from the depth map generation chip 322; combining them with the calibration data obtained in advance, it aligns the face image with the depth map, thereby determining the depth information corresponding to each pixel in the face image, and then performs three-dimensional reconstruction based on the depth information and the face image to obtain the three-dimensional face model.
  • the CPU 331 sends the three-dimensional face model to the GPU 332, so that the GPU 332 executes the method described in the foregoing embodiments on it to realize the virtual beautification of the face and obtain the virtually beautified face image.
  • the virtually beautified face image processed by the GPU 332 may be displayed on the display 340 (including the first display unit 341 and the second display unit 342 described above), and/or encoded by the encoder 350 and stored in the memory 360, where:
  • the encoder 350 may be implemented by a coprocessor.
  • there may be more than one memory 360, or the memory 360 may be divided into multiple storage spaces; the image data processed by the GPU 332 may be stored in a dedicated memory or dedicated storage space, and the memory may have a direct memory access (DMA) feature.
  • the memory 360 may be configured to implement one or more frame buffers.
  • the following are the steps by which the processor 220 in FIG. 8 or the image processing circuit in FIG. 10 (specifically the CPU 331 and the GPU 332) implements the control method:
  • the CPU 331 obtains the two-dimensional face image and the depth information corresponding to it, and performs three-dimensional reconstruction according to the depth information and the face image to obtain the three-dimensional face model; the GPU 332 obtains the beautification parameters of the three-dimensional face model corresponding to the user and adjusts the key points on the original three-dimensional face model according to those parameters to obtain the virtually beautified target three-dimensional face model; the GPU 332 then maps the virtually beautified target model onto a two-dimensional plane to obtain the target two-dimensional face image.
  • in addition, the terms "first" and "second" are used for descriptive purposes only and should not be interpreted as indicating or implying relative importance or implicitly indicating the number of technical features indicated; thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless specifically defined otherwise.
  • any process or method description in a flowchart, or otherwise described herein, can be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process. The scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved; this should be understood by those skilled in the art to which the embodiments of the present application pertain.
  • the logic and/or steps represented in a flowchart or otherwise described herein, for example a sequenced list of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device).
  • a "computer-readable medium” may be any device that can contain, store, communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • more specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection (electronic device) with one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM).
  • in addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise suitably processing it, and then stored in a computer memory.
  • each part of the application may be implemented by hardware, software, firmware, or a combination thereof.
  • for example, if multiple steps or methods are implemented by software or firmware stored in a memory and executed by a suitable instruction execution system, then, as in another embodiment, they may be implemented by any one or a combination of the following techniques known in the art: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
  • a person of ordinary skill in the art can understand that all or part of the steps carried by the methods in the foregoing embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the program, when executed, includes one of the steps of the method embodiments or a combination thereof.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist separately physically, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or software functional modules. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the aforementioned storage medium may be a read-only memory, a magnetic disk, or an optical disk.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

This application proposes a method and device for beautifying a face image. The method includes: obtaining the user's current original two-dimensional face image and the depth information corresponding to the original two-dimensional face image; performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; querying pre-registered face information to determine whether the user is registered; if it is learned that the user is already registered, obtaining the beautification parameters of the three-dimensional face model corresponding to the user, and adjusting the key points on the original three-dimensional face model according to those parameters to obtain a virtually beautified target three-dimensional face model; and mapping the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain a target two-dimensional face image. In this way, registered users are beautified on the basis of a three-dimensional face model, which optimizes the beautification effect and improves the target user's satisfaction with the effect and stickiness with the product.

Description

Method and device for beautifying a face image
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 201810551058.3, entitled "Virtual cosmetic-surgery method and device for face photographing", filed by OPPO广东移动通信有限公司 on May 31, 2018.
Technical field
This application relates to the field of portrait processing technology, and in particular to a method and device for beautifying a face image.
Background
With the popularization of terminal devices, more and more users are accustomed to taking photos with them, so the photographing functions of terminal devices have become increasingly diverse; for example, related camera applications provide users with beautification functions.
In the related art, beautification is performed on the basis of a two-dimensional face image; the processing effect is poor, and the processed image lacks realism.
Summary of the invention
This application is intended to solve at least one of the technical problems in the related art at least to some extent.
To this end, an embodiment of the first aspect of this application proposes a method for beautifying a face image, including: obtaining the user's current original two-dimensional face image and the depth information corresponding to the original two-dimensional face image; performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; querying pre-registered face information to determine whether the user is registered; if it is learned that the user is already registered, obtaining the beautification parameters of the three-dimensional face model corresponding to the user, and adjusting the key points on the original three-dimensional face model according to those parameters to obtain a virtually beautified target three-dimensional face model; and mapping the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain a target two-dimensional face image.
To this end, an embodiment of the second aspect of this application proposes a device for beautifying a face image, including: an obtaining module, configured to obtain the user's current original two-dimensional face image and the depth information corresponding to it; a reconstruction module, configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; a query module, configured to query pre-registered face information to determine whether the user is registered; an adjustment module, configured to obtain, when it is learned that the user is already registered, the beautification parameters of the three-dimensional face model corresponding to the user, and to adjust the key points on the original three-dimensional face model according to those parameters to obtain a virtually beautified target three-dimensional face model; and a mapping module, configured to map the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain a target two-dimensional face image.
To this end, an embodiment of the third aspect of this application proposes an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method for beautifying a face image according to the embodiment of the first aspect is implemented.
To this end, an embodiment of the fourth aspect of this application proposes a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method for beautifying a face image according to the embodiment of the first aspect is implemented.
To this end, an embodiment of the fifth aspect of this application proposes an image processing circuit, including an image unit, a depth information unit, and a processing unit;
the image unit is configured to output the user's current original two-dimensional face image;
the depth information unit is configured to output the depth information corresponding to the original two-dimensional face image;
the processing unit is electrically connected to the image unit and the depth information unit, respectively, and is configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model, query pre-registered face information to determine whether the user is registered, obtain, if it is learned that the user is already registered, the beautification parameters of the three-dimensional face model corresponding to the user, adjust the key points on the original three-dimensional face model according to those parameters to obtain a virtually beautified target three-dimensional face model, and map the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain a target two-dimensional face image. The technical solution provided by this application includes at least the following beneficial effects:
registered users are beautified on the basis of the three-dimensional face model, which optimizes the beautification effect and improves the target user's satisfaction with the effect and stickiness with the product.
Additional aspects and advantages of this application will be set forth in part in the following description; they will in part become apparent from that description or be learned through practice of this application.
Brief description of the drawings
The above and/or additional aspects and advantages of this application will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a method for beautifying a face image provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of a method for beautifying a face image provided by another embodiment of this application;
FIG. 3 is a schematic structural diagram of a depth image acquisition component provided by an embodiment of this application;
FIG. 4 is a schematic technical flowchart of a method for beautifying a face image provided by an embodiment of this application;
FIG. 5 is a schematic technical flowchart of a method for beautifying a face image provided by another embodiment of this application;
FIG. 6 is a schematic structural diagram of a device for beautifying a face image according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of a device for beautifying a face image according to another embodiment of this application;
FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of this application; and
FIG. 9 is a schematic diagram of an image processing circuit in an embodiment;
FIG. 10 is a schematic diagram of an image processing circuit as a possible implementation manner.
Detailed description
Embodiments of this application are described in detail below; examples of the embodiments are shown in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements, or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain this application; they should not be construed as limiting it.
Addressing the technical problem in the prior art that beautification based on a two-dimensional face image gives a poor processing effect and an unrealistic processed image, in the embodiments of this application a two-dimensional face image and the depth information corresponding to it are obtained, three-dimensional reconstruction is performed according to the depth information and the face image to obtain a three-dimensional face model, and beautification is performed on the basis of that model. Compared with two-dimensional beautification, the depth information of the face is taken into account, so different parts of the face can be distinguished and beautified separately and the realism of the beautification is improved. For example, when beautification is based on the three-dimensional face model and the nose is being smoothed, the depth information helps to clearly distinguish the nose from other parts, so blurring of the face caused by mistakenly smoothing other parts is avoided.
The method and device for beautifying a face image according to the embodiments of this application are described below with reference to the drawings.
FIG. 1 is a schematic flowchart of a method for beautifying a face image provided by an embodiment of this application.
The face virtual beautification method of the embodiments of this application can be applied to a computer device having a depth information and color information acquisition apparatus. The apparatus having the functions of acquiring depth information and color information (two-dimensional information) may be a dual-camera system or the like, and the computer device may be a hardware device having an operating system, a touch screen, and/or a display screen, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
Step 101: Obtain the user's current original two-dimensional face image and the depth information corresponding to the original two-dimensional face image.
It should be noted that, depending on the application scenario, different hardware devices are used in the embodiments of this application to obtain the depth information and the original two-dimensional face image information:
As one possible implementation, the hardware for obtaining the original two-dimensional face information is a visible-light RGB image sensor; the original two-dimensional face image can be obtained by the RGB visible-light image sensor in the computer device. Specifically, the visible-light RGB image sensor may include a visible-light camera, which captures the visible light reflected by the imaging object to form an image and obtain the original two-dimensional face image corresponding to the imaging object.
As one possible implementation, the depth information is obtained through a structured light sensor. Specifically, as shown in FIG. 2, obtaining the depth information corresponding to each face image includes the following steps:
Step 201: Project structured light onto the current user's face.
Step 202: Capture the structured light image modulated by the current user's face.
Step 203: Demodulate the phase information corresponding to each pixel of the structured light image to obtain the depth information corresponding to the face image.
In this example, referring to FIG. 3, when the computer device is a smartphone 1000, the depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122. Step 201 may be implemented by the structured light projector 121, and steps 202 and 203 may be implemented by the structured light camera 122.
That is, the structured light projector 121 can be used to project structured light onto the current user's face; the structured light camera 122 can be used to capture the structured light image modulated by the current user's face, and to demodulate the phase information corresponding to each pixel of the structured light image to obtain the depth information.
Specifically, after the structured light projector 121 projects structured light of a certain pattern onto the current user's face, a structured light image modulated by the face is formed on the surface of the face. The structured light camera 122 captures the modulated structured light image and then demodulates it to obtain the depth information. The pattern of the structured light may be laser stripes, Gray codes, sinusoidal stripes, non-uniform speckle, or the like.
The structured light camera 122 can further be used to demodulate the phase information corresponding to each pixel in the structured light image, convert the phase information into depth information, and generate a depth image according to the depth information.
Specifically, compared with unmodulated structured light, the phase information of the modulated structured light is changed: the structured light presented in the structured light image is distorted, and the changed phase information characterizes the depth information of the object. Therefore, the structured light camera 122 first demodulates the phase information corresponding to each pixel in the structured light image and then calculates the depth information from that phase information.
Step 102: Perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model.
Specifically, three-dimensional reconstruction is performed according to the depth information and the original two-dimensional face image: the relevant points are given depth information and two-dimensional information, and the original three-dimensional face model is reconstructed. This model is a solid model that can fully restore the face; compared with a two-dimensional face model, it also contains information such as the solid angles of the facial features.
Depending on the application scenario, the ways of performing three-dimensional reconstruction according to the depth information and the face image to obtain the original three-dimensional face model include, but are not limited to, the following:
As one possible implementation, keypoint recognition is performed on each two-dimensional sample face image to obtain positioning keypoints. For each face image, the relative position of each positioning keypoint in three-dimensional space is determined from the keypoint's depth information and its distances on the face image, including the x-axis and y-axis distances in two-dimensional space; adjacent positioning keypoints are then connected according to those relative positions to generate the original sample three-dimensional face model. The keypoints are characteristic points on the face and may include points on the eye corners, the nose tip, and the mouth corners.
As another possible implementation, original two-dimensional face images are obtained from multiple angles, the sharper face images are selected as the raw data for feature-point localization, and the feature-localization results are used to roughly estimate the face angle. A rough three-dimensional deformable face model is built from the face angle and contour; the facial feature points are adjusted by translation and scaling operations to the same scale as the deformable model, and the coordinate information of the points corresponding to the facial feature points is extracted to form a sparse three-dimensional deformable face model.
Then, based on the rough estimate of the face angle and the sparse deformable model, a particle swarm optimization algorithm is used to iterate the three-dimensional face reconstruction, yielding a three-dimensional geometric face model. After the geometric model is obtained, texture mapping is used to map the face texture information in the input two-dimensional image onto it, giving a complete original three-dimensional face model.
In one embodiment of this application, in order to improve the beautification effect, the original three-dimensional face model can also be constructed from a beautified original two-dimensional face image; the model so constructed looks better, which assures the aesthetics of the beautification.
Specifically, the user attribute characteristics of the user are extracted; these may include gender, age, ethnicity, and skin color. The attribute characteristics may be obtained from the personal information entered by the user during registration, or by collecting and analyzing the two-dimensional face image information at registration time. The original two-dimensional face image is beautified according to the user attribute characteristics to obtain a beautified original two-dimensional face image. One way of doing so is to establish in advance a correspondence between user attribute characteristics and beautification parameters; for example, the beautification parameters for women may be acne removal, skin smoothing, and whitening, while those for men may be acne removal. After the user attribute characteristics are obtained, this correspondence is queried to get the matching beautification parameters, and the original two-dimensional face image is beautified accordingly.
Of course, besides the above, the ways of beautifying the original two-dimensional face image may also include brightness optimization, sharpness improvement, denoising, and obstruction handling, to ensure that the original three-dimensional face model is accurate.
Step 103: Query the pre-registered face information to determine whether the user is registered.
It should be understood that in this embodiment an optimized beautification service is provided for registered users. On the one hand, registered users get the best beautification effect when taking photos, especially group photos, which improves their satisfaction; on the other hand, it helps promote the related products. In practice, to further improve the photographing experience of registered users, when a registered user is recognized, the user may be marked with a distinctive symbol, for example a face focus frame of a different color, or a focus frame of a different shape, that highlights the registered user.
In different application scenarios, the ways of querying the pre-registered face information to determine whether a user is registered include, but are not limited to, the following:
As one possible implementation, the facial features of registered users, such as special mark features like birthmarks and the shape and position features of facial parts like the nose and eyes, are obtained in advance. The original two-dimensional face image is analyzed, for example using image recognition techniques, to extract the user's facial features, and a pre-registered face database is queried to determine whether those features exist in it; if they exist, the user is determined to be registered, and if not, the user is determined to be unregistered.
Step 104: if it is learned that the user is registered, obtain the beautification parameters of the three-dimensional face model corresponding to the user, and adjust the key points on the original three-dimensional face model according to the beautification parameters to obtain the virtually beautified target three-dimensional face model.

The beautification parameters of the three-dimensional face model include, but are not limited to, the adjusted positions of, and the adjustment distances for, the target key points to be adjusted in the model.

Specifically, if it is learned that the user is a registered user, then, in order to provide that registered user with an optimized beautification service, the beautification parameters of the three-dimensional face model corresponding to the user are obtained, and the key points on the original three-dimensional face model are adjusted according to them to obtain the virtually beautified target three-dimensional face model. It can be understood that the original three-dimensional face model is in fact built from key points and the triangular mesh formed by connecting them; therefore, when the key points of the part to be beautified on the original three-dimensional face model are adjusted, the corresponding model changes accordingly, yielding the virtually beautified target face model.

The beautification parameters of the three-dimensional face model corresponding to the user may be actively registered by the user, or may be generated automatically after analyzing the user's original three-dimensional face model.

As one possible implementation, two-dimensional sample face images of the user are obtained from multiple angles, together with the depth information corresponding to each two-dimensional sample face image; three-dimensional reconstruction is performed according to the depth information and the sample images to obtain an original sample three-dimensional face model; the key points of the parts to be beautified on the original sample model are adjusted to obtain a virtually beautified target sample three-dimensional face model; and the original sample three-dimensional face model and the target sample three-dimensional face model are compared to extract the beautification parameters corresponding to the user, for example generating coordinate-difference information from the differences between the key-point coordinates of the same parts.
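A minimal sketch of this compare-and-extract step, storing per-key-point coordinate deltas as the beautification parameters (one simple realisation of the coordinate-difference information mentioned above; the dictionary layout is an assumption):

```python
import numpy as np

def extract_beautify_params(original_kp, target_kp):
    """Derive per-key-point beautification parameters by comparison.

    original_kp, target_kp -- (N, 3) arrays of corresponding landmarks on
    the original and the user-approved target sample models.
    """
    deltas = target_kp - original_kp      # (N, 3) offset per landmark
    return {"keypoint_deltas": deltas}

def apply_beautify_params(face_kp, params):
    """Re-apply the stored offsets to a newly reconstructed face model."""
    return face_kp + params["keypoint_deltas"]
```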
In this embodiment, in order to make the adjustment of the three-dimensional face model more convenient, the key points of each part to be beautified are displayed on the original sample three-dimensional face model, for example in a highlighted manner. A shift operation performed by the user on the key points of the part to be beautified is detected, for example a drag operation on a selected key point; the key point is adjusted according to the shift operation; and the virtually beautified target sample three-dimensional face model is obtained from the adjusted key point and its connections to the other adjacent key points.

In actual execution, the adjustment of the key points of the parts to be beautified on the original sample three-dimensional face model may be received in different implementations, illustrated as follows:

First example:

In this example, to facilitate the user's operation, adjustment controls may be provided to the user, so that the three-dimensional face model is adjusted in real time through the user's operations on the controls.

Specifically, in this embodiment, an adjustment control corresponding to the key points of each part to be beautified is generated; a touch operation performed by the user on the adjustment control corresponding to the key points of the part to be beautified is detected, and the corresponding adjustment parameters are obtained; the key points of the part to be beautified on the original sample three-dimensional face model are adjusted according to the adjustment parameters to obtain the virtually beautified target sample three-dimensional face model; and the beautification parameters are obtained from the difference between this target sample model and the original sample model. The adjustment parameters include the movement direction and movement distance of the key points, and the like.

In this embodiment, beautification suggestion information may also be provided to the user, for example suggestions such as "plump the lips, fill out the apple cheeks", where the suggestion information may be in text form, voice form, and so on. If the user confirms the beautification suggestion information, the key points of the part to be beautified and the adjustment parameters are determined according to it; for example, when the user confirms the above suggestion, the determined beautification parameters are adjustments to the depth values of the mouth and the cheeks, where the magnitude of the depth change may be determined according to the depth values of the corresponding parts on the user's original sample three-dimensional face model. To keep the adjusted result natural, the difference between the adjusted depth value and the initial depth value is kept within a certain range. The key points of the part to be beautified on the original sample three-dimensional face model are then adjusted according to the adjustment parameters to obtain the virtually beautified target sample three-dimensional face model.
In order to further improve the aesthetics of the beautification effect, before the key points of the parts to be beautified on the original three-dimensional face model are adjusted, the skin texture map covering the surface of the original three-dimensional face model may also be beautified to obtain a beautified original three-dimensional face model.

It can be understood that when there is acne in the face image, the color of the corresponding area in the skin texture map may be red; when there are freckles, the color of the corresponding area may be brown or black; and when there is a dark mole, the color of the corresponding area may be black.

Therefore, whether an abnormal area exists can be determined according to the colors of the skin texture map of the original three-dimensional face model. When no abnormal area exists, no processing is required; when an abnormal area exists, it can be beautified with a corresponding beautification strategy, determined further from the relative positions in three-dimensional space of the points within the abnormal area, together with the color information of the area.

In general, acne protrudes from the skin surface and a dark mole may also protrude, whereas freckles do not. Therefore, in the embodiments of the present application, the abnormality type of an abnormal area can be determined from the height difference between its center point and its edge points; for example, the type may be raised or non-raised. Once the type is determined, the corresponding beautification strategy can be selected according to the type and the color information, and skin smoothing is then applied to the abnormal area, according to its matching skin color, using the filtering range and filtering strength indicated by the strategy.
For example, when the abnormality type is raised and the color information is red, the abnormal area may be acne, and the corresponding smoothing strength is relatively strong; when the type is non-raised and the color is cyan, the abnormal area may be a tattoo, and the corresponding smoothing strength is relatively weak.

Alternatively, the skin color within the abnormal area may be filled in according to the matching skin color corresponding to the area.

For example, when the abnormality type is raised and the color information is red, the abnormal area may be acne, and the acne-removal strategy may be: smoothing the acne, and filling in the skin color within the corresponding abnormal area according to the normal skin color near the acne, referred to in the embodiments of the present application as the matching skin color. Or, when the type is non-raised and the color is brown, the abnormal area may be freckles, and the freckle-removal strategy may be: filling in the skin color within the corresponding abnormal area according to the normal skin color near the freckles, that is, the matching skin color.
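Putting the height test and the color rules together, a sketch of the blemish classification might look like this. The 0.3 mm threshold, the color tests, and the strength values are illustrative assumptions rather than values from the text:

```python
import numpy as np

def classify_blemish(center_depth, edge_depths, mean_rgb):
    """Pick a smoothing strategy for one abnormal skin region.

    center_depth -- depth of the region's centre point
    edge_depths  -- depths of its edge points
    mean_rgb     -- average (R, G, B) colour of the region
    """
    # "Raised" means the centre sits closer to the camera than its edge ring.
    raised = (np.mean(edge_depths) - center_depth) > 0.3  # mm; assumed
    r, g, b = mean_rgb
    if raised and r > g and r > b:           # reddish bump: likely acne
        return {"type": "acne", "smooth_strength": 0.9}
    if not raised and max(r, g, b) < 120:    # dark flat spot: likely freckle
        return {"type": "freckle", "smooth_strength": 0.5}
    return {"type": "other", "smooth_strength": 0.2}
```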
In the present application, since the depth information within each closed region whose vertices are key points is consistent in the original three-dimensional face model, each closed region may be beautified separately when the skin texture map covering the surface of the model is beautified. This increases the reliability of the pixel values in the beautified closed regions and improves the beautification effect.
As another possible implementation of the embodiments of the present application, beautification strategies corresponding to local face regions may be preset, where the local face regions may include facial parts such as the nose, lips, eyes, and cheeks. For example, the strategy corresponding to the nose may be brightening the nose tip and shading the nose wings, so as to enhance the three-dimensional appearance of the nose; the strategy corresponding to the cheeks may be adding blush and/or skin smoothing.

Therefore, in the embodiments of the present application, local face regions can be identified from the skin texture map according to the color information and the relative positions in the original three-dimensional face model, and each local face region can then be beautified according to its corresponding strategy.

Optionally, when the local face region is the eyebrows, skin smoothing may be applied to it according to the filtering strength indicated by the strategy corresponding to the eyebrows.

When the local face region is the cheeks, skin smoothing may be applied to it according to the filtering strength indicated by the strategy corresponding to the cheeks. It should be noted that, to make the beautified result more natural and the effect more prominent, the filtering strength indicated by the cheek strategy may be greater than the filtering strength indicated by the eyebrow strategy.

When the local face region belongs to the nose, shadows may be added to it according to the shadow strength indicated by the strategy corresponding to the nose.
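A sketch of such a per-region strategy table, using a bilateral filter for smoothing and a simple darkening pass for nose shading. The concrete numbers are assumptions; only the ordering (cheek smoothing stronger than eyebrow smoothing) follows the text:

```python
import cv2

# Illustrative per-region strategies; values are assumed for demonstration.
REGION_STRATEGY = {
    "eyebrow": {"op": "smooth", "sigma": 15},
    "cheek":   {"op": "smooth", "sigma": 45},   # stronger than eyebrows
    "nose":    {"op": "shade",  "shadow_strength": 0.15},
}

def beautify_region(image_bgr, mask, region):
    """Apply the region's strategy to pixels selected by a boolean mask."""
    s = REGION_STRATEGY[region]
    if s["op"] == "smooth":
        smoothed = cv2.bilateralFilter(image_bgr, d=9,
                                       sigmaColor=s["sigma"],
                                       sigmaSpace=s["sigma"])
        image_bgr[mask] = smoothed[mask]
    elif s["op"] == "shade":  # darken nose-wing pixels to add contour
        image_bgr[mask] = (image_bgr[mask] *
                           (1 - s["shadow_strength"])).astype("uint8")
    return image_bgr
```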
In the present application, beautifying a local face region on the basis of its relative position in the original three-dimensional face model makes the beautified skin texture map more natural and the beautification effect more prominent, and enables targeted beautification of local face regions, thereby improving the imaging effect and the user's photographing experience.

Of course, in practical applications, if it is learned that the user is not registered, a relatively good beautification service may still be provided to the user.

In this embodiment, if it is learned that the user is not registered, the user attribute features of the user are extracted, where the user attribute features may include gender, age, ethnicity, and skin color; for example, the user's hairstyle, jewelry, and whether makeup is worn may be identified on the basis of image analysis techniques, so as to determine the user attribute features. Then, the preset standard beautification parameters of the three-dimensional face model corresponding to the user attribute features are obtained, and the key points on the original three-dimensional face model are adjusted according to these standard beautification parameters to obtain the virtually beautified target three-dimensional face model.

Step 105: map the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain the target two-dimensional face image.

Specifically, after the key points of the parts to be beautified on the original three-dimensional face model have been adjusted and the virtually beautified target three-dimensional face model has been obtained, the virtually beautified target three-dimensional face model may be mapped onto a two-dimensional plane to obtain the target face image, and the target two-dimensional face image may be further beautified.
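The mapping onto the two-dimensional plane can be sketched as a camera projection of the beautified model's vertices. A pinhole model is assumed here, and rendering details such as z-buffering and texture lookup are omitted:

```python
import numpy as np

def project_to_image(vertices, fx, fy, cx, cy):
    """Project 3D model vertices onto the 2D image plane.

    vertices -- (N, 3) camera-space points of the beautified model.
    The pinhole intrinsics are an assumption of this sketch; the patent
    does not prescribe a particular camera model.
    """
    v = np.asarray(vertices, dtype=float)
    z = v[:, 2]
    u = fx * v[:, 0] / z + cx
    w = fy * v[:, 1] / z + cy
    return np.stack([u, w], axis=1)  # (N, 2) pixel coordinates
```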
In the present application, since the skin texture map is three-dimensional, beautifying it makes the beautified skin texture map more natural. Mapping the target three-dimensional face model, generated by virtual beautification from the beautified model, onto a two-dimensional plane yields the beautified target two-dimensional face image, and further beautifying this image makes it more realistic and the beautification effect more prominent, presenting the user with the beautified result and further improving the user's beautification experience.

In order to make the flow of the method for beautifying a face image clearer to those skilled in the art, its application in a specific scenario is illustrated below:

In this example, calibration refers to calibrating the camera, that is, determining the key points in three-dimensional space that correspond to the key points in the face image.

In the registration stage, as shown in FIG. 4, the face may be scanned in preview through the camera module to acquire two-dimensional sample face images of the user from multiple angles, together with the depth information corresponding to each two-dimensional sample face image; for example, nearly twenty two-dimensional sample face images of different angles, with their depth maps, are collected for the subsequent three-dimensional face reconstruction, and the missing angles and the scanning progress may be prompted during scanning. Three-dimensional reconstruction is then performed according to the depth information and the two-dimensional sample face images to obtain the original sample three-dimensional face model.

The facial features of the 3D face model are analyzed, such as the face shape, nose width, nose height, eye size, and lip thickness, and beautification suggestion information is given. If the user confirms the beautification suggestion information, the key points of the part to be beautified and the adjustment parameters are determined according to it, and the key points of the part to be beautified on the original sample three-dimensional face model are adjusted according to the adjustment parameters to obtain the virtually beautified target sample three-dimensional face model.

Further, as shown in FIG. 5, in the recognition stage, the current original two-dimensional face image of the user and the depth information corresponding to the original two-dimensional face image are obtained; three-dimensional reconstruction is performed according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model; the pre-registered face information is queried to determine whether the user is registered; if it is learned that the user is registered, the beautification parameters of the three-dimensional face model corresponding to the user are obtained, and the key points on the original three-dimensional face model are adjusted according to them to obtain the virtually beautified target three-dimensional face model; and the virtually beautified target three-dimensional face model is mapped onto a two-dimensional plane to obtain the target two-dimensional face image.

In summary, the method for beautifying a face image according to the embodiments of the present application obtains the current original two-dimensional face image of the user and the depth information corresponding to it, performs three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model, queries the pre-registered face information to determine whether the user is registered, and, if it is learned that the user is registered, obtains the beautification parameters of the three-dimensional face model corresponding to the user, adjusts the key points on the original three-dimensional face model according to the beautification parameters to obtain the virtually beautified target three-dimensional face model, and maps the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain the target two-dimensional face image. In this way, beautification is performed for registered users on the basis of a three-dimensional face model, which optimizes the beautification effect and improves the target users' satisfaction with the result and their stickiness to the product.
To implement the above embodiments, the present application further proposes a device for beautifying a face image. FIG. 6 is a schematic structural diagram of a device for beautifying a face image according to an embodiment of the present application. As shown in FIG. 6, the device includes an obtaining module 10, a reconstruction module 20, a query module 30, an adjustment module 40, and a mapping module 50.

The obtaining module 10 is configured to obtain the current original two-dimensional face image of the user, and the depth information corresponding to the original two-dimensional face image.

The reconstruction module 20 is configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model.

The query module 30 is configured to query the pre-registered face information to determine whether the user is registered.

In an embodiment of the present application, as shown in FIG. 7, the query module 30 includes an extraction unit 31 and a determination unit 32.

The extraction unit 31 is configured to analyze the original two-dimensional face image and extract the facial features of the user.

The determination unit 32 is configured to query the pre-registered face database and determine whether the facial features exist in it; if they exist, the user is determined to be registered; if they do not exist, the user is determined to be unregistered.

The adjustment module 40 is configured to, when it is learned that the user is registered, obtain the beautification parameters of the three-dimensional face model corresponding to the user, and adjust the key points on the original three-dimensional face model according to the beautification parameters to obtain the virtually beautified target three-dimensional face model.

The mapping module 50 is configured to map the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain the target two-dimensional face image.

It should be noted that the foregoing explanations of the embodiments of the method for beautifying a face image also apply to the device for beautifying a face image of this embodiment, and are not repeated here.

In summary, the device for beautifying a face image according to the embodiments of the present application obtains the current original two-dimensional face image of the user and the depth information corresponding to it, performs three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model, queries the pre-registered face information to determine whether the user is registered, and, if it is learned that the user is registered, obtains the beautification parameters of the three-dimensional face model corresponding to the user, adjusts the key points on the original three-dimensional face model according to the beautification parameters to obtain the virtually beautified target three-dimensional face model, and maps the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain the target two-dimensional face image. In this way, beautification is performed for registered users on the basis of a three-dimensional face model, which optimizes the beautification effect and improves the target users' satisfaction with the result and their stickiness to the product.
To implement the above embodiments, the present application further proposes a computer-readable storage medium having a computer program stored thereon; when the program is executed by a processor of a mobile terminal, the method for beautifying a face image described in the foregoing embodiments is implemented.

To implement the above embodiments, the present application further proposes an electronic device.

FIG. 8 is a schematic diagram of the internal structure of an electronic device 200 in an embodiment. The electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 that are connected through a system bus 210. The memory 230 of the electronic device 200 stores an operating system and computer-readable instructions. The computer-readable instructions can be executed by the processor 220 to implement the face beautification method of the embodiments of the present application. The processor 220 is configured to provide computing and control capabilities to support the operation of the entire electronic device 200. The display 240 of the electronic device 200 may be a liquid crystal display, an electronic ink display, or the like; the input device 250 may be a touch layer covering the display 240, or may be keys, a trackball, or a touchpad provided on the housing of the electronic device 200, or may be an external keyboard, touchpad, or mouse. The electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (for example a smart bracelet, a smart watch, a smart helmet, or smart glasses).

Those skilled in the art can understand that the structure shown in FIG. 8 is merely a schematic diagram of the part of the structure related to the solution of the present application, and does not constitute a limitation on the electronic device 200 to which the solution of the present application is applied; a specific electronic device 200 may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
To implement the above embodiments, the present application further proposes an image processing circuit, which includes an image unit 310, a depth information unit 320, and a processing unit 330.

The image unit 310 is configured to output the current original two-dimensional face image of the user.

The depth information unit 320 is configured to output the depth information corresponding to the original two-dimensional face image.

The processing unit 330 is electrically connected to the image unit and the depth information unit respectively, and is configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model, query the pre-registered face information to determine whether the user is registered, and, if it is learned that the user is registered, obtain the beautification parameters of the three-dimensional face model corresponding to the user, adjust the key points on the original three-dimensional face model according to the beautification parameters to obtain the virtually beautified target three-dimensional face model, and map the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain the target two-dimensional face image.
In the embodiments of the present application, the image unit 310 may specifically include an image sensor 311 and an image signal processing (ISP) processor 312 that are electrically connected.

The image sensor 311 is configured to output raw image data.

The ISP processor 312 is configured to output the original two-dimensional face image according to the raw image data.

In the embodiments of the present application, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics usable for determining one or more control parameters of the image sensor 311, including a face image in YUV or RGB format. The image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units; the image sensor 311 can obtain the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 312. After processing the raw image data, the ISP processor 312 obtains the face image in YUV or RGB format and sends it to the processing unit 330.

When processing the raw image data, the ISP processor 312 may process the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the raw image data and collect statistics about the image data. The image processing operations may be performed with the same or different bit-depth precision.
As one possible implementation, the depth information unit 320 includes a structured-light sensor 321 and a depth map generation chip 322 that are electrically connected.

The structured-light sensor 321 is configured to generate an infrared speckle pattern.

The depth map generation chip 322 is configured to output the depth information corresponding to the original two-dimensional face image according to the infrared speckle pattern.

In the embodiments of the present application, the structured-light sensor 321 projects speckle structured light onto the subject, obtains the structured light reflected by the subject, and images the reflected structured light to obtain the infrared speckle pattern. The structured-light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines the morphological change of the structured light from the speckle pattern, determines the depth of the subject accordingly, and obtains a depth map which indicates the depth of each pixel in the infrared speckle pattern. The depth map generation chip 322 sends the depth map to the processing unit 330.
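As a rough illustration of the kind of computation such a depth map generation chip performs, the sketch below matches speckle blocks of one image row against a stored reference pattern and triangulates depth from the resulting disparity. The block size, the exhaustive search, and all parameters are assumptions of this sketch, not details of the chip:

```python
import numpy as np

def speckle_depth_row(ir_row, ref_row, baseline_mm, focal_px, block=9):
    """Estimate depth along one row of an infrared speckle image.

    ir_row  -- 1-D intensity profile of the captured speckle pattern
    ref_row -- 1-D intensity profile of the stored reference pattern
    """
    half = block // 2
    width = len(ir_row)
    depth = np.zeros(width)
    for x in range(half, width - half):
        win = ir_row[x - half: x + half + 1]
        # Exhaustive 1-D search for the best-matching reference block.
        costs = [np.sum((win - ref_row[s - half: s + half + 1]) ** 2)
                 for s in range(half, width - half)]
        disparity = abs((np.argmin(costs) + half) - x)
        # Standard stereo triangulation: depth = baseline * focal / disparity.
        depth[x] = baseline_mm * focal_px / disparity if disparity else 0.0
    return depth
```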
As one possible implementation, the processing unit 330 includes a CPU 331 and a GPU (graphics processing unit) 332 that are electrically connected.

The CPU 331 is configured to align the face image with the depth map according to the calibration data, and to output the three-dimensional face model according to the aligned face image and depth map.

The GPU 332 is configured to, if it is learned that the user is registered, obtain the beautification parameters of the three-dimensional face model corresponding to the user, adjust the key points on the original three-dimensional face model according to the beautification parameters to obtain the virtually beautified target three-dimensional face model, and map the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain the target two-dimensional face image.

In the embodiments of the present application, the CPU 331 obtains the face image from the ISP processor 312 and the depth map from the depth map generation chip 322 and, combining them with the calibration data obtained in advance, can align the face image with the depth map, thereby determining the depth information corresponding to each pixel in the face image. The CPU 331 then performs three-dimensional reconstruction according to the depth information and the face image to obtain the three-dimensional face model.
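A standard sketch of this calibration-based alignment reprojects each depth pixel into the RGB camera using the depth-to-RGB extrinsics. The rigid-transform formulation below is an assumption, since the text leaves the exact alignment method open:

```python
import numpy as np

def align_depth_to_rgb(depth_map, K_depth, K_rgb, R, t):
    """Warp a depth map into the RGB camera frame using calibration data.

    K_depth, K_rgb -- 3x3 intrinsics of the two cameras
    R (3x3), t (3,) -- depth-to-RGB extrinsics from calibration
    """
    h, w = depth_map.shape
    aligned = np.zeros((h, w))
    vs, us = np.nonzero(depth_map > 0)
    z = depth_map[vs, us]
    # Back-project depth pixels, transform into the RGB camera, re-project.
    pts = np.linalg.inv(K_depth) @ np.vstack([us * z, vs * z, z])
    pts = R @ pts + t.reshape(3, 1)
    uvw = K_rgb @ pts
    u2 = (uvw[0] / uvw[2]).round().astype(int)
    v2 = (uvw[1] / uvw[2]).round().astype(int)
    ok = (0 <= u2) & (u2 < w) & (0 <= v2) & (v2 < h)
    aligned[v2[ok], u2[ok]] = pts[2, ok]   # per-RGB-pixel depth
    return aligned
```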
The CPU 331 sends the three-dimensional face model to the GPU 332, so that the GPU 332 performs the method for beautifying a face image described in the foregoing embodiments on the basis of the three-dimensional face model, obtaining the target two-dimensional face image.

Further, the image processing circuit may further include a first display unit 341.

The first display unit 341 is electrically connected to the processing unit 330 and is configured to display the adjustment controls corresponding to the key points of the part to be beautified.

Further, the image processing circuit may further include a second display unit 342.

The second display unit 342 is electrically connected to the processing unit 330 and is configured to display the virtually beautified target sample three-dimensional face model.

Optionally, the image processing circuit may further include an encoder 350 and a memory 360.

In the embodiments of the present application, the beautified face image obtained by the GPU 332 may also be encoded by the encoder 350 and then stored in the memory 360, where the encoder 350 may be implemented by a coprocessor.

In one embodiment, there may be a plurality of memories 360, or the memory 360 may be divided into a plurality of storage spaces; the image data processed by the GPU 332 may be stored in a dedicated memory, or a dedicated storage space, and may include a DMA (Direct Memory Access) feature. The memory 360 may be configured to implement one or more frame buffers.
The above process is described in detail below with reference to FIG. 10.

It should be noted that FIG. 10 is a schematic diagram of an image processing circuit as one possible implementation. For ease of illustration, only the aspects related to the embodiments of the present application are shown.

As shown in FIG. 10, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics usable for determining one or more control parameters of the image sensor 311, including a face image in YUV or RGB format. The image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units; the image sensor 311 can obtain the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 312. After processing the raw image data, the ISP processor 312 obtains the face image in YUV or RGB format and sends it to the CPU 331.

When processing the raw image data, the ISP processor 312 may process the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the raw image data and collect statistics about the image data. The image processing operations may be performed with the same or different bit-depth precision.

As shown in FIG. 10, the structured-light sensor 321 projects speckle structured light onto the subject, obtains the structured light reflected by the subject, and images the reflected structured light to obtain the infrared speckle pattern. The structured-light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines the morphological change of the structured light from the speckle pattern, determines the depth of the subject accordingly, and obtains a depth map which indicates the depth of each pixel in the infrared speckle pattern. The depth map generation chip 322 sends the depth map to the CPU 331.

The CPU 331 obtains the face image from the ISP processor 312 and the depth map from the depth map generation chip 322 and, combining them with the calibration data obtained in advance, can align the face image with the depth map, thereby determining the depth information corresponding to each pixel in the face image. The CPU 331 then performs three-dimensional reconstruction according to the depth information and the face image to obtain the three-dimensional face model.

The CPU 331 sends the three-dimensional face model to the GPU 332, so that the GPU 332 performs the method described in the foregoing embodiments on the basis of the three-dimensional face model, realizing virtual beautification of the face and obtaining the virtually beautified face image. The virtually beautified face image obtained by the GPU 332 may be displayed by the display 340 (including the first display unit 341 and the second display unit 342 described above), and/or encoded by the encoder 350 and stored in the memory 360, where the encoder 350 may be implemented by a coprocessor.

In one embodiment, there may be a plurality of memories 360, or the memory 360 may be divided into a plurality of storage spaces; the image data processed by the GPU 332 may be stored in a dedicated memory, or a dedicated storage space, and may include a DMA (Direct Memory Access) feature. The memory 360 may be configured to implement one or more frame buffers.

For example, the following are steps of implementing the control method using the processor 220 of FIG. 8, or using the image processing circuit of FIG. 10 (specifically the CPU 331 and the GPU 332):

The CPU 331 obtains the two-dimensional face image and the depth information corresponding to the face image; the CPU 331 performs three-dimensional reconstruction according to the depth information and the face image to obtain the three-dimensional face model; the GPU 332 obtains the beautification parameters of the three-dimensional face model corresponding to the user, and adjusts the key points on the original three-dimensional face model according to the beautification parameters to obtain the virtually beautified target three-dimensional face model; and the GPU 332 maps the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain the target two-dimensional face image.
In the description of this specification, any description referring to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict each other.

In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, for example two or three, unless otherwise specifically defined.

Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing steps of custom logical functions or processes, and the scope of the preferred embodiments of the present application includes additional implementations in which the functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.

The logic and/or steps represented in the flowchart or otherwise described herein may, for example, be regarded as a sequenced list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.

It should be understood that the parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gate circuits for implementing logical functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.

Those of ordinary skill in the art can understand that all or part of the steps carried by the method of the above embodiments can be completed by instructing relevant hardware through a program, and the program may be stored in a computer-readable storage medium; when executed, the program includes one of the steps of the method embodiments or a combination thereof.

In addition, the functional units in the embodiments of the present application may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (25)

  1. A method for beautifying a face image, comprising:
    obtaining a current original two-dimensional face image of a user, and depth information corresponding to the original two-dimensional face image;
    performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model;
    querying pre-registered face information to determine whether the user is registered;
    if it is learned that the user is registered, obtaining beautification parameters of a three-dimensional face model corresponding to the user, and adjusting key points on the original three-dimensional face model according to the beautification parameters to obtain a virtually beautified target three-dimensional face model; and
    mapping the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain a target two-dimensional face image.
  2. The method according to claim 1, wherein querying the pre-registered face information to determine whether the user is registered comprises:
    analyzing the original two-dimensional face image to extract facial features of the user; and
    querying a pre-registered face database to determine whether the facial features exist in it; if they exist, determining that the user is registered; if they do not exist, determining that the user is not registered.
  3. The method according to claim 1 or 2, further comprising, after determining whether the user is registered:
    if it is learned that the user is not registered, extracting user attribute features of the user; and
    obtaining preset standard beautification parameters of a three-dimensional face model corresponding to the user attribute features, and adjusting key points on the original three-dimensional face model according to the standard beautification parameters to obtain the virtually beautified target three-dimensional face model.
  4. The method according to claim 3, wherein the user attribute features comprise:
    gender, age, ethnicity, and skin color.
  5. The method according to any one of claims 1 to 4, further comprising, before performing the three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model:
    extracting user attribute features of the user; and
    beautifying the original two-dimensional face image according to the user attribute features to obtain a beautified original two-dimensional face image.
  6. The method according to any one of claims 1 to 5, further comprising:
    obtaining two-dimensional sample face images of the user from multiple angles, and depth information corresponding to each two-dimensional sample face image;
    performing three-dimensional reconstruction according to the depth information and the two-dimensional sample face images to obtain an original sample three-dimensional face model;
    adjusting key points of parts to be beautified on the original sample three-dimensional face model to obtain a virtually beautified target sample three-dimensional face model; and
    comparing the original sample three-dimensional face model with the target sample three-dimensional face model to extract the beautification parameters of the three-dimensional face model corresponding to the user.
  7. The method according to claim 6, further comprising, after obtaining the original three-dimensional face model:
    beautifying the skin texture map covering the surface of the original three-dimensional face model to obtain a beautified original three-dimensional face model.
  8. The method according to claim 6, wherein performing the three-dimensional reconstruction according to the depth information and the two-dimensional sample face images to obtain the original sample three-dimensional face model comprises:
    performing key-point recognition on each two-dimensional sample face image to obtain positioning key points;
    for each face image, determining the relative positions of the positioning key points in three-dimensional space according to the depth information of the positioning key points and the distances of the positioning key points on the two-dimensional sample face image; and
    connecting adjacent positioning key points according to their relative positions in three-dimensional space to generate the original sample three-dimensional face model.
  9. The method according to claim 6, wherein adjusting the key points of the parts to be beautified on the original sample three-dimensional face model to obtain the virtually beautified target sample three-dimensional face model comprises:
    generating an adjustment control corresponding to the key points of each part to be beautified;
    detecting a touch operation performed by the user on the adjustment control corresponding to the key points of the part to be beautified, and obtaining corresponding adjustment parameters; and
    adjusting the key points of the part to be beautified on the original sample three-dimensional face model according to the adjustment parameters to obtain the virtually beautified target sample three-dimensional face model.
  10. The method according to claim 6, wherein adjusting the key points of the parts to be beautified on the original sample three-dimensional face model to obtain the virtually beautified target sample three-dimensional face model comprises:
    displaying the key points of each part to be beautified on the original sample three-dimensional face model; and
    detecting a shift operation performed by the user on the key points of the part to be beautified, and adjusting the key points according to the shift operation to obtain the virtually beautified target sample three-dimensional face model.
  11. The method according to claim 6, wherein adjusting the key points of the parts to be beautified on the original sample three-dimensional face model to obtain the virtually beautified target sample three-dimensional face model comprises:
    providing beautification suggestion information to the user;
    if the user confirms the beautification suggestion information, determining the key points of the part to be beautified and adjustment parameters according to the beautification suggestion information; and
    adjusting the key points of the part to be beautified on the original sample three-dimensional face model according to the adjustment parameters to obtain the virtually beautified target sample three-dimensional face model.
  12. A device for beautifying a face image, comprising:
    an obtaining module, configured to obtain a current original two-dimensional face image of a user, and depth information corresponding to the original two-dimensional face image;
    a reconstruction module, configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model;
    a query module, configured to query pre-registered face information to determine whether the user is registered;
    an adjustment module, configured to, when it is learned that the user is registered, obtain beautification parameters of a three-dimensional face model corresponding to the user, and adjust key points on the original three-dimensional face model according to the beautification parameters to obtain a virtually beautified target three-dimensional face model; and
    a mapping module, configured to map the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain a target two-dimensional face image.
  13. The device according to claim 12, wherein the query module comprises:
    an extraction unit, configured to analyze the original two-dimensional face image and extract facial features of the user; and
    a determination unit, configured to query a pre-registered face database and determine whether the facial features exist in it; if they exist, the user is determined to be registered; if they do not exist, the user is determined to be not registered.
  14. The device according to claim 12 or 13, wherein the query module is specifically configured to:
    analyze the original two-dimensional face image and extract facial features of the user; and
    query a pre-registered face database and determine whether the facial features exist in it; if they exist, determine that the user is registered; if they do not exist, determine that the user is not registered.
  15. The device according to claim 12 or 13, further comprising:
    an extraction module, configured to extract user attribute features of the user; and
    a beautification module, configured to beautify the original two-dimensional face image according to the user attribute features to obtain a beautified original two-dimensional face image.
  16. The device according to claim 12 or 13, wherein the obtaining module is further configured to:
    obtain two-dimensional sample face images of the user from multiple angles, and depth information corresponding to each two-dimensional sample face image; and
    perform three-dimensional reconstruction according to the depth information and the two-dimensional sample face images to obtain an original sample three-dimensional face model;
    and the adjustment module is further configured to:
    adjust key points of parts to be beautified on the original sample three-dimensional face model to obtain a virtually beautified target sample three-dimensional face model; and
    compare the original sample three-dimensional face model with the target sample three-dimensional face model to extract the beautification parameters of the three-dimensional face model corresponding to the user.
  17. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein, when executing the computer program, the processor implements the method for beautifying a face image according to any one of claims 1 to 11.
  18. A computer-readable storage medium having a computer program stored thereon, wherein, when executed by a processor, the program implements the method for beautifying a face image according to any one of claims 1 to 11.
  19. An image processing circuit, comprising an image unit, a depth information unit, and a processing unit, wherein:
    the image unit is configured to output a current original two-dimensional face image of a user;
    the depth information unit is configured to output depth information corresponding to the original two-dimensional face image; and
    the processing unit is electrically connected to the image unit and the depth information unit respectively, and is configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model, query pre-registered face information to determine whether the user is registered, and, if it is learned that the user is registered, obtain beautification parameters of a three-dimensional face model corresponding to the user, adjust key points on the original three-dimensional face model according to the beautification parameters to obtain a virtually beautified target three-dimensional face model, and map the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain a target two-dimensional face image.
  20. The image processing circuit according to claim 19, wherein the image unit comprises an image sensor and an image signal processing (ISP) processor that are electrically connected;
    the image sensor is configured to output raw image data; and
    the ISP processor is configured to output the original two-dimensional face image according to the raw image data.
  21. The image processing circuit according to claim 19, wherein the depth information unit comprises a structured-light sensor and a depth map generation chip that are electrically connected;
    the structured-light sensor is configured to generate an infrared speckle pattern; and
    the depth map generation chip is configured to output the depth information corresponding to the original two-dimensional face image according to the infrared speckle pattern.
  22. The image processing circuit according to claim 21, wherein the processing unit comprises a CPU and a GPU that are electrically connected;
    the CPU is configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model, and to query the pre-registered face information to determine whether the user is registered; and
    the GPU is configured to, if it is learned that the user is registered, obtain the beautification parameters of the three-dimensional face model corresponding to the user, adjust the key points on the original three-dimensional face model according to the beautification parameters to obtain the virtually beautified target three-dimensional face model, and map the virtually beautified target three-dimensional face model onto a two-dimensional plane to obtain the target two-dimensional face image.
  23. The image processing circuit according to claim 22, wherein the GPU is further configured to:
    extract user attribute features of the user; and
    beautify the original two-dimensional face image according to the user attribute features to obtain a beautified original two-dimensional face image.
  24. The image processing circuit according to any one of claims 19 to 23, further comprising a first display unit;
    the first display unit is electrically connected to the processing unit and is configured to display an adjustment control corresponding to the key points of a part to be beautified.
  25. The image processing circuit according to any one of claims 19 to 23, further comprising a second display unit;
    the second display unit is electrically connected to the processing unit and is configured to display a virtually beautified target sample three-dimensional face model.
Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
CN201810551058.3 | 2018-05-31 | |
CN201810551058.3A (published as CN108765273B) | 2018-05-31 | 2018-05-31 | 人脸拍照的虚拟整容方法和装置 (Virtual face-lifting method and device for face photographing)