CN111539882A - Interactive method for assisting makeup, terminal and computer storage medium - Google Patents


Info

Publication number
CN111539882A
Authority
CN
China
Prior art keywords
makeup
color image
picture
face
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010304007.8A
Other languages
Chinese (zh)
Inventor
冯嘉树
宋杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010304007.8A
Publication of CN111539882A

Classifications

    • G06T 5/77
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/50 Image analysis — Depth or shape recovery
    • G06T 7/90 Image analysis — Determination of colour characteristics
    • H04N 13/257 Image signal generators — Colour aspects
    • H04N 13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10024 Color image
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30201 Face

Abstract

The application provides an interactive method for assisting makeup, a terminal and a computer storage medium, relates to the technical field of terminals, and can fuse the face of a user with a virtual makeup by using the depth values of a depth image, so that the virtual makeup fits the face of the user in a more three-dimensional manner. The method comprises the following steps: acquiring a depth image and a color image acquired by a camera, wherein the pixel value of each pixel point in the depth image is used for representing the depth value of a corresponding pixel point in the color image; displaying the color image acquired by the camera; extracting facial feature information in the color image; selecting a virtual makeup; combining the depth value and the facial feature information, fusing the face of the color image with the virtual makeup to obtain a picture with a first fusion effect; and displaying the picture of the first fusion effect.

Description

Interactive method for assisting makeup, terminal and computer storage medium
Technical Field
The application relates to the technical field of terminals, in particular to an interactive method for assisting makeup, a terminal and a computer storage medium.
Background
Due to the demand for beauty, beauty cameras and various retouching applications are widely used. Their makeup-trial function allows users to experience virtual makeup effects without actually applying makeup, which makes it convenient to determine a suitable makeup style. Existing makeup-trial functions usually only blend the user's face with some simple makeup template.
Disclosure of Invention
The application provides an interactive method for assisting makeup, a terminal and a computer storage medium, which can fuse the face of a user with a virtual makeup by using the depth values of a depth image, so that the virtual makeup fits the face of the user in a more three-dimensional manner.
To achieve the above purpose, the following technical solutions are adopted:
in a first aspect, the present application provides an interactive method of assisting makeup, the method comprising: starting a camera of the camera; acquiring a depth image and a color image acquired by a camera, wherein the pixel value of each pixel point in the depth image is used for representing the depth value of a corresponding pixel point in the color image; displaying a color image acquired by a camera; extracting facial feature information in the color image; selecting virtual makeup; combining the depth value and the face characteristic information, and fusing the face of the color image with the virtual makeup to obtain a picture with a first fusion effect; and displaying the picture of the first fusion effect.
The cameras may include a structured light camera, a time of flight (TOF) camera, a binocular stereo camera, and other depth cameras. The images acquired by the camera include a depth image and a color image. The pixel value of each pixel in the depth image is used for representing the distance from the point in the real scene corresponding to the pixel to the vertical plane where the camera is located. The color image may be an image in RGB (RGB represents red, green, and blue, respectively) mode, and the pixel values corresponding to each color are stored through R, G, B three channels. The pixels of the depth image and the color image acquired by the camera are in one-to-one correspondence. The virtual makeup may be stored in the format of a still image. The selection of the virtual makeup may be manual or software recommendation, for example, a plurality of virtual makeup may be displayed for the user to select, or one virtual makeup may be recommended according to the face of the user shot by the camera. The facial feature information in the color image may be represented by image features, such as edges, corners, points, and the like, and the specific way of extracting the facial image feature information may be to extract image features of all abrupt edges, corners, points, and the like of pixel values in the color image to obtain a feature image, then match the feature image with a preset general face template, determine the position of the face according to the matching result, and further determine the position of the five sense organs. After the camera is turned on and before the virtual makeup is selected, the display device configured by the terminal device of the embodiment of the application can display the picture (color image, not depth image) shot by the camera in real time. After the virtual makeup is selected, combining the depth value and the face characteristic information, fusing the virtual makeup and the color image frame currently collected by the camera, displaying the fused picture, fusing the collected color image frame and the virtual makeup in real time, and displaying the real-time fusion effect.
In one possible implementation, combining the depth value and the face feature information to fuse the face of the color image with the virtual makeup to obtain a first fusion effect picture, including: establishing a three-dimensional model of the face according to the depth value; deforming the virtual makeup according to the three-dimensional model of the face; combining the face characteristic information, and fusing the face of the color image with the deformed virtual makeup to obtain a picture with a first fusion effect.
In one possible implementation manner, combining the facial feature information to fuse the face of the color image with the deformed virtual makeup to obtain a picture with a first fusion effect includes: according to the facial feature information, performing image registration on the face of the color image and the deformed virtual makeup to determine, for each pixel point in the color image, the corresponding pixel value of the deformed virtual makeup; and weighting the pixel value of each pixel in the color image and the corresponding pixel value of the virtual makeup by using a preset weighting algorithm to obtain the pixel value of each pixel in the picture with the first fusion effect.
In one possible implementation, selecting a virtual makeup includes: and selecting a virtual makeup according to the facial feature information in the color image.
In one possible implementation, after displaying the picture of the first fusion effect, the method further includes: receiving a determination operation used by a user to determine to select a virtual makeup; in response to the determination operation, combining the depth value and the face characteristic information, fusing the face of the color image with a virtual makeup object model to obtain a picture with a second fusion effect, wherein the virtual makeup object model is used for guiding the completion of the makeup process of the virtual makeup; and displaying the picture of the second fusion effect.
In one possible implementation, before receiving a determination operation for determining to select a virtual makeup by a user, the method further includes: uploading the picture with the first fusion effect to a cloud evaluation system; obtaining a recommended makeup for the target face part determined by the cloud evaluation system according to the picture of the first fusion effect; replacing the makeup of the target face part in the virtual makeup with the recommended makeup to obtain a replacement makeup; combining the depth value and the face characteristic information, the face of the color image is fused with the makeup replacement to update the picture with the first fusion effect.
In one possible implementation, after updating the picture of the first fusion effect, the method further includes: uploading the updated picture of the first fusion effect to a cloud evaluation system; and obtaining an authorization application sent by the cloud evaluation system, wherein the authorization application is a request initiated by the cloud evaluation system when the uploaded picture score exceeds a preset numerical value, and the authorization application is used for requesting to add the substitute makeup to a recommended makeup template library of the cloud evaluation system.
In one possible implementation, selecting a virtual makeup includes: acquiring and displaying a recommended makeup template library provided by a cloud evaluation system; and receiving selection operation in the recommended makeup template library to obtain the selected virtual makeup.
In one possible implementation, fusing a face of the color image with a virtual makeup object model includes: blending according to makeup steps of a makeup process, wherein, in each makeup step: determining a face part corresponding to the current makeup step; searching a virtual makeup object model corresponding to the face part; and fusing the color image and the searched virtual makeup object model according to the preset relative position relationship between the face part and the searched virtual makeup object model.
In one possible implementation, the virtual makeup object model is a makeup tool, and/or a character image.
In one possible implementation, after determining the face part corresponding to the current makeup step, the method further includes: extracting a part of the virtual makeup corresponding to the face part in the virtual makeup; combining the depth value and the facial feature information, and fusing the facial part of the color image with part of the virtual makeup to obtain a picture with a third fusion effect; comparing the color image acquired by the camera with the picture with the third fusion effect; and outputting prompt information according to the comparison result, wherein the prompt information is used for prompting the effect difference between the actual makeup and the partial virtual makeup of the user.
In a second aspect, the present application provides an interactive device for assisting makeup, for performing the interactive method for assisting makeup provided in the first aspect and any one of the possible implementations thereof, the device including: the starting unit is used for starting a camera of the camera; the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring a depth image and a color image acquired by a camera, and the pixel value of each pixel point in the depth image is used for representing the depth value of a corresponding pixel point in the color image; the first display unit is used for displaying the color image acquired by the camera; the extraction unit is used for extracting facial feature information in the color image; a selecting unit for selecting a virtual makeup; the fusion unit is used for fusing the face of the color image and the virtual makeup by combining the depth value and the face characteristic information to obtain a picture with a first fusion effect; and the second display unit is used for displaying the picture of the first fusion effect.
In a third aspect, the present application provides a terminal comprising: the camera is used for acquiring a depth image and a color image, wherein the pixel value of each pixel point in the depth image is used for representing the depth value of a corresponding pixel point in the color image; the touch screen comprises a touch sensor and a display screen, and the display screen is used for displaying the color image acquired by the camera and the picture of the first fusion effect; a communication module; one or more processors; one or more memories; and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprising instructions which, when executed by the terminal, cause the terminal to perform the steps of: starting a camera of the camera; acquiring a depth image and a color image acquired by a camera; displaying a color image acquired by a camera; extracting facial feature information in the color image; selecting virtual makeup; combining the depth value and the face characteristic information, and fusing the face of the color image with the virtual makeup to obtain a picture with a first fusion effect; and displaying the picture of the first fusion effect.
In one possible implementation, the instructions, when executed by the terminal, cause the terminal to perform the step of fusing the face of the color image with the virtual makeup in combination with the depth value and the facial feature information to obtain a picture with a first fusion effect, including: establishing a three-dimensional model of the face according to the depth value; deforming the virtual makeup according to the three-dimensional model of the face; and combining the facial feature information, fusing the face of the color image with the deformed virtual makeup to obtain the picture with the first fusion effect.
In one possible implementation manner, the step of causing the terminal to fuse the face of the color image with the deformed virtual makeup in combination with the facial feature information to obtain the picture with the first fusion effect includes: according to the facial feature information, performing image registration on the face of the color image and the deformed virtual makeup to determine, for each pixel point in the color image, the corresponding pixel value of the deformed virtual makeup; and weighting the pixel value of each pixel in the color image and the corresponding pixel value of the virtual makeup by using a preset weighting algorithm to obtain the pixel value of each pixel in the picture with the first fusion effect.
In one possible implementation, the instructions, when executed by the terminal, cause the terminal to perform the step of selecting a virtual makeup, include: and selecting a virtual makeup according to the facial feature information in the color image.
In one possible implementation manner, the instructions, when executed by the terminal, cause the terminal to, after the step of displaying the picture of the first fusion effect, further perform the following steps: receiving a determination operation used by a user to determine to select a virtual makeup; in response to the determination operation, combining the depth value and the face characteristic information, fusing the face of the color image with a virtual makeup object model to obtain a picture with a second fusion effect, wherein the virtual makeup object model is used for guiding the completion of the makeup process of the virtual makeup; and displaying the picture of the second fusion effect.
In a possible implementation manner, the instructions, when executed by the terminal, cause the terminal to, before performing the step of receiving a determination operation for determining to select a virtual makeup by the user, further perform the following steps: uploading the picture with the first fusion effect to a cloud evaluation system; obtaining a recommended makeup for the target face part determined by the cloud evaluation system according to the picture of the first fusion effect; replacing the makeup of the target face part in the virtual makeup with the recommended makeup to obtain a replacement makeup; combining the depth value and the face characteristic information, the face of the color image is fused with the makeup replacement to update the picture with the first fusion effect.
In a possible implementation manner, the instructions, when executed by the terminal, cause the terminal to, after the step of updating the picture of the first fusion effect, further perform the following steps: uploading the updated picture of the first fusion effect to a cloud evaluation system; and obtaining an authorization application sent by the cloud evaluation system, wherein the authorization application is a request initiated by the cloud evaluation system when the uploaded picture score exceeds a preset numerical value, and the authorization application is used for requesting to add the substitute makeup to a recommended makeup template library of the cloud evaluation system.
In one possible implementation, the instructions, when executed by the terminal, cause the terminal to perform the step of selecting a virtual makeup, include: acquiring and displaying a recommended makeup template library provided by a cloud evaluation system; and receiving selection operation in the recommended makeup template library to obtain the selected virtual makeup.
In one possible implementation, the instructions, when executed by the terminal, cause the terminal to perform the step of fusing the face of the color image with the virtual makeup object model includes: blending according to makeup steps of a makeup process, wherein, in each makeup step: determining a face part corresponding to the current makeup step; searching a virtual makeup object model corresponding to the face part; and fusing the color image and the searched virtual makeup object model according to the preset relative position relationship between the face part and the searched virtual makeup object model.
In one possible implementation, the virtual makeup object model is a makeup tool, and/or a character image.
In one possible implementation, the instructions, when executed by the terminal, cause the terminal to, after performing the step of determining the face part corresponding to the current makeup step, further perform the following steps: extracting a part of the virtual makeup corresponding to the face part in the virtual makeup; combining the depth value and the facial feature information, and fusing the facial part of the color image with part of the virtual makeup to obtain a picture with a third fusion effect; comparing the color image acquired by the camera with the picture with the third fusion effect; and outputting prompt information according to the comparison result, wherein the prompt information is used for prompting the effect difference between the actual makeup and the partial virtual makeup of the user.
In a fourth aspect, the present application provides a computer storage medium comprising computer instructions which, when run on a terminal, cause the terminal to perform the method according to any one of the first aspect.
In a fifth aspect, the present application provides a computer program product for causing a terminal to perform the method according to any of the first aspect when the computer program product is run on the terminal.
In any of the above technical solutions and possible implementation manners, a depth image and a color image acquired by a camera are obtained, wherein the pixel value of each pixel point in the depth image is used to represent the depth value of a corresponding pixel point in the color image; the color image acquired by the camera is displayed; facial feature information in the color image is extracted; a virtual makeup is selected; the face of the color image is fused with the virtual makeup in combination with the depth value and the facial feature information to obtain a picture with a first fusion effect; and the picture with the first fusion effect is displayed. In this way, the face of the user and the virtual makeup are combined using the depth values of the depth image, so that the virtual makeup can fit the face of the user in a more three-dimensional manner.
It is understood that the terminal, the computer storage medium and the computer program product provided above are all used for executing the corresponding method provided above, and therefore, the beneficial effects achieved by the terminal, the computer storage medium and the computer program product may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Drawings
FIG. 1 is a first interaction diagram of an interaction method for assisting makeup according to an embodiment of the present disclosure;
fig. 2 is a schematic view of an application scenario of an interaction method for assisting makeup according to an embodiment of the present disclosure;
Fig. 3 is a first schematic structural diagram of a terminal according to an embodiment of the present disclosure;
FIG. 4 is a first flowchart illustrating an interactive method for assisting makeup according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating a concept of makeup migration in an interactive method for assisting makeup according to an embodiment of the present disclosure;
FIG. 6 is a second flowchart illustrating an interactive method for assisting makeup according to an embodiment of the present disclosure;
FIG. 7 is a third schematic flowchart illustrating an interactive method for assisting makeup according to an embodiment of the present disclosure;
FIG. 8 is a second interaction diagram illustrating an interaction method for assisting makeup according to an embodiment of the present disclosure;
FIG. 9 is a schematic block diagram of an interactive device for assisting makeup according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
The interaction method for assisting makeup provided by the embodiment of the application can be applied to terminal equipment configured with a camera, for example, the terminal equipment can be a mobile phone, a tablet device, a personal computer and the like, and can be specifically executed by controlling the terminal by an application program installed in the terminal.
Illustratively, the terminal may be a mobile phone. An application scenario of an interaction method for assisting makeup is provided in an embodiment of the present application, as shown in fig. 1, a user may open an application 101 for assisting makeup in a mobile phone, log in software with a user account, and after a server of the application 101 performs authentication according to account login information input by the user, the application 101 enters a first interface in the program. After the user selects the "try-up" function within application 101, application 101 requests the operating system of the handset to turn on the camera.
As shown in fig. 2, after the camera is turned on, the user can hold the mobile phone 100 with his/her hand, and perform self-shooting by using the front camera 102, and a real-time effect can be displayed on the display screen 103 (display device of the terminal). The user may click to select one virtual makeup from the plurality of virtual makeup 104 for trial makeup, and after the user performs the click selection, the application 101 may use a fusion algorithm to fuse the user's face displayed on the camera 102 and the virtual makeup selected by the user for display in combination with the depth information (i.e., depth value) acquired by the camera, and display the effect of the user after makeup on the display screen 103. Optionally, the provided virtual makeup may also include an option of recommending virtual makeup, and if the user selects the option, the program selects a virtual makeup according to the face of the user, for example, a face image library may be preset, images of faces representative of several types of faces and five sense organs are stored in advance, a recommended virtual makeup is configured for each face, after the color image collected by the camera is obtained, the color image is matched with each face image in the face image library, and the recommended virtual makeup corresponding to the face with the highest matching degree is determined, so as to obtain the virtual makeup recommended to the user.
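By way of illustration only, the recommendation logic described above could be organized as sketched below; the library structure, the matching metric and all function names here are assumptions of this sketch, not part of the disclosed embodiment.

    import cv2
    import numpy as np

    # Hypothetical preset face image library: each entry pairs a representative
    # face picture with the identifier of the virtual makeup recommended for it.
    FACE_LIBRARY = [
        {"face_image": "face_type_round.png", "makeup_id": "makeup_01"},
        {"face_image": "face_type_oval.png", "makeup_id": "makeup_02"},
    ]

    def recommend_makeup(color_image):
        """Return the makeup id whose representative face best matches the user."""
        best_score, best_id = -1.0, None
        user_gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
        for entry in FACE_LIBRARY:
            ref = cv2.imread(entry["face_image"], cv2.IMREAD_GRAYSCALE)
            ref = cv2.resize(ref, (user_gray.shape[1], user_gray.shape[0]))
            # Normalised cross-correlation as a simple matching degree.
            score = float(cv2.matchTemplate(user_gray, ref, cv2.TM_CCOEFF_NORMED).max())
            if score > best_score:
                best_score, best_id = score, entry["makeup_id"]
        return best_id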
Optionally, the effect of the virtual makeup selected by the user through clicking can be uploaded to a cloud evaluation system in communication with the mobile phone 100, the cloud evaluation system can score the makeup effect of the virtual makeup selected by the user according to a pre-trained scoring model, for example, the face image of the makeup can be manually scored to obtain a training picture with labels, and a neural network model is trained according to the training picture, so that the trained neural network model can score the makeup effect of the face. If the score is high, the cloud evaluation system can store the virtual makeup uploaded by the user as a makeup template and update the makeup template into a database of the makeup template, wherein the database of the makeup template is used for providing a template of the virtual makeup for the user, and a plurality of virtual makeup 104 displayed by the mobile phone 100 shown in fig. 2 are virtual makeup in the database of the makeup template. If the score is low, the cloud evaluation system can recommend other makeup to the user, specifically, the makeup can be recommended for different parts, the application program 101 can display the fusion effect of the virtual reality according to the makeup recommended by the cloud evaluation system, the image of the user after virtual makeup is displayed, and if the user likes the virtual makeup, the user can determine to select the virtual makeup.
After the user selects one of the virtual makeup looks, the application 101 may perform virtual-reality fusion of a virtual makeup artist character image and/or a makeup tool with the scene photographed by the camera 102, and display on the display screen 103 an interactive scene in which the makeup artist character image and/or the makeup tool applies makeup to the user according to a preset makeup flow (for example, the preset makeup flow may be foundation → eye shadow → blush → lipstick), so as to guide the user to make up. Optionally, a voice broadcast may be issued as a prompt, for example, playing the notes, techniques, etc. of the current makeup step.
The effect of the user applying makeup in reality can be photographed by the camera 102 in real time. The application 101 may also compare the real-time makeup effect of the user in reality with the makeup effect of the virtual makeup, and the specific comparison method may be determined by using a color difference, for example, for the eye region, the pixel values of each pixel point in the eye region are obtained from the actually photographed image and the image fused with the virtual makeup, the pixel values of the corresponding pixel points are subjected to difference calculation, and a matching degree value is calculated according to the color difference of the actually photographed image and the image fused with the virtual makeup and a preset evaluation formula, so as to evaluate the real makeup effect of the user. After the real makeup effect is obtained, prompt information can be broadcasted through voice so as to prompt the difference between the real makeup and the virtual makeup of the user aiming at the eye region. Optionally, the real-time makeup effect of the user in reality and the makeup effect of the virtual makeup may be directly displayed on the display screen 103 in a contrasting manner, for example, the display screen 103 is divided into two regions, the real-time makeup effect of the user in reality is displayed in one region, and the makeup effect of the virtual makeup is displayed in the other region.
It should be noted that the application scenario of the above-mentioned interaction method for assisting makeup is only used for an exemplary illustration, and in actual use, the above-mentioned interaction method for assisting makeup may also be applied to other devices or systems, for example, a mobile phone and an Augmented Reality (AR) glasses device may be used, a camera of the mobile phone is used to capture a real image of a face of a user, and the AR glasses device is used to display a fusion effect of a virtual makeup and a virtual Reality of the face of the user. The embodiment of the present application is not particularly limited to this.
Optionally, as shown in fig. 3, the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a camera 193, a display screen 194, and the like.
It is to be understood that the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the mobile phone. In other embodiments of the present application, the handset may include more or fewer components than shown, or combine certain components, or split certain components, or have a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the cell phone. The charging management module 140 may also supply power to the mobile phone through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 may receive input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like.
The power management module 141 may be configured to monitor performance parameters such as battery capacity, battery cycle count, battery charging voltage, battery discharging voltage, battery state of health (e.g., leakage, impedance), and the like. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to a mobile phone. The mobile communication module 150 may include one or more filters, switches, power amplifiers, Low Noise Amplifiers (LNAs), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to a mobile phone, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices that integrate one or more communication processing modules. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the handset antenna 1 is coupled to the mobile communication module 150 and the handset antenna 2 is coupled to the wireless communication module 160 so that the handset can communicate with the network and other devices via wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), Long Term Evolution (LTE), LTE, BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou satellite navigation system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The mobile phone realizes the display function through the GPU, the display screen 194, the application processor and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the cell phone may include 1 or N display screens 194, with N being a positive integer greater than 1.
The mobile phone can realize shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. In some embodiments, the handset 100 may include 1 or N cameras, N being a positive integer greater than 1. The camera 193 may be a front camera or a rear camera.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the mobile phone selects the frequency point, the digital signal processor is used for performing fourier transform and the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The handset may support one or more video codecs. Thus, the mobile phone can play or record videos in various encoding formats, such as: moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may execute the above instructions stored in the internal memory 121, so as to enable the mobile phone to execute the interactive method for assisting makeup provided in some embodiments of the present application, as well as various functional applications and data processing, and the like. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system; the program storage area may also store one or more application programs (e.g., gallery, contacts, etc.), and the like. The data storage area may store data (such as photos, contacts, and the like) created during the use of the mobile phone 100. In addition, the internal memory 121 may include a high-speed random access memory, and may also include a nonvolatile memory, such as one or more magnetic disk storage devices, flash memory devices, universal flash storage (UFS), and the like. In other embodiments, the processor 110 causes the handset to perform the methods provided in the embodiments of the present application, as well as various functional applications and data processing, by executing instructions stored in the internal memory 121, and/or instructions stored in a memory disposed in the processor.
The mobile phone can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The handset can listen to music through the speaker 170A or listen to a hands-free conversation.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the mobile phone receives a call or voice information, the receiver 170B can be close to the ear to receive voice.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking the user's mouth near the microphone 170C. The handset may be provided with one or more microphones 170C. In other embodiments, the mobile phone may be provided with two microphones 170C to achieve the noise reduction function in addition to collecting the sound signal. In other embodiments, the mobile phone may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
The headphone interface 170D is used to connect a wired headphone. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like, which is not limited in this embodiment.
Of course, the mobile phone provided in the embodiment of the present application may further include one or more devices such as a key 190, a motor 191, an indicator 192, and a SIM card interface 195, which is not limited in this embodiment of the present application.
An interactive method for assisting makeup according to an embodiment of the present application applied to the terminal will be described in detail with reference to fig. 4.
As shown in fig. 4, the interactive method for assisting makeup provided in the embodiment of the present application may be applied to the application scenarios shown in fig. 1 to 2, and the interactive method for assisting makeup provided in the embodiment of the present application includes:
201. acquiring a depth image and a color image acquired by a camera;
the camera generally comprises a plurality of lenses and optical sensors. Depending on the measurement principle, mainstream depth cameras may adopt a time-of-flight method, a structured light method, or a binocular stereoscopic vision method. The camera can thus acquire depth information of the photographed scene, which facilitates three-dimensional reconstruction of the scene; details are not repeated herein in the embodiment of the present application. A color image is an image captured by a color image sensor in the camera.
202. Displaying a color image acquired by a camera;
the display device may be the display screen 103 of the cell phone 100 described above. In step 202, an original color image captured by the camera or a picture processed by some image processing method, for example, an image processed by some functions of skin grinding, whitening, etc. of some image processing software, is displayed.
203. Extracting facial feature information in the color image;
the image features may be edges, corners, points, and the like where the image gray scale changes sharply, and the algorithm for extracting the image features may use some currently known image feature extraction algorithms, which are not described herein again in the embodiments of the present application. After the feature extraction algorithm is executed, a feature image containing the extracted features is obtained. Then, a preset general face template is matched in the feature image, the position of the face is determined according to the matching result, and the positions of the five sense organs are further determined.
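A minimal sketch of step 203 is given below, assuming an OpenCV-style implementation; the use of Canny/Harris features, the thresholds and the function names are illustrative choices of this sketch, not requirements of the embodiment.

    import cv2
    import numpy as np

    def locate_face(color_image, face_template):
        """Extract edge/corner features, then find the face by matching a preset
        general face template against the feature image (step 203 sketch)."""
        gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)                      # abrupt gray-scale changes
        corners = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
        corner_mask = (corners > 0.01 * corners.max()).astype(np.uint8) * 255
        feature_image = np.maximum(edges, corner_mask)
        # Match the general face template within the feature image.
        result = cv2.matchTemplate(feature_image, face_template, cv2.TM_CCOEFF_NORMED)
        _, _, _, top_left = cv2.minMaxLoc(result)
        h, w = face_template.shape[:2]
        return feature_image, (top_left[0], top_left[1], w, h)  # face position box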
204. Selecting virtual makeup;
the step can be executed by a user, the executing party of the embodiment of the application provides some templates of virtual makeup, and optionally, the templates can be templates of a whole face (including accessories such as five sense organs, hairstyles, hats, glasses and the like) or templates of various parts of the face respectively. For example, a plurality of templates of virtual makeup are displayed on the display screen 103 of the mobile phone 100, and the user clicks on the display screen 103 to instruct selection of the corresponding virtual makeup.
205. Combining the depth value and the face characteristic information, and fusing the face of the color image with the virtual makeup to obtain a picture with a first fusion effect;
an optional embodiment is that, the following steps are executed to obtain the picture with the first fusion effect:
first, a three-dimensional model of the face is built based on the depth values. Since the depth value can represent the distance between each point and the camera in the real scene, the three-dimensional structure of the face of the user can be represented, and a three-dimensional model capable of representing the height of the face is built according to the distance between the face and the camera.
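For example, assuming the camera intrinsic parameters (fx, fy, cx, cy) are known, the depth values may be back-projected into a 3D face model as sketched below; this is an illustrative sketch only.

    import numpy as np

    def depth_to_face_model(depth_image, fx, fy, cx, cy):
        """Back-project every depth pixel into a 3D point so that the relief
        (height) of the face can be represented; fx, fy, cx, cy are intrinsics."""
        h, w = depth_image.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_image.astype(np.float32)        # distance to the camera plane
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.dstack((x, y, z))               # (h, w, 3) point cloud of the face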
Next, the virtual makeup is deformed according to the three-dimensional model of the face. Specifically, some deformation algorithms known at present may be adopted, wherein the deformation algorithms may use a Moving Least Squares (MLS) algorithm, a line-based deformation algorithm, triangular mesh affine transformation, and the like, and the embodiments of the present application are not described herein again.
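As one illustrative possibility, the triangular-mesh affine deformation mentioned above can be approximated with a piecewise affine warp driven by corresponding landmark points; the sketch below assumes scikit-image is available and that the landmark arrays are already known.

    import numpy as np
    from skimage.transform import PiecewiseAffineTransform, warp

    def deform_makeup(makeup_image, makeup_landmarks, face_landmarks):
        """Warp the flat virtual makeup so that its landmark points coincide with
        the corresponding landmarks of the user's face (piecewise affine warp)."""
        tform = PiecewiseAffineTransform()
        # Estimate a mapping from output (face) coordinates to source (makeup) coordinates.
        tform.estimate(face_landmarks, makeup_landmarks)
        warped = warp(makeup_image, tform, output_shape=makeup_image.shape[:2])
        return (warped * 255).astype(np.uint8)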
Finally, combining the facial feature information, the face of the color image is fused with the deformed virtual makeup to obtain the picture with the first fusion effect. The fusion algorithm may use alpha fusion (pixel-value weighted fusion) or Poisson fusion; optionally, a Photoshop-style layer blending algorithm may also be used.
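A minimal sketch of the alpha (pixel-value weighted) fusion mentioned above follows; the fixed weight and the mask handling are assumptions of this sketch.

    import numpy as np

    def alpha_fuse(color_image, deformed_makeup, makeup_mask, alpha=0.6):
        """Pixel-value weighted (alpha) fusion of the registered virtual makeup
        with the face: fused = alpha * makeup + (1 - alpha) * face inside the mask."""
        face = color_image.astype(np.float32)
        makeup = deformed_makeup.astype(np.float32)
        mask = (makeup_mask > 0)[..., None].astype(np.float32)   # 1 where makeup applies
        fused = mask * (alpha * makeup + (1.0 - alpha) * face) + (1.0 - mask) * face
        return fused.astype(np.uint8)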
The contents of the virtual makeup may include only the makeup itself, that is, an eye shadow, a foundation color, a blush, a lipstick, a hairstyle, accessories, and the like. Alternatively, the virtual makeup may be a photograph of a person who has been made up. When the virtual makeup is a photograph of a person who has made up, a method of making up migration may be used in a manner of combining the virtual makeup with a three-dimensional model of the face. As shown in fig. 5, the dressing migration principle is to extract the feature points of the virtual dressing and the user head model, align the feature points according to a deformation algorithm, and finally fuse the deformed virtual dressing with the facial features image of the user. Optionally, the degree of cosmetic shade can also be controlled by controlling the parameters during migration.
206. Displaying a picture of the first fusion effect;
the effect after the face is merged with the virtual makeup is displayed on the display screen 103.
Optionally, an optional implementation manner is further provided in this embodiment, and after performing step 206 shown in fig. 4, the following steps shown in fig. 6 may also be included:
301. uploading the picture with the first fusion effect to a cloud evaluation system;
the cloud evaluation system may communicate with the mobile phone 100, and the mobile phone 100 may send data to the cloud evaluation system using a mobile network, or send data to the cloud evaluation system through a wireless fidelity Wi-Fi network connected to the internet.
302. Obtaining a recommended makeup for the target part determined by the cloud evaluation system according to the picture of the first fusion effect;
the cloud evaluation system may include a scoring model obtained by artificial intelligence AI training using big data, for example, a face image disclosed in the internet may be collected as a big data gallery, and manually labeled and scored as a trained sample picture to train a neural network model, so that the neural network model can score the makeup of the input face image. The embodiments of the present application are not limited thereto, and the above examples are only for illustrative purposes.
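Purely as an illustration of such a scoring model, a regression network could be trained on manually scored face pictures as sketched below; PyTorch is assumed, and the backbone and loss are arbitrary choices of this sketch rather than part of the disclosure.

    import torch
    import torch.nn as nn
    from torchvision import models

    class MakeupScorer(nn.Module):
        """Hypothetical scoring model: a small backbone regressing one makeup score."""
        def __init__(self):
            super().__init__()
            self.backbone = models.resnet18(weights=None)
            self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

        def forward(self, x):                     # x: (N, 3, H, W) face images
            return self.backbone(x).squeeze(1)

    def train_step(model, optimizer, images, manual_scores):
        """One training step on manually scored (labelled) face pictures."""
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(images), manual_scores)
        loss.backward()
        optimizer.step()
        return loss.item()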
If the grading model scores the picture with the first fusion effect lower than a preset threshold value, the makeup can be recommended. Optionally, the cloud evaluation system may adopt a "makeup trial" manner, that is, the makeup of some parts of the face in the picture of the first fusion effect is replaced, the replaced image is input to the scoring model, and if the score of the scoring model is higher than a preset threshold, the makeup of the replaced parts can be recommended to the user and fed back to the mobile phone 100 of the user. The cloud evaluation system can firstly obtain the score of the virtual makeup template of each part in the picture uploaded by the user and replace the virtual makeup template of the part with lower score with the virtual makeup template with higher score aiming at the part under the condition that the score of the picture of the first fusion effect uploaded by the user is less than a preset threshold value, then, the makeup is replaced and the above scoring model is used for scoring. The embodiments of the present application are not limited thereto, and the above examples are only for illustrative purposes.
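The "makeup trial" replacement logic described above might be organized as follows; apply_part_makeup, the dictionary layout and the threshold handling are hypothetical elements of this sketch.

    def recommend_replacement(fused_picture, part_templates, score_fn, threshold):
        """Cloud-side 'makeup trial' sketch: when the overall score is below the
        threshold, try higher-scored templates for each face part and return the
        first replacement that lifts the score above the threshold."""
        if score_fn(fused_picture) >= threshold:
            return None                                   # no recommendation needed
        for part, candidates in part_templates.items():   # e.g. {"eye": [...], "lip": [...]}
            for template in candidates:
                # apply_part_makeup is a hypothetical helper that re-fuses one part.
                trial = apply_part_makeup(fused_picture, part, template)
                if score_fn(trial) >= threshold:
                    return {"part": part, "makeup": template}
        return None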
303. Replacing the makeup of the target face part in the virtual makeup with the recommended makeup to obtain a replacement makeup;
after receiving the recommended makeup fed back by the cloud evaluation system, the mobile phone 100 replaces the makeup of the target part in the virtual makeup with the recommended makeup to obtain a replacement makeup.
304. Combining the depth value and the face characteristic information, the face of the color image is fused with the makeup replacement to update the picture with the first fusion effect.
Optionally, after the picture with the first fusion effect is updated, the user can see the effect of the replacement makeup on the display device. A dialog box initiating an authorization application may then be displayed on the mobile phone 100 to request the user's permission to add the effect of the replacement makeup to the big data gallery of the cloud evaluation system. If the user approves, the cloud evaluation system stores the updated picture fused with the replacement makeup into the full-face virtual makeup template database; if the user does not approve, the cloud evaluation system does not store it in the database.
Optionally, the present embodiment also provides two optional implementation manners, and after performing step 206 shown in fig. 4, or after performing step 304 shown in fig. 6, the following steps shown in fig. 7 may also be included:
401. receiving a determination operation used by a user for determining to select a virtual makeup;
If the user likes the makeup, the user may tap a virtual confirm button on the display 103 of the mobile phone 100 to indicate that the user has determined to select the virtual makeup.
402. In response to the determination operation, combining the depth value and the facial feature information, fusing the face of the color image with a virtual makeup object model to obtain a picture with a second fusion effect, wherein the virtual makeup object model is used for guiding the completion of the makeup process of the virtual makeup;
The virtual makeup object model may be a makeup tool and/or a character image. Since the makeup process is divided into different steps, for example applying foundation first, then eye shadow, and finally lipstick, the virtual makeup object model may be fused with the actual scene in the picture step by step. For example, in the foundation stage a sponge is shown brushing over the face; in the eye shadow stage (as shown in fig. 8) an eye shadow brush may be fused into the picture and shown brushing over the user's eyes; and in the lipstick stage a lipstick of the corresponding color is shown being applied to the user's mouth, thereby guiding the user to complete the makeup process of the selected virtual makeup according to the makeup flow. To achieve the above fusion effect, after the face part targeted by the current makeup step of the makeup flow is determined, the virtual makeup object model corresponding to that part is determined, and the virtual makeup object model is then fused into the picture according to the relative positional relationship between the virtual makeup object model and the part.
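A minimal sketch of this per-step fusion is given below; the makeup flow, the tool-model lookup table, the offsets, and the detect_part / render_model helpers are hypothetical placeholders:

MAKEUP_FLOW = ["foundation", "eye_shadow", "lipstick"]     # assumed makeup process

TOOL_MODELS = {   # makeup step -> face part, virtual object model, offset relative to the part
    "foundation": {"part": "cheek", "model": "sponge.obj",          "offset": (0, 0, 20)},
    "eye_shadow": {"part": "eye",   "model": "eyeshadow_brush.obj", "offset": (5, -3, 15)},
    "lipstick":   {"part": "mouth", "model": "lipstick.obj",        "offset": (0, 2, 10)},
}

def render_step(frame, step, detect_part, render_model):
    # Overlay the tool model for the current makeup step onto the camera frame.
    entry = TOOL_MODELS[step]
    part_position = detect_part(frame, entry["part"])               # locate the face part
    anchor = tuple(p + o for p, o in zip(part_position, entry["offset"]))
    return render_model(frame, entry["model"], anchor)              # composite at the anchor

# Guiding the user through the whole flow:
# for step in MAKEUP_FLOW:
#     guided_frame = render_step(camera_frame, step, detect_part, render_model)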
Optionally, after the face part targeted by the current makeup step of the makeup process is determined, the part of the virtual makeup corresponding to that face part may be extracted and fused with the face part of the color image to obtain a picture of a third fusion effect; the color image captured by the camera is then compared with the picture of the third fusion effect, and prompt information is output according to the comparison result. For example, both images may be displayed on the display device, or the matching degree between them may be determined according to the comparison result and then broadcast by voice prompt. For example, after the fusion effect of the eye shadow step has been played and, some time later, the user has actually applied eye shadow, the camera 102 may capture an image of the face and compare it with the picture of the third fusion effect for the eye shadow part; a specific comparison method is to evaluate the difference between the user's real makeup effect and the virtual makeup effect by comparing pixel values.
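A minimal sketch of such a pixel-value comparison follows; the region mask, the normalization, and the prompt wording are assumptions:

import numpy as np

def matching_degree(camera_frame, fusion_picture, part_mask):
    # camera_frame, fusion_picture: HxWx3 uint8 images; part_mask: HxW boolean region.
    diff = np.abs(camera_frame.astype(np.float32) - fusion_picture.astype(np.float32))
    mean_diff = diff[part_mask].mean()          # average pixel error inside the face part
    return float(1.0 - mean_diff / 255.0)       # 1.0 means the pixels match exactly

def makeup_prompt(degree):
    if degree > 0.9:
        return "The real makeup already matches the virtual effect well."
    return "Matching degree {:.0%}; keep adjusting toward the virtual effect.".format(degree)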
403. And displaying the picture of the second fusion effect.
As shown in fig. 9, an embodiment of the present application further provides an interactive device for assisting makeup, where the device is used to perform the interactive method for assisting makeup provided by the embodiment of the present application, and the device includes: the system comprises a starting unit 10, an acquiring unit 20, a first display unit 30, an extracting unit 40, a selecting unit 50, a fusing unit 60 and a second display unit 70.
The starting unit 10 is used for starting the camera; the acquiring unit 20 is configured to acquire a depth image and a color image acquired by the camera, where the pixel value of each pixel point in the depth image is used to represent the depth value of a corresponding pixel point in the color image; the first display unit 30 is configured to display the color image acquired by the camera; the extracting unit 40 is used for extracting facial feature information in the color image; the selection unit 50 is used for selecting a virtual makeup; the fusion unit 60 is configured to fuse the face of the color image with the virtual makeup by combining the depth value and the facial feature information to obtain a picture of a first fusion effect; and the second display unit 70 is used for displaying the picture of the first fusion effect.
The interaction device for assisting makeup provided in the embodiment of the present application is used to execute the interaction method for assisting makeup provided in the embodiment of the present application, and for parts not described in detail in the embodiment of the present device, reference may be made to corresponding descriptions in the above method embodiments, and details of the embodiment of the present device are not repeated here.
As shown in fig. 10, an embodiment of the present application discloses a terminal, including: a camera 909 configured to acquire a depth image and a color image, wherein a pixel value of each pixel point in the depth image is used to represent a depth value of a corresponding pixel point in the color image; the touch screen 901, where the touch screen 901 includes a touch sensor 906 and a display screen 907, and is used to display the color image acquired by the camera and a picture of the first fusion effect; one or more processors 902; a memory 903; a communication module 908; and one or more computer programs 904. The various devices described above may be connected by one or more communication buses 905. Wherein the one or more computer programs 904 are stored in the memory 903 and configured to be executed by the one or more processors 902, the one or more computer programs 904 comprising instructions that can be used to control the terminal to perform the steps of the above embodiments, including:
step 1, starting the camera;
step 2, acquiring a depth image and a color image acquired by a camera;
step 3, controlling a display screen to display the color image acquired by the camera;
step 4, extracting facial feature information in the color image;
step 5, selecting virtual makeup;
step 6, combining the depth value and the facial feature information, and fusing the face of the color image with the virtual makeup to obtain a picture of a first fusion effect;
and step 7, controlling the display screen to display the picture of the first fusion effect (steps 1 to 7 are sketched as a single pipeline below).
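For orientation only, the seven steps can be read as one pipeline; in the sketch below every helper (camera, display, extract_features, select_makeup, fuse) is a hypothetical placeholder for the terminal's actual modules:

def assisted_makeup_pipeline(camera, display, extract_features, select_makeup, fuse):
    camera.start()                                             # step 1
    depth_image, color_image = camera.capture()                # step 2
    display.show(color_image)                                  # step 3
    face_features = extract_features(color_image)              # step 4
    virtual_makeup = select_makeup(face_features)              # step 5
    first_fusion = fuse(color_image, depth_image,
                        face_features, virtual_makeup)         # step 6
    display.show(first_fusion)                                 # step 7
    return first_fusion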
Optionally, when the instruction is executed by the terminal, the step of causing the terminal to fuse the face of the color image with the virtual makeup in combination with the depth value and the facial feature information to obtain a picture of the first fusion effect includes: establishing a three-dimensional model of the face according to the depth value; deforming the virtual makeup according to the three-dimensional model of the face; and combining the facial feature information, fusing the face of the color image with the deformed virtual makeup to obtain the picture of the first fusion effect.

Optionally, when the instruction is executed by the terminal, the step of causing the terminal to fuse the face of the color image with the deformed virtual makeup in combination with the facial feature information to obtain the picture of the first fusion effect includes: performing image registration on the face of the color image and the deformed virtual makeup according to the facial feature information, so as to determine the pixel value of the deformed virtual makeup corresponding to each pixel point in the color image; and weighting the pixel value of each pixel in the color image and the corresponding pixel value of the virtual makeup using a preset weighting algorithm to obtain the pixel value of each pixel in the picture of the first fusion effect.
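A minimal sketch of the weighted blending, assuming the deformed virtual makeup has already been registered pixel-to-pixel with the color image; the fixed weight and the alpha mask are assumptions, since only "a preset weighting algorithm" is required:

import numpy as np

def blend_registered_makeup(color_image, registered_makeup, makeup_alpha, weight=0.6):
    # color_image, registered_makeup: HxWx3 uint8; makeup_alpha: HxW in [0, 1],
    # nonzero only where the deformed virtual makeup covers the face.
    w = (makeup_alpha * weight)[..., None]                     # per-pixel blend weight
    fused = (1.0 - w) * color_image.astype(np.float32) \
            + w * registered_makeup.astype(np.float32)
    return fused.clip(0, 255).astype(np.uint8)                 # first-fusion-effect pixels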
Optionally, when the instruction is executed by the terminal, the step of causing the terminal to select the virtual makeup includes: selecting the virtual makeup according to the facial feature information in the color image.
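A minimal sketch of selecting the virtual makeup from facial feature information; the feature keys (face shape, skin tone) and the rule table are illustrative assumptions:

SUGGESTED_MAKEUP = {   # (face shape, skin tone) -> virtual makeup template name (illustrative)
    ("round", "fair"): "soft_pink_set",
    ("round", "deep"): "warm_coral_set",
    ("oval",  "fair"): "cool_mauve_set",
    ("oval",  "deep"): "bronze_set",
}

def select_virtual_makeup(face_features):
    key = (face_features.get("face_shape", "oval"),
           face_features.get("skin_tone", "fair"))
    return SUGGESTED_MAKEUP.get(key, "natural_set")    # fall back to a neutral template

# Example: select_virtual_makeup({"face_shape": "round", "skin_tone": "deep"})
# -> "warm_coral_set"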
Optionally, the instructions, when executed by the terminal, cause the terminal to perform the following steps after the step of displaying the picture of the first fusion effect is performed: receiving a determination operation used by a user to determine to select the virtual makeup; in response to the determination operation, combining the depth value and the facial feature information, fusing the face of the color image with a virtual makeup object model to obtain a picture with a second fusion effect, wherein the virtual makeup object model is used for guiding the completion of the makeup process of the virtual makeup; and displaying the picture of the second fusion effect.
Optionally, the instructions, when executed by the terminal, cause the terminal to further perform the following steps before the step of receiving the determination operation used by the user to determine to select the virtual makeup: uploading the picture of the first fusion effect to a cloud evaluation system; obtaining a recommended makeup for the target face part determined by the cloud evaluation system according to the picture of the first fusion effect; replacing the makeup of the target face part in the virtual makeup with the recommended makeup to obtain a replacement makeup; and combining the depth value and the facial feature information, fusing the face of the color image with the replacement makeup to update the picture of the first fusion effect.
Optionally, when the instruction is executed by the terminal, the terminal further performs the following steps after the step of updating the picture of the first fusion effect is performed: uploading the updated picture of the first fusion effect to the cloud evaluation system; and obtaining an authorization application sent by the cloud evaluation system, wherein the authorization application is a request initiated by the cloud evaluation system when the score of the uploaded picture exceeds a preset value, and the authorization application is used for requesting to add the replacement makeup to a recommended makeup template library of the cloud evaluation system.
Optionally, the step of causing the terminal to select the virtual makeup when the instruction is executed by the terminal includes: acquiring and displaying a recommended makeup template library provided by a cloud evaluation system; and receiving selection operation in the recommended makeup template library to obtain the selected virtual makeup.
Optionally, the instructions, when executed by the terminal, cause the terminal to perform the step of fusing the face of the color image with the virtual makeup object model, including: fusing according to the makeup steps of the makeup process, wherein, in each makeup step: determining a face part corresponding to the current makeup step; searching for the virtual makeup object model corresponding to the face part; and fusing the color image and the found virtual makeup object model according to the preset relative position relationship between the face part and the found virtual makeup object model.
Optionally, the virtual makeup object model is a makeup tool, and/or a character image.
Optionally, the instruction, when executed by the terminal, causes the terminal to further perform the following steps after the step of determining the face part corresponding to the current makeup step is performed: extracting a part of the virtual makeup corresponding to the face part in the virtual makeup; combining the depth value and the facial feature information, and fusing the facial part of the color image with part of the virtual makeup to obtain a picture with a third fusion effect; comparing the color image acquired by the camera with the picture with the third fusion effect; and outputting prompt information according to the comparison result, wherein the prompt information is used for prompting the effect difference between the actual makeup and the partial virtual makeup of the user.
For example, the processor 902 may specifically be the processor 110 shown in fig. 3, the memory 903 may specifically be the internal memory 121 shown in fig. 3, the display 907 may specifically be the display 194 shown in fig. 3, the communication module 908 may specifically be the mobile communication module 150 and/or the wireless communication module 160 shown in fig. 3, and the touch sensor 906 may specifically be a touch sensor in the sensor module 180 shown in fig. 3, which is not limited in the embodiments of the present application.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: flash memory, removable hard drive, read only memory, random access memory, magnetic disk, optical disc, and the like.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the embodiments of the present application should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (25)

1. An interactive method of assisting makeup, the method comprising:
starting a camera of the camera;
acquiring a depth image and a color image acquired by the camera, wherein the pixel value of each pixel point in the depth image is used for representing the depth value of a corresponding pixel point in the color image;
displaying the color image acquired by the camera;
extracting facial feature information in the color image;
selecting virtual makeup;
combining the depth value and the face characteristic information to fuse the face of the color image and the virtual makeup to obtain a picture with a first fusion effect;
and displaying the picture of the first fusion effect.
2. The method according to claim 1, wherein said merging the face of the color image with the virtual makeup in combination with the depth value and the facial feature information to obtain a first merged-effect picture comprises:
establishing a three-dimensional model of the face according to the depth value;
deforming the virtual makeup according to the three-dimensional model of the face;
and combining the facial feature information to fuse the face of the color image with the deformed virtual makeup to obtain a picture with the first fusion effect.
3. The method according to claim 2, wherein said merging the face of the color image with the virtual makeup after the transformation in combination with the facial feature information to obtain the picture of the first merging effect comprises:
according to the face feature information, carrying out image registration on the face of the color image and the deformed virtual makeup, so as to determine that the deformed virtual makeup corresponds to the pixel value of each pixel point in the color image;
and weighting the pixel value of each pixel in the color image and the corresponding pixel value of the virtual makeup by using a preset weighting algorithm to obtain the pixel value of each pixel in the picture with the first fusion effect.
4. A method according to any one of claims 1-3, wherein said selecting a virtual makeup comprises:
and selecting the virtual makeup according to the facial feature information in the color image.
5. The method according to any one of claims 1 to 4, wherein after displaying the picture of the first fusion effect, the method further comprises:
receiving a determination operation used by a user to determine to select the virtual makeup;
in response to the determination operation, combining the depth value and the face feature information, fusing the face of the color image with a virtual makeup object model to obtain a picture with a second fusion effect, wherein the virtual makeup object model is used for guiding the completion of the makeup process of the virtual makeup;
and displaying the picture of the second fusion effect.
6. The method as set forth in claim 5, wherein, prior to receiving a determination operation by a user for determining selection of the virtual makeup, the method further comprises:
uploading the picture of the first fusion effect to a cloud evaluation system;
acquiring a recommended makeup for the target face part determined by the cloud evaluation system according to the picture of the first fusion effect;
replacing the makeup of the target face part in the virtual makeup with the recommended makeup to obtain a replacement makeup;
combining the depth value and the facial feature information to fuse the face of the color image with the makeup replacement to update the picture of the first fusion effect.
7. The method according to claim 6, wherein after updating the picture of the first fusion effect, the method further comprises:
uploading the updated picture of the first fusion effect to the cloud evaluation system;
obtaining an authorization application sent by the cloud evaluation system, wherein the authorization application is a request initiated by the cloud evaluation system under the condition that the score of an uploaded picture is determined to exceed a preset numerical value, and the authorization application is used for requesting to add the alternative makeup to a recommended makeup template library of the cloud evaluation system.
8. The method according to any one of claims 1-7, wherein selecting the virtual makeup includes:
acquiring and displaying a recommended makeup template library provided by a cloud evaluation system;
receiving a selection operation in the recommended makeup template library to obtain the selected virtual makeup.
9. The method according to any one of claims 1-8, wherein said fusing the face of the color image with a virtual makeup object model comprises:
fusing according to the makeup steps of the makeup process, wherein in each of the makeup steps:
determining a face part corresponding to the current makeup step;
searching the virtual makeup object model corresponding to the face part;
and fusing the color image and the searched virtual makeup object model according to the preset relative position relationship between the face part and the searched virtual makeup object model.
10. The method of claim 9, wherein the virtual makeup object model is a makeup tool, and/or a character image.
11. The method according to claim 9 or 10, wherein after determining the face part corresponding to the current makeup step, the method further comprises:
extracting a part of the virtual makeup corresponding to the face part in the virtual makeup;
combining the depth value and the facial feature information, and fusing the facial part of the color image with the partial virtual makeup to obtain a picture with a third fusion effect;
comparing the color image acquired by the camera with the picture of the third fusion effect;
and outputting prompt information according to the comparison result, wherein the prompt information is used for prompting the effect difference between the actual makeup and the partial virtual makeup of the user.
12. An interactive device for assisting makeup, characterized in that it is adapted to perform the interactive method for assisting makeup according to any one of claims 1 to 11.
13. A terminal, characterized in that the terminal comprises:
a camera;
a touch screen comprising a touch sensor and a display screen;
a communication module;
one or more processors;
one or more memories;
and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprising instructions which, when executed by the terminal, cause the terminal to perform the steps of:
starting a camera of the camera;
acquiring a depth image and a color image acquired by the camera, wherein the pixel value of each pixel point in the depth image is used for representing the depth value of a corresponding pixel point in the color image;
displaying the color image acquired by the camera;
extracting facial feature information in the color image;
selecting virtual makeup;
combining the depth value and the face characteristic information to fuse the face of the color image and the virtual makeup to obtain a picture with a first fusion effect;
and displaying the picture of the first fusion effect.
14. The terminal of claim 13, wherein the instructions, when executed by the terminal, cause the terminal to perform the step of blending the face of the color image with the virtual makeup in combination with the depth value and the facial feature information to obtain a first blended-effect picture comprising:
establishing a three-dimensional model of the face according to the depth value;
deforming the virtual makeup according to the three-dimensional model of the face;
and combining the facial feature information to fuse the face of the color image with the deformed virtual makeup to obtain a picture with the first fusion effect.
15. The terminal according to claim 14, wherein the instructions, when executed by the terminal, cause the terminal to perform the step of blending the face of the color image with the deformed virtual makeup in combination with the facial feature information to obtain the first blended-effect picture comprises:
according to the face feature information, carrying out image registration on the face of the color image and the deformed virtual makeup, so as to determine that the deformed virtual makeup corresponds to the pixel value of each pixel point in the color image;
and weighting the pixel value of each pixel in the color image and the corresponding pixel value of the virtual makeup by using a preset weighting algorithm to obtain the pixel value of each pixel in the picture with the first fusion effect.
16. A terminal as defined in any one of claims 13-15, wherein the instructions, when executed by the terminal, cause the terminal to perform the step of selecting a virtual makeup comprising:
and selecting the virtual makeup according to the facial feature information in the color image.
17. The terminal according to any of claims 13-16, wherein the instructions, when executed by the terminal, cause the terminal to, after performing the step of displaying the first fusion effect picture, further perform the steps of:
receiving a determination operation used by a user to determine to select the virtual makeup;
in response to the determination operation, combining the depth value and the face feature information, fusing the face of the color image with a virtual makeup object model to obtain a picture with a second fusion effect, wherein the virtual makeup object model is used for guiding the completion of the makeup process of the virtual makeup;
and displaying the picture of the second fusion effect.
18. The terminal of claim 17, wherein the instructions, when executed by the terminal, cause the terminal to, prior to performing the step of receiving a determination by a user to determine selection of the virtual makeup, further perform the steps of:
uploading the picture of the first fusion effect to a cloud evaluation system;
acquiring a recommended makeup for the target face part determined by the cloud evaluation system according to the picture of the first fusion effect;
replacing the makeup of the target face part in the virtual makeup with the recommended makeup to obtain a replacement makeup;
combining the depth value and the facial feature information to fuse the face of the color image with the makeup replacement to update the picture of the first fusion effect.
19. The terminal of claim 18, wherein the instructions, when executed by the terminal, cause the terminal to, after the step of updating the picture of the first fusion effect, further perform the steps of:
uploading the updated picture of the first fusion effect to the cloud evaluation system;
obtaining an authorization application sent by the cloud evaluation system, wherein the authorization application is a request initiated by the cloud evaluation system under the condition that the score of an uploaded picture is determined to exceed a preset numerical value, and the authorization application is used for requesting to add the alternative makeup to a recommended makeup template library of the cloud evaluation system.
20. The terminal according to any one of claims 13-19, wherein the instructions, when executed by the terminal, cause the terminal to perform the step of selecting a virtual makeup comprising:
acquiring and displaying a recommended makeup template library provided by a cloud evaluation system;
receiving a selection operation in the recommended makeup template library to obtain the selected virtual makeup.
21. The terminal of any of claims 13-20, wherein the instructions, when executed by the terminal, cause the terminal to perform the step of fusing the face of the color image with a virtual makeup object model comprises:
fusing according to the makeup steps of the makeup process, wherein in each of the makeup steps:
determining a face part corresponding to the current makeup step;
searching the virtual makeup object model corresponding to the face part;
and fusing the color image and the searched virtual makeup object model according to the preset relative position relationship between the face part and the searched virtual makeup object model.
22. The terminal of claim 21, wherein the virtual makeup object model is a makeup tool, and/or a character image.
23. The terminal according to claim 21 or 22, wherein the instructions, when executed by the terminal, cause the terminal to, after performing the step of determining the face part corresponding to the current makeup step, further perform the steps of:
extracting a part of the virtual makeup corresponding to the face part in the virtual makeup;
combining the depth value and the facial feature information, and fusing the facial part of the color image with the partial virtual makeup to obtain a picture with a third fusion effect;
comparing the color image acquired by the camera with the picture of the third fusion effect;
and outputting prompt information according to the comparison result, wherein the prompt information is used for prompting the effect difference between the actual makeup and the partial virtual makeup of the user.
24. A computer storage medium, characterized by comprising computer instructions which, when run on a terminal, cause the terminal to perform the interactive method of assisting makeup according to any one of claims 1-11.
25. A computer program product comprising instructions for causing a terminal to perform the interactive method of assisting makeup according to any one of claims 1 to 11 when the computer program product is run on the terminal.
CN202010304007.8A 2020-04-17 2020-04-17 Interactive method for assisting makeup, terminal and computer storage medium Pending CN111539882A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010304007.8A CN111539882A (en) 2020-04-17 2020-04-17 Interactive method for assisting makeup, terminal and computer storage medium

Publications (1)

Publication Number Publication Date
CN111539882A true CN111539882A (en) 2020-08-14

Family

ID=71975067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010304007.8A Pending CN111539882A (en) 2020-04-17 2020-04-17 Interactive method for assisting makeup, terminal and computer storage medium

Country Status (1)

Country Link
CN (1) CN111539882A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110235169A (en) * 2017-02-01 2019-09-13 株式会社Lg生活健康 Evaluation system of making up and its method of operating
CN106960187A (en) * 2017-03-17 2017-07-18 合肥龙图腾信息技术有限公司 Cosmetic navigation system, apparatus and method
WO2018214115A1 (en) * 2017-05-25 2018-11-29 华为技术有限公司 Face makeup evaluation method and device
CN108765273A (en) * 2018-05-31 2018-11-06 Oppo广东移动通信有限公司 The virtual lift face method and apparatus that face is taken pictures
CN109191569A (en) * 2018-09-29 2019-01-11 深圳阜时科技有限公司 A kind of simulation cosmetic device, simulation cosmetic method and equipment
CN110992493A (en) * 2019-11-21 2020-04-10 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022042195A1 (en) * 2020-08-24 2022-03-03 华为技术有限公司 Skin care aiding method, device, and storage medium
CN113301243A (en) * 2020-09-14 2021-08-24 阿里巴巴集团控股有限公司 Image processing method, interaction method, system, device, equipment and storage medium
CN113301243B (en) * 2020-09-14 2023-08-11 阿里巴巴(北京)软件服务有限公司 Image processing method, interaction method, system, device, equipment and storage medium
CN112258385A (en) * 2020-10-21 2021-01-22 北京达佳互联信息技术有限公司 Multimedia resource generation method, device, terminal and storage medium
WO2022083257A1 (en) * 2020-10-21 2022-04-28 北京达佳互联信息技术有限公司 Multimedia resource generation method and terminal
CN112819718A (en) * 2021-02-01 2021-05-18 深圳市商汤科技有限公司 Image processing method and device, electronic device and storage medium
CN113554622A (en) * 2021-07-23 2021-10-26 江苏医像信息技术有限公司 Intelligent quantitative analysis method and system for face skin makeup

Similar Documents

Publication Publication Date Title
CN111539882A (en) Interactive method for assisting makeup, terminal and computer storage medium
CN110086985B (en) Recording method for delayed photography and electronic equipment
US11889180B2 (en) Photographing method and electronic device
CN111476911A (en) Virtual image implementation method and device, storage medium and terminal equipment
JP2016531362A (en) Skin color adjustment method, skin color adjustment device, program, and recording medium
US20220180485A1 (en) Image Processing Method and Electronic Device
WO2022042776A1 (en) Photographing method and terminal
WO2021223724A1 (en) Information processing method and apparatus, and electronic device
CN114710640A (en) Video call method, device and terminal based on virtual image
CN114007099A (en) Video processing method and device for video processing
WO2022148319A1 (en) Video switching method and apparatus, storage medium, and device
CN113850726A (en) Image transformation method and device
CN113727018B (en) Shooting method and equipment
WO2020056694A1 (en) Augmented reality communication method and electronic devices
CN112269554B (en) Display system and display method
CN109685741B (en) Image processing method and device and computer storage medium
CN113850709A (en) Image transformation method and device
CN115546858B (en) Face image processing method and electronic equipment
CN114120950B (en) Human voice shielding method and electronic equipment
CN113572957B (en) Shooting focusing method and related equipment
CN112348738B (en) Image optimization method, image optimization device, storage medium and electronic equipment
CN115412678A (en) Exposure processing method and device and electronic equipment
CN115249364A (en) Target user determination method, electronic device and computer-readable storage medium
CN114418837B (en) Dressing migration method and electronic equipment
CN116993875B (en) Digital person generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination