WO2023198073A1 - Facial feature detection method, readable medium and electronic device - Google Patents

Facial feature detection method, readable medium and electronic device

Info

Publication number
WO2023198073A1
WO2023198073A1 (PCT/CN2023/087675)
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
user
preview image
key point
Prior art date
Application number
PCT/CN2023/087675
Other languages
English (en)
French (fr)
Inventor
丁欣
胡宏伟
周一丹
赵琳
郜文美
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2023198073A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141Control of illumination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Definitions

  • the present application relates to the field of image processing technology, and in particular to a facial feature detection method, a readable medium and an electronic device.
  • UVA: long-wave ultraviolet; UVB: medium-wave ultraviolet; UVC: short-wave ultraviolet.
  • Sunscreen can absorb ultraviolet rays.
  • When applying sunscreen, users typically apply it by feel.
  • The application status of sunscreen is invisible to the naked eye: for example, whether the application area is completely covered, whether the amount applied is sufficient, whether it is applied evenly, and when it needs to be reapplied.
  • embodiments of the present application provide a facial feature detection method, a readable medium, and an electronic device.
  • the technical solution of this application is applied to electronic equipment.
  • The electronic equipment includes a first camera and a second camera located on different sides of the electronic device, and a first screen located on the same side as the second camera.
  • The technical solution of this application determines the user's facial features in the first preview image collected by the second camera from the facial features in multiple ultraviolet images of the user's face collected by the first camera, adjusts the pixel values of pixels in the user's facial area of the first preview image according to the determined facial features to obtain a target image, and then displays the target image on the first screen.
  • In this way, the electronic device acquires ultraviolet images in real time in response to the detection instruction to detect the user's facial features, determines the user's facial features in the first preview image collected by the second camera from the facial features corresponding to those ultraviolet images, and displays the final target image on a larger screen. The technical solution of this application can therefore not only meet the user's need for real-time facial feature detection, but also display the color detection results at maximum size, helping to improve the user experience.
  • embodiments of the present application provide a method for detecting facial features, which is applied to an electronic device.
  • The electronic device includes a first camera and a second camera located on different sides of the electronic device, and a first screen located on the same side as the second camera.
  • The facial feature detection method includes: detecting a detection instruction to detect the user's facial features; in response to the detection instruction, acquiring multiple ultraviolet images of the user's face collected by the first camera; acquiring a first preview image of the user's face collected by the second camera, where the first preview image is a color image; determining the user's facial features in the first preview image from the user's facial features in each UV image; adjusting the pixel values of pixels in the user's facial area of the first preview image according to the determined facial features to obtain a target image; and displaying the target image on the first screen.
  • In this way, the electronic device acquires ultraviolet images in real time in response to the detection instruction, determines the user's facial features in the first preview image collected by the second camera from the facial features corresponding to those ultraviolet images, and displays the result on the first screen, which has a larger area. The technical solution of this application can therefore not only meet the user's need for real-time facial feature detection, but also display the color detection results at maximum size, which helps improve the user experience.
  • the ultraviolet image is used to detect the application of sunscreen on the user's face.
  • In some embodiments, the first screen is an inner screen with a larger area. Since the target image is an RGB image obtained from the first preview image captured in real time by the front camera, and is ultimately displayed on the inner screen, this not only meets the user's real-time need for sunscreen detection, but also presents the sunscreen detection results in a more attractive form and at maximum size, which is convenient for the user to view and helps improve the user experience.
  • In some embodiments, acquiring multiple ultraviolet images of the user's face collected by the first camera includes: in response to the detection instruction, after determining that multiple ultraviolet images of the user's face need to be re-collected, prompting the user to face the first camera and collecting multiple ultraviolet images of the user's face through the first camera.
  • In some embodiments, obtaining the first preview image of the user's face collected by the second camera includes: when it is determined that re-collection of multiple ultraviolet images of the user's face has been completed, or that re-collection is not needed, prompting the user to face the second camera and collecting a first preview image of the user's face through the second camera.
  • If the electronic device has saved ultraviolet images of the user's face taken from various angles within a preset time period, for example within the last 5 minutes, there is no need to re-collect ultraviolet images of the user's face.
  • If the ultraviolet images of the user's face need to be re-collected, then once ultraviolet images of the user's face from various angles that meet the conditions on position, size, and so on have been re-collected, the user is prompted to face the second camera, and a first preview image of the user's face is collected through the second camera.
  • the above method further includes: determining the sunscreen detection results of the user's face in each UV image, the key point information of each UV image, and the key point information of the first preview image.
  • Since the UV image can be a grayscale image, or a black-and-white image obtained by binarizing the grayscale image, areas of the UV image corresponding to different sunscreen application conditions on the user's face differ markedly in pixel value, so the sunscreen application status of the user's face can be determined quickly from the UV image.
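  • The patent does not give a concrete binarization procedure; the minimal OpenCV sketch below shows how a grayscale UV capture could be binarized so that sunscreen-covered (UV-absorbing, darker) regions separate from uncovered ones. The file names and the choice of Otsu thresholding are illustrative assumptions.

```python
import cv2

# Load the UV capture as a single-channel grayscale image
# (the file name is a placeholder).
uv_gray = cv2.imread("uv_face.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks the global threshold automatically. Pixels darker
# than the threshold (sunscreen absorbed the UV light) map to 0; lighter
# pixels (little or no sunscreen) map to 255.
_, uv_binary = cv2.threshold(uv_gray, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("uv_face_binary.png", uv_binary)
```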
  • In some embodiments, the user's facial features include the sunscreen detection results of the user's face, and determining the user's facial features in the first preview image from the user's facial features in each UV image includes: determining the sunscreen detection result of the face in the first preview image from the determined sunscreen detection results of the user's face in each UV image, together with the key point information of each UV image and the key point information of the first preview image.
  • Since the UV image can be a grayscale image or a binarized black-and-white image, and areas corresponding to different sunscreen application conditions on the user's face differ markedly in pixel value, the sunscreen application status of the user's face can be determined quickly from the UV image. The sunscreen detection result of the face in the first preview image can therefore be determined quickly from the key point information of each UV image and the key point information of the first preview image, which improves sunscreen detection efficiency with a simple detection method.
  • In some embodiments, determining the sunscreen detection result of the face in the first preview image includes: determining, from the key point information of each UV image and the key point information of the first preview image, the target UV image among the multiple UV images of the user's face that matches the first preview image; determining the transformation relationship between the target UV image and the first preview image from the key point information of the target UV image and the key point information of the first preview image; and determining the sunscreen detection result of the user's face in the first preview image from the transformation relationship and the sunscreen detection result corresponding to the target UV image.
  • Since the key point information of the target ultraviolet image and of the first preview image can be obtained simply and quickly, it can be used to accurately determine the transformation relationship between the target ultraviolet image and the first preview image, so that the sunscreen detection result of the user's face in the first preview image can be determined accurately and quickly from that transformation relationship and the sunscreen detection result corresponding to the target UV image. The detection process is fast and the detection results are accurate.
  • In some embodiments, determining the target UV image that matches the first preview image among the multiple UV images of the user's face includes: calculating the face position, face size, and face angle in each UV image from the key point information of that UV image, and calculating the face position, face size, and face angle in the first preview image from its key point information; then selecting, from the multiple UV images of the user's face, the UV image whose face position, face size, and face angle are closest to those of the first preview image as the target UV image.
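  • As one illustration of the selection described above, the sketch below derives a face position (key point centroid), size (bounding-box diagonal), and roll angle from 2D key points, then picks the UV image whose pose is closest to the preview's. The landmark indices, cost weights, and function names are assumptions, not taken from the patent.

```python
import numpy as np

def face_stats(keypoints: np.ndarray):
    """Summarize a face from its 2D key points (shape [N, 2]):
    position = centroid, size = bounding-box diagonal, angle = roll
    estimated from the eye-corner line (indices assume a 68-point scheme)."""
    center = keypoints.mean(axis=0)
    size = np.linalg.norm(keypoints.max(axis=0) - keypoints.min(axis=0))
    left_eye, right_eye = keypoints[36], keypoints[45]  # assumed indices
    dx, dy = right_eye - left_eye
    angle = np.degrees(np.arctan2(dy, dx))
    return center, size, angle

def pick_target_uv(uv_keypoints_list, preview_keypoints):
    """Return the index of the UV image whose face pose is closest to
    the face in the first preview image."""
    pc, ps, pa = face_stats(preview_keypoints)
    best_idx, best_cost = None, float("inf")
    for i, kps in enumerate(uv_keypoints_list):
        c, s, a = face_stats(kps)
        # Weighted distance over position, size, and angle differences;
        # the weights are arbitrary tuning constants.
        cost = np.linalg.norm(c - pc) + abs(s - ps) + 2.0 * abs(a - pa)
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx
```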
  • In some embodiments, the electronic device further includes a second screen located on the same side as the first camera. In some embodiments, the method further includes: calculating the face position, face size, and face angle in each UV image from the key point information of that UV image; generating first prompt information based on the calculated face position, face size, and face angle in each UV image; and displaying the first prompt information on the second screen to prompt the user to adjust the facial position and/or facial posture.
  • In some embodiments, the electronic device further includes a third camera located on the same side as the first camera, and the method further includes: in response to the detection instruction, obtaining multiple second preview images of the user's face collected by the third camera; determining the key point information of each second preview image; calculating the face position, face size, and face angle in each second preview image from its key point information; generating second prompt information based on the calculated face position, face size, and face angle in each second preview image; and displaying the second prompt information on the second screen to prompt the user to adjust the facial position and/or facial posture.
  • In some embodiments, determining the transformation relationship between the target UV image and the first preview image from the key point information of the target UV image and the key point information of the first preview image includes: triangulating the key points of the target ultraviolet image according to its key point information, and triangulating the key points of the first preview image according to its key point information; performing an affine transformation between each face triangle obtained by triangulating the target ultraviolet image and the corresponding face triangle in the first preview image, and calculating the transformation relationship between each face triangle in the target UV image and the corresponding face triangle in the first preview image; and determining the set of these per-triangle transformation relationships as the transformation relationship between the target UV image and the first preview image.
  • In some embodiments, determining the sunscreen detection result of the user's face in the first preview image from the transformation relationship and the sunscreen detection result corresponding to the target UV image includes: transforming the sunscreen detection result corresponding to the target UV image according to the transformation relationship between the target UV image and the first preview image to obtain a transformed sunscreen detection result, and determining the transformed result as the sunscreen detection result of the user's face in the first preview image.
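  • A sketch of the per-triangle warp described in the two items above, assuming OpenCV and SciPy: the key points are triangulated once, an affine transform is computed for each triangle pair, and the UV image's detection mask is warped triangle by triangle into the preview's geometry. Warping the full mask per triangle is wasteful but keeps the sketch short; a production version would warp only the triangle's bounding box.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def warp_mask_by_triangles(mask, src_pts, dst_pts, dst_shape):
    """Warp the sunscreen mask of the target UV image into the geometry
    of the first preview image, one face triangle at a time.

    mask:      detection mask aligned with the UV image (H x W, uint8)
    src_pts:   [N, 2] key points in the UV image
    dst_pts:   [N, 2] matching key points in the preview image
    dst_shape: (height, width) of the preview image
    """
    out = np.zeros(dst_shape, dtype=mask.dtype)
    # Triangulate once on the destination layout so both point sets
    # share the same triangle topology through the key point indices.
    tris = Delaunay(dst_pts).simplices
    for tri in tris:
        src_tri = np.float32(src_pts[tri])
        dst_tri = np.float32(dst_pts[tri])
        # Affine transform mapping this UV triangle onto the preview triangle.
        m = cv2.getAffineTransform(src_tri, dst_tri)
        warped = cv2.warpAffine(mask, m, (dst_shape[1], dst_shape[0]))
        # Keep only the pixels that fall inside the destination triangle.
        tri_mask = np.zeros(dst_shape, dtype=np.uint8)
        cv2.fillConvexPoly(tri_mask, np.int32(dst_tri), 1)
        out[tri_mask == 1] = warped[tri_mask == 1]
    return out
```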
  • In some embodiments, determining the sunscreen detection result of the face in the first preview image includes: performing 3D face reconstruction from the key point information of each UV image to obtain an uncolored three-dimensional face and the key point information of the three-dimensional face; coloring the uncolored three-dimensional face according to the user's sunscreen detection results in each UV image to obtain a colored three-dimensional face; and determining the sunscreen detection result of the user's face in the first preview image from the colored three-dimensional face, the key point information of the three-dimensional face, and the key point information of the first preview image.
  • In some embodiments, determining the sunscreen detection result of the user's face in the first preview image from the colored three-dimensional face, the key point information of the three-dimensional face, and the key point information of the first preview image includes: rotating the colored three-dimensional face in three-dimensional space to obtain three-dimensional faces at multiple angles; projecting the multi-angle three-dimensional faces into two dimensions to obtain projected faces at multiple angles, and determining the key point information of the face corresponding to the projected face at each angle from the key point information of the three-dimensional face; then, from the key point information of the projected face at each angle and the key point information of the first preview image, selecting the projected face that matches the first preview image from the multi-angle projected faces, and determining the matching projected face as the sunscreen detection result of the user's face in the first preview image.
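  • The rotation-and-projection step might look like the sketch below, which applies a yaw/pitch/roll rotation to the reconstructed vertices and drops the depth axis for an orthographic projection. The angle grid and the stand-in vertex array are illustrative assumptions; the patent does not specify the projection model.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Extrinsic rotation composed from yaw (about Y), pitch (about X),
    and roll (about Z), all given in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return rz @ rx @ ry

def project_face(vertices, yaw, pitch, roll=0.0):
    """Rotate 3D face vertices ([N, 3]) and drop the depth axis for an
    orthographic 2D projection ([N, 2])."""
    rotated = vertices @ rotation_matrix(yaw, pitch, roll).T
    return rotated[:, :2]

# Stand-in for the vertices produced by the 3D reconstruction step.
face_vertices = np.random.rand(1000, 3)

# Sweep a grid of candidate yaw/pitch angles (the grid is illustrative)
# to build the multi-angle projected faces.
projections = [project_face(face_vertices, np.radians(y), np.radians(p))
               for y in range(-30, 31, 10) for p in range(-20, 21, 10)]
```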
  • In some embodiments, selecting the projected face that matches the first preview image from the multi-angle projected faces includes: separately calculating the position difference between each face key point in the projected face at each angle and the corresponding face key point in the first preview image; computing a weighted sum of all position differences for the projection at each angle; and determining the projected face whose weighted sum satisfies the conditions as the projected face matching the first preview image.
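  • A minimal sketch of the weighted-sum matching just described, assuming per-key-point weights (uniform by default) and taking "satisfies the conditions" to mean the smallest weighted sum; both assumptions are illustrative.

```python
import numpy as np

def match_projection(projected_kps_list, preview_kps, weights=None):
    """Pick the projected face whose key points best align with the
    first preview image's key points.

    projected_kps_list: list of [N, 2] arrays, one per projection angle
    preview_kps:        [N, 2] key points of the first preview image
    weights:            per-key-point weights (uniform if omitted)
    """
    n = preview_kps.shape[0]
    w = np.ones(n) if weights is None else np.asarray(weights)
    best_idx, best_score = None, float("inf")
    for i, kps in enumerate(projected_kps_list):
        # Per-key-point position differences, then their weighted sum.
        dists = np.linalg.norm(kps - preview_kps, axis=1)
        score = float(np.dot(w, dists))
        if score < best_score:
            best_idx, best_score = i, score
    return best_idx, best_score
```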
  • In some embodiments, the above first aspect further includes: detecting an instruction to compare the sunscreen application status of the user's face with historical sunscreen detection records; in response to the instruction, determining the historical UV images of the user's face corresponding to the historical sunscreen detection records, as well as the key point information of each historical UV image and the sunscreen detection results of the user's face in each historical UV image; determining the historical sunscreen detection result of the user's face in the first preview image from the key point information of each historical UV image, the sunscreen detection results of the user's face in each historical UV image, and the key point information of the first preview image; and adjusting the pixel values of pixels in the user's facial area of the first preview image according to the determined historical result to obtain an image to be superimposed, in which pixels in sub-regions with different degrees of sunscreen application have different pixel values. The user can then compare the historical and current results, and thus identify the areas that the user is more likely to miss or under-apply.
  • In some embodiments, determining the sunscreen detection results of the user's face in each UV image, the key point information of each UV image, and the key point information of the first preview image includes: performing key point detection on each UV image and on the first preview image using a deep learning method or a cascaded shape regression method to obtain their key point information; and detecting the sunscreen application status of the user's face in each UV image using a preset sunscreen detection model to obtain the sunscreen detection results of the user's face in each UV image.
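  • The patent names only generic detector families (deep learning or cascaded shape regression). As one concrete stand-in, the sketch below uses MediaPipe Face Mesh, a deep-learning landmark detector, to produce pixel-coordinate key points; any comparable detector would fit this step.

```python
import cv2
import mediapipe as mp

def detect_keypoints(image_bgr):
    """Detect face key points with MediaPipe Face Mesh and return them
    as (x, y) pixel coordinates, or None if no face is found."""
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         max_num_faces=1) as face_mesh:
        result = face_mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None
    h, w = image_bgr.shape[:2]
    # Landmarks come back normalized to [0, 1]; scale to pixels.
    return [(lm.x * w, lm.y * h)
            for lm in result.multi_face_landmarks[0].landmark]
```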
  • the difference between the acquisition time of each ultraviolet image and the acquisition time of the first preview image is less than the time threshold.
  • In some embodiments, the electronic device is a folding-screen device; the first screen is its internal screen and the second screen is its external screen.
  • In some embodiments, the area of the first screen is larger than the area of the second screen; displaying the target image on the larger first screen helps improve the user experience.
  • the first camera is an ultraviolet camera
  • the second camera is a front camera
  • the third camera is a rear camera.
  • In some embodiments, in the user's facial area of the target image, sub-regions with different degrees of sunscreen application are displayed in different colors.
  • the user can intuitively view the application of sunscreen in different areas of the user's face, which helps to improve the user experience.
  • Embodiments of the present application provide a computer-readable storage medium storing instructions that, when executed on an electronic device, cause the electronic device to perform the facial feature detection method of the above first aspect and any possible implementation of the first aspect.
  • Embodiments of the present application provide a computer program product comprising instructions that, when executed by one or more processors, implement the facial feature detection method of the above first aspect and any possible implementation of the first aspect.
  • Embodiments of the present application provide an electronic device including one or more processors and a memory storing instructions; when the instructions are executed by the one or more processors, the processors perform the facial feature detection method of the above first aspect and any possible implementation of the first aspect.
  • Figure 1 is an application scenario in which a user uses a mobile phone to check the application status of facial sunscreen provided by an embodiment of the present application;
  • Figure 2A is a schematic structural diagram of the front and back of a mobile phone provided by an embodiment of the present application.
  • Figures 2B to 2K are schematic diagrams of some UI interfaces provided by embodiments of the present application.
  • Figure 3 is a schematic diagram of the hardware structure of a mobile phone provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of the UI process and algorithm flow of the pre-shooting stage in a sunscreen detection method provided by the embodiment of the present application;
  • Figure 5A is a schematic diagram of a human face image that has been marked with key points provided by an embodiment of the present application
  • Figure 5B is a schematic diagram of a process for inputting UV images into a trained sunscreen detection model to detect sunscreen application according to an embodiment of the present application
  • Figure 6 is a schematic diagram of the UI process and algorithm flow of the real-time viewing stage in a sunscreen detection method provided by an embodiment of the present application;
  • Figure 7 is a schematic diagram of the UI process and algorithm flow of the pre-shooting stage in another sunscreen detection method provided by the embodiment of the present application;
  • Figure 8A is a schematic diagram of a three-dimensional human face provided by an embodiment of the present application.
  • Figure 8B is a schematic diagram of a mask image provided by an embodiment of the present application.
  • FIG. 8C is a schematic diagram of a three-dimensional human face obtained by patch coloring the three-dimensional human face shown in FIG. 8A according to the mask image shown in FIG. 8B according to an embodiment of the present application;
  • Figure 8D is a schematic diagram of a three-dimensional human face shown in Figure 8A after using mask images from different angles to color the three-dimensional human face shown in Figure 8A according to an embodiment of the present application;
  • Figure 9 is a schematic diagram of the UI process and algorithm flow of the real-time viewing stage in another sunscreen detection method provided by the embodiment of the present application.
  • Figure 10A is a schematic diagram of a rotated three-dimensional human face provided by an embodiment of the present application.
  • Figure 10B is a projection image obtained by two-dimensional projection of the three-dimensional human face shown in Figure 10A;
  • Figure 11 is a schematic flow chart for comparing historical sunscreen application status and current application status provided by an embodiment of the present application
  • Figures 12A to 12D are schematic diagrams of some UI interfaces provided by embodiments of the present application.
  • Illustrative embodiments of the present application include, but are not limited to, a facial feature detection method, readable media, and electronic devices.
  • The facial feature detection method provided by this application can be applied to application scenarios in which the terminal device's image capture and viewing are not synchronized but real-time display of the results is required, such as detecting the application of sunscreen on the user's face or detecting the user's facial skin condition (such as wrinkles, spots, and pores), while allowing the user to view the detection results in real time. For ease of explanation, the detection of sunscreen application on the user's face is used below as an example to introduce the technical solution of the present application in detail.
  • smart terminal devices have the advantages of small size, light weight, portability, etc.
  • smart terminal devices are endowed with more and more functions.
  • Smart terminals can be used to detect the application of chemicals such as sunscreen on the user's skin and then visually present the detection results to the user, which helps the user understand the area and amount of sunscreen applied and confirm whether re-application is required.
  • The technical solution of this application can be applied to any smart terminal device with a display screen and a camera, including but not limited to mobile phones, tablet computers, laptop computers, wearable devices, head-mounted displays, portable game consoles, portable music players, reader devices, etc.
  • The following description takes a mobile phone 100 as an example.
  • FIG 1 shows an application scenario in which a user uses a mobile phone 100 to check the application status of facial sunscreen.
  • the mobile phone 100 is externally connected to an ultraviolet (UV) accessory 200.
  • the mobile phone 100 is connected to the UV accessory through a USB interface.
  • the UV accessory 200 integrates an ultraviolet fill light and an ultraviolet camera.
  • the mobile phone 100 can realize the sunscreen detection function by running the executable program of the sunscreen detection application.
  • The mobile phone 100 collects the face preview image P0 through the front camera 101 and displays it in the form of an RGB image (a color image with three color channels of red, green, and blue), so that the user can view the preview image in real time to adjust the facial posture.
  • The mobile phone 100 collects the user's facial image through the external UV accessory 200, and the facial image (a grayscale image) collected through the accessory is presented directly to the user as the detection image. Skin areas with sunscreen applied, such as area Z1 in the detection image P1 displayed by the mobile phone 100 in Figure 1, absorb ultraviolet rays and appear darker, for example black, while skin areas with no or insufficient sunscreen fail to absorb UV rays and appear lighter in the image, for example white or gray. The user understands the sunscreen application status on the face by viewing the detection image P1.
  • However, since the mobile phone 100 directly presents the grayscale image collected through the external UV accessory 200 to the user as the detection image, the user has to understand the sunscreen application on the face by viewing a grayscale image.
  • In the grayscale image produced by this presentation method, parts of the facial skin appear dark, which gives the user an unsightly visual experience and leads to a poor user experience.
  • users need to carry UV accessories with them, which brings inconvenience to users.
  • To address this, a UV accessory including a UV fill light and a UV camera can be integrated into the mobile phone 100, and the sunscreen detection results can be displayed in real time as a color image. When the user uses the mobile phone 100 to detect sunscreen, there is then no need to connect an external UV accessory, and the user can view the colored sunscreen detection results in real time.
  • the above-mentioned UV accessories can be arranged on the back of the mobile phone 100 (that is, the opposite side of the main screen of the mobile phone 100).
  • In addition, a smaller screen (hereinafter referred to as the external screen) can be set on the back of the mobile phone 100, through which the user can preview images and view detection results.
  • The front of the mobile phone 100 is provided with a large screen (the inner screen) 103 and the front camera 101; the back of the mobile phone 100 is provided with an ultraviolet fill light 104, an ultraviolet camera 102, a rear camera 105, and an external screen 106.
  • The user first clicks the sunscreen detection control 1001 shown in Figure 2B to make the mobile phone 100 open the sunscreen detection page shown in Figure 2C, which includes a "photograph yourself" control 1002 and a "photograph others" control 1003.
  • When the mobile phone 100 determines that the user has adjusted the face to the center of the preview screen, the mobile phone 100 collects the RGB rear preview image at the center of the preview screen through the rear camera 105 and collects a grayscale image of the user's face (hereinafter referred to as a UV image) through the ultraviolet camera 102.
  • The UV image is then processed by the configured sunscreen detection algorithm to obtain the sunscreen application detection results for the user's face, such as areas where sunscreen is not applied and areas where it is insufficiently applied.
  • The detection results are converted into markers of different colors, and the markers are added to the RGB rear preview image and presented to the user as screen P3 shown in Figure 2F. For example, the insufficiently applied area Z2 is displayed with color 1 of the red-green-blue (RGB) hue circle, and the unapplied area Z3 is displayed with color 2 of the RGB hue circle.
  • In this solution, the mobile phone 100 can only display the color detection results in real time through the external screen 106. Due to the small size of the external screen 106, the differently colored markers in the displayed result image may not be clearly visible, resulting in a poor user experience.
  • the sunscreen detection results are displayed in the form of RGB images on the internal screen of the mobile phone 100, thereby maximizing the display of the detection results and making it convenient for the user to view.
  • Specifically, the mobile phone 100 first takes UV images of the user's face from various angles through the rear ultraviolet camera 102 to obtain a UV image sequence, and performs key point detection on each UV image to obtain the key point information of each UV image. The key points of the UV image are determined using key point detection methods such as deep learning methods and cascaded shape regression methods, and are used to characterize key pixels of key facial features such as the user's facial organs (including eyebrows, eyes, nose, and mouth) and facial contour.
  • Then, each UV image in the UV image sequence is input, for example, into a preset sunscreen detection model to obtain the mask image (that is, the sunscreen detection result) corresponding to each UV image. The mask image corresponding to each UV image is a grayscale image or a black-and-white image, and the pixels in each mask image correspond one-to-one with the pixels in the corresponding UV image. The mask image shows color blocks corresponding to areas of the corresponding UV image with different sunscreen application conditions; each color block can be black, white, or gray. For example, area Z0 in the mask image corresponding to a certain UV image absorbs ultraviolet rays and appears darker, such as black, while skin areas where no sunscreen or less sunscreen is applied fail to absorb ultraviolet rays and appear lighter in the image, such as white or gray.
  • Then, a front preview image of the user's face is captured through the front camera 101; the front preview image is an RGB image of the user's face that the user can view in real time through the internal screen (the larger screen) of the mobile phone 100.
  • The sunscreen application status of the user's face at the moment the front camera captures the front preview image is then determined, and the pixel values of multiple pixels in the facial area of the front preview image are adjusted accordingly to obtain the target image. For example, if sunscreen is insufficiently applied on the user's forehead, the pixel values of the forehead area in the front preview image are adjusted so that the forehead area is displayed in purple.
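  • A minimal sketch of the pixel-value adjustment just described, assuming the under-applied region has already been converted into a boolean mask in preview coordinates; the blend factor and the purple BGR value are illustrative choices, not taken from the patent.

```python
import cv2
import numpy as np

def tint_region(preview_bgr, region_mask, color_bgr=(255, 0, 255), alpha=0.5):
    """Blend a highlight color into one facial sub-region of the preview.

    preview_bgr: front preview frame (H x W x 3, uint8)
    region_mask: boolean H x W mask of, e.g., the under-applied forehead area
    color_bgr:   highlight color; (255, 0, 255) is purple/magenta in BGR
    """
    overlay = np.zeros_like(preview_bgr)
    overlay[:] = color_bgr
    # Semi-transparent blend of the frame with the flat-color overlay.
    blended = cv2.addWeighted(preview_bgr, 1.0 - alpha, overlay, alpha, 0)
    out = preview_bgr.copy()
    out[region_mask] = blended[region_mask]
    return out
```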
  • the target image is displayed on the inner screen (a larger screen).
  • Since the target image is an RGB image obtained from the front preview image captured in real time, and is ultimately displayed on the internal screen, this not only meets the user's real-time need for sunscreen detection, but also presents the sunscreen detection results in a more attractive form and at maximum size, making it easier for the user to view and helping to improve the user experience.
  • the mobile phone 100 divides the entire sunscreen detection process into two stages: a pre-shooting stage and a real-time viewing stage.
  • the pre-shooting stage the mobile phone 100 stores the UV image sequence obtained above, the mask image corresponding to each UV image, and the key point information of each UV image.
  • In the real-time viewing stage, the mobile phone 100 performs key point detection on the front preview image collected by the front camera 101 to obtain its key point information; then, based on the key point information of the front preview image and of each UV image, it determines the UV image that matches the front preview image from the UV image sequence, determines the transformation relationship T between this matching UV image and the front preview image, and transforms the mask image corresponding to the matching UV image according to T to obtain the mask image to be fused with the front preview image, so that the mask image to be fused and the front preview image are fused and displayed on the inner screen 103.
  • In other embodiments, in the pre-shooting stage, the mobile phone 100 performs three-dimensional face reconstruction from the UV image sequence obtained above and the key point information of each UV image to obtain a reconstructed three-dimensional face, colors the three-dimensional face using the mask images corresponding to the UV images to obtain a colored three-dimensional face, and stores the relevant data of the three-dimensional face.
  • In the real-time viewing stage, the mobile phone 100 performs key point detection on the front preview image collected by the front camera 101 to obtain its key point information, rotates the colored three-dimensional face to different angles and projects it into two dimensions to obtain projection images from different angles, and then, based on the key point information of the front preview image and of each projection image, determines the projection image that matches the front preview image and uses it as the mask image to be fused with the front preview image, so that the mask image to be fused and the front preview image are fused and displayed on the inner screen 103.
  • The user clicks the sunscreen detection control 1001 shown in Figure 2B (this control can be a control in a sunscreen detection application, or a control in another application integrated with sunscreen detection functions), causing the mobile phone 100 to open the sunscreen detection page shown in Figure 2C. The user then clicks the "photograph yourself" control 1002 to enter the interface shown in Figure 2D, and a reminder message 1004 of "Please flip to the external screen" pops up, prompting the user to turn over the mobile phone 100.
  • the user flips the mobile phone 100 so that the external screen 106 of the mobile phone 100 faces the user's face.
  • The rear camera 105 then collects a rear preview image of the face, and the external screen 106 displays, in the form of an RGB color image, the rear preview screen P2 shown in Figure 2E, along with related prompt information for prompting the user to adjust the facial posture.
  • the mobile phone 100 collects UV images of the user's face from various angles through the ultraviolet camera 102 .
  • After the mobile phone 100 has collected the user's facial UV images (the UV image sequence) from various angles and calculated the mask image and key point information corresponding to each UV image, it enters the interface shown in Figure 2G and pops up the "Please turn to the inner screen" reminder message 1005, or prompts the user by voice to flip the phone 100 again so that the user's face faces the front camera 101 of the phone 100, and enters the real-time viewing stage.
  • The front camera 101 of the mobile phone 100 then collects the front preview image of the face, and the inner screen 103 displays, in the form of an RGB color image, the front preview screen P3 shown in Figure 2H, along with related prompt information for prompting the user to adjust the facial posture.
  • the mobile phone 100 determines that the user's face is in a suitable position, for example, when it is determined that the user has adjusted the face to the center of the front preview screen, the mobile phone 100 collects and displays the front preview image P4 as shown in FIG. 2I.
  • Then the mask image to be fused corresponding to the front preview image is determined, and the areas Z4 and Z5 that need sunscreen reapplied are displayed in real time on the front preview image P4 shown in Figure 2I in filled-color form and presented to the user.
  • the hardware structure of a mobile phone 100 provided by an embodiment of the present application will be introduced in detail below with reference to FIG. 3 .
  • the mobile phone 100 implements the sunscreen detection method provided by the embodiment of the present application by executing the executable program.
  • the mobile phone 100 may include a processor 110, a power module 140, a memory 180, a mobile communication module 150, a wireless communication module 160, an interface module 190, an audio module 170, an internal screen 103, an external screen 106, Front camera 101, rear camera 105, UV camera 102, UV fill light 104, etc.
  • the processor 110 may include one or more processing units, for example, may include a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU), a neural network processor (Neural-network Processing Unit, NPU) ), digital signal processor (Digital Signal Processor, DSP), microprocessor (Micro-programmed Control Unit, MCU), artificial intelligence (Artificial Intelligence, AI) processor or programmable logic device (Field Programmable Gate Array, FPGA) processing modules or processing circuits.
  • different processing units can be independent devices or integrated in one or more processors.
  • the processor 110 can be used to perform key point detection and sunscreen detection on the UV image collected by the ultraviolet camera 102 .
  • The processor 110 can be used to perform face detection and key point detection on the front preview image collected by the front camera 101. In some embodiments, the processor 110 can be used to perform face alignment between the UV image and the front preview image, determine the transformation relationship between them, and map the key points corresponding to the sunscreen detection results of the UV image to the corresponding key points of the front preview image.
  • Memory 180 can be used to store data, software programs and modules, and can be volatile memory (Volatile Memory), such as random access memory (Random-Access Memory, RAM); or non-volatile memory (Non-Volatile Memory), For example, read-only memory (Read-Only Memory, ROM), flash memory (Flash Memory), hard disk (Hard Disk Drive, HDD) or solid-state drive (Solid-State Drive, SSD); or a combination of the above types of memory, or It can also be a removable storage medium, such as a Secure Digital (SD) memory card.
  • the memory 180 may include a program storage area (not shown) and a data storage area (not shown).
  • Program code may be stored in the program storage area, and the program code is used to cause the processor 110 to execute the sunscreen detection method provided by the embodiment of the present application by executing the program code.
  • the data storage area can be used to store the facial key points detected by the processor 110 on the UV image, the sunscreen detection results of the UV image, and the facial key points corresponding to the sunscreen detection results of the UV image.
  • The data storage area can also be used to store facial key points detected by the processor 110 on the front preview image.
  • The data storage area can also be used to store the mapping relationship between the UV image and the front preview image determined by the processor 110.
  • Power module 140 may include a power supply, power management components, and the like.
  • the power source can be a battery.
  • the power management component is used to manage the charging of the power supply and the power supply from the power supply to other modules.
  • the charging management module is used to receive charging input from the charger; the power management module is used to connect the power supply, the charging management module and the processor 110 .
  • the mobile communication module 150 may include, but is not limited to, an antenna, a power amplifier, a filter, a low noise amplifier (LNA), etc.
  • The mobile communication module 150 can provide wireless communication solutions applied on the mobile phone 100, including 2G/3G/4G/5G, etc.
  • the mobile communication module 150 can receive electromagnetic waves through an antenna, perform filtering, amplification, and other processing on the received electromagnetic waves, and transmit them to a modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be disposed in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the wireless communication module 160 may include an antenna, and implements the transmission and reception of electromagnetic waves via the antenna.
  • The wireless communication module 160 can provide wireless communication solutions applied on the mobile phone 100, including Wireless Local Area Network (WLAN) (such as Wireless Fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), etc.
  • the mobile communication module 150 and the wireless communication module 160 of the mobile phone 100 may also be located in the same module.
  • the front camera 101 and the rear camera 105 are used to capture front preview images and rear preview images respectively.
  • the ultraviolet camera 102 is used to collect UV images.
  • the UV fill light 104 is used to provide UV fill light when the UV camera 102 collects UV images.
  • the internal screen 103 is usually the large screen of the mobile phone 100 . It can be used to display the front preview image collected by the front camera 101 in real time, and can also be used to display the front preview image with the sunscreen detection result added.
  • the external screen 106 is usually a smaller screen in the mobile phone 100 and is usually not on the same side of the mobile phone 100 as the internal screen 103 .
  • the external screen 106 can be used to display the rear preview image collected by the rear camera 105 in real time.
  • the audio module 170 may convert digital audio information into an analog audio signal output, or convert an analog audio input into a digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 . In some embodiments, audio module 170 may include speaker 170A, receiver 170B, microphone 170C, and headphone interface 170D. For example, in some embodiments of the present application, the mobile phone 100 can voice prompt the user to adjust the facial posture and facial position through the speaker 170A, so as to be able to collect the user's facial images from various angles and capture all the user's facial features.
  • the interface module 190 includes an external memory interface, a universal serial bus (Universal Serial Bus, USB) interface, a subscriber identification module (Subscriber Identification Module, SIM) card interface, etc.
  • the external memory interface can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone 100.
  • the external memory card communicates with the processor 110 through the external memory interface to implement data storage functions.
  • the universal serial bus interface is used by the mobile phone 100 to communicate with other mobile phones.
  • the user identity module card interface is used to communicate with the SIM card installed in the mobile phone 100, such as reading the phone number stored in the SIM card, or writing the phone number into the SIM card.
  • the mobile phone 100 also includes buttons, motors, indicators, etc.
  • the keys may include volume keys, on/off keys, etc.
  • the motor is used to make the mobile phone 100 produce a vibration effect.
  • Indicators may include laser pointers, radio frequency indicators, LED indicators, etc.
  • the hardware structure shown in FIG. 3 does not constitute a specific limitation on the mobile phone 100 .
  • the mobile phone 100 may include more or fewer components than shown in FIG. 3 , or combine some components, or split some components, or arrange different components.
  • the embodiment of the present application divides the entire sunscreen detection into a pre-shooting stage and a real-time viewing stage.
  • the specific processes of the pre-shooting stage and real-time viewing stage will be introduced in detail below.
  • the processing process of the pre-shooting stage as shown in Figure 4 includes the UI process of the pre-shooting stage and the algorithm process of the pre-shooting stage.
  • the execution subject of each step in the flow chart shown in Figure 4 can be the mobile phone 100 shown in Figure 3 .
  • the UI process of the pre-shooting stage shown in Figure 4 is introduced. Specifically, with reference to Figure 4, the UI process of the pre-shooting stage in a sunscreen detection method provided by an embodiment of the present application includes the following steps:
  • S401 Start pre-shooting and prompt the user to turn over the phone 100.
  • Starting the pre-shooting here refers to starting the pre-shooting function of the mobile phone 100 .
  • For example, the mobile phone 100 displays the sunscreen detection control 1001 shown in Figure 2B on the internal screen 103; the user clicks the control 1001 to make the mobile phone 100 open the sunscreen detection page shown in Figure 2C.
  • The mobile phone 100 can also prompt the user to turn over the mobile phone 100 by voice; this application does not limit how the mobile phone 100 presents the prompt information to the user. The purpose is that the rear camera 105, ultraviolet camera 102, ultraviolet fill light 104, and external screen 106 of the mobile phone 100 face the user's face, so that the rear camera 105 and/or the ultraviolet camera 102 can photograph the user's sunscreen-coated face, and the user can see the content displayed on the external screen 106.
  • The mobile phone 100 detects that the user clicks the "photograph yourself" control 1002 to start the pre-shooting function, and after detecting that the user has flipped the mobile phone 100, turns on the UV module (not shown) composed of the ultraviolet camera 102 and the ultraviolet fill light 104. In some embodiments, the mobile phone 100 can also turn on the UV module after prompting the user to turn the mobile phone 100 to the side of the external screen 106 and detecting, through a sensor, that the mobile phone 100 has been turned over.
  • In other embodiments, the mobile phone 100 can also display the above-mentioned sunscreen detection control 1001 on the external screen 106; the user clicks the control to make the mobile phone 100 open the sunscreen detection page, then clicks the "photograph yourself" control 1002, and the mobile phone 100 turns on the UV module and starts to collect UV images.
  • After the mobile phone 100 turns on the UV module, it continuously collects multiple frames of UV images of the user's face through the ultraviolet camera 102 to obtain a UV image sequence, and starts the pre-shooting algorithm flow to process the collected UV image sequence and generate prompt information that prompts the user to adjust the facial posture. After determining that the user has adjusted the facial posture, the ultraviolet camera 102 collects the user's facial UV images from various angles, and sunscreen detection is performed on these images to obtain the sunscreen detection results and the corresponding face key points.
  • the mobile phone 100 can also turn on the rear camera 105 while turning on the UV module (not shown) composed of the ultraviolet camera 102 and the ultraviolet fill light 104.
  • the rear preview image of the face is collected by the rear camera 105, and the rear preview image P2 shown in Figure 2E is displayed on the external screen 106 in the form of an RGB color image, as well as the processing results of the rear preview image based on the pre-shooting algorithm.
  • Since the rear preview image collected by the rear camera 105 is a color image that includes richer features than the UV image, the prompt information generated from it is more accurate, allowing the user to adjust posture quickly and improving sunscreen detection efficiency.
  • S403 According to the face detection output of the algorithm, prompt the user to move the mobile phone 100 or move the face so that the face is within the set position range.
  • the mobile phone 100 continuously collects multiple frames of UV images of the user's face through the ultraviolet camera 102 to obtain a UV image sequence, processes the collected UV image sequence according to the pre-shooting algorithm, and, through the text prompt shown in Figure 2H, reminds the user to move the mobile phone 100 or move the face so that the face is within a suitable position range, for example, at the center of the field of view of the ultraviolet camera 102.
  • the mobile phone 100 can also prompt the user to move the mobile phone 100 or move the face through voice.
  • the mobile phone 100 collects a rear preview image sequence of the face through the rear camera 105, processes the collected rear preview image sequence according to the pre-shooting algorithm, and, according to the processing results, prompts the user through the UI or by voice to move the mobile phone 100 or move the face so that the face is within a suitable position range, for example, at the center of the field of view of the rear camera 105.
  • S404 Based on the face detection output of the algorithm, the user is prompted to turn the head left, turn the head right, raise the head, or lower the head, so as to complete full-angle pre-shooting, whereby the mobile phone 100 can finish shooting UV images of the user's face from various angles.
  • the mobile phone 100 determines that the user has adjusted the face to a suitable position range, collects a UV image of the user's frontal face, and then prompts the user, through the UI interface shown in Figure 2J or by voice, to turn the face left, turn it right, raise the head, and lower the head, so as to collect UV images of the user's face from different angles. The mobile phone 100 then performs sunscreen detection on these facial UV images at different angles according to the pre-shooting algorithm, obtaining the sunscreen detection result corresponding to the UV image at each angle together with the face key points in that UV image.
  • S405 Prompt the user that pre-shooting is completed.
  • the interface shown in Figure 2G is entered, and the reminder information 1005 "Please flip to the inner screen" pops up to prompt the user that the pre-shooting is completed, so that the user flips the mobile phone 100 again to make the user's face face the front camera 101 of the mobile phone 100 and enter the real-time viewing stage.
  • the mobile phone 100 can also prompt the user by voice that the pre-shooting has been completed.
  • the user turns the mobile phone 100 over again so that the inner screen 103 of the mobile phone 100 faces the user's face.
  • the algorithm flow corresponding to the pre-shooting stage in S402 of the sunscreen detection method includes the following steps:
  • S4021 Input the UV image sequence collected by the UV module.
  • the UV image sequence collected by the UV module is also the image sequence collected by the ultraviolet camera 102.
  • the mobile phone 100 can also collect a rear preview image sequence of a person's face through the rear camera 105 and input it into the processor 110 of the mobile phone 100 .
  • S4022 Perform face detection and key point detection. The mobile phone 100 performs, through the processor 110, face detection on the input UV image sequence or rear preview image sequence using a face detection model, and then performs key point detection on the facial features (including eyebrows, eyes, nose, and mouth) and facial contours of the detected face area.
  • face key point detection means that, given a face image, key point detection methods such as deep learning methods and cascaded shape regression methods are used to locate the key areas of the face and detect the key points of these key areas.
  • the facial features and facial contour in the face image shown in Figure 5A are annotated with key points using one of the 68-point, 98-point, 106-point, and 186-point annotation schemes.
  • the mobile phone 100 collects the user's facial UV images from different angles through the ultraviolet camera 102.
  • the mobile phone 100 needs to perform face detection and key point detection on these facial UV images of the user from different angles.
  • after the ultraviolet camera 102 is turned on, it continuously collects UV images of the user's face; therefore, this embodiment of the present application performs face detection and key point detection on each frame of UV image collected by the ultraviolet camera 102.
  • S4023 Calculate the position and size of the face and determine whether the collected UV image meets the conditions. If yes, it means that the collected UV image meets the conditions and that the position and distance of the face are within the appropriate range, and the flow enters S4025; if not, it means that the collected UV image does not meet the conditions, for example, the face is too far from the mobile phone 100 so that the facial area in the captured UV image is small, or part of the face area is not captured, and the flow enters S4024, where corresponding prompt information is generated to prompt the user to adjust the face position until the face is within a suitable position range.
  • the mobile phone 100 performs face detection and key point detection on the UV image sequence or rear preview image sequence to obtain a face detection result and a key point detection result, and determines, based on these results, whether the UV image sequence or rear preview image sequence meets the conditions.
  • for example, the mobile phone 100 can determine whether the distance between the geometric center of the face in the UV image sequence or rear preview image sequence and the geometric center of the picture is less than a preset distance threshold, and whether the ratio of the face size to the image size is within a certain preset range (a sketch of such a check follows below).
  • the user is then guided, through the UI interface or output voice prompts, to move the mobile phone 100 or move the user's face until the face is within a suitable position range.
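  • As an illustration only, the following is a minimal sketch of such a position-and-size check; the function name, thresholds, and hint strings are hypothetical, not taken from the patent:

```python
import numpy as np

def face_within_range(keypoints, img_w, img_h,
                      max_center_dist=0.15, size_range=(0.2, 0.6)):
    """Check whether the detected face is suitably centered and sized.

    keypoints: (N, 2) array of face key point (x, y) coordinates.
    Returns (ok, hints), where hints lists suggested adjustments.
    """
    x_min, y_min = keypoints.min(axis=0)
    x_max, y_max = keypoints.max(axis=0)
    face_cx, face_cy = (x_min + x_max) / 2, (y_min + y_max) / 2

    # Distance between the face's geometric center and the picture
    # center, normalized by the image diagonal.
    center_dist = np.hypot(face_cx - img_w / 2,
                           face_cy - img_h / 2) / np.hypot(img_w, img_h)
    # Ratio of the face size to the image size.
    size_ratio = max(x_max - x_min, y_max - y_min) / max(img_w, img_h)

    hints = []
    if center_dist > max_center_dist:
        hints.append("move left" if face_cx > img_w / 2 else "move right")
    if size_ratio < size_range[0]:
        hints.append("move closer")
    elif size_ratio > size_range[1]:
        hints.append("move farther")
    return not hints, hints
```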
  • S4024 Generate prompt information to prompt the user to adjust the face position until the UV image meets the conditions.
  • when the mobile phone 100 determines that the collected UV image does not meet the conditions, it can generate prompt information, such as "move left", "move right", "move closer", or "move farther", and prompt the user, in the form of text, voice, etc., to move the mobile phone 100 or move the face until the face is within a suitable position range. It is not difficult to understand that if the face position and distance in the collected UV image are appropriate, the detected sunscreen application situation will be more accurate.
  • S4025 Calculate the face angle, generate prompt information to prompt the user to adjust the face posture, and crop the UV image of the face area based on key points.
  • for example, in the face image shown in Figure 5A, p90 is the leftmost key point of the right eye, p103 is the center key point of the bridge of the nose, p71 is the rightmost key point of the left eye, p105 is the nose tip key point, p130 is the center key point of the lower edge of the lip, and p108 is the center key point of the bottom of the nose, where x and y represent the horizontal and vertical coordinates of a key point respectively. A sketch of one way such key points can be used to estimate the face angle follows below.
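  • The exact angle formulas do not survive in this text, so the following is only an illustrative sketch of how yaw and pitch could be estimated from the key points named above; the symmetry ratios are assumptions, not the patent's formulas:

```python
def estimate_face_angles(kp):
    """Rough yaw/pitch estimate from 2D face key points.

    kp: dict mapping names such as 'p90' to (x, y) coordinates, using
    the key points named above (p71, p90, p103, p105, p108, p130).
    Returns (yaw, pitch) as signed ratios; 0 means roughly frontal.
    """
    # Yaw: compare the horizontal distances from the nose bridge to the
    # inner eye corners; equal distances suggest a frontal face.
    left = kp['p103'][0] - kp['p71'][0]    # bridge to left eye's inner corner
    right = kp['p90'][0] - kp['p103'][0]   # bridge to right eye's inner corner
    yaw = (right - left) / max(right + left, 1e-6)

    # Pitch: compare nose-tip-to-nose-bottom with nose-bottom-to-lower-lip
    # vertical distances, which change as the head is raised or lowered.
    upper = kp['p108'][1] - kp['p105'][1]  # nose tip to nose bottom
    lower = kp['p130'][1] - kp['p108'][1]  # nose bottom to lower lip
    pitch = (upper - lower) / max(upper + lower, 1e-6)
    return yaw, pitch
```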
  • the mobile phone 100 uses UI prompts or voice prompts to enable the user to turn his head left and right and raise his head in sequence, so as to collect facial images of the user from different angles through the ultraviolet camera 102 .
  • the facial area in each UV image can be cropped according to the key points in that UV image, obtaining the face area UV image corresponding to the UV image at each angle.
  • that is, the mobile phone 100 collects the user's facial UV images from different angles through the ultraviolet camera 102 and crops them to obtain the face area UV image at each angle.
  • S4026 Perform sunscreen detection on the acquired UV image of the human face area.
  • the mobile phone 100 inputs the face area UV images, cropped in S4025 from the facial UV images at each angle, into a trained sunscreen detection model (such as a U-Net model) to perform sunscreen detection, obtaining a binary black-and-white image of the sunscreen application situation on the user's face (hereinafter referred to as the mask image).
  • the sunscreen detection model is trained from a large number of UV images and corresponding binary black-and-white images. For example, as shown in Figure 5B, a UV image is input into the trained sunscreen detection model to detect the sunscreen application, and the black-and-white mask image shown in Figure 5B is obtained.
  • in the mask image, the area where a sufficient amount of sunscreen has been applied is black, and the area with insufficient application is the white area Z6.
  • in order to make the detection results more refined, the sunscreen detection model can also be trained from a large number of UV images and grayscale images that represent the sunscreen application situation in the UV images. For example, if a UV image to be detected is input into this sunscreen detection model, the model outputs the corresponding grayscale image.
  • in the grayscale image, the black area is the area where sunscreen is applied in a sufficient amount, the white area is where sunscreen is not applied, and the gray area is the area where the applied amount is insufficient. An inference sketch for such a detector follows below.
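  • For illustration, a minimal inference sketch for such a segmentation-style detector, assuming a PyTorch U-Net-like model trained to output a per-pixel application score; the preprocessing and threshold are assumptions:

```python
import numpy as np
import torch

def detect_sunscreen(model, face_uv_img, threshold=0.5):
    """Run a U-Net-style sunscreen detector on a cropped face UV image.

    face_uv_img: (H, W) grayscale UV image as float32 values in [0, 1].
    Returns a binary mask: 1 = sufficiently applied, 0 = under/not applied.
    """
    model.eval()
    x = torch.from_numpy(face_uv_img).float()[None, None]  # (1, 1, H, W)
    with torch.no_grad():
        prob = torch.sigmoid(model(x))[0, 0].numpy()       # per-pixel score
    # Thresholding yields the binary black-and-white mask image; keeping
    # `prob` itself corresponds to the grayscale-output variant above.
    return (prob >= threshold).astype(np.uint8)
```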
  • S4027 The mobile phone 100 records (that is, saves) the mask image corresponding to the facial UV image at each angle obtained through S4026, as well as the face key points of the facial UV image at each angle.
  • the mobile phone 100 can also calculate the face position, size, and angle on the facial UV image at each angle obtained through S4026, and save the face position, size, and angle data corresponding to the facial UV image at each angle.
  • S4028 and S4025 can be combined into one step.
  • the combined step is: calculate the face angle and determine whether full-angle shooting is completed; if not, generate prompt information to prompt the user to adjust the face posture; if full-angle shooting has been completed, crop the UV image of the face area based on key points, and then enter S4026 to perform sunscreen detection on the face area UV image.
  • the processing process of the real-time viewing stage as shown in Figure 6 includes the UI process of the real-time viewing stage and the asynchronous transformation algorithm process.
  • the execution subject of each step in the flow chart shown in Figure 6 can also be the mobile phone 100 shown in Figure 3 .
  • the UI process of the real-time viewing phase shown in Figure 6 is introduced. Specifically, with reference to Figure 6, the UI process of the real-time viewing phase in a sunscreen detection method provided by an embodiment of the present application includes the following steps:
  • the mobile phone 100 can determine to enter the real-time viewing stage. This starts the UI process of the real-time viewing phase.
  • S602 Determine whether there is a pre-shooting record within the preset time period. If there is, it means that a pre-shooting record exists within the preset time period, and the flow enters S603 to confirm whether the user needs to pre-shoot again; otherwise, it means that there is no pre-shooting record within the preset time period, and the flow enters S604 to start the pre-shooting UI and algorithm flow.
  • the preset time period may be, for example, within half an hour from the last pre-shooting time.
  • S603 Determine whether the user chooses to perform pre-shooting again. If yes, it indicates that pre-shooting needs to be performed again and S604 is entered; otherwise, it indicates that pre-shooting does not need to be performed again and S605 is entered.
  • when the mobile phone 100 determines that there is a pre-shooting record within the preset time period, it is possible that, after the last pre-shooting, the user has touched up or re-applied sunscreen, or has sweated heavily, washed the face, etc., so that the user's current sunscreen application situation differs from that during the last pre-shooting. Therefore, the user can use the pop-up "Whether to re-do pre-shooting" notification 1006 in the mobile phone 100 interface shown in Figure 2K to decide whether to trigger the mobile phone 100 to perform pre-shooting again.
  • S604 Start the pre-shooting UI and algorithm flow. That is to say, UV images are re-taken, and the sunscreen detection results of these UV images are determined based on them.
  • for the pre-shooting UI and algorithm flow, please refer to the above description of the UI and algorithm flow of the pre-shooting stage shown in Figure 4, which will not be repeated here.
  • S605 Turn on the front camera 101 and display the preview image on the internal screen 103. That is, the front camera 101 can be turned on to collect the user's front preview image (an RGB image), and the preview image is displayed on the internal screen 103 in real time.
  • the mobile phone 100 will continuously collect the user's front preview images.
  • S606 Determine the sunscreen detection result corresponding to the front preview image.
  • the mobile phone 100 can use an asynchronous transformation algorithm to determine the sunscreen detection result corresponding to the front preview image.
  • the mobile phone 100 runs the asynchronous transformation algorithm shown in Figure 6, selects, from the user's facial UV images at various angles collected in the pre-shooting stage shown in Figure 4, the UV image closest to the current frame (the front preview image), and fuses the mask image corresponding to this UV image with the current frame.
  • the specific asynchronous transformation algorithm may include an asynchronous transformation algorithm based on face alignment, and an asynchronous transformation algorithm based on three-dimensional face reconstruction.
  • Figure 6 uses an asynchronous transformation algorithm based on face alignment, which will be introduced in detail below in conjunction with S6061 to S6066, and will not be described here.
  • the asynchronous transformation algorithm based on three-dimensional face reconstruction will be introduced in detail later in conjunction with another embodiment.
  • S607 Display the preview screen with the sunscreen detection results in real time.
  • the UV image closest to the current frame is selected from the user's facial UV images at various angles collected in the pre-shooting stage shown in Figure 4, and the mask image corresponding to that UV image is fused with the current frame as different layers to obtain the fused image.
  • in the fused image, different areas of the mask image are filled with different colors in the RGB color space, and the color-filled image is displayed in real time as the preview screen.
  • the mobile phone 100 finally displays the RGB preview picture P4 in real time through the internal screen 103 as shown in FIG. 2I , where P4 includes areas Z4 and Z5 marked with different colors for the user to view.
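  • Purely as an illustration, a minimal sketch of this coloring-and-fusion step; the mask label values and BGR colors are arbitrary example choices:

```python
import cv2
import numpy as np

# Hypothetical labels: 0 = enough sunscreen, 1 = under-applied, 2 = none.
LABEL_COLORS = {1: (255, 0, 255), 2: (0, 0, 255)}  # example BGR colors

def overlay_detection(preview_bgr, mask, alpha=0.45):
    """Blend colored sunscreen-detection areas onto an RGB preview frame."""
    layer = preview_bgr.copy()
    for label, color in LABEL_COLORS.items():
        layer[mask == label] = color
    # Layer-style fusion: weighted blend of the preview and color layer.
    return cv2.addWeighted(layer, alpha, preview_bgr, 1 - alpha, 0)
```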
  • the embodiment shown in Figure 6 can finally realize, through the internal screen 103 of the mobile phone 100 (that is, the larger screen), the real-time display of a front preview image in which the facial areas that need re-application of sunscreen are marked with specific colors. Because the front preview image is an RGB image, that is, a color image, the detection results can be displayed in a color image at the maximum size, which is convenient for users to view and helps improve the user experience.
  • an asynchronous transformation algorithm based on face alignment includes the following steps:
  • S6061 Input the image sequence collected by the front camera 101.
  • the mobile phone 100 continuously collects N front preview images f1 to fn through the front camera 101, and inputs the image sequence composed of f1 to fn into the processor 110 of the mobile phone 100.
  • S6062 Perform face detection and key point detection.
  • the mobile phone 100 uses the processor 110 to perform face detection on the input front preview images f1 to fn through a face detection model, and then performs key point detection on the facial features (including eyebrows, eyes, nose, and mouth) and facial contours of the detected face area, using the same key point annotation method as selected in S4022 in Figure 4.
  • S6063 Calculate the position, size and angle of the face.
  • the mobile phone 100 uses the face detection and key point detection results for the front preview images f1 to fn obtained through S6062 to calculate characteristics such as the position, size, and angle of the face in each front preview image. Based on these characteristics, the UV image closest to the face position, size, and angle of the front preview image is determined from the UV image sequence taken in the pre-shooting stage.
  • S6064 Search all image sequences saved in the pre-shooting stage to find the pre-shooting frame fs that is closest to the face position, size and angle of the current frame f.
  • the current frame f may be one of the pre-preview images f1 to fn.
  • for example, the mobile phone 100 searches the face position, size, and angle data, saved in the pre-shooting stage shown in Figure 4, of the facial UV image at each angle obtained in S4026, thereby finding the pre-shooting frame fs whose face position, size, and angle are closest to those of the current frame f. A selection sketch follows below.
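  • As a sketch of this selection step, assuming the per-frame data recorded in the pre-shooting stage is available as a list of dicts; the distance weights are hypothetical, since the patent does not specify how the similarity is scored:

```python
import numpy as np

def find_closest_frame(records, cur_pos, cur_size, cur_angle,
                       w_pos=1.0, w_size=1.0, w_angle=1.0):
    """Pick the pre-shooting frame fs most similar to the current frame f.

    records: list of dicts with 'pos' (x, y), 'size' (scalar) and
    'angle' (yaw, pitch) saved per UV frame, plus its mask and key points.
    """
    def cost(rec):
        return (w_pos * np.hypot(*np.subtract(rec['pos'], cur_pos))
                + w_size * abs(rec['size'] - cur_size)
                + w_angle * np.abs(np.subtract(rec['angle'], cur_angle)).sum())
    return min(records, key=cost)
```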
  • S6065 Calculate the transformation relationship T from fs to f. The transformation relationship T from fs to f refers to the position transformation relationship between each key point in fs and the corresponding key point in f.
  • for example, the mobile phone 100 can perform triangulation on the current frame f and the pre-shooting frame fs based on the face key points, and then calculate an affine transformation for each pair of corresponding face triangles in f and fs, so that each triangle in the pre-shooting frame fs is aligned, after transformation, with the corresponding triangle in the front preview image f. The set of these per-triangle transformations constitutes the transformation relationship T.
  • S6066 Transform the sunscreen detection result (mask image) of fs according to the transformation relationship T to obtain the sunscreen detection result of the corresponding area in f.
  • the sunscreen detection result (mask image) of fs is transformed based on the transformation relationship T to obtain the transformed mask image; the transformed mask image is then the sunscreen detection result of the corresponding area in f.
  • the mobile phone 100 fuses f and the transformed mask image to obtain a fused image, and then displays the fused image as a preview screen.
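  • For illustration, a minimal OpenCV sketch of this piecewise-affine step (S6065/S6066), assuming matching key point arrays for fs and f are already available; Delaunay triangulation stands in for whatever triangulation the implementation actually uses:

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def warp_mask_piecewise(mask_fs, kp_fs, kp_f, out_shape):
    """Warp the mask of pre-shooting frame fs onto current frame f.

    kp_fs, kp_f: (N, 2) arrays of matching face key points; the set of
    per-triangle affine transforms plays the role of the relationship T.
    """
    h, w = out_shape
    warped = np.zeros(out_shape, dtype=mask_fs.dtype)
    for tri in Delaunay(kp_fs).simplices:           # triangulate on fs
        src = kp_fs[tri].astype(np.float32)
        dst = kp_f[tri].astype(np.float32)
        m = cv2.getAffineTransform(src, dst)        # one element of T
        piece = cv2.warpAffine(mask_fs, m, (w, h))
        tri_mask = np.zeros(out_shape, dtype=np.uint8)
        cv2.fillConvexPoly(tri_mask, dst.astype(np.int32), 1)
        warped[tri_mask == 1] = piece[tri_mask == 1]  # keep this triangle
    return warped
```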
  • the following will continue to take the mobile phone 100 as an example to introduce in detail the specific process of the mobile phone 100 performing another sunscreen detection provided by the embodiment of the present application.
  • the entire sunscreen detection is also divided into a pre-shooting stage and a real-time viewing stage.
  • the specific processing process of the pre-shooting stage is shown in Figure 7.
  • the processing process of the pre-shooting stage shown in Figure 7 includes the UI process of the pre-shooting stage and the algorithm flow of the pre-shooting stage.
  • the execution subject of each step shown in Figure 7 may be the mobile phone 100.
  • the UI flow S701 to S705 of the pre-shooting stage shown in Figure 7 is exactly the same as the UI flow S401 to S405 of the pre-shooting stage shown in Figure 4, and will not be described again here.
  • the following introduces in detail only the parts of the "pre-shooting stage algorithm flow" in the flowchart shown in Figure 7 that differ from the flowchart shown in Figure 4.
  • the difference between the "pre-shooting stage algorithm flow" involved in Figure 7 and Figure 4 is that the data representation forms finally obtained in Figure 7 and Figure 4 to characterize the sunscreen detection results of the UV image are different.
  • the "pre-shooting stage algorithm flow" involved in Figure 7 is: the mobile phone 100 performs three-dimensional face reconstruction based on the UV image sequence and the key point information of each UV image to obtain the reconstructed three-dimensional face, and uses the above-described corresponding UV images to
  • the mask image is colored on the three-dimensional human face to obtain the colored three-dimensional human face, and the relevant data of the colored three-dimensional human face is recorded.
  • the sunscreen application condition of the face area in the pre-preview image is determined based on the colored three-dimensional face.
  • the "pre-shooting stage algorithm flow" involved in Figure 4 is: the mobile phone 100 determines the mask image corresponding to the UV image at each angle and the key point information of each UV image that meets the conditions. Based on the mask image corresponding to the UV image at each angle and the key point information of each UV image, the sunscreen application status of the face area in the pre-preview image is determined.
  • S7021 Input the image sequence collected by the UV module. Since the UV module includes the ultraviolet camera 102 and the ultraviolet fill light 104, the UV image sequence collected by the UV module is also the image sequence collected by the ultraviolet camera 102.
  • S7022 Perform face detection and key point detection. The mobile phone 100 performs, through the processor 110, face detection on the input UV image sequence or rear preview image sequence using a face detection model, and then performs key point detection on the facial features (including eyebrows, eyes, nose, and mouth) and facial contours of the detected face area.
  • S7023 Calculate the position and size of the face and determine whether the collected UV image meets the conditions. If yes, it means that the collected UV image meets the conditions and that the position and distance of the face are within the appropriate range, and the flow enters S7025; if not, it means that the collected UV image does not meet the conditions, for example, the face is too far from the mobile phone 100 so that the facial area in the captured UV image is small, or part of the face area is not captured, and the flow enters S7024 to generate corresponding prompt information that prompts the user to adjust the face position until the face is within a suitable position range.
  • for example, the mobile phone 100 can also determine whether the distance between the geometric center of the face in the UV image sequence or rear preview image sequence and the geometric center of the picture is smaller than a preset distance threshold, and whether the ratio of the face size to the image size is within a certain preset range, and output a voice prompt to guide the user to move the mobile phone 100 or move the face until the face is within a suitable position range.
  • S7024 Generate prompt information to prompt the user to adjust the face position until the UV image meets the conditions.
  • when the mobile phone 100 determines that the collected UV image does not meet the conditions, it can generate prompt information, such as "move left", "move right", "move closer", or "move farther", and prompt the user, in the form of text, voice, etc., to move the mobile phone 100 or move the face until the face is within a suitable position range. It is not difficult to understand that if the face position and distance in the collected UV image are appropriate, the detected sunscreen application situation will be more accurate.
  • S7025 Calculate the face angle, generate prompt information to prompt the user to adjust the face posture, and crop the UV image of the face area based on key points. For details, reference may be made to the relevant description of S4025 in Figure 4 above, which will not be repeated here.
  • S7026 Detect sunscreen on the acquired UV image of the face area. For details, reference may be made to the relevant description of S4026 in Figure 4 above, which will not be described again here.
  • S7027 Reconstruct the three-dimensional face model based on the key points of the current frame.
  • the current frame here refers to the UV image of the face area corresponding to the facial UV image from each angle collected by the ultraviolet camera 102 after the mobile phone 100 determines that the user's face position and distance are appropriate in the pre-shooting stage.
  • each frame in the aforementioned facial UV images from various angles can be called the current frame.
  • the mobile phone 100 uses a three-dimensional morphable face model (3D Morphable Face Model, 3DMM) to realize three-dimensional face reconstruction based on the key points of the facial UV images at various angles continuously collected by the ultraviolet camera 102, obtaining the three-dimensional face shown in Figure 8A.
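  • As a rough illustration only, a least-squares landmark-fitting sketch for a 3DMM under strong simplifying assumptions (a prebuilt mean shape and shape basis, orthographic projection, and pose already normalized; real 3DMM fitting also optimizes pose and expression):

```python
import numpy as np

def fit_3dmm_landmarks(mu, basis, lm_idx, lm_2d, reg=1e-3):
    """Fit 3DMM shape coefficients to detected 2D face key points.

    mu:     (3N,) mean face shape, xyz interleaved per vertex.
    basis:  (3N, K) shape basis (e.g. PCA components).
    lm_idx: mesh vertex indices corresponding to the key points.
    lm_2d:  (L, 2) detected key points, assumed orthographically
            projected and already aligned in scale and rotation.
    Returns the (K,) coefficients and the reconstructed (N, 3) mesh.
    """
    mu_v = mu.reshape(-1, 3)
    b_v = basis.reshape(-1, 3, basis.shape[1])
    # Keep only the x and y rows of the landmark vertices (orthographic).
    a = b_v[lm_idx, :2, :].reshape(-1, basis.shape[1])  # (2L, K)
    b = (lm_2d - mu_v[lm_idx, :2]).reshape(-1)          # (2L,)
    # Ridge-regularized normal equations keep the shape plausible.
    coeffs = np.linalg.solve(a.T @ a + reg * np.eye(a.shape[1]), a.T @ b)
    mesh = (mu + basis @ coeffs).reshape(-1, 3)
    return coeffs, mesh
```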
  • S7028 Establish a mapping relationship between the sunscreen detection result of the current frame and the three-dimensional face.
  • the sunscreen detection result of the current frame is also the sunscreen detection result of the facial UV image at each angle, for example, a black and white image as shown in Figure 5B, or a grayscale image.
  • for example, the three-dimensional face is projected into two dimensions at different angles, and the projected two-dimensional images are put into correspondence with the sunscreen detection results (mask images) of the UV images at the corresponding angles.
  • each face area in the three-dimensional face is a "patch", and each area of the two-dimensional image obtained by the above projection is the projection of the corresponding three-dimensional patch in the three-dimensional face. For example, the patch corresponding to the under-applied area Z7 in the mask image shown in Figure 8B is Z8 shown in Figure 8C.
  • S7029 Use the sunscreen detection results of the current frame to color the patches of the three-dimensional human face.
  • for example, the patches of the three-dimensional face corresponding to the under-applied areas in the mask image are colored with a certain color, while the patches corresponding to areas with sufficient application are not colored. Alternatively, different colors can be used to color the patches corresponding to the unapplied areas and to the areas with sufficient and insufficient application in the mask image, or the patches corresponding to the under-applied and unapplied areas can be distinguished with different values. The specific coloring method can be determined according to needs and is not limited in this application.
  • since the user's UV images at various angles each have a corresponding mask image (that is, a sunscreen detection result), the result of coloring the three-dimensional face using the mask images corresponding to the UV images at different angles can be as shown in Figure 8D. A patch-coloring sketch follows below.
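  • Illustratively, a minimal sketch of this patch-coloring step, assuming each triangular patch can be projected into the mask image of the angle it was seen from; sampling the mask at the patch centroid is a simplification:

```python
import numpy as np

def color_patches(mesh, faces, mask, project):
    """Assign each 3D face patch the mask label behind its 2D projection.

    mesh:    (N, 3) vertices of the reconstructed three-dimensional face.
    faces:   (M, 3) vertex indices, one row per triangular patch.
    mask:    (H, W) sunscreen detection result for one viewing angle.
    project: callable mapping a (3,) vertex to (x, y) pixel coordinates,
             i.e. the projection that produced this angle's UV image.
    Returns (M,) per-patch labels (e.g. 0 = enough, 1 = under-applied).
    """
    h, w = mask.shape
    labels = np.zeros(len(faces), dtype=mask.dtype)
    for i, tri in enumerate(faces):
        cx, cy = project(mesh[tri].mean(axis=0))   # patch centroid in 2D
        labels[i] = mask[int(np.clip(cy, 0, h - 1)),
                         int(np.clip(cx, 0, w - 1))]
    return labels
```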
  • S7030 Determine whether full-angle shooting is completed. If yes, it means that the full-angle shooting has been completed, and enter S7031 to record the colored three-dimensional face; otherwise, it means that the full-angle shooting has not been completed, and return to S7025 until the UV images of the user's face from all angles are shot.
  • S7031 Record the colored three-dimensional face, and prompt the user through the UI that the pre-shooting is completed.
  • after the mobile phone 100 determines that the full-angle UV image shooting has been completed, it saves the colored three-dimensional face and can prompt the user through the UI that the pre-shooting is completed. For example, after the mobile phone 100 determines that the pre-shooting is completed, the reminder 1005 "Please flip to the inner screen" shown in Figure 2G pops up on the external screen 106 to prompt the user that the pre-shooting is completed and to flip the mobile phone 100 to capture the front preview image.
  • the execution sequence in the flowchart shown in Figure 7 is just an example. In other embodiments, other execution sequences can also be adopted, and some steps can also be split or combined, which is not limited here.
  • the processing process of the real-time viewing phase as shown in Figure 9 includes the UI process of the real-time viewing phase and the asynchronous transformation algorithm process.
  • the execution subject of each step in the flow chart shown in Figure 9 can also be the mobile phone 100 shown in Figure 3.
  • S901 to S907 in the UI flow of the real-time viewing stage in the flowchart shown in Figure 9 are exactly the same as S601 to S607 in the UI flow of the real-time viewing stage shown in Figure 6; they are briefly described below without going into detail.
  • the UI process in the real-time viewing stage shown in Figure 9 includes the following:
  • the mobile phone 100 can determine to enter the real-time viewing stage. This starts the UI process of the real-time viewing phase.
  • S902 Determine whether there is a pre-shooting record within the preset time period. If there is, it means that a pre-shooting record exists within the preset time period, and the flow enters S903 to confirm whether the user needs to pre-shoot again; otherwise, it means that there is no pre-shooting record within the preset time period, and the flow enters S904 to start the pre-shooting UI and algorithm flow.
  • the preset time period may be, for example, within half an hour from the last pre-shooting time.
  • S903 Determine whether the user chooses to perform pre-shooting again. If yes, it indicates that pre-shooting needs to be performed again and S904 is entered; otherwise, it indicates that pre-shooting does not need to be performed again and S905 is entered.
  • S904 Start the pre-shooting UI and algorithm process.
  • the pre-shooting UI and algorithm flow please refer to the aforementioned description of the UI and algorithm flow in the pre-shooting stage shown in Figure 7 , which will not be described again here.
  • S905 Turn on the front camera 101 and display the preview image on the internal screen 103.
  • the front camera 101 can be turned on to collect the user's front preview image (an RGB image), and the preview image is displayed on the internal screen 103 in real time.
  • S906 Determine the sunscreen detection result corresponding to the front preview image.
  • the mobile phone 100 can use an asynchronous transformation algorithm to determine the sunscreen detection result corresponding to the front preview image.
  • the specific asynchronous transformation algorithm will be introduced in detail below in conjunction with S9061 to S9065, and will not be described here.
  • S907 Display the preview screen with the sunscreen detection results in real time.
  • the mask image determined by running the asynchronous transformation algorithm and the current frame are fused as different layers to obtain the fused image.
  • in addition, different areas of the mask image are filled with different colors in the RGB color space.
  • the asynchronous transformation algorithm in the real-time viewing stage in the flow chart shown in Figure 9 is introduced in detail below.
  • S9061 and S9062, shown filled with a gray background in Figure 9, are exactly the same as S6061 and S6062 in the asynchronous transformation algorithm of the real-time viewing stage shown in Figure 6; they are briefly described below and will not be repeated.
  • S9063 to S9065 in the asynchronous transformation algorithm flow shown in Figure 9 are introduced in detail.
  • the asynchronous transformation algorithm shown in Figure 9 includes the following:
  • S9061 Enter the image sequence collected by the front camera.
  • the mobile phone 100 continuously collects N front preview images f1 to fn through the front camera 101, and inputs the image sequence composed of f1 to fn into the processor 110 of the mobile phone 100.
  • S9062 Perform face detection and key point detection.
  • the mobile phone 100 uses the processor 110 to perform face detection on the input front preview images f1 to fn through a face detection model, and then performs key point detection on the facial features (including eyebrows, eyes, nose, and mouth) and facial contours of the detected face area, using the same key point annotation method as selected in S7022 in Figure 7.
  • in the colored three-dimensional face recorded in the pre-shooting stage, the sunscreen detection results (mask images) corresponding to the UV images at different angles are represented as colored patches.
  • S9064 Rotate the three-dimensional face to make it consistent with the angle of the face in the current frame.
  • for example, the distances between the face key points in the projected two-dimensional image and the corresponding face key points in the current frame can be calculated.
  • for example, the colored three-dimensional face is rotated to the state shown in Figure 10A, and the three-dimensional face shown in Figure 10A is projected into the two-dimensional image shown in Figure 10B. A weighted summation is performed over the position differences between each face key point in the two-dimensional image shown in Figure 10B and the corresponding key point in the current frame; when the weighted summation result is close to 0 or as small as possible, it is determined that the current rotation angle of the three-dimensional face shown in Figure 10A is consistent with the current frame. A sketch of this angle search follows below.
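  • As an illustration of this matching step, a brute-force search over candidate rotations; the yaw/pitch grid, uniform weights, and orthographic projection are assumptions, and the mesh is assumed to be already scaled and translated into the frame's coordinates:

```python
import numpy as np

def rot_matrix(yaw, pitch):
    """Rotation about the y axis (yaw) then the x axis (pitch), radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return rx @ ry

def best_rotation(mesh, lm_idx, frame_kp, weights=None,
                  angles=np.deg2rad(np.arange(-60, 61, 5))):
    """Find the yaw/pitch whose projected landmarks best match the frame.

    mesh: (N, 3) colored-face vertices; lm_idx: landmark vertex indices;
    frame_kp: (L, 2) key points detected in the current preview frame.
    """
    if weights is None:
        weights = np.ones(len(lm_idx))
    best, best_cost = (0.0, 0.0), np.inf
    for yaw in angles:
        for pitch in angles:
            proj = (mesh @ rot_matrix(yaw, pitch).T)[lm_idx, :2]
            # Weighted sum of key point position differences (S9064).
            cost = np.sum(weights * np.linalg.norm(proj - frame_kp, axis=1))
            if cost < best_cost:
                best, best_cost = (yaw, pitch), cost
    return best
```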
  • S9065 Perform two-dimensional projection on the rotated three-dimensional face, and obtain the detection result (mask image) of the current frame based on the patch color.
  • the image obtained by two-dimensional projection of the rotated three-dimensional face can be determined as the mask image corresponding to the current frame (the front preview image), and this mask image and the current frame are fused as different layers to obtain the fused image.
  • in the fused image, different areas of the mask image are filled with different colors in the RGB color space, and the color-filled image is displayed in real time as the preview screen.
  • Figure 11 shows a process for comparing the historical sunscreen application status with the current application status provided by the embodiment of the present application.
  • the execution subject of each step can also be the mobile phone 100. Specifically, the flowchart shown in Figure 11 includes the following steps:
  • S1101 The mobile phone 100 detects the user's click operation and determines that the user has clicked the "Compare with History" control.
  • the name of the "Compare with History” control 1007 here is just an example. In actual applications, the control corresponding to the history comparison function can be represented by other names, and this application does not limit this.
  • S1102 Determine whether there is a saved pre-shooting record. If yes, it indicates that there is a saved pre-shooting record, and the flow enters S1104, where the saved pre-shooting records are displayed for the user to select; otherwise, it indicates that there is no saved pre-shooting record, and the flow enters S1103, where the user is prompted that there is no comparable history record.
  • S1103 Prompt the user that there is no comparable history record.
  • S1104 Display the time of all saved pre-shooting records for the user to select.
  • the user clicks the "Compare with History" control 1007 shown in Figure 12A, and the mobile phone 100 determines that there is a saved pre-shooting record, and then enters the interface as shown in Figure 12C, displaying information such as the time of the saved pre-shooting record. For users to choose.
  • S1105 Obtain the pre-shooting record selected by the user.
  • the user can select one pre-shooting record. In some embodiments, the user can also select multiple pre-shooting records at the same time. For example, if the user clicks the control 1009 in the interface shown in Figure 12C, the mobile phone 100 obtains the pre-shooting record 2 selected by the user.
  • S1106 Turn on the front camera 101 and display the preview image on the internal screen 103.
  • the mobile phone 100 determines that the user has selected one or more pre-shooting records, it turns on the front camera 101 to collect the user's front preview image, and displays the preview image on the internal screen 103 .
  • S1107 Take the image sequence collected by the front camera 101 and the pre-shooting record selected by the user as input, run the asynchronous transformation algorithm, and obtain the result of mapping the historical detection result from the pre-shooting time onto the current preview image sequence.
  • the mapping relationship between the historical detection result (mask image) of the pre-shooting time corresponding to the pre-shooting record and the front preview image collected in real time by the front camera 101 is determined, and the mapped detection result is obtained by mapping the historical detection result (mask image) according to this mapping relationship. That is to say, the historical detection results corresponding to the pre-shooting record are transformed, through the asynchronous transformation algorithm, into display results that correspond to the positions of each area of the face in the current preview interface.
  • the asynchronous transformation algorithm may be any asynchronous transformation algorithm in the embodiment shown in Figure 6 or Figure 9 .
  • S1108 Execute the real-time viewing phase process to obtain the current real-time detection results.
  • the real-time viewing phase process is executed to obtain the current real-time detection results.
  • S1109 Display historical results and real-time detection results in different layers on the preview screen. This allows users to intuitively compare and view the historical detection results corresponding to the historical records selected by the user and the current real-time detection results.
  • for example, the mobile phone 100 fuses the historical results and the real-time detection results with the current frame (the front preview image), so that the user can intuitively see that the area where sunscreen was not applied in the historical record is Z9, while the areas where sunscreen is currently not applied are Z4 and Z5. The historical results and the real-time detection results can be displayed in different colors. An overlay sketch follows below.
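  • Purely as an illustration, a sketch of displaying the historical and current results as separate color layers on the preview frame; the colors are arbitrary examples, and both masks are assumed to be already mapped into the current frame's coordinates:

```python
import cv2

def overlay_history_and_current(preview_bgr, hist_mask, cur_mask, alpha=0.4):
    """Blend historical and current un-applied areas as two color layers.

    hist_mask / cur_mask: binary masks (1 = sunscreen not applied),
    both already transformed into the current preview frame.
    """
    layer = preview_bgr.copy()
    layer[hist_mask == 1] = (0, 255, 255)  # historical result, e.g. yellow
    layer[cur_mask == 1] = (0, 0, 255)     # current result, e.g. red
    return cv2.addWeighted(layer, alpha, preview_bgr, 1 - alpha, 0)
```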
  • Embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program; when the computer program is executed by a processor, the steps in each of the above method embodiments can be implemented.
  • Embodiments of the present application also provide a computer program product; when the computer program product runs on a mobile terminal, the mobile terminal, by executing it, can implement the steps in each of the above method embodiments.
  • An embodiment of the present application also provides an electronic device.
  • the electronic device includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor; when the processor executes the computer program, the steps in any of the above method embodiments are implemented.
  • Embodiments of the mechanisms disclosed in this application may be implemented in hardware, software, firmware, or a combination of these implementation methods.
  • Embodiments of the present application may be implemented as a computer program or program code executing on a programmable system including at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements) , at least one input device and at least one output device.
  • Program code may be applied to input instructions to perform the functions described herein and to generate output information.
  • Output information can be applied to one or more output devices in a known manner.
  • for the purposes of this application, a processing system includes any system having a processor such as, for example, a digital signal processor (DSP), a microcontroller, an application-specific integrated circuit (ASIC), or a microprocessor.
  • Program code may be implemented in a high-level procedural language or an object-oriented programming language to communicate with the processing system.
  • assembly language or machine language can also be used to implement program code.
  • the mechanisms described in this application are not limited to the scope of any particular programming language. In either case, the language may be a compiled or interpreted language.
  • the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof.
  • the disclosed embodiments may also be implemented as instructions carried on or stored in one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors.
  • instructions may be distributed over a network or through other computer-readable media.
  • machine-readable media may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy disks, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or tangible machine-readable memory used to transmit information over the Internet via electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
  • therefore, machine-readable media includes any type of machine-readable media suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
  • each unit/module mentioned in each device embodiment of this application is a logical unit/module. Physically, a logical unit/module can be a physical unit/module, a part of a physical unit/module, or a combination of multiple physical units/modules. The physical implementation of these logical units/modules is not the most important; rather, the combination of the functions implemented by these logical units/modules is the key to solving the technical problems raised by this application. Furthermore, the above device embodiments of this application do not introduce units/modules that are not closely related to solving the technical problems raised by this application; this does not mean that other units/modules do not exist in the above device embodiments.


Abstract

This application relates to the technical field of image processing, and in particular to a facial feature detection method, a readable medium, and an electronic device. The method includes: detecting a detection instruction for detecting the facial features of a user; acquiring multiple ultraviolet images including the user's face collected by a first camera; acquiring a first preview image including the user's face collected by a second camera, where the first preview image is a color image; determining the user's facial features in the first preview image according to the user's facial features in each ultraviolet image; and adjusting the pixel values of the pixels in the user's facial area in the first preview image according to the user's facial features in the first preview image to obtain a target image, and displaying the target image on a first screen. The technical solution of this application can meet the user's real-time requirements for facial feature detection and maximize the display of the color detection results, helping to improve the user experience.

Description

Facial feature detection method, readable medium and electronic device
This application claims priority to the Chinese patent application No. 202210400185.X, entitled "Facial feature detection method, readable medium and electronic device", filed with the China National Intellectual Property Administration on April 15, 2022, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the technical field of image processing, and in particular to a facial feature detection method, a readable medium, and an electronic device.
Background
Skin aging and damage are closely related to long-term exposure to sunlight. Ultraviolet light is the component of sunlight that damages the skin the most; among it, long-wave ultraviolet (Ultraviolet-A, UVA) mainly causes photoaging and skin tumors, medium-wave ultraviolet (Ultraviolet-B, UVB) mainly causes sunburn, and short-wave ultraviolet (Ultraviolet-C, UVC) is mostly blocked by the atmosphere and can be ignored. Since sunscreen can absorb ultraviolet light, more and more people choose to apply sunscreen to reduce ultraviolet damage to the skin. People usually apply sunscreen only by feel, and in many cases the application status of the sunscreen is invisible to the naked eye; for example, whether the applied area is fully covered, whether the applied amount is sufficient, whether the application is even, and when re-application is needed are all things that people care about greatly but find hard to perceive intuitively.
Summary
In view of this, embodiments of the present application provide a facial feature detection method, a readable medium, and an electronic device. The technical solution of this application is applied to an electronic device that includes a first camera and a second camera located on different sides of the electronic device, and a first screen located on the same side as the second camera. The technical solution of this application determines the user's facial features in a first preview image collected by the second camera from the facial features in multiple ultraviolet images, including the user's face, collected by the first camera, adjusts the pixel values of the pixels in the user's facial area in the first preview image according to the user's facial features in the first preview image to obtain a target image, and then displays the target image on the first screen. Since the electronic device acquires the ultraviolet images in real time in response to a detection instruction for detecting the user's facial features, determines the user's facial features in the first preview image collected by the second camera from the user's facial features corresponding to the ultraviolet images acquired in real time, and finally displays the resulting target image on a screen with a larger area, the technical solution of this application can not only meet the user's real-time requirements for facial feature detection, but also maximize the display of the color detection results, helping to improve the user experience.
In a first aspect, an embodiment of the present application provides a facial feature detection method applied to an electronic device, where the electronic device includes a first camera and a second camera located on different sides of the electronic device, and a first screen located on the same side as the second camera. The facial feature detection method includes: detecting a detection instruction for detecting the facial features of a user; in response to the detection instruction, acquiring multiple ultraviolet images including the user's face collected by the first camera; acquiring a first preview image including the user's face collected by the second camera, where the first preview image is a color image; determining the user's facial features in the first preview image according to the user's facial features in each ultraviolet image; adjusting the pixel values of the pixels in the user's facial area in the first preview image according to the determined facial features of the user in the first preview image to obtain a target image; and displaying the target image on the first screen.
Since the electronic device acquires the ultraviolet images in real time in response to the detection instruction for detecting the user's facial features, determines the user's facial features in the first preview image collected by the second camera from the user's facial features corresponding to the ultraviolet images acquired in real time, and finally displays the resulting target image on the first screen, which has a larger area, the technical solution of this application can not only meet the user's real-time requirements for facial feature detection, but also maximize the display of the color detection results, helping to improve the user experience.
In a possible implementation of the above first aspect, the ultraviolet images are used to detect the sunscreen application situation on the user's face. For example, the ultraviolet images are used to detect the sunscreen application situation on the user's face, and the first screen is the internal screen, which has a larger area. It should be understood that, when the ultraviolet images are used to detect the sunscreen application situation on the user's face, since the above target image is an RGB image obtained based on the front first preview image captured in real time, finally displaying the target image on the internal screen can not only meet the user's real-time requirements for sunscreen detection and present a more attractive sunscreen detection result to the user, but also maximize the display of the detection result for convenient viewing, helping to improve the user experience.
In a possible implementation of the above first aspect, acquiring, in response to the detection instruction, multiple ultraviolet images including the user's face collected by the first camera includes: in response to the detection instruction, when it is determined that multiple ultraviolet images including the user's face need to be re-collected, prompting the user to face the first camera, and collecting multiple ultraviolet images including the user's face through the first camera.
In a possible implementation of the above first aspect, acquiring the first preview image including the user's face collected by the second camera includes: when it is determined that the re-collection of the multiple ultraviolet images including the user's face has been completed, or when it is determined that there is no need to re-collect multiple ultraviolet images including the user's face, prompting the user to face the second camera, and collecting the first preview image including the user's face through the second camera.
For example, if the electronic device has saved ultraviolet images of the user's face from various angles captured within a preset time period, for example within 5 minutes, there is no need to re-collect ultraviolet images of the user's face. As another example, after it is determined that the ultraviolet images of the user's face need to be re-collected, and ultraviolet images of the user's face from various angles whose position, size, etc. all meet the conditions have been re-collected, the user is prompted to face the second camera, and the first preview image including the user's face is collected through the second camera.
In a possible implementation of the above first aspect, the method further includes: determining the sunscreen detection result of the user's face in each ultraviolet image, the key point information of each ultraviolet image, and the key point information of the first preview image.
Since an ultraviolet image can be a grayscale image, or a black-and-white image obtained by binarizing a grayscale image, areas in the ultraviolet image corresponding to different sunscreen application situations on the user's face have fairly obvious differences in pixel values, so the sunscreen application situation on the user's face can be determined very quickly from the ultraviolet image.
In a possible implementation of the above first aspect, the user's facial features include the sunscreen detection result of the user's face, and determining the user's facial features in the first preview image according to the user's facial features in each ultraviolet image includes: determining the sunscreen detection result of the face in the first preview image according to the determined sunscreen detection results of the user's face in the ultraviolet images, the key point information of each ultraviolet image, and the key point information of the first preview image.
Since an ultraviolet image can be a grayscale image, or a black-and-white image obtained by binarizing a grayscale image, areas in the ultraviolet image corresponding to different sunscreen application situations on the user's face have fairly obvious differences in pixel values, and the sunscreen application situation on the user's face can be determined very quickly from the ultraviolet images. The sunscreen detection result of the face in the first preview image can thus be determined quickly from the key point information of each ultraviolet image and the key point information of the first preview image, which can improve the sunscreen detection efficiency with a simple detection approach.
In a possible implementation of the above first aspect, determining the sunscreen detection result of the face in the first preview image according to the determined sunscreen detection results of the user's face in the ultraviolet images, the key point information of each ultraviolet image, and the key point information of the first preview image includes: determining, from the multiple ultraviolet images of the user's face, a target ultraviolet image matching the first preview image according to the key point information of each ultraviolet image and the key point information of the first preview image; determining the transformation relationship between the target ultraviolet image and the first preview image according to the key point information of the target ultraviolet image and the key point information of the first preview image; and determining the sunscreen detection result of the user's face in the first preview image according to the transformation relationship and the sunscreen detection result corresponding to the target ultraviolet image.
Since the key point information of the target ultraviolet image and the key point information of the first preview image can both be obtained simply and quickly, the transformation relationship between the target ultraviolet image and the first preview image can be determined accurately from them, so that the sunscreen detection result of the user's face in the first preview image can be determined accurately and quickly according to this transformation relationship and the sunscreen detection result corresponding to the target ultraviolet image, making the detection process fast and the detection result accurate.
In a possible implementation of the above first aspect, determining, from the multiple ultraviolet images of the user's face, the target ultraviolet image matching the first preview image according to the key point information of each ultraviolet image and the key point information of the first preview image includes: calculating the face position, face size, and face angle in each ultraviolet image from the key point information of that ultraviolet image, and calculating the face position, face size, and face angle in the preview image from the key point information of the first preview image; and determining, from the multiple ultraviolet images of the user's face, an ultraviolet image whose face position, face size, and face angle are close to those of the first preview image as the target ultraviolet image.
In a possible implementation of the above first aspect, the electronic device further includes a second screen located on the same side as the first camera, and before determining the sunscreen detection result of the face in the first preview image, the method further includes: calculating the face position, face size, and face angle in each ultraviolet image from the key point information of that ultraviolet image; generating first prompt information according to the calculated face position, face size, and face angle in each ultraviolet image; and displaying the first prompt information on the second screen to prompt the user to adjust the face position and/or facial posture.
In a possible implementation of the above first aspect, the electronic device further includes a third camera located on the same side as the first camera, and the method further includes: in response to the detection instruction, acquiring multiple second preview images including the user's face collected by the third camera; determining the key point information of each second preview image; calculating the face position, face size, and face angle in each second preview image from its key point information; generating second prompt information according to the calculated face position, face size, and face angle in each second preview image; and displaying the second prompt information on the second screen to prompt the user to adjust the face position and/or facial posture.
In a possible implementation of the above first aspect, determining the transformation relationship between the target ultraviolet image and the first preview image according to the key point information of the target ultraviolet image and the key point information of the first preview image includes: triangulating the key points of the target ultraviolet image according to its key point information, and triangulating the key points of the first preview image according to its key point information; performing affine transformation between each face triangle obtained by triangulating the target ultraviolet image and the corresponding face triangle in the first preview image, and calculating the transformation relationship between each face triangle in the target ultraviolet image and the corresponding face triangle in the first preview image; and determining the set of the transformation relationships between the face triangles in the target ultraviolet image and the corresponding face triangles in the first preview image as the transformation relationship between the target ultraviolet image and the first preview image.
In a possible implementation of the above first aspect, determining the sunscreen detection result of the user's face in the first preview image according to the transformation relationship and the sunscreen detection result corresponding to the target ultraviolet image includes: transforming the sunscreen detection result corresponding to the target ultraviolet image based on the transformation relationship between the target ultraviolet image and the first preview image to obtain a transformed sunscreen detection result corresponding to the target ultraviolet image; and determining the transformed sunscreen detection result as the sunscreen detection result of the user's face in the first preview image.
Since the key point information of the target ultraviolet image and the key point information of the first preview image can both be obtained simply and quickly, the transformation relationship between the target ultraviolet image and the first preview image can be determined accurately from them, so that the sunscreen detection result of the user's face in the first preview image can be determined accurately and quickly according to this transformation relationship and the sunscreen detection result corresponding to the target ultraviolet image, making the detection process fast and the detection result accurate.
In a possible implementation of the above first aspect, determining the sunscreen detection result of the face in the first preview image according to the determined sunscreen detection results of the user's face in the ultraviolet images, the key point information of each ultraviolet image, and the key point information of the first preview image includes: performing three-dimensional face reconstruction according to the key point information of each ultraviolet image to obtain an uncolored three-dimensional face and the key point information of the three-dimensional face; coloring the uncolored three-dimensional face according to the sunscreen detection results of the user's face in the ultraviolet images to obtain a colored three-dimensional face; and determining the sunscreen detection result of the user's face in the first preview image according to the colored three-dimensional face, the key point information of the three-dimensional face, and the key point information of the first preview image.
In a possible implementation of the above first aspect, determining the sunscreen detection result of the user's face in the first preview image according to the colored three-dimensional face, the key point information of the three-dimensional face, and the key point information of the first preview image includes: rotating the colored three-dimensional face in three-dimensional space to obtain the three-dimensional face at multiple angles; performing two-dimensional projection of the three-dimensional face at the multiple angles to obtain projected faces at multiple angles, and determining the face key point information corresponding to the projected face at each angle based on the key point information of the three-dimensional face; and screening out, from the projected faces at multiple angles, a projected face matching the first preview image based on the face key point information corresponding to the projected face at each angle and the key point information of the first preview image, and determining the matching projected face as the sunscreen detection result of the user's face in the first preview image.
In a possible implementation of the above first aspect, screening out, from the projected faces at multiple angles, the projected face matching the first preview image based on the face key point information corresponding to the projected face at each angle and the key point information of the first preview image includes: calculating the position differences between the face key points in the projected face at each angle and the corresponding face key points of the first preview image, and performing a weighted summation over all the position differences for the projected face at each angle; and determining a projected face whose weighted summation result meets the conditions as the projected face matching the first preview image.
In a possible implementation of the above first aspect, the method further includes: detecting an instruction to compare the sunscreen application situation on the user's face with a historical sunscreen detection record; in response to the detection instruction, determining multiple historical ultraviolet images including the user's face corresponding to the historical sunscreen detection record, as well as the key point information of each historical ultraviolet image and the sunscreen detection result of the user's face in each historical ultraviolet image; determining the historical sunscreen detection result of the user's face in the first preview image according to the key point information of each historical ultraviolet image, the sunscreen detection result of the user's face in each historical ultraviolet image, and the key point information of the first preview image; adjusting the pixel values of the pixels in the user's facial area in the first preview image according to the determined historical sunscreen detection result of the user's face in the first preview image to obtain an image to be superimposed, where, in the user's facial area of the image to be superimposed, the pixel values of sub-areas with different degrees of sunscreen application are different; superimposing the target image and the image to be superimposed as different layers to obtain a superimposed image; and displaying the superimposed image on the first screen.
By displaying the real-time sunscreen detection result of the user's face in the first preview image superimposed with the historical sunscreen detection result, the user can conveniently compare and view them, thereby analyzing the areas that the user tends to miss or under-apply.
In a possible implementation of the above first aspect, determining the sunscreen detection result of the user's face in each ultraviolet image, the key point information of each ultraviolet image, and the key point information of the first preview image includes: performing key point detection on each ultraviolet image and the first preview image using a deep learning method or a cascaded shape regression method to obtain the key point information of each ultraviolet image and the key point information of the first preview image; and detecting the sunscreen application situation on the user's face in each ultraviolet image using a preset sunscreen detection model to obtain the sunscreen detection result of the user's face in each ultraviolet image.
In a possible implementation of the above first aspect, the difference between the collection time of each ultraviolet image and the collection time of the first preview image is smaller than a time threshold.
This ensures that the sunscreen application situations on the user's face in the ultraviolet images and in the first preview image differ only slightly, so that the finally displayed sunscreen detection result is relatively accurate.
In a possible implementation of the above first aspect, the first screen is an internal screen and the second screen is an external screen, or the first screen is a folding screen and the second screen is an external screen.
In a possible implementation of the above first aspect, the area of the first screen is larger than the area of the second screen. Displaying the target image on the first screen, which has a larger area, helps improve the user experience.
In a possible implementation of the above first aspect, the first camera is an ultraviolet camera, the second camera is a front camera, and the third camera is a rear camera.
In a possible implementation of the above first aspect, the method further includes: in the user's facial area of the target image, sub-areas with different degrees of sunscreen application are displayed in different colors.
Since sub-areas with different degrees of sunscreen application in the target image to be displayed have different display colors, the user can intuitively view the sunscreen application situation in different areas of the face, helping to improve the user experience.
In a second aspect, an embodiment of the present application provides a computer-readable storage medium having instructions stored thereon which, when executed on an electronic device, cause the electronic device to perform the facial feature detection method of the above first aspect or any possible implementation of the first aspect.
In a third aspect, an embodiment of the present application provides a computer program product including instructions which, when executed by one or more processors, are used to implement the facial feature detection method of the above first aspect or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing instructions, and
one or more processors, where, when the instructions are executed by the one or more processors, the processor performs the facial feature detection method of the above first aspect or any possible implementation of the first aspect.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1 shows an application scenario provided by an embodiment of the present application in which a user uses a mobile phone to view the sunscreen application status of the face;
Figure 2A is a schematic diagram of the front and back structure of a mobile phone provided by an embodiment of the present application;
Figures 2B to 2K are schematic diagrams of some UI interfaces provided by embodiments of the present application;
Figure 3 is a schematic diagram of the hardware structure of a mobile phone provided by an embodiment of the present application;
Figure 4 is a schematic diagram of the UI flow and algorithm flow of the pre-shooting stage in a sunscreen detection method provided by an embodiment of the present application;
Figure 5A is a schematic diagram of a face image annotated with key points, provided by an embodiment of the present application;
Figure 5B is a schematic diagram of the process of inputting a UV image into a trained sunscreen detection model to detect the sunscreen application situation, provided by an embodiment of the present application;
Figure 6 is a schematic diagram of the UI flow and algorithm flow of the real-time viewing stage in a sunscreen detection method provided by an embodiment of the present application;
Figure 7 is a schematic diagram of the UI flow and algorithm flow of the pre-shooting stage in another sunscreen detection method provided by an embodiment of the present application;
Figure 8A is a schematic diagram of a three-dimensional face provided by an embodiment of the present application;
Figure 8B is a schematic diagram of a mask image provided by an embodiment of the present application;
Figure 8C is a schematic diagram of the three-dimensional face shown in Figure 8A after its patches have been colored according to the mask image shown in Figure 8B, provided by an embodiment of the present application;
Figure 8D is a schematic diagram of the three-dimensional face shown in Figure 8A after being colored using mask images from different angles, provided by an embodiment of the present application;
Figure 9 is a schematic diagram of the UI flow and algorithm flow of the real-time viewing stage in another sunscreen detection method provided by an embodiment of the present application;
Figure 10A is a schematic diagram of a rotated three-dimensional face provided by an embodiment of the present application;
Figure 10B is a projection image obtained by two-dimensionally projecting the three-dimensional face shown in Figure 10A;
Figure 11 is a schematic flowchart of comparing the historical sunscreen application status with the current application status, provided by an embodiment of the present application;
Figures 12A to 12D are schematic diagrams of some UI interfaces provided by embodiments of the present application.
具体实施方式
本申请的说明性实施例包括但不限于一种面部特征检测方法、可读介质和电子设备。
本申请提供的面部特征检测方法可以应用于一些对终端设备的成像与查看不同步、但对实时显示效果有需求的应用场景,例如应用于对用户面部的成像,以及对用户面部的防晒霜涂抹情况的检测、对用户面部皮肤状况(例如:皱纹、色斑、毛孔等)的检测,并可以供用户实时查看检测结果。为了便于说明,下面将以对用户面部的防晒霜涂抹情况的检测为例,在进一步详细介绍本申请的技术方案。
由于智能终端设备具有体积小、重量轻、便携等优点,智能终端设备被赋予越来越多的功能。例如,智能终端可以用于对用户皮肤涂抹的防晒霜等化学物品的涂抹情况进行检测,然后将检测结果直观呈现给用户,有助于用户了解防晒霜的涂抹区域、涂抹量,进而供用户确认是否需要补涂。
本申请技术方案可以应用于任意一种具有显示屏和摄像头的智能终端设备中,包括但不限于手机、平板计算机、膝上型计算机、可穿戴设备、头戴式显示器、便携式游戏机、便携式音乐播放器、阅读器设备,等等。为了便于说明,下面继续以智能终端为手机100为例进行说明。
图1示出了一种用户使用手机100来查看面部防晒霜涂抹状况的应用场景。手机100外接有紫外(Ultraviolet,UV)配件200,例如手机100通过USB接口与UV配件连接,UV配件200集成有紫外补光灯及紫外摄像头。手机100可以通过运行防晒霜检测应用的可执行程序,实现防晒霜检测功能。在手机100开启防晒霜检测功能之后,手机100通过前置摄像头101采集人脸预览图像P0并以RGB图像(具有红、绿、蓝这3个颜色通道的彩色图像)的形式显示出来,以供用户实时查看预览图像从而调整面部姿态。手机100通过外接的UV配件200采集用户的面部图像,并通过外接的UV配件200采集到的面部图像(即灰度图像)直接作为检测图像呈现给用户。由于涂抹了防晒霜的皮肤区域,例如图1中手机100显示的检测图像P1中的区域Z1,吸收了紫外线,呈现为较暗的颜色,例如黑色;而未涂抹防晒霜或者防晒霜涂抹较少的皮肤区域未能吸收掉紫外线,在图像中呈现为较浅的颜色,例如白色或灰色。用户通过查看检测图像P1来了解面部的防晒霜涂抹状况。
然而,在图1所示的实施例中,由于手机100是将通过外接的UV配件200采集到的灰度图像直接作为检测图像呈现给用户,用户通过查看灰度图像来了解面部的防晒霜涂抹状况。然而,这种呈现方式呈现出来的灰度图像中未涂抹防晒霜的皮肤区域颜色较暗,会给用户带来图像不美观的视觉体验,会导致用户体验不佳。另外,用户需要随身携带UV配件,给用户带来不便。
为了改善用户使用手机100进行防晒霜检测的体验,在一些实施例中,可以在手机100上集成包含紫外补光灯及紫外摄像头的UV配件,并且将防晒霜检测结果以彩色图像进行实时显示。从而使得用户在使用手机100进行防晒霜检测时,无需再外接UV配件,并且用户可以实时查看到彩 色的防晒霜检测结果。然而为了不影响手机100的屏占比,可以将上述UV配件设置在手机100的背面(也即手机100中主屏幕的相反面)。并且为了保证防晒霜检测结果的实时显示,可以在手机100的背景设置一较小的屏幕(以下简称为外屏),用户在使用手机100进行防晒霜涂抹检测时,可以通过外屏进行图像预览以及查看检测结果。
例如,在图2A示出的手机100的正面和背面结构示意图中,手机100正面设置有大屏(也即内屏)103以及前置摄像头101;手机100背面设置有紫外补光灯104、紫外摄像头102、后置摄像头105以及外屏106。用户首先通过点击图2B所示的防晒检测控件1001,使手机100开启图2C所示的防晒检测的页面,其中有“拍自己”控件1002和“拍别人”控件1003。当用户点击“拍自己”控件1002时,进入图2D所示的界面,弹出“请翻转至外屏”的提醒信息1004,提示用户翻转手机100。用户将手机100进行翻转,使手机100的外屏106正对用户面部,由后置摄像头105采集人脸的后置预览图像,在外屏106上以RGB彩色图像的形式显示有图2E所示的后置预览画面P2,以及用于提示用户调整面部姿态的相关提示信息。当手机100确定用户将面部调整至预览画面的中心位置时,手机100通过后置摄像头105采集位于预览画面中心位置的RGB后置预览图像,并且采用紫外摄像头102采集用户面部的灰度图像(以下简称为UV图像)。然后将UV图像通过设定的防晒霜检测算法,得到用户面部的防晒霜涂抹检测结果,例如检测出未涂抹防晒霜和防晒霜涂抹不足量的区域。将该检测结果转换为不同颜色的标记,然后将不同颜色的标记添加至RGB后置预览图像上,以如图2F所示的画面P3的形式呈现给用户。例如,参考图2F,将涂抹不足量的区域Z2以红绿蓝(RGB)色相环中的颜色1进行显示,将未涂抹防晒霜的区域Z3以红绿蓝(RGB)色相环中的颜色2进行显示。
然而在上述图2A至图2F所示的实施例中,当用户通过手机100对用户自己的面部进行防晒霜涂抹情况的检测时,由于紫外摄像头102设置在手机100的外屏侧,为了保证防晒霜检测结果的实时显示,手机100只能通过外屏106来实时显示彩色的检测结果。由于外屏106的尺寸较小,会导致外屏106显示的具有不同颜色标记的结果图像中的标记看不清楚,从而导致用户体验不佳。
为了在手机100能够实时显示防晒霜检测结果的前提下,在手机100的内屏以RGB图像的形式显示防晒霜检测结果,从而使检测结果最大化显示,方便用户查看。
本申请提供了一种防晒霜检测方法,手机100首先通过后置的紫外摄像头102拍摄用户面部的各个角度的UV图像,得到UV图像序列。对各UV图像进行关键点检测,得到各UV图像的关键点信息。其中UV图像的关键点是指利用深度学习的方法、级联形状回归的方法等关键点检测方法确定出来的,用于表征UV图像中用户的面部五官(包括眉毛、眼睛、鼻子、嘴巴)以及面部轮廓等关键面部特征的关键像素点。
然后,对UV图像序列中各UV图像进行防晒霜涂抹情况检测,得到各UV图像对应的防晒霜检测结果,例如将UV图像序列中各UV图像分别输入预设的防晒霜检测模型,得到各UV图像对应的mask图像(也即防晒霜检测结果),各UV图像对应的mask图像为灰度图像或者黑白图像,并且各mask图像中的像素点与对应的UV图像中的像素点一一对应。各mask图像中,显示出了对应的UV图像中防晒霜涂抹情况不同的各个区域对应的色块,各色块的颜色可以为黑色、白色或者灰色,例如某UV图像对应的mask图像中的区域Z0,吸收了紫外线,呈现为较暗的颜色,例如黑色;而未涂抹防晒霜或者防晒霜涂抹较少的皮肤区域未能吸收掉紫外线,在图像中呈现为较浅的颜色,例如白色或灰色。
然后,再通过前置摄像头101拍摄用户面部的前置预览图像,该前置预览图像为用户可以通 过手机100的内屏(面积较大的屏幕)实时查看到的包含该用户的面部的RGB图像。并且对该前置预览图像进行关键点检测,得到该前置预览图像的关键点信息。
然后,根据上述得到的各UV图像对应的防晒霜检测结果、各UV图像的关键点信息以及前置预览图像的关键点信息,确定出前置摄像头在拍摄前置预览图像时用户面部的防晒霜涂抹情况。从而根据确定出的前置预览图像对应的用户面部的防晒霜涂抹情况,对前置预览图像中面部区域多个像素点的像素值进行调整,得到目标图像,例如,用户额头区域防晒霜涂抹不足,则调整前置预览图像中额头区域的像素值,从而将前置预览图像中额头区域的颜色调整为紫色。最后通过内屏(面积较大的屏幕)显示该目标图像。
由于该目标图像是RGB图像,且是基于实时拍摄的前置预览图像得到的,因此最终通过内屏显示目标图像,不仅可以满足用户对防晒霜检测的实时性需求,呈现给用户的防晒霜检测结果更加美观,还可以使检测结果最大化显示,方便用户查看,有助于提升用户体验。
例如,在一些实施例中,手机100将整个防晒霜检测过程分为两个阶段:预拍摄阶段以及实时查看阶段。在预拍摄阶段,手机100存储上述得到的UV图像序列、各UV图像对应的mask图像以及各UV图像的关键点信息。在实时查看阶段,手机100对前置摄像头101采集到的前置预览图像进行关键点检测,得到前置预览图像的关键点信息,然后根据前置预览图像的关键点信息以及各UV图像的关键点信息,从UV图像序列中确定出和前置预览图像匹配的UV图像,并且确定出这个匹配的UV图像和前置预览图像之间的变换关系T,将这个匹配的UV图像对应的mask图像基于该变换关系T进行变换,得到和前置预览图像对应的待融合mask图像,从而将待融合mask图像和前置预览图像融合处理后通过内屏103进行显示。
又如,在另一些实施例中,在预拍摄阶段,手机100根据上述得到的UV图像序列、各UV图像的关键点信息进行三维人脸重建,得到重建的三维人脸,并且利用上述得到的各UV图像对应的mask图像对该三维人脸进行着色,得到着色后的三维人脸,存储该三维人脸的相关数据。在实时查看阶段,手机100对前置摄像头101采集到的前置预览图像进行关键点检测,得到前置预览图像的关键点信息,然后将着色后的三维人脸进行不同角度的旋转并进行二维投影,得到不同角度的投影图像,然后根据前置预览图像的关键点信息以及各投影图像的关键点信息,从各投影图像中确定出和前置预览图像匹配的投影图像,将这个匹配的投影图像作为和前置预览图像对应的待融合mask图像,从而将待融合mask图像和前置预览图像融合处理后通过内屏103进行显示。
例如,在预拍摄阶段,用户通过点击图2B所示的防晒检测控件1001(该控件可以是防晒霜检测应用程序中的控件,还可以是集成有防晒霜检测功能的其他应用程序中的控件),使手机100开启图2C所示的防晒检测的页面,然后用户点击拍自己控件1002,进入图2D所示的界面,弹出“请翻转至外屏”的提醒信息1004,提示用户翻转手机100。用户将手机100进行翻转,使手机100的外屏106正对用户面部,由后置摄像头105采集人脸的后置预览图像,在外屏106上以RGB彩色图像的形式显示有图2E所示的后置预览画面P2,以及用于提示用户调整面部姿态的相关提示信息。当确定用户将面部调整至后置预览画面的中心位置时,手机100通过紫外摄像头102采集用户各角度的面部UV图像。在手机100已采集完用户各角度的面部UV图像(UV图像序列),并且计算得到各UV图像对应的mask图像以及各UV图像的关键点信息之后,进入图2G所示的界面,弹出“请翻转至内屏”的提醒信息1005,或者通过语音提示用户再次翻转手机100,以使用户面部正对手机100的前置摄像头101,进入实时查看阶段。
在实时查看阶段,由手机100的前置摄像头101采集人脸的前置预览图像,在内屏103上以 RGB彩色图像的形式显示有图2H所示的前置预览画面P3,以及用于提示用户调整面部姿态的相关提示信息。在手机100确定出用户面部处于合适位置时,例如确定出用户已将面部调整至前置预览画面的中心位置时,手机100采集并显示如图2I所示的前置预览图像P4。并且根据预拍摄阶段计算得到的UV图像序列、各UV图像对应的mask图像以及各UV图像的关键点信息,确定出和前置预览图像对应的待融合mask图像。将待融合mask图像和前置预览图像P4进行融合后,将需要补涂防晒霜的区域Z4、Z5填充彩色的形式实时显示在图2I所示的前置预览图像P4上呈现给用户。
下面将结合图3对本申请实施例提供的一种手机100的硬件结构进行详细介绍。手机100通过执行可执行程序,实现本申请实施例提供的防晒霜检测方法。具体地,如图3所示,手机100可以包括处理器110、电源模块140、存储器180、移动通信模块150、无线通信模块160、接口模块190、音频模块170、内屏103、外屏106、前置摄像头101、后置摄像头105、紫外摄像头102、紫外补光灯104等。
处理器110可以包括一个或多个处理单元,例如,可以包括中央处理器(Central Processing Unit,CPU)、图像处理器(Graphics Processing Unit,GPU)、神经网络处理器(Neural-network Processing Unit,NPU)、数字信号处理器(Digital Signal Processor,DSP)、微控制器(Microcontroller Unit,MCU)、人工智能(Artificial Intelligence,AI)处理器或现场可编程门阵列(Field Programmable Gate Array,FPGA)等的处理模块或处理电路。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。例如,在本申请的一些实例中,处理器110可以用来对紫外摄像头102采集的UV图像进行关键点检测以及防晒霜检测。在一些实施例中,处理器110可以用来对前置摄像头101采集的前置预览图像进行人脸检测以及关键点检测。在一些实施例中,处理器110可以用来对UV图像和前置预览图像进行人脸对齐,确定出UV图像到前置预览图像的变换关系,并且根据该变换关系,将UV图像的防晒霜检测结果所对应的关键点映射为前置预览图像中对应的关键点。
存储器180可用于存储数据、软件程序以及模块,可以是易失性存储器(Volatile Memory),例如随机存取存储器(Random-Access Memory,RAM);或者非易失性存储器(Non-Volatile Memory),例如只读存储器(Read-Only Memory,ROM)、快闪存储器(Flash Memory)、硬盘(Hard Disk Drive,HDD)或固态硬盘(Solid-State Drive,SSD);或者上述种类的存储器的组合,或者也可以是可移动存储介质,例如安全数字(Secure Digital,SD)存储卡。具体的,存储器180可以包括程序存储区(未图示)和数据存储区(未图示)。程序存储区内可存储程序代码,该程序代码用于使处理器110通过执行该程序代码,执行本申请实施例提供的防晒霜检测方法。在本申请实施例中,数据存储区可以用于存储处理器110对UV图像检测得到的人脸关键点、UV图像的防晒霜检测结果,以及UV图像的防晒霜检测结果对应的人脸关键点。在本申请实施例中,数据存储区还可以用于存储处理器110对前置预览图像检测得到的人脸关键点。在本申请实施例中,数据存储区还可以用于存储处理器110确定出来的UV图像和前置预览图像之间的映射关系。
电源模块140可以包括电源、电源管理部件等。电源可以为电池。电源管理部件用于管理电源的充电和电源向其他模块的供电。其中,充电管理模块用于从充电器接收充电输入;电源管理模块用于连接电源、充电管理模块与处理器110。
移动通信模块150可以包括但不限于天线、功率放大器、滤波器、低噪声放大器(Low Noise Amplify,LNA)等。移动通信模块150可以提供应用在手机100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以由天线接收电磁波,并对接收的电磁波进行滤波、放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以包括天线,并经由天线实现对电磁波的收发。无线通信模块160可以提供应用在手机100上的包括无线局域网络(Wireless Local Area Networks,WLAN)(如无线保真(Wireless Fidelity,Wi-Fi)网络),蓝牙(Bluetooth,BT),全球导航卫星系统(Global Navigation Satellite System,GNSS),调频(Frequency Modulation,FM),近距离无线通信技术(Near Field Communication,NFC),红外技术(Infrared,IR)等无线通信的解决方案。手机100可以通过无线通信技术与网络以及其他设备进行通信。
在一些实施例中,手机100的移动通信模块150和无线通信模块160也可以位于同一模块中。
前置摄像头101和后置摄像头105分别用于拍摄前置预览图像以及后置预览图像。紫外摄像头102用于采集UV图像。紫外补光灯104用于在紫外摄像头102采集UV图像时进行紫外补光。
内屏103通常为手机100的大屏。可以用于实时显示前置摄像头101采集的前置预览图像,还可以用于显示添加有防晒霜检测结果的前置预览图像。外屏106通常为手机100中面积较小的屏幕,通常和内屏103不在手机100的同一侧。外屏106可以用于实时显示后置摄像头105采集的后置预览图像。
音频模块170可以将数字音频信息转换成模拟音频信号输出,或者将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。在一些实施例中,音频模块170可以包括扬声器170A、受话器170B、麦克风170C以及耳机接口170D。例如,在本申请的一些实施例中,手机100可以通过扬声器170A语音提示用户调整面部姿态和面部位置,以能够采集到用户各个角度的面部图像,并且能够拍全用户的五官。
接口模块190包括外部存储器接口、通用串行总线(Universal Serial Bus,USB)接口及用户标识模块(Subscriber Identification Module,SIM)卡接口等。其中外部存储器接口可以用于连接外部存储卡,例如Micro SD卡,实现扩展手机100的存储能力。外部存储卡通过外部存储器接口与处理器110通信,实现数据存储功能。通用串行总线接口用于手机100和其他手机进行通信。用户标识模块卡接口用于与安装至手机100的SIM卡进行通信,例如读取SIM卡中存储的电话号码,或将电话号码写入SIM卡中。
在一些实施例中,手机100还包括按键、马达以及指示器等。其中,按键可以包括音量键、开/关机键等。马达用于使手机100产生振动效果。指示器可以包括激光指示器、射频指示器、LED指示器等。
可以理解的是,以上图3所示的硬件结构并不构成对手机100的具体限定。在本申请另一些实施例中,手机100可以包括比图3所示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。
下面将继续以手机100为例,对手机100执行本申请实施例提供的一种防晒霜检测的具体过程进行详细介绍。如上所述,本申请实施例将整个防晒霜检测分为预拍摄阶段和实时查看阶段。下面将分别对预拍摄阶段和实时查看阶段的具体过程进行详细介绍。
预拍摄阶段
如图4所示的预拍摄阶段的处理过程包括预拍摄阶段的UI流程以及预拍摄阶段的算法流程,图4所示的流程图中各步骤的执行主体可以是图3所示的手机100。首先介绍图4所示的预拍摄阶段的UI流程,具体地,参考图4,本申请实施例提供的一种防晒霜检测方法中预拍摄阶段的UI流程包括以下各步骤:
S401:启动预拍摄,提示用户翻转手机100。这里的启动预拍摄,是指启动手机100的预拍摄功能。例如,在一些实施例中,在用户使用手机100的内屏103时,手机100通过内屏103显示有图2B所示的防晒检测控件1001,用户通过点击图2B所示的防晒检测控件1001,使手机100开启图2C所示的防晒检测的页面。然后用户点击“拍自己”控件1002,启动手机100的预拍摄功能,使手机100进入图2D所示的界面,弹出“请翻转至外屏”的提醒信息1004,提示用户翻转手机100,以使手机100的后置摄像头105、紫外摄像头102、紫外补光灯104以及外屏106正对用户的面部,从而可以利用后置摄像头105和/或紫外摄像头102拍摄用户涂有防晒霜的面部图像,并且可以使用户看到外屏106上显示的内容。在一些实施例中,手机100还可以通过语音来提示用户翻转手机100,本申请对于手机100如何向用户呈现提示信息不做限定。
S402:开启UV模组,启动预拍摄算法流程。
在一些实施例中,手机100检测到用户点击“拍自己”控件1002,启动预拍摄功能,并且在检测到用户翻转手机100之后,开启由紫外摄像头102以及紫外补光灯104构成的UV模组(未图示)。在一些实施例中,手机100还可以在提示用户将手机100翻转至外屏106一侧,并且通过传感器检测到手机100被翻转之后,开启UV模组。
在一些实施例中,手机100还可以通过外屏106显示上述防晒检测控件1001,用户通过点击防晒检测控件1001,使手机100开启防晒检测的页面。然后用户点击上述“拍自己”控件1002,手机100开启UV模组,从而开始采集UV图像。
在一些实施例中,在手机100开启UV模组之后,开始通过紫外摄像头102连续采集多帧用户面部的UV图像,得到UV图像序列,并且启动预拍摄算法流程,对采集到的UV图像序列进行处理,以产生提示信息提示用户调整面部姿态。并且在确定出用户将面部姿态调整好之后通过紫外摄像头102采集用户各角度的面部UV图像,根据采集到的用户各角度的面部UV图像进行防晒霜检测,以得到防晒霜检测结果和对应的人脸关键点。
在一些实施例中,手机100还可以在开启由紫外摄像头102以及紫外补光灯104构成的UV模组(未图示)的同时,开启后置摄像头105。由后置摄像头105采集人脸的后置预览图像,在外屏106上以RGB彩色图像的形式显示有图2E所示的后置预览画面P2,以及基于预拍摄算法对后置预览图像的处理结果产生的用于提示用户调整面部姿态的相关提示信息。并且在确定出用户将面部姿态调整好之后通过紫外摄像头102采集用户各角度的面部UV图像,根据采集到的用户各角度的面部UV图像进行防晒霜检测,以得到防晒霜检测结果和对应的人脸关键点。由于利用后置摄像头105采集的后置预览图像是彩色图像,彩色图像包括的特征相对于UV图像来说更加丰富,因此,基于后置摄像头105采集的后置预览图像产生的提示信息更加准确,可以使用户快速调整好姿态,提高防晒霜检测效率。
其中,具体的预拍摄算法流程将在下文结合S4021至S4029进行详细介绍,在此先不展开描述。
S403:根据算法的人脸检测输出,提示用户移动手机100或移动面部,使得人脸在合适的位置范围内。
例如,在一些实施例中,手机100通过紫外摄像头102连续采集多帧用户面部的UV图像,得到UV图像序列,并且根据预拍摄算法,对采集到的UV图像序列进行处理,并根据处理结果通过如图2H所示的文字提示的方式提醒用户移动手机100或移动面部,使得人脸在合适的位置范围内,例如,使人脸处于紫外摄像头102镜头下视场角的中心位置处。在一些实施例中,手机100还可以通过语音提示用户移动手机100或移动面部。
又如,在一些实施例中,手机100通过后置摄像头105采集人脸的后置预览图像序列,并且根据预拍摄算法,对采集到的后置预览图像序列进行处理,并根据处理结果通过UI或语音提示用户移动手机100或移动面部,使得人脸在合适的位置范围内,例如,使人脸处于后置摄像头105镜头下视场角的中心位置处。
S404:根据算法的人脸检测输出,提示用户向左转头、向右转头、抬头、低头,完成全角度的预拍摄。从而使手机100可以完成对用户面部各角度UV图像的拍摄。
例如,在一些实施例中,在手机100确定出用户已将面部调整至合适的位置范围内之后,采集用户正面的UV图像,然后通过如图2J所示的UI界面或者通过语音提示用户将面部进行左转、右转、抬头、低头,从而采集用户不同角度的面部UV图像。以使手机100根据预拍摄算法对用户不同角度的面部UV图像进行防晒霜检测,得到不同角度UV图像对应的防晒霜检测结果,以及不同角度UV图像的防晒霜检测结果对应的不同角度UV图像中的人脸关键点。
S405:提示用户预拍摄完成。
在一些实施例中,在手机100已采集完用户面部的UV图像,例如已完成对用户面部各角度的UV图像拍摄之后,进入图2G所示的界面,弹出“请翻转至内屏”的提醒信息1005,提示用户预拍摄完成,以使用户再次翻转手机100,以使用户面部正对手机100的前置摄像头101,进入实时查看阶段。
在一些实施例中,在手机100已采集完用户面部的UV图像,例如已完成对用户面部各角度的UV图像拍摄之后,手机100还可以以语音提示的方式提示用户已完成预拍摄,以提示用户将手机100再次翻转,以使手机100的内屏103正对用户的面部。
下面将对上述S402中手机100启动预拍摄算法流程后,预拍摄算法的具体处理过程进行详细介绍。
可以理解,上述步骤S401至步骤S405的执行顺序只是一种示例,在另一些实施例中,也可以采用其他执行顺序,还可以拆分或合并部分步骤,在此不做限定。
继续参考图4,本申请实施例提供的一种防晒霜检测方法中对应于S402的一种预拍摄阶段的算法流程包括以下各步骤:
S4021:输入UV模组采集到的UV图像序列。
由于UV模组包括紫外摄像头102以及紫外补光灯104,因此,UV模组采集的UV图像序列也即紫外摄像头102采集的图像序列。
在一些实施例中,手机100还可以通过后置摄像头105采集人脸的后置预览图像序列,输入手机100的处理器110。
S4022:进行人脸检测及关键点检测。
例如,手机100通过处理器110将输入的UV图像序列或者后置预览图像序列通过人脸检测模型进行人脸检测,从而再根据检测出的人脸区域,对人脸区域的面部五官(包括眉毛、眼睛、鼻子、嘴巴)以及面部轮廓等关键区域进行关键点检测。其中,人脸关键点检测是指给定人脸图像,通过深度学习的方法、级联形状回归的方法等关键点检测方法,检测出人脸面部的关键区域位置并对这些关键区域进行关键点标注,例如对图5A所示的人脸图像中的面部五官以及面部轮廓,以68点、98点、106点、186点中的其中一种标注方式进行关键点标注。
又如,手机100在确认人脸位置、距离合适之后,通过紫外摄像头102采集用户不同角度的面部UV图像,则手机100需要将这些用户不同角度的面部UV图像均进行人脸检测及关键点检测。
需要说明的是,紫外摄像头102在开启之后,会一直连续采集用户面部的UV图像,因此,本申请实施例会对紫外摄像头102采集到的每一帧UV图像进行人脸检测以及关键点检测。
S4023:计算人脸位置及大小,判断采集的UV图像是否符合条件。如果是,则表明采集的UV图像符合条件,人脸的位置、距离在合适的范围内,进入S4025;如果否,则表明采集的UV图像不符合条件,例如人脸距离手机100较远,拍摄的UV图像中人脸的面部区域较小,又如人脸的部分区域未拍摄到,则进入S4024,生成相应的提示信息以提示用户调整人脸位置,直至人脸在合适的位置范围内。
例如,手机100对UV图像序列或者后置预览图像序列进行人脸检测以及关键点检测,得到人脸检测结果以及关键点检测结果,根据人脸检测结果以及关键点检测结果确定出UV图像序列或者后置预览图像序列中的人脸几何中心和图像中的画面几何中心的距离,以及人脸大小占图像大小的比例。则手机100可以基于UV图像序列或者后置预览图像序列中的人脸几何中心和图像中的画面几何中心的距离是否小于预设的距离阈值,以及人脸大小占图像大小的比例是否在某预设范围内,通过UI界面或者输出语音提示指导用户移动手机100,或者移动用户的面部,直至人脸在合适的位置范围内。
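例如,人脸位置及大小是否符合条件的判断,可以用如下示意性Python代码表达(其中距离阈值0.15与比例范围(0.2, 0.6)均为假设的示例值,并非本申请限定的取值):

```python
import numpy as np

def check_face_position(landmarks, image_w, image_h,
                        center_dist_thresh=0.15, size_range=(0.2, 0.6)):
    """判断人脸是否位于画面中心附近且大小合适;landmarks为Nx2关键点像素坐标。"""
    pts = np.asarray(landmarks, dtype=np.float32)
    # 人脸几何中心与画面几何中心的归一化距离
    face_center = pts.mean(axis=0)
    frame_center = np.array([image_w / 2.0, image_h / 2.0], dtype=np.float32)
    dist = np.linalg.norm(face_center - frame_center) / max(image_w, image_h)
    # 人脸包围盒面积占图像面积的比例
    w = pts[:, 0].max() - pts[:, 0].min()
    h = pts[:, 1].max() - pts[:, 1].min()
    ratio = (w * h) / float(image_w * image_h)
    return dist < center_dist_thresh and size_range[0] <= ratio <= size_range[1]
```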
S4024:生成提示信息,以提示用户调整人脸位置,直至UV图像符合条件。
也就是说,在手机100确定出采集的UV图像不符合条件时,可以生成提示信息,例如向左移动、向右移动、近一点、远一点等,并以文字、语音等形式提示用户移动手机100或者移动面部,直至人脸在合适的位置范围内。不难理解,如果采集的UV图像大小、距离合适,则检测出来的防晒霜涂抹情况较为准确。
S4025:计算人脸角度,生成提示信息以提示用户调整人脸姿态,并根据关键点截取人脸区域UV图像。
也即在手机100确定出人脸位置、距离已调整至合适的位置范围内之后,手机100根据例如图5A所示的实施例中标注的多个人脸关键点计算表征人脸角度的特征,例如,以图5A所示的人脸关键点为例:表征左右角度的特征:LR=(p90.x-p103.x)/(p103.x-p71.x);表征俯仰角度的特征:RB=(p105.y-p103.y)/(p130.y-p108.y)。其中,p90为如图5A所示的人脸中右眼最左侧关键点,p103为鼻梁中央关键点,p71为左眼最右侧关键点,p105为鼻尖关键点,p130为嘴唇下边缘中央关键点,p108为鼻子底部中央关键点,x、y分别表示关键点的横、纵坐标。手机100通过UI提示或者语音提示,使得用户依次完成左右转头、抬低头的动作,以通过紫外摄像头102采集用户不同角度的面部图像。
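按照上述定义,这两个角度特征可以用如下示意性代码计算(kp为假设的关键点坐标映射,序号沿用图5A所示的标注方式):

```python
def face_angle_features(kp):
    """kp[i]为序号i的关键点坐标(x, y),序号沿用图5A所示的标注方式。"""
    # 表征左右转头角度的特征:右眼最左侧、左眼最右侧关键点到鼻梁中央的水平距离之比
    lr = (kp[90][0] - kp[103][0]) / (kp[103][0] - kp[71][0])
    # 表征俯仰角度的特征:鼻尖-鼻梁中央与嘴唇下缘-鼻底的纵向距离之比
    rb = (kp[105][1] - kp[103][1]) / (kp[130][1] - kp[108][1])
    return lr, rb
```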
针对每个角度的UV图像,可以根据各UV图像中的关键点,截取各UV图像中的面部区域,得到各角度UV图像对应的人脸区域UV图像。例如,根据如图5A所示标注的多个人脸关键点,对用户处于合适的位置范围内时手机100通过紫外摄像头102采集到的各角度面部UV图像进行截取,获得各角度面部UV图像对应的人脸区域UV图像。
S4026:对所获取的人脸区域UV图像进行防晒霜检测。
在一些实施例中,手机100将通过S4025获得的各角度面部UV图像对应的人脸区域UV图像输入训练好的防晒霜检测模型(例如U-Net模型),进行防晒霜检测,得到用户面部的防晒霜涂抹区域的二值化黑白图像(以下将该图像简称为mask图像)。其中,防晒霜检测模型是由大量的UV图像以及对应的二值化黑白图像训练得到的。例如,如图5B所示,将UV图像输入训练好的防晒霜检测模型进行防晒霜涂抹情况检测,得到如图5B所示的黑白的mask图像,该mask图像中涂抹足量的区域为黑色,涂抹不足量的区域为白色的区域z6。
在一些实施例中,为了使检测结果更加精细,防晒霜检测模型还可以是由大量的UV图像以及表征UV图像中防晒霜涂抹情况的灰度图像训练得到的。例如,将待检测的UV图像输入该防晒霜检测模型,则该模型输出对应的灰度图像,该灰度图像中黑色的区域为防晒霜涂抹足量的区域,白色的区域为未涂抹防晒霜的区域,灰色的区域为涂抹不足量的区域。
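作为示意,下面给出一个将人脸区域UV图像输入已训练好的U-Net类分割模型、得到二值mask图像的Python草图(模型对象model、输入尺寸与二值化阈值均为假设;若模型输出多级灰度结果,去掉二值化一步即可):

```python
import cv2
import numpy as np
import torch

def detect_sunscreen_mask(face_uv_image, model, input_size=256, device="cpu"):
    """face_uv_image: 人脸区域UV灰度图(HxW, uint8);model: 已训练好的分割模型。"""
    h, w = face_uv_image.shape[:2]
    x = cv2.resize(face_uv_image, (input_size, input_size)).astype(np.float32) / 255.0
    x = torch.from_numpy(x)[None, None].to(device)   # 形状(1, 1, H, W)
    with torch.no_grad():
        prob = torch.sigmoid(model(x))[0, 0].cpu().numpy()
    mask = (prob > 0.5).astype(np.uint8) * 255        # 二值化:白色表示涂抹不足
    return cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST)
```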
S4027:记录检测结果以及对应的人脸关键点。
例如,手机100将通过S4026获得的对应用户各角度面部UV图像的mask图像,以及用户各角度面部UV图像的人脸关键点进行记录(也即保存)。
在一些实施例中,手机100还可以对通过S4025获得的各角度人脸区域UV图像进行人脸位置、大小、角度计算,保存各角度面部UV图像对应的人脸位置、大小、角度数据。
S4028:判断是否完成全角度拍摄。
如果是,则表明已完成对用户各角度的面部UV图像的拍摄,从而可以通过语音提示或者通过UI界面提示用户预拍摄完成,则手机100可以进入实时查看阶段。如果否,则进入S4025,直至拍摄完用户各角度的面部UV图像。
可以理解,上述步骤S4021至步骤S4028的执行顺序只是一种示例,在另一些实施例中,也可以采用其他执行顺序,还可以拆分或合并部分步骤,在此不做限定。例如,在一些实施例中,S4028还可以和S4025合并成一个步骤,例如,合并后的步骤为:计算人脸角度,判断是否完成全角度拍摄,如果未完成全角度拍摄,则生成提示信息以提示用户调整人脸姿态;如果已完成全角度拍摄,则根据关键点截取人脸区域UV图像,然后进入S4026进行人脸区域UV图像的防晒霜检测。
实时查看阶段
下面将对与上述图4所示的手机100的预拍摄阶段处理过程对应的实时查看阶段的处理过程进行详细介绍。
如图6所示的实时查看阶段的处理过程包括实时查看阶段的UI流程以及异步变换算法流程,图6所示的流程图中各步骤的执行主体也可以是图3所示的手机100。首先介绍图6所示的实时查看阶段的UI流程,具体地,参考图6,本申请实施例提供的一种防晒霜检测方法中实时查看阶段的UI流程包括以下各步骤:
S601:确定进入实时查看阶段。
例如,在一些实施例中,当手机100在确定已完成全角度拍摄,并通过语音或者UI提示用户预拍摄完成之后,手机100即可确定进入实时查看阶段。从而启动实时查看阶段的UI流程。
S602:判断预设时间段内是否有预拍摄记录。如果有,则表明预设时间段内有预拍摄记录,则可以进入S603确认用户是否需要重新进行预拍摄;否则表明预设时间段内没有预拍摄记录,则可以进入S604,启动预拍摄UI及算法流程。其中预设时间段例如可以是距离上次预拍摄的时间在半个小时以内等预先设定的时间段。
S603:判断用户是否选择重新进行预拍摄。如果是,则表明需要重新进行预拍摄,进入S604,否则表明不需要重新进行预拍摄,进入S605。
例如,在一些实施例中,即使手机100确定在预设时间段内有预拍摄记录,但是由于有可能在上一次预拍摄结束之后,用户有补涂、重涂防晒霜,或出现大量流汗、洗脸等情况,使得用户的防晒霜涂抹情况和上一次预拍摄时的防晒霜涂抹情况发生了变化,因此,用户可以通过如图2K所示的手机100界面中弹出的“是否重新预拍摄”通知1006,决定是否触发手机100重新进行预拍摄。
S604:启动预拍摄UI以及算法流程。也就是重新拍摄UV图像,并基于拍摄的UV图像确定出这些UV图像的防晒霜检测结果。预拍摄UI以及算法流程具体可参考前述关于图4所示的预拍摄阶段的UI以及算法流程的相关描述,在此不再赘述。
S605:开启前置摄像头101并在内屏103显示预览画面。
在一些实施例中,在手机100确定有预设时间段内的预拍摄记录,并且无需重新进行预拍摄,或者手机100已经完成新的预拍摄UI流程以及算法流程之后,可以开启前置摄像头101采集用户的前置预览图像(RGB图像),并在内屏103实时显示预览画面。
需要说明的是,手机100在开启前置摄像头101之后,会一直连续采集用户的前置预览图像。
S606:确定出前置预览图像对应的防晒霜检测结果。
也即在手机100通过前置摄像头101采集到用户的前置预览图像之后,手机100可以采用异步变换算法确定出与前置预览图像对应的防晒霜检测结果。例如,手机100运行图6所示的异步变换算法,从图4所示的预拍摄阶段采集的用户各角度面部UV图像中选择和当前帧(前置预览图像)最接近的UV图像,并且将该UV图像对应的mask图像和当前帧进行融合。其中,具体的异步变换算法可以包括基于人脸对齐的异步变换算法,以及基于人脸三维重建的异步变换算法。对应于图4所示的预拍摄阶段的处理过程,图6采用基于人脸对齐的异步变换算法,将在下文结合S6061至S6066进行详细介绍,在此先不展开描述。而基于人脸三维重建的异步变换算法将在后文中结合另一实施例进行详细介绍。
S607:实时显示添加有防晒霜检测结果的预览画面。
也就是说,通过运行异步变换算法从图4所示的预拍摄阶段采集的用户各角度面部UV图像中选择和当前帧(前置预览图像)最接近的UV图像,并且将该UV图像对应的mask图像和当前帧作为不同的图层进行融合,得到融合后的图像,在融合后的图像中针对mask图像的不同区域填充RGB颜色空间中的不同颜色,将填充颜色后的图像作为预览画面进行实时显示。
例如,手机100最终通过如图2I所示的内屏103实时显示出RGB预览画面P4,其中P4中包括通过不同颜色标记出的未涂抹防晒霜的区域Z4和Z5,以供用户查看。
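上述mask图像与前置预览图像的图层融合,可以用如下示意性代码表达(标记颜色与透明度均为假设值,例如以紫色标记涂抹不足的区域):

```python
import cv2
import numpy as np

def blend_mask_on_preview(preview_bgr, mask, alpha=0.5, color=(255, 0, 255)):
    """将mask中非零的区域以半透明颜色叠加到前置预览图像上。

    preview_bgr: 前置预览图像(HxWx3);mask: 已与预览帧对齐的单通道mask图像。
    """
    overlay = preview_bgr.copy()
    region = mask > 0
    overlay[region] = color
    # 仅在mask区域内做alpha混合,其余像素保持原预览画面
    mixed = cv2.addWeighted(overlay, alpha, preview_bgr, 1 - alpha, 0)
    blended = preview_bgr.copy()
    blended[region] = mixed[region]
    return blended
```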
可以理解,上述步骤S601至步骤S607的执行顺序只是一种示例,在另一些实施例中,也可以采用其他执行顺序,还可以拆分或合并部分步骤,在此不做限定。
图6所示的实施例最终可以通过手机100的内屏103,也即面积较大的屏幕,实时显示用特定颜色标记出需要补涂防晒霜的面部区域的前置预览图像。由于前置预览图像是RGB图像,也即彩色图像,因此可以在保证实时显示防晒霜检测结果的前提下,使检测结果以用户观感较好的彩色图像最大化显示,方便用户查看,有助于提升用户体验。
下面继续参考图6,对上述S606涉及的一种基于人脸对齐的异步变换算法进行详细介绍。具体地,如图6所示,本申请实施例提供的一种基于人脸对齐的异步变换算法包括以下各步骤:
S6061:输入前置摄像头101采集到的图像序列。
例如,手机100通过前置摄像头101连续采集到N张前置预览图像f1至fn,将f1至fn构成的图像序列输入手机100的处理器110。
S6062:进行人脸检测及关键点检测。
例如,手机100通过处理器110将输入的前置预览图像f1至fn通过人脸检测模型进行人脸检测,从而再根据检测出的人脸区域,对人脸区域的面部五官(包括眉毛、眼睛、鼻子、嘴巴)以及面部轮廓等关键区域进行关键点检测。并且采用和上述图4中S4022所选择的同一种关键点标注方法进行人脸关键点标注。
S6063:计算人脸位置、大小及角度。
例如,手机100根据通过S6062得到的针对前置预览图像f1至fn的人脸检测以及关键点检测结果,计算出前置预览图像中人脸的位置、大小以及角度等特征,以根据这些特征,从预拍摄阶段拍摄的UV图像序列中确定出和前置预览图像中人脸的位置、大小、角度最接近的UV图像。
S6064:搜索预拍摄阶段所保存的所有图像序列,查找与当前帧f的人脸位置、大小及角度最接近的预拍摄帧fs。其中,当前帧f可以为前置预览图像f1至fn中的其中一帧图像。
例如,在一些实施例中,手机100搜索在图4所示的预拍摄阶段保存的针对S4025获取的各角度人脸区域UV图像的人脸位置、大小、角度数据,从而查找出与当前帧f的人脸位置、大小及角度最接近的预拍摄帧fs(UV图像)。
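该搜索过程可以用如下示意性代码表达(此处假设已将每帧的人脸中心坐标、大小、角度拼接为一个特征向量,权重取值仅为示例):

```python
import numpy as np

def find_closest_prestored_frame(cur_feat, prestored_feats,
                                 weights=(1.0, 1.0, 1.0, 1.0)):
    """返回与当前帧特征向量加权距离最小的预拍摄帧索引。

    cur_feat: 当前帧的特征向量(例如人脸中心x、y、大小、角度);
    prestored_feats: 预拍摄阶段保存的各UV图像的同构特征向量列表。
    """
    w = np.asarray(weights, dtype=np.float32)
    cur = np.asarray(cur_feat, dtype=np.float32)
    dists = [float(np.sum(w * (np.asarray(f, dtype=np.float32) - cur) ** 2))
             for f in prestored_feats]
    return int(np.argmin(dists))
```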
S6065:将f与fs进行人脸对齐,计算得到fs到f的变换关系T。其中,fs到f的变换关系T是指:fs中的各关键点和f中对应关键点之间的位置变换关系。
例如,手机100可以将当前帧f与预拍摄帧fs根据人脸关键点进行三角化处理,然后对前置预览图像f与预拍摄帧fs中的每对人脸三角计算仿射变换,使得预拍摄帧fs中的每个三角经过变换后与预览图像f中对应的三角对齐,这些变换的集合即构成了变换关系T。
S6066:根据变换关系T将fs的防晒霜检测结果(mask图像)进行变换,得到f中对应区域的防晒霜检测结果。
也就是说,在确定出fs到f的变换关系T之后,将fs的防晒霜检测结果(mask图像)基于变换关系T进行变换,得到变换后的mask图像,变换后的mask图像即为f的防晒霜检测结果。手机100将f和变换后的mask图像进行融合,得到融合后的图像,从而将融合后的图像作为预览画面进行显示。
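上述S6065与S6066的组合过程,即“关键点三角化、逐三角形仿射变换、再据此变换mask图像”,可以用如下示意性代码表达(假设使用OpenCV与SciPy,仅为说明原理的草图):

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def warp_mask_by_face_alignment(mask_fs, kp_fs, kp_f, out_shape):
    """将预拍摄帧fs的mask图像按逐三角形仿射变换对齐到当前帧f。

    kp_fs、kp_f: 两帧中序号一一对应的人脸关键点(Nx2);out_shape: 输出(高, 宽)。
    """
    kp_fs = np.asarray(kp_fs, dtype=np.float32)
    kp_f = np.asarray(kp_f, dtype=np.float32)
    warped = np.zeros(out_shape, dtype=mask_fs.dtype)
    # 以fs的关键点做Delaunay三角化,两帧共享同一套三角形顶点索引
    for tri in Delaunay(kp_fs).simplices:
        src, dst = kp_fs[tri], kp_f[tri]
        m = cv2.getAffineTransform(src, dst)            # 单个三角形的仿射变换
        patch = cv2.warpAffine(mask_fs, m, (out_shape[1], out_shape[0]))
        tri_mask = np.zeros(out_shape, dtype=np.uint8)
        cv2.fillConvexPoly(tri_mask, dst.astype(np.int32), 1)
        warped[tri_mask > 0] = patch[tri_mask > 0]      # 仅保留该三角形内的像素
    return warped
```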
可以理解,上述步骤S6061至步骤S6066的执行顺序只是一种示例,在另一些实施例中,也可以采用其他执行顺序,还可以拆分或合并部分步骤,在此不做限定。
下面将继续以手机100为例,对手机100执行本申请实施例提供的另一种防晒霜检测的具体过程进行详细介绍。同样是将整个防晒霜检测分为预拍摄阶段和实时查看阶段,预拍摄阶段的具体处理过程如图7所示。
预拍摄阶段
图7所示的预拍摄阶段的处理过程包括预拍摄阶段UI流程以及预拍摄阶段的算法流程。图7所示的各步骤的执行主体可以为手机100。其中,图7所示的上述预拍摄阶段的UI流程S701至S705与图4所示的预拍摄阶段的UI流程S401至S405完全相同,以下将不再详述。下面仅对图7所示的流程图中与图4所示的流程图不同的部分“预拍摄阶段算法流程”的内容进行详细介绍。图7和图4中涉及的“预拍摄阶段算法流程”的区别在于:二者最终获得的用以表征UV图像的防晒霜检测结果的数据表示形式不同。
图7涉及的“预拍摄阶段算法流程”是:手机100根据UV图像序列、各UV图像的关键点信息进行三维人脸重建,得到重建的三维人脸,并且利用上述得到的各UV图像对应的mask图像对该三维人脸进行着色,得到着色后的三维人脸,并记录着色后的三维人脸的相关数据,以根据着色后的三维人脸,确定出前置预览图像中人脸区域的防晒霜涂抹情况。
图4涉及的“预拍摄阶段算法流程”是:手机100确定出满足条件的各角度UV图像对应的mask图像以及各UV图像的关键点信息,以根据各角度的UV图像对应的mask图像以及各UV图像的关键点信息,确定出前置预览图像中人脸区域的防晒霜涂抹情况。
具体地,图7所示的预拍摄阶段算法流程包括以下内容:
S7021:输入UV模组采集到的图片序列。由于UV模组包括紫外摄像头102以及紫外补光灯104,因此,UV模组采集的UV图像序列也即紫外摄像头102采集的图像序列。
S7022:进行人脸检测及关键点检测。
例如,手机100通过处理器110将输入的UV图像序列或者后置预览图像序列通过人脸检测模型进行人脸检测,从而再根据检测出的人脸区域,对人脸区域的面部五官(包括眉毛、眼睛、鼻子、嘴巴)以及面部轮廓等关键区域进行关键点检测。
S7023:计算人脸位置及大小,判断采集的UV图像是否符合条件。如果是,则表明采集的UV图像符合条件,人脸的位置、距离在合适的范围内,进入S7025;如果否,则表明采集的UV图像不符合条件,例如人脸距离手机100较远,拍摄的UV图像中人脸的面部区域较小,又如人脸的部分区域未拍摄到,则进入S7024,生成相应的提示信息以提示用户调整人脸位置,直至人脸在合适的位置范围内。
在一些实施例中,手机100还可以基于UV图像序列或者后置预览图像序列中的人脸几何中心和图像中的画面几何中心的距离是否小于预设的距离阈值,以及人脸大小占图像大小的比例是否在某预设范围内,输出语音提示指导用户移动手机100,或者移动用户的面部,直至人脸在合适的位置范围内。
S7024:生成提示信息,以提示用户调整人脸位置,直至UV图像符合条件。
也就是说,在手机100确定出采集的UV图像不符合条件时,可以生成提示信息,例如向左移动、向右移动、近一点、远一点等,并以文字、语音等形式提示用户移动手机100或者移动面部,直至人脸在合适的位置范围内。不难理解,如果采集的UV图像大小、距离合适,则检测出来的防晒霜涂抹情况较为准确。
S7025:计算人脸角度,生成提示信息以提示用户调整人脸姿态,并根据关键点截取人脸区域UV图像。具体可参考以上关于图4中S4025的相关描述,在此不再赘述。
S7026:对所获取的人脸区域UV图像进行防晒霜检测。具体可参考以上关于图4中S4026的相关描述,在此不再赘述。
S7027:基于当前帧的关键点重建三维人脸模型。
这里的当前帧是指手机100在预拍摄阶段,在确定用户的人脸位置、距离合适之后,通过紫外摄像头102采集到的各角度面部UV图像对应的人脸区域UV图像。
应理解,在手机100启动紫外摄像头102之后,紫外摄像头102在持续采集各角度面部UV图像。因此,前述各角度面部UV图像中的每一帧都可以被称为当前帧。
例如,手机100根据紫外摄像头102持续采集到的各角度面部UV图像的关键点,采用三维可变形人脸模型(3D Morphable Face Model,3DMM)实现三维人脸重建,得到如图8A所示的三维人脸。
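作为示意,下面给出一个由二维关键点最小二乘拟合3DMM形状系数的极简Python草图。其中采用正交投影、并忽略旋转、平移与缩放的求解,均为简化假设,实际的3DMM拟合通常需要交替优化姿态与形状:

```python
import numpy as np

def fit_3dmm_shape(landmarks_2d, mean_shape, shape_basis, landmark_idx, reg=1e-3):
    """用2D关键点最小二乘拟合3DMM形状系数,返回重建的三维人脸顶点。

    landmarks_2d: Nx2关键点;mean_shape: (3V,)平均人脸;shape_basis: (3V, K)形状基;
    landmark_idx: 各关键点对应的网格顶点索引(长度N)。
    """
    idx = np.asarray(landmark_idx)
    rows = np.concatenate([3 * idx, 3 * idx + 1])       # 对应顶点的x、y分量
    A = shape_basis[rows, :]                            # (2N, K)
    b = np.asarray(landmarks_2d, dtype=np.float64).T.reshape(-1) - mean_shape[rows]
    # 带Tikhonov正则的最小二乘: min ||A c - b||^2 + reg ||c||^2
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)
    return (mean_shape + shape_basis @ coeffs).reshape(-1, 3)
```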
S7028:将当前帧的防晒霜检测结果与三维人脸建立映射关系。
其中,当前帧的防晒霜检测结果也即上述各角度面部UV图像的防晒霜检测结果,为例如图5B所示的黑白图像,或者是灰度图像。
示例性地,将三维人脸进行不同角度的二维投影,则投影得到的二维图像和不同角度的UV图像对应的防晒霜检测结果(mask图像)一一对应。三维人脸中的各个人脸区域称为“面片”,上述投影得到的二维图像即由三维人脸中对应的三维面片投影得到。
例如,在一些实施例中,针对图8A所示的三维人脸,其中对应于图8B所示的mask图像中涂抹不足的区域Z7的面片为图8C所示的Z8。
S7029:利用当前帧的防晒霜检测结果对三维人脸的面片进行着色。
例如,对于mask图像中通过不同颜色所表征的涂抹足量、不足量的区域,分别将其在三维人脸中对应的面片用不同的颜色进行着色;而对于mask图像中未涂抹的区域,其在三维人脸中对应的面片可以不着色,也可以利用和上述涂抹足量、不足量的区域所对应的面片不同的颜色进行着色。
在一些实施例中,由于用户更关注涂抹不足量的区域以及未涂抹的区域,因此,还可以将mask图像中涂抹不足量以及未涂抹的区域在三维人脸中对应的面片用不同的颜色进行着色。具体着色方式可以根据需要来确定,本申请对此不作限定。
在一些实施例中,由于针对用户各角度的UV图像都有相应的mask图像(也即防晒霜检测结果),因此需要利用不同角度的UV图像对应的mask图像对三维人脸进行着色。不同角度UV图像对应的mask图像对三维人脸分别进行着色的结果可以为如图8D所示。
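面片着色过程可以用如下示意性代码表达:将各三维面片的中心投影到对应角度的mask图像上,按投影处的mask取值为面片赋色(其中投影函数project_fn与颜色映射color_map均为假设的接口与取值):

```python
import numpy as np

def color_faces_by_mask(vertices, faces, mask, project_fn, color_map=None):
    """按对应mask图像的取值为三维人脸的各面片赋色。

    vertices: Vx3顶点;faces: Fx3面片顶点索引;project_fn: 将三维点投影到
    该角度mask图像像素坐标的假设函数;color_map: mask取值到颜色的映射。
    """
    if color_map is None:
        color_map = {0: (0, 0, 0), 255: (255, 255, 255)}  # 假设的示例映射
    face_colors = np.zeros((len(faces), 3), dtype=np.uint8)
    for i, tri in enumerate(faces):
        u, v = project_fn(vertices[tri].mean(axis=0))     # 面片中心的投影坐标
        u = int(np.clip(u, 0, mask.shape[1] - 1))
        v = int(np.clip(v, 0, mask.shape[0] - 1))
        val = int(mask[v, u])
        key = min(color_map, key=lambda k: abs(k - val))  # 取最接近的mask取值档位
        face_colors[i] = color_map[key]
    return face_colors
```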
S7030:判断是否完成全角度拍摄。是则表明已完成全角度拍摄,进入S7031,记录已着色完毕的三维人脸;否则表明仍未完成全角度拍摄,返回S7025,直至拍摄完用户各角度的面部UV图像。
S7031:记录着色完毕的三维人脸供UI提示预拍摄完毕。
也即在手机100确定已完成全角度的UV图像拍摄之后,保存着色完毕的三维人脸,并且可以通过UI提示用户预拍摄完毕。例如,手机100确定预拍摄完毕之后,在外屏106中弹出如图2G所示的“请翻转至内屏”的提示1005,以提示用户预拍摄完毕,并翻转手机100进行前置预览图像的拍摄。
可以理解,上述图7所示的流程图中的执行顺序只是一种示例,在另一些实施例中,也可以采用其他执行顺序,还可以拆分或合并部分步骤,在此不做限定。
实时查看阶段
下面将对与上述图7所示的手机100的预拍摄阶段处理过程对应的实时查看阶段的处理过程进行详细介绍。
如图9所示的实时查看阶段的处理过程包括实时查看阶段的UI流程以及异步变换算法流程,图9所示的流程图中各步骤的执行主体也可以是图3所示的手机100。图9所示的流程图中的实时查看阶段的UI流程中的S901至S907和图6所示的实时查看阶段的UI流程中的S601至S607完全相同,以下只简单说明,不再详述。如图9所示的实时查看阶段的UI流程包括以下内容:
S901:确定进入实时查看阶段。
例如,在一些实施例中,当手机100在确定已完成全角度拍摄,并通过语音或者UI提示用户预拍摄完成之后,手机100即可确定进入实时查看阶段。从而启动实时查看阶段的UI流程。
S902:判断预设时间段内是否有预拍摄记录。如果有,则表明预设时间段内有预拍摄记录,则可以进入S903确认用户是否需要重新进行预拍摄;否则表明预设时间段内没有预拍摄记录,则可以进入S904,启动预拍摄UI及算法流程。其中预设时间段例如可以是距离上次预拍摄的时间在半个小时以内等预先设定的时间段。
S903:判断用户是否选择重新进行预拍摄。如果是,则表明需要重新进行预拍摄,进入S904,否则表明不需要重新进行预拍摄,进入S905。
S904:启动预拍摄UI以及算法流程。预拍摄UI以及算法流程具体可参考前述关于图7所示的预拍摄阶段的UI以及算法流程的相关描述,在此不再赘述。
S905:开启前置摄像头101并在内屏103显示预览画面。在一些实施例中,在手机100确定有预设时间段内的预拍摄记录,并且无需重新进行预拍摄,或者手机100已经完成新的预拍摄UI流程以及算法流程之后,可以开启前置摄像头101采集用户的前置预览图像(RGB图像),并在内屏103实时显示预览画面。
S906:确定出前置预览图像对应的防晒霜检测结果。
也即在手机100通过前置摄像头101采集到用户的前置预览图像之后,手机100可以采用异步变换算法确定出与前置预览图像对应的防晒霜检测结果。具体的异步变换算法将在下文中结合S9061至S9065进行详细介绍,在此不再展开描述。
S907:实时显示添加有防晒霜检测结果的预览画面。
也就是将通过运行异步变换算法确定出的mask图像和当前帧作为不同的图层进行融合,得到融合后的图像,在融合后的图像中针对mask图像的不同区域填充RGB颜色空间中的不同颜色,将填充颜色后的图像作为预览画面进行实时显示。下面对图9所示的流程图中的实时查看阶段的异步变换算法进行详细介绍。其中填充有“灰色背景”的S9061、S9062和图6所示的实时查看阶段异步变换算法中的S6061、S6062完全相同,以下将简单描述,不再赘述,而对图9所示的异步变换算法流程中的S9063至S9065进行详细介绍。图9所示的异步变换算法包括以下内容:
S9061:输入前置摄像头采集到的图片序列。
例如,手机100通过前置摄像头101连续采集到N张前置预览图像f1至fn,将f1至fn构成的图像序列输入手机100的处理器110。
S9062:进行人脸检测及关键点检测。
例如,手机100通过处理器110将输入的前置预览图像f1至fn通过人脸检测模型进行人脸检测,从而再根据检测出的人脸区域,对人脸区域的面部五官(包括眉毛、眼睛、鼻子、嘴巴)以及面部轮廓等关键区域进行关键点检测。并且采用和上述图7中S7022所选择的同一种关键点标注方法进行人脸关键点标注。
S9063:读取预拍摄阶段记录的三维人脸。
也即读取图7所示的预拍摄阶段在S7031记录的着色完毕的三维人脸,其中包含采用不同角度的UV图像对应的防晒霜检测结果(mask图像)着色后的面片。
S9064:旋转三维人脸,使其与当前帧人脸的角度一致。
可以通过改变三维人脸的角度,并将其进行二维投影,计算投影后的二维图像中人脸关键点与当前帧(通过前置摄像头101拍摄的前置预览图像)人脸关键点的距离。例如将着色完毕的三维人脸旋转为图10A所示的状态,将图10A所示的三维人脸投影成图10B所示的二维图像,对图10B所示的二维图像中各个人脸关键点与当前帧中对应的各关键点的位置差进行加权求和,确定出加权求和的结果趋近于0或尽可能小时,则确定图10A所示的三维人脸当前的旋转角度与当前帧一致。
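该角度搜索与匹配过程可以用如下示意性代码表达(其中旋转函数rotate_fn、投影函数project_fn、角度网格angle_grid以及权重均为假设的接口与取值):

```python
import numpy as np

def match_pose_by_projection(vertices, kp_idx, kp_cur, rotate_fn, project_fn,
                             angle_grid, weights=None):
    """在角度网格中搜索使投影关键点与当前帧关键点加权距离最小的旋转角。"""
    kp_cur = np.asarray(kp_cur, dtype=np.float32)
    w = np.ones(len(kp_cur)) if weights is None else np.asarray(weights)
    best_angles, best_err = None, np.inf
    for angles in angle_grid:                 # 例如(yaw, pitch, roll)组合的网格
        rotated = rotate_fn(vertices, angles)
        proj = project_fn(rotated[kp_idx])    # 投影后的二维关键点(Nx2)
        err = float(np.sum(w * np.linalg.norm(proj - kp_cur, axis=1)))
        if err < best_err:
            best_angles, best_err = angles, err
    return best_angles
```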
S9065:将旋转后的三维人脸进行二维投影,根据面片颜色得到当前帧的检测结果(mask图像)。
在一些实施例中,可以将旋转后的三维人脸进行二维投影得到的图像确定为当前帧(前置预览图像)对应的mask图像,并且将该mask图像和当前帧作为不同的图层进行融合,得到融合后的图像,在融合后的图像中针对mask图像的不同区域填充RGB颜色空间中的不同颜色,将填充颜色后的图像作为预览画面进行实时显示。
此外,本申请实施例还可以实现将历史防晒霜涂抹状况和当前涂抹状况进行比对的功能。图11示出了本申请实施例提供的一种将历史防晒霜涂抹状况和当前涂抹状况进行比对的流程,其中各步骤的执行主体也可以为手机100,具体地,图11所示的流程图包括以下各步骤:
S1101:确定用户点选“与历史对比”功能。
例如,在图12A所示的实施例中,用户点击“与历史对比”控件1007,手机100检测到用户的这一点击操作,确定出用户点选了“与历史对比”功能。可以理解的是,这里“与历史对比”控件1007的名称只是一种示例,在实际应用中,可以以其他名称来表示与历史对比功能对应的控件,本申请对此不作限定。
S1102:判断是否有保存的预拍摄记录。如果是,则表明有保存的预拍摄记录,进入S1104,显示保存的预拍摄记录供用户选择;否则表明没有保存的预拍摄记录,则进入S1103,提示用户没有可对比的历史记录。
S1103:提示用户没有可对比的历史记录。
例如,用户点击图12A所示的“与历史对比”控件1007,手机100判断出没有保存的预拍摄记录,则弹出如图12B所示的“没有历史记录”控件1008,以提示用户没有可对比的历史记录。
S1104:显示所有保存的预拍摄记录的时间供用户选择。
例如,用户点击图12A所示的“与历史对比”控件1007,手机100判断出有保存的预拍摄记录,则进入如图12C所示的界面,显示出保存的预拍摄记录的时间等信息,供用户选择。
S1105:获取用户选择的预拍摄记录。
在一些实施例中,用户可以选择一条预拍摄记录。在一些实施例中,用户还可以同时选择多条预拍摄记录。例如,用户在图12C所示的界面中点击控件1009,则手机100获取用户选择的预拍摄记录2。
S1106:开启前置摄像头101并在内屏103显示预览画面。
例如,手机100确定用户选择某一条或多条预拍摄记录之后,开启前置摄像头101采集用户的前置预览图像,并在内屏103显示预览图像。
S1107:将前置摄像头101采集的图片序列及用户选择的预拍摄记录作为输入,运行异步变换算法,获取该预拍摄时间的历史检测结果在当前预览图片序列上的对应结果。
也即通过运行异步变换算法,确定出预拍摄记录所对应的预拍摄时间的历史检测结果(mask图像)和前置摄像头101实时采集的前置预览图像之间的映射关系,从而根据该映射关系,确定出预拍摄记录所对应的预拍摄时间的历史检测结果(mask图像)映射得到的映射检测结果。也就是说,将该预拍摄记录所对应的历史检测结果,经过异步变换算法,变换为与当前预览界面的人脸各区域位置相对应的显示结果。
其中异步变换算法可以为图6或者图9所示的实施例中的任意一种异步变换算法。
S1108:执行实时查看阶段流程,获得当前实时检测结果。
也就是说针对当前防晒霜涂抹状况,执行实时查看阶段流程,获取当前实时检测结果。
S1109:将历史结果与实时检测结果以不同图层叠加显示在预览画面上,以供用户直观地对比查看所选取的历史记录对应的历史检测结果和当前实时检测结果。
例如,在如图12D所示的实施例中,手机100将历史结果与实时检测结果,以及当前帧(前置预览图像)进行融合,使得用户可以直观地查看到历史记录的未涂抹防晒霜的区域为Z9,而当前未涂抹防晒霜的区域为Z4和Z5。其中,历史结果与实时检测结果可以以不同的颜色进行区别显示。
本申请实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时可实现上述各个方法实施例中的步骤。
本申请实施例提供了一种计算机程序产品,当计算机程序产品在移动终端上运行时,使得移动终端执行时可实现上述各个方法实施例中的步骤。
本申请实施例还提供了一种电子设备,该电子设备包括:至少一个处理器、存储器以及存储在所述存储器中并可在所述至少一个处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现上述任意各个方法实施例中的步骤。
本申请公开的机制的各实施例可以被实现在硬件、软件、固件或这些实现方法的组合中。本申请的实施例可实现为在可编程系统上执行的计算机程序或程序代码,该可编程系统包括至少一个处理器、存储系统(包括易失性和非易失性存储器和/或存储元件)、至少一个输入设备以及至少一个输出设备。
可将程序代码应用于输入指令,以执行本申请描述的各功能并生成输出信息。可以按已知方式将输出信息应用于一个或多个输出设备。为了本申请的目的,处理系统包括具有诸如例如数字信号处理器(Digital Signal Processor,DSP)、微控制器、专用集成电路(Application Specific Integrated Circuit,ASIC)或微处理器之类的处理器的任何系统。
程序代码可以用高级程序化语言或面向对象的编程语言来实现,以便与处理系统通信。在需要时,也可用汇编语言或机器语言来实现程序代码。事实上,本申请中描述的机制不限于任何特定编程语言的范围。在任一情形下,该语言可以是编译语言或解释语言。
在一些情况下,所公开的实施例可以以硬件、固件、软件或其任何组合来实现。所公开的实施例还可以被实现为由一个或多个暂时或非暂时性机器可读(例如,计算机可读)存储介质承载或存储在其上的指令,其可以由一个或多个处理器读取和执行。例如,指令可以通过网络或通过其他计算机可读介质分发。因此,机器可读介质可以包括用于以机器(例如,计算机)可读的形式存储或传输信息的任何机制,包括但不限于,软盘、光盘、光碟、只读存储器(CD-ROMs)、磁光盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、可擦除可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、电可擦除可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、磁卡或光卡、闪存、或用于利用因特网以电、光、声或其他形式的传播信号来传输信息(例如,载波、红外信号、数字信号等)的有形的机器可读存储器。因此,机器可读介质包括适合于以机器(例如计算机)可读的形式存储或传输电子指令或信息的任何类型的机器可读介质。
在附图中,可以以特定布置和/或顺序示出一些结构或方法特征。然而,应该理解,可能不需要这样的特定布置和/或排序。而是,在一些实施例中,这些特征可以以不同于说明性附图中所示的方式和/或顺序来布置。另外,在特定图中包括结构或方法特征并不意味着暗示在所有实施例中都需要这样的特征,并且在一些实施例中,可以不包括这些特征或者可以与其他特征组合。
需要说明的是,本申请各设备实施例中提到的各单元/模块都是逻辑单元/模块,在物理上,一个逻辑单元/模块可以是一个物理单元/模块,也可以是一个物理单元/模块的一部分,还可以以多个物理单元/模块的组合实现,这些逻辑单元/模块本身的物理实现方式并不是最重要的,这些逻辑单元/模块所实现的功能的组合才是解决本申请所提出的技术问题的关键。此外,为了突出本申请的创新部分,本申请上述各设备实施例并没有将与解决本申请所提出的技术问题关系不太密切的单元/模块引入,这并不表明上述设备实施例并不存在其它的单元/模块。
需要说明的是,在本专利的示例和说明书中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
虽然通过参照本申请的某些优选实施例,已经对本申请进行了图示和描述,但本领域的普通技术人员应该明白,可以在形式上和细节上对其作各种改变,而不偏离本申请的精神和范围。

Claims (25)

  1. 一种面部特征检测方法,应用于电子设备,所述电子设备包括位于所述电子设备不同侧的第一摄像头和第二摄像头,以及位于所述第二摄像头同一侧的第一屏幕,其特征在于,所述方法包括:
    检测到对用户的面部特征进行检测的检测指令;
    响应于所述检测指令,获取通过所述第一摄像头采集的多张包括用户面部的紫外图像;
    获取通过所述第二摄像头采集的包括用户面部的第一预览图像,其中所述第一预览图像为彩色图像;
    根据各紫外图像中用户的面部特征,确定出所述第一预览图像中用户的面部特征;
    根据确定出的所述第一预览图像中用户的面部特征,对所述第一预览图像中用户面部区域的像素点的像素值进行调整,得到目标图像;
    通过所述第一屏幕显示所述目标图像。
  2. 根据权利要求1所述的方法,其特征在于,所述紫外图像用于进行对用户面部的防晒霜涂抹情况的检测。
  3. 根据权利要求2所述的方法,其特征在于,
    所述响应于所述检测指令,获取通过所述第一摄像头采集的多张包括用户面部的紫外图像,包括:
    响应于所述检测指令,在确定出需要重新采集多张包括用户面部的紫外图像时,提示用户面对所述第一摄像头,并且通过所述第一摄像头采集到多张包括用户面部的紫外图像。
  4. 根据权利要求3所述的方法,其特征在于,所述获取通过所述第二摄像头采集的包括用户面部的第一预览图像,包括:
    在确定出已完成对所述多张包括用户面部的紫外图像的重新采集,或者在确定出无需重新采集多张包括用户面部的紫外图像时,提示用户面对所述第二摄像头,并且通过所述第二摄像头采集包括用户面部的第一预览图像。
  5. 根据权利要求2至4中任一项所述的方法,其特征在于,还包括:
    确定出各紫外图像中用户面部的防晒霜检测结果、各紫外图像的关键点信息以及所述第一预览图像的关键点信息。
  6. 根据权利要求5所述的方法,其特征在于,所述用户的面部特征包括用户面部的防晒霜检测结果,
    所述根据各紫外图像中用户的面部特征,确定出所述第一预览图像中用户的面部特征,包括:
    根据确定出的所述各紫外图像中用户面部的防晒霜检测结果,以及所述各紫外图像的关键点信息和所述第一预览图像的关键点信息,确定出所述第一预览图像中面部的防晒霜检测结果。
  7. 根据权利要求6所述的方法,其特征在于,
    所述根据确定出的所述各紫外图像中用户面部的防晒霜检测结果,以及所述各紫外图像的关键点信息和所述第一预览图像的关键点信息,确定出所述第一预览图像中面部的防晒霜检测结果,包括:
    根据所述各紫外图像的关键点信息、所述第一预览图像的关键点信息,确定出所述多张用户面部的紫外图像中和所述第一预览图像匹配的目标紫外图像;
    根据所述目标紫外图像的关键点信息、所述第一预览图像的关键点信息,确定出所述目标紫外图像和所述第一预览图像之间的变换关系;
    根据所述变换关系以及所述目标紫外图像对应的防晒霜检测结果,确定出所述第一预览图像中用户面部的防晒霜检测结果。
  8. 根据权利要求7所述的方法,其特征在于,
    所述根据所述各紫外图像的关键点信息、所述第一预览图像的关键点信息,确定出所述多张用户面部的紫外图像中和所述第一预览图像匹配的目标紫外图像,包括:
    根据各紫外图像的关键点信息分别计算出各紫外图像中的人脸位置、人脸大小以及人脸角度,并且根据所述第一预览图像的关键点信息计算出所述第一预览图像中的人脸位置、人脸大小以及人脸角度;
    从所述多张用户面部的紫外图像中确定出人脸位置、人脸大小以及人脸角度和所述第一预览图像接近的紫外图像作为所述目标紫外图像。
  9. 根据权利要求5所述的方法,其特征在于,所述电子设备还包括位于所述第一摄像头同一侧的第二屏幕,在确定出所述第一预览图像中面部的防晒霜检测结果之前,所述方法还包括:
    根据各紫外图像的关键点信息分别计算出各紫外图像中的人脸位置、人脸大小以及人脸角度;
    根据计算出的各紫外图像中的人脸位置、人脸大小以及人脸角度,生成第一提示信息;
    通过所述第二屏幕显示所述第一提示信息,以提示用户调整面部位置和/或面部姿态。
  10. 根据权利要求9所述的方法,其特征在于,所述电子设备还包括位于所述第一摄像头同一侧的第三摄像头,所述方法还包括:
    响应于所述检测指令,获取通过所述第三摄像头采集的多张包括用户面部的第二预览图像;
    确定出各第二预览图像的关键点信息;
    根据各第二预览图像的关键点信息分别计算出各第二预览图像中的人脸位置、人脸大小以及人脸角度;
    根据计算出的各第二预览图像中的人脸位置、人脸大小以及人脸角度,生成第二提示信息;
    通过所述第二屏幕显示所述第二提示信息,以提示用户调整面部位置和/或面部姿态。
  11. 根据权利要求7所述的方法,其特征在于,
    所述根据所述目标紫外图像的关键点信息、所述第一预览图像的关键点信息,确定出所述目标紫外图像和所述第一预览图像之间的变换关系,包括:
    根据所述目标紫外图像的关键点信息对所述目标紫外图像的关键点进行三角化处理,并且根据所述第一预览图像的关键点信息对所述第一预览图像的关键点进行三角化处理;
    对所述目标紫外图像中三角化处理后得到的各人脸三角,和所述第一预览图像中对应的人脸三角之间进行仿射变换,计算出所述目标紫外图像中的各人脸三角和所述第一预览图像中对应的人脸三角之间的变换关系;
    将所述目标紫外图像中的各人脸三角和所述第一预览图像中对应的人脸三角之间的变换关系的构成的集合,确定为所述目标紫外图像和所述第一预览图像之间的变换关系。
  12. 根据权利要求7所述的方法,其特征在于,
    所述根据所述变换关系以及所述目标紫外图像对应的防晒霜检测结果,确定出所述第一预览图像中用户面部的防晒霜检测结果,包括:
    基于所述目标紫外图像和所述第一预览图像之间的变换关系,对所述目标紫外图像对应的防晒霜检测结果进行变换,得到对应于所述目标紫外图像的变换后的防晒霜检测结果;
    将所述变换后的防晒霜检测结果,确定为所述第一预览图像中用户面部的防晒霜检测结果。
  13. 根据权利要求6所述的方法,其特征在于,
    所述根据确定出的所述各紫外图像中用户面部的防晒霜检测结果,以及所述各紫外图像的关键点信息和所述第一预览图像的关键点信息,确定出所述第一预览图像中面部的防晒霜检测结果,包括:
    根据所述各紫外图像的关键点信息,进行三维人脸重建,得到未着色的三维人脸以及所述三维人脸的关键点信息;
    根据所述各紫外图像中用户面部的防晒霜检测结果,对所述未着色的三维人脸进行着色,得到着色后的三维人脸;
    根据所述着色后的三维人脸、所述三维人脸的关键点信息、所述第一预览图像的关键点信息,确定出所述第一预览图像中用户面部的防晒霜检测结果。
  14. 根据权利要求13所述的方法,其特征在于,
    所述根据所述着色后的三维人脸、所述三维人脸的关键点信息、所述第一预览图像的关键点信息,确定出所述第一预览图像中用户面部的防晒霜检测结果,包括:
    在三维空间中旋转所述着色后的三维人脸,得到多角度的三维人脸;
    将所述多角度的三维人脸进行二维投影,得到多角度的投影人脸,并基于所述三维人脸的关键点信息确定出各角度的投影人脸对应的人脸关键点信息;
    基于所述各角度的投影人脸对应的人脸关键点信息、所述第一预览图像的关键点信息,从所述多角度的投影人脸中筛选出和所述第一预览图像匹配的投影人脸,将所述匹配的投影人脸确定为所述第一预览图像中用户面部的防晒霜检测结果。
  15. 根据权利要求14所述的方法,其特征在于,
    所述基于所述各角度的投影人脸对应的人脸关键点信息、所述第一预览图像的关键点信息,从所述多角度的投影人脸中筛选出和所述第一预览图像匹配的投影人脸,包括:
    分别计算出各角度的投影人脸中的人脸关键点和所述第一预览图像的对应的人脸关键点的位置差,并且分别对各角度的投影人脸中所有的位置差进行加权求和;
    将加权求和的结果满足条件的投影人脸确定为和所述第一预览图像匹配的投影人脸。
  16. 根据权利要求1至15中任一项所述的方法,其特征在于,还包括:
    检测到对用户面部的防晒霜涂抹情况和历史防晒霜检测记录进行对比的指令;
    响应于所述检测指令,确定出所述历史防晒霜检测记录对应的多张包括用户面部的历史紫外图像,以及各历史紫外图像的关键点信息和各历史紫外图像中用户面部的防晒霜检测结果;
    根据所述各历史紫外图像的关键点信息、所述各历史紫外图像中用户面部的防晒霜检测结果以及所述第一预览图像的关键点信息,确定出第一预览图像中用户面部的历史防晒霜检测结果;
    根据确定出的第一预览图像中用户面部的历史防晒霜检测结果,对所述第一预览图像中用户面部区域的像素点的像素值进行调整,得到待叠加图像,其中,在所述待叠加图像的用户面部区域中,防晒霜涂抹程度不同的子区域的像素点的像素值不同;
    将所述目标图像和所述待叠加图像作为不同图层进行叠加,得到叠加图像;通过所述第一屏幕显示所述叠加图像。
  17. 根据权利要求5所述的方法,其特征在于,所述确定出各紫外图像中用户面部的防晒霜检测结果、各紫外图像的关键点信息以及所述第一预览图像的关键点信息,包括:
    利用深度学习的方法或者级联形状回归的方法,分别对所述各紫外图像、所述第一预览图像进行关键点检测,得到所述各紫外图像的关键点信息以及所述第一预览图像的关键点信息;
    利用预设的防晒霜检测模型对所述各紫外图像分别进行用户面部防晒霜涂抹情况检测,得到所述各紫外图像中用户面部的防晒霜检测结果。
  18. 根据权利要求1至17中任一项所述的方法,其特征在于,
    所述各紫外图像的采集时间和所述第一预览图像的采集时间之间的差值小于时间阈值。
  19. 根据权利要求9所述的方法,其特征在于,
    所述第一屏幕为内屏,所述第二屏幕为外屏,或者
    所述第一屏幕为折叠屏,所述第二屏幕为外屏。
  20. 根据权利要求19所述的方法,其特征在于,所述第一屏幕的面积大于所述第二屏幕的面积。
  21. 根据权利要求10所述的方法,其特征在于,
    所述第一摄像头为紫外摄像头,所述第二摄像头为前置摄像头,所述第三摄像头为后置摄像头。
  22. 根据权利要求1至21中任一项所述的方法,其特征在于,还包括:
    在所述目标图像的用户面部区域中,防晒霜涂抹程度不同的子区域的显示色彩不同。
  23. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有指令,该指令在电子设备上执行时使电子设备执行权利要求1-22中任一项所述的面部特征检测方法。
  24. 一种计算机程序产品,其特征在于,所述计算机程序产品包括指令,所述指令当被一个或多个处理器执行时用于实现如权利要求1-22中任一项所述的面部特征检测方法。
  25. 一种电子设备,其特征在于,包括:
    存储器,用于存储指令,以及
    一个或多个处理器,当所述指令被所述一个或多个处理器执行时,所述处理器执行如权利要求1-22中任一项所述的面部特征检测方法。
PCT/CN2023/087675 2022-04-15 2023-04-11 面部特征检测方法、可读介质和电子设备 WO2023198073A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210400185.X 2022-04-15
CN202210400185.XA CN116959052A (zh) 2022-04-15 2022-04-15 面部特征检测方法、可读介质和电子设备

Publications (1)

Publication Number Publication Date
WO2023198073A1 true WO2023198073A1 (zh) 2023-10-19

Family

ID=88328956

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/087675 WO2023198073A1 (zh) 2022-04-15 2023-04-11 面部特征检测方法、可读介质和电子设备

Country Status (2)

Country Link
CN (1) CN116959052A (zh)
WO (1) WO2023198073A1 (zh)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102946514A (zh) * 2012-11-08 2013-02-27 广东欧珀移动通信有限公司 移动终端的自拍方法和装置
CN104715393A (zh) * 2013-12-16 2015-06-17 国际商业机器公司 用于推荐正确防晒霜使用的方法和系统
CN104935698A (zh) * 2015-06-23 2015-09-23 上海卓易科技股份有限公司 一种智能终端的拍摄方法、拍摄装置及智能手机
CN111797264A (zh) * 2019-04-09 2020-10-20 北京京东尚科信息技术有限公司 图像增广与神经网络训练方法、装置、设备及存储介质
US20210068745A1 (en) * 2019-09-06 2021-03-11 SmileDirectClub LLC Systems and methods for user monitoring
CN113034354A (zh) * 2021-04-20 2021-06-25 北京优彩科技有限公司 一种图像处理方法、装置、电子设备和刻度存储介质
CN115700841A (zh) * 2021-07-31 2023-02-07 华为技术有限公司 检测方法及电子设备
CN115760931A (zh) * 2021-09-03 2023-03-07 华为技术有限公司 图像处理方法及电子设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117119307A (zh) * 2023-10-23 2023-11-24 珠海九松科技有限公司 一种基于深度学习的视频交互方法
CN117119307B (zh) * 2023-10-23 2024-03-08 珠海九松科技有限公司 一种基于深度学习的视频交互方法

Also Published As

Publication number Publication date
CN116959052A (zh) 2023-10-27

Similar Documents

Publication Publication Date Title
CN108764091B (zh) 活体检测方法及装置、电子设备和存储介质
WO2019228473A1 (zh) 人脸图像的美化方法和装置
WO2022179025A1 (zh) 图像处理方法及装置、电子设备和存储介质
KR101733512B1 (ko) 얼굴 특징 기반의 가상 체험 시스템 및 그 방법
CN113160094A (zh) 图像处理方法及装置、电子设备和存储介质
WO2021078001A1 (zh) 一种图像增强方法及装置
WO2021036853A1 (zh) 一种图像处理方法及电子设备
CN109345485A (zh) 一种图像增强方法、装置、电子设备及存储介质
WO2022110837A1 (zh) 图像处理方法及装置
US11030733B2 (en) Method, electronic device and storage medium for processing image
US11308692B2 (en) Method and device for processing image, and storage medium
CN107730448B (zh) 基于图像处理的美颜方法及装置
CN104077563B (zh) 人脸识别方法和装置
WO2023198073A1 (zh) 面部特征检测方法、可读介质和电子设备
WO2019037014A1 (zh) 一种图像检测的方法、装置及终端
CN113850726A (zh) 图像变换方法和装置
US11032529B2 (en) Selectively applying color to an image
CN113570581A (zh) 图像处理方法及装置、电子设备和存储介质
CN113822798B (zh) 生成对抗网络训练方法及装置、电子设备和存储介质
KR101647318B1 (ko) 휴대용 피부상태 분석 장치 및 이를 이용한 피부 관리 서비스 방법
KR20170046635A (ko) 얼굴 특징 기반의 가상 체험 시스템 및 그 방법
WO2018133305A1 (zh) 一种图像处理的方法及装置
CN112153300A (zh) 多目摄像头曝光方法、装置、设备及介质
CN106327588B (zh) 智能终端及其图像处理方法和装置
CN110766631A (zh) 人脸图像的修饰方法、装置、电子设备和计算机可读介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23787716

Country of ref document: EP

Kind code of ref document: A1