CN114155569B - Cosmetic progress detection method, device, equipment and storage medium

Cosmetic progress detection method, device, equipment and storage medium

Info

Publication number
CN114155569B
Authority
CN
China
Prior art keywords
image
makeup
frame image
target
face
Prior art date
Legal status
Active
Application number
CN202111015242.4A
Other languages
Chinese (zh)
Other versions
CN114155569A (en)
Inventor
刘聪
苗锋
张梦洁
Current Assignee
Soyoung Technology Beijing Co Ltd
Original Assignee
Soyoung Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Soyoung Technology Beijing Co Ltd
Priority to CN202111308461.1A (CN115731142A)
Priority to CN202111308470.0A (CN115761827A)
Priority to CN202111015242.4A (CN114155569B)
Priority to CN202111308454.1A (CN115937919A)
Priority to CN202111306864.2A (CN115731591A)
Publication of CN114155569A
Application granted
Publication of CN114155569B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Abstract

The application provides a makeup progress detection method, device, equipment and storage medium. The method comprises: acquiring a real-time makeup video of a user currently applying a specific makeup; and determining the user's current progress on that makeup according to the initial frame image and the current frame image of the real-time makeup video. The makeup progress is determined by comparing the current frame image with the initial frame image of the user's makeup process. The progress can be detected through image processing alone, the detection accuracy is high, and the user's progress can be tracked in real time for highlight, contouring, blush, foundation, concealer, eye shadow, eyeliner, eyebrow and similar makeup. No deep learning model is needed, so the calculation amount is small and the cost is low; the processing pressure on the server is reduced, the efficiency of makeup progress detection is improved, and the real-time requirement of makeup progress detection can be met.

Description

Cosmetic progress detection method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a makeup progress detection method, device, equipment and storage medium.
Background
Makeup has become an essential part of many people's daily lives, and the makeup process involves numerous steps. If the makeup progress can be fed back to the user in real time, the effort the user spends on makeup will be greatly reduced and makeup time will be saved.
At present, the related art provides functions such as virtual makeup try-on, skin color detection and personalized product recommendation based on deep learning models, all of which require collecting a large number of face pictures in advance to train the models.
However, face pictures are the user's private data, and it is difficult to collect them at scale. Model training also consumes a large amount of computing resources, so the cost is high. Moreover, model accuracy trades off against real-time performance: makeup progress detection has to capture the user's facial information in real time to determine the current makeup progress, so the real-time requirement is high, and a deep learning model fast enough to meet it tends to have low detection accuracy.
Disclosure of Invention
The application provides a makeup progress detection method, device, equipment and storage medium in which the current makeup progress is determined from the difference between the initial frame image and the current frame image. The accuracy of makeup progress detection is high, the calculation amount is small, the cost is low, the processing pressure on the server is reduced, the efficiency of makeup progress detection is improved, and the real-time requirement of makeup progress detection can be met.
The embodiment of the first aspect of the application provides a makeup progress detection method, which comprises the following steps:
acquiring a real-time makeup video of a user currently applying a specific makeup;
and determining the current makeup progress of the user for making up the specific makeup according to the initial frame image and the current frame image of the real-time makeup video.
In some embodiments of the present application, the specific makeup includes highlight makeup or contouring makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises:
acquiring at least one target makeup area corresponding to the specific makeup;
according to the target makeup area, acquiring a first target area image corresponding to the specific makeup from the initial frame image, and acquiring a second target area image corresponding to the specific makeup from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
In some embodiments of the present application, the obtaining, from the initial frame image according to the target makeup area, a first target area image corresponding to the specific makeup includes:
Detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key point;
and acquiring a first target area image corresponding to the specific makeup from the face area image according to the first face key point and the target makeup area.
In some embodiments of the present application, the extracting a first target area image corresponding to the specific makeup from the face area image according to the first face key point and the target makeup area includes:
determining one or more target key points on a region outline corresponding to the target makeup region in the face region image from the first face key points;
generating a mask image corresponding to the face region image according to the target key points corresponding to the target makeup region;
and calculating the mask image and the face area image to obtain a first target area image corresponding to the specific makeup.
In some embodiments of the present application, the generating a mask image corresponding to the face region image according to the target key point corresponding to the target makeup region includes:
If a plurality of target key points corresponding to the target makeup areas exist, determining each edge coordinate of the target makeup area in the face area image according to the target key points; modifying the pixel values of all pixel points in the area defined by the edge coordinates into preset values to obtain a mask area corresponding to the target makeup area;
if the number of target key points corresponding to the target makeup area is one, drawing an elliptical area with a preset size by taking the target key points as the center, and modifying the pixel values of all pixel points in the elliptical area to preset values to obtain a mask area corresponding to the target makeup area;
and modifying the pixel values of all the pixel points outside the mask area to be zero to obtain a mask image corresponding to the face area image.
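By way of illustration only, the following Python/OpenCV sketch shows one way such a mask image could be built from the target key points; the pixel coordinates, the preset value of 255 and the fixed ellipse size are assumptions made for the example, not details fixed by the embodiment.

```python
import cv2
import numpy as np

def build_area_mask(face_shape, target_points, ellipse_axes=(40, 25), preset_value=255):
    """Mask for one target makeup area: a filled polygon if several key points
    are given, otherwise an ellipse of a preset size centred on the single
    key point; all pixels outside the mask area stay zero."""
    mask = np.zeros(face_shape[:2], dtype=np.uint8)
    pts = np.asarray(target_points, dtype=np.int32)
    if len(pts) > 1:
        cv2.fillPoly(mask, [pts], preset_value)          # edge coordinates enclose the area
    else:
        cv2.ellipse(mask, tuple(pts[0]), ellipse_axes, 0, 0, 360,
                    preset_value, thickness=-1)          # single key point -> preset ellipse
    return mask

# The first target area image is then obtained by ANDing the mask with the face region image:
# first_target = cv2.bitwise_and(face_region_img, face_region_img,
#                                mask=build_area_mask(face_region_img.shape, key_pts))
```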
In some embodiments of the present application, the specific makeup includes a blush makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises:
acquiring at least one target makeup area corresponding to the specific makeup;
Generating a makeup mask image according to the target makeup area;
and determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the initial frame image and the current frame image.
In some embodiments of the application, the determining, according to the makeup mask map, the initial frame image, and the current frame image, a current makeup progress corresponding to the current frame image includes:
taking the makeup mask image as a reference, acquiring a first target area image for makeup from the initial frame image, and acquiring a second target area image for makeup from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
In some embodiments of the present application, the specific makeup includes eyeliner makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises the following steps:
acquiring a makeup mask image corresponding to the initial frame image and the current frame image;
according to the initial frame image, simulating to generate a result image after the eye line is made up;
And determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the result image, the initial frame image and the current frame image.
In some embodiments of the present application, the determining a current makeup progress corresponding to the current frame image according to the makeup mask map, the result image, the initial frame image, and the current frame image includes:
taking a makeup mask image corresponding to the initial frame image as a reference, and acquiring a first makeup target area image from the initial frame image;
acquiring a second target area image for makeup from the current frame image according to the makeup mask image corresponding to the current frame image;
acquiring a third target area image of eye line makeup according to the result image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image, the second target area image and the third target area image.
In some embodiments of the application, the determining a current makeup progress corresponding to the current frame image according to the first target area image, the second target area image, and the third target area image includes:
Respectively converting the first target area image, the second target area image and the third target area image into images containing saturation channels in an HLS color space;
and determining the current makeup progress corresponding to the current frame image according to the converted first target area image, the converted second target area image and the converted third target area image.
In some embodiments of the present application, the determining a current makeup progress corresponding to the current frame image according to the converted first target area image, second target area image and third target area image includes:
calculating a first average pixel value corresponding to the first target area image, a second average pixel value corresponding to the second target area image and a third average pixel value corresponding to the third target area image after conversion respectively;
calculating a first difference between the second average pixel value and the first average pixel value, and calculating a second difference between the third average pixel value and the first average pixel value;
and calculating the ratio of the first difference value to the second difference value to obtain the current makeup progress corresponding to the current frame image.
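Assuming the three target area images are 8-bit BGR crops that have already been aligned, the ratio described above can be sketched as follows; the use of OpenCV's HLS conversion and the clamping of the result to [0, 1] are illustrative choices rather than requirements of the embodiment.

```python
import cv2
import numpy as np

def eyeliner_progress(first_region, second_region, third_region):
    """Progress = (mean_S(current) - mean_S(initial)) / (mean_S(result) - mean_S(initial))."""
    def mean_saturation(bgr_img):
        hls = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HLS)
        return float(np.mean(hls[:, :, 2]))          # S channel of the HLS colour space

    m1 = mean_saturation(first_region)               # initial frame image
    m2 = mean_saturation(second_region)              # current frame image
    m3 = mean_saturation(third_region)               # simulated finished result image
    if m3 == m1:                                     # guard against division by zero
        return 1.0
    return float(np.clip((m2 - m1) / (m3 - m1), 0.0, 1.0))
```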
In some embodiments of the present application, before determining, according to the first target area image, the second target area image, and the third target area image, a current makeup progress corresponding to the current frame image, the method further includes:
aligning the first target area image and the second target area image;
and carrying out alignment processing on the first target area image and the third target area image.
In some embodiments of the present application, the aligning the first target area image and the second target area image includes:
respectively carrying out binarization processing on the first target area image and the second target area image to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image;
and computing the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image.
In some embodiments of the present application, the aligning the first target area image and the second target area image further includes:
Acquiring a face region image corresponding to the initial frame image and a face region image corresponding to the result image;
performing and operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image;
and computing the second mask image and the face area image corresponding to the result image to obtain a new second target area image corresponding to the result image.
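A minimal sketch of this alignment step, written generically over two target area images and their face region images, assuming 8-bit BGR inputs and a binarisation threshold of zero (neither is prescribed by the embodiment):

```python
import cv2
import numpy as np

def align_regions(region_a, region_b, face_a, face_b, thresh=0):
    """Keep only the pixels present in both target area images and re-apply
    the resulting intersection mask to the two face region images."""
    gray_a = cv2.cvtColor(region_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(region_b, cv2.COLOR_BGR2GRAY)
    _, bin_a = cv2.threshold(gray_a, thresh, 255, cv2.THRESH_BINARY)
    _, bin_b = cv2.threshold(gray_b, thresh, 255, cv2.THRESH_BINARY)
    intersection = cv2.bitwise_and(bin_a, bin_b)          # second mask image
    new_a = cv2.bitwise_and(face_a, face_a, mask=intersection)
    new_b = cv2.bitwise_and(face_b, face_b, mask=intersection)
    return new_a, new_b
```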
In some embodiments of the present application, the obtaining a cosmetic mask map corresponding to the initial frame image and the current frame image includes:
acquiring an eye line style graph selected by a user;
if the eye state of the user in the initial frame image is the eye opening state, acquiring an eye opening pattern image corresponding to the eye line pattern image; determining the eye opening pattern image as a cosmetic mask image corresponding to the initial frame image;
if the eye state of the user in the initial frame image is the eye closing state, acquiring an eye closing pattern image corresponding to the eye line pattern image, and determining the eye closing pattern image as the makeup mask image corresponding to the initial frame image.
In some embodiments of the present application, the particular makeup includes an eye shadow makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises:
Acquiring an eye shadow mask image;
according to each target makeup area of eye shadow makeup, respectively splitting a makeup mask image corresponding to each target makeup area from the eye shadow mask image;
and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and the makeup mask image corresponding to each target makeup area.
In some embodiments of the present application, the determining a current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image, and the makeup mask image corresponding to each of the target makeup areas includes:
respectively taking the makeup mask image corresponding to each target makeup area as a reference, and acquiring a first target area image corresponding to each target makeup area from the initial frame image;
respectively taking the makeup mask image corresponding to each target makeup area as a reference, and acquiring a second target area image corresponding to each target makeup area from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each target makeup area.
In some embodiments of the present application, the determining a current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each of the target makeup areas includes:
respectively converting a first target area image and a second target area image corresponding to each target makeup area into images containing preset single-channel components in an HLS color space;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each converted target makeup area.
In some embodiments of the application, the determining a current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each converted target makeup area includes:
calculating difference absolute values of the preset single-channel components corresponding to pixel points with the same position in a first target area image and a second target area image which correspond to the same target makeup area after conversion respectively;
counting the number of pixel points of which the absolute value of the difference value corresponding to each target makeup area meets a preset makeup completion condition;
Respectively calculating the ratio of the number of the pixel points corresponding to each target makeup area to the total number of the pixel points in the corresponding target makeup area to obtain the makeup progress corresponding to each target makeup area;
and calculating the current makeup progress corresponding to the current frame image according to the makeup progress corresponding to each target makeup area and the preset weight corresponding to each target makeup area.
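The weighted combination described above might look as follows, assuming the preset single-channel component is the S channel of HLS and the completion condition is an absolute difference of at least a fixed threshold; both choices, like the function and parameter names, are assumptions made for the sketch.

```python
import cv2
import numpy as np

def eyeshadow_progress(area_images, weights, channel=2, diff_threshold=10):
    """area_images: list of (first_region, second_region, area_mask) tuples,
    one per target makeup area; weights: preset weight per area (summing to 1)."""
    per_area = []
    for first_region, second_region, area_mask in area_images:
        c1 = cv2.cvtColor(first_region, cv2.COLOR_BGR2HLS)[:, :, channel].astype(np.int16)
        c2 = cv2.cvtColor(second_region, cv2.COLOR_BGR2HLS)[:, :, channel].astype(np.int16)
        diff = np.abs(c2 - c1)                                       # per-pixel absolute difference
        inside = area_mask > 0
        done = np.count_nonzero(inside & (diff >= diff_threshold))   # pixels meeting the completion condition
        total = max(np.count_nonzero(inside), 1)
        per_area.append(done / total)                                # progress of this target makeup area
    return float(np.dot(per_area, weights))                          # weighted overall progress
```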
In some embodiments of the present application, the obtaining a first target area image from the initial frame image with reference to the cosmetic mask image includes:
detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key point;
and taking the makeup mask image as a reference, and acquiring a first target area image for makeup from the face area image.
In some embodiments of the present application, the obtaining a first target area image of makeup from the face area image with reference to the makeup mask image includes:
respectively converting the makeup mask image and the face region image into binary images;
performing AND operation on the binary image corresponding to the cosmetic mask image and the binary image corresponding to the face region image to obtain a first mask image corresponding to an intersection region of the cosmetic mask image and the face region image;
And computing the first mask image and the face area image corresponding to the initial frame image to obtain a first target area image.
In some embodiments of the present application, before performing and operation on the binarized image corresponding to the cosmetic mask image and the binarized image corresponding to the face region image, the method further includes:
determining one or more first positioning points on the outline of each makeup area in the makeup mask map according to the standard human face key points corresponding to the makeup mask map;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the makeup mask image, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
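The embodiment does not tie the stretching to a particular algorithm; one possible realisation, assumed here for illustration, is to fit a similarity/affine transform between the first and second positioning points and warp the mask with it.

```python
import cv2
import numpy as np

def stretch_mask(mask_img, first_points, second_points, face_shape):
    """Warp the makeup mask so that each first positioning point lands on its
    corresponding second positioning point on the face region image.
    Requires at least two point pairs."""
    src = np.asarray(first_points, dtype=np.float32)
    dst = np.asarray(second_points, dtype=np.float32)
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)    # similarity fit over the point pairs
    h, w = face_shape[:2]
    return cv2.warpAffine(mask_img, matrix, (w, h))
```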
In some embodiments of the present application, the obtaining a first target area image of makeup from the face area image with reference to the makeup mask image includes:
splitting the cosmetic mask map into a plurality of sub-mask maps, wherein each sub-mask map comprises at least one target cosmetic area;
respectively converting each sub-mask image and the face region image into a binary image;
Respectively performing AND operation on the binary image corresponding to each sub-mask image and the binary image corresponding to the face region image to obtain the sub-mask image corresponding to each sub-mask image;
respectively performing AND operation on each sub-mask image and the face region image corresponding to the initial frame image to obtain a plurality of sub-target region images corresponding to the initial frame image;
and combining the plurality of sub-target area images into a first target area image corresponding to the initial frame image.
In some embodiments of the present application, before performing and operation on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face region image, the method further includes:
determining one or more first positioning points on the outline of a target makeup area in a first sub-mask map according to the standard face key points corresponding to the makeup mask map, wherein the first sub-mask map is any one of the sub-mask maps;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the first sub-mask map, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
In some embodiments of the present application, the particular makeup includes an eyebrow makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises:
acquiring a first target area image corresponding to eyebrows from the initial frame image, and acquiring a second target area image corresponding to the eyebrows from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
In some embodiments of the present application, the acquiring a first target area image corresponding to an eyebrow from the initial frame image includes:
detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key point;
and acquiring a first target area image corresponding to eyebrows from the face area image according to the eyebrow key points included in the first face key points.
In some embodiments of the present application, the intercepting, from the face region image according to the eyebrow key points included in the first face key points, a first target region image corresponding to eyebrows includes:
Interpolating eyebrow key points between the eyebrows and the eyebrow peaks included in the first face key points to obtain a plurality of interpolation points;
intercepting all eyebrow key points between the eyebrows and the eyebrow peaks and a closed area formed by connecting the interpolation points from the face area image to obtain partial eyebrow images between the eyebrows and the eyebrow peaks;
intercepting a closed region formed by connecting all eyebrow key points between the eyebrow peak and the eyebrow tail from the face region image to obtain a partial eyebrow image between the eyebrow peak and the eyebrow tail;
and splicing the partial eyebrow images between the eyebrow head and the eyebrow peak and the partial eyebrow images between the eyebrow peak and the eyebrow tail into a first target area image corresponding to the eyebrows.
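As an illustrative sketch (the interpolation density and the ordering of the key points are assumptions), the two partial eyebrow images could be cut out and spliced by rasterising the enclosed regions into a single mask:

```python
import cv2
import numpy as np

def eyebrow_target_image(face_img, head_to_peak_pts, peak_to_tail_pts, points_per_segment=5):
    """Cut the eyebrow out of the face region image in two parts and splice them.
    head_to_peak_pts / peak_to_tail_pts: ordered eyebrow key points (x, y) along
    the contour from brow head to brow peak and from brow peak to brow tail."""
    mask = np.zeros(face_img.shape[:2], dtype=np.uint8)

    # densify the head-to-peak contour with linear interpolation points
    pts = np.asarray(head_to_peak_pts, dtype=np.float32)
    dense = [pts[0]]
    for a, b in zip(pts[:-1], pts[1:]):
        for t in np.linspace(0.0, 1.0, points_per_segment + 1)[1:]:
            dense.append(a + t * (b - a))
    cv2.fillPoly(mask, [np.asarray(dense, dtype=np.int32)], 255)          # head-to-peak part

    cv2.fillPoly(mask, [np.asarray(peak_to_tail_pts, dtype=np.int32)], 255)  # peak-to-tail part

    # splicing the two partial eyebrow images amounts to applying the combined mask
    return cv2.bitwise_and(face_img, face_img, mask=mask)
```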
In some embodiments of the application, the determining, according to the first target area image and the second target area image, a current makeup progress corresponding to the current frame image includes:
respectively converting the first target area image and the second target area image into images containing preset single-channel components in an HSV color space;
and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
In some embodiments of the application, the determining a current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image includes:
calculating difference absolute values of the preset single-channel components corresponding to pixel points with the same position in the converted first target area image and the converted second target area image respectively;
counting the number of pixel points of which the corresponding absolute values of the differences meet preset makeup completion conditions;
and calculating the ratio of the counted pixel point number to the total number of pixel points in all the target makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
In some embodiments of the application, before determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image, the method further includes:
respectively carrying out binarization processing on the first target area image and the second target area image to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image;
Performing and operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to an intersection area of the first target area image and the second target area image;
acquiring a face region image corresponding to the initial frame image and a face region image corresponding to the current frame image;
performing and operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image;
and calculating the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the current frame image.
In some embodiments of the present application, before determining the current makeup progress corresponding to the current frame image, the method further includes:
and respectively carrying out boundary corrosion treatment on the makeup areas in the first target area image and the second target area image.
In some embodiments of the present application, the specific makeup includes foundation makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises the following steps:
Simulating and generating a result image after finishing the specific makeup according to the initial frame image;
respectively obtaining integral image brightness corresponding to the initial frame image, the result image and the current frame image;
respectively obtaining the brightness of the face region corresponding to the initial frame image, the result image and the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the overall image brightness and the face area brightness corresponding to the initial frame image, the result image and the current frame image respectively.
In some embodiments of the present application, the obtaining the overall image brightness corresponding to the initial frame image, the result image, and the current frame image respectively includes:
respectively converting the initial frame image, the result image and the current frame image into gray level images;
respectively calculating gray average values of pixel points in gray images corresponding to the initial frame image, the result image and the current frame image after conversion;
and respectively determining the gray average values corresponding to the initial frame image, the result image and the current frame image as the integral image brightness corresponding to the initial frame image, the result image and the current frame image.
In some embodiments of the present application, the obtaining the brightness of the face region corresponding to the initial frame image, the result image, and the current frame image respectively includes:
respectively obtaining face area images corresponding to the initial frame image, the result image and the current frame image;
respectively converting the face area images corresponding to the initial frame image, the result image and the current frame image into face gray level images;
and respectively calculating the gray average value of pixel points in the face gray images corresponding to the initial frame image, the result image and the current frame image to obtain the face region brightness corresponding to the initial frame image, the result image and the current frame image.
In some embodiments of the present application, the determining a current makeup progress corresponding to the current frame image according to the overall image brightness and the face area brightness corresponding to the initial frame image, the result image, and the current frame image respectively includes:
determining a first environment change brightness corresponding to the current frame image according to the overall image brightness and the face region brightness corresponding to the initial frame image and the overall image brightness and the face region brightness corresponding to the current frame image;
Determining second environment change brightness corresponding to the result image according to the overall image brightness and the face region brightness corresponding to the initial frame image and the overall image brightness and the face region brightness corresponding to the result image;
and determining the current makeup progress corresponding to the current frame image according to the first environment change brightness, the second environment change brightness, the face area brightness corresponding to the initial frame image, the face area brightness corresponding to the current frame image and the face area brightness corresponding to the result image.
In some embodiments of the present application, the determining, according to the overall image brightness and the face area brightness corresponding to the initial frame image and the overall image brightness and the face area brightness corresponding to the current frame image, the first environment change brightness corresponding to the current frame image includes:
calculating the difference value between the brightness of the whole image corresponding to the initial frame image and the brightness of the face area corresponding to the initial frame image to obtain the ambient brightness of the initial frame image;
calculating the difference value between the brightness of the whole image corresponding to the current frame image and the brightness of the face area corresponding to the current frame image to obtain the ambient brightness of the current frame image;
And determining the absolute value of the difference between the ambient brightness of the current frame image and the ambient brightness of the initial frame image as a first ambient variation brightness corresponding to the current frame image.
In some embodiments of the present application, the determining a current makeup progress corresponding to the current frame image according to the first environment change brightness, the second environment change brightness, the face area brightness corresponding to the initial frame image, the face area brightness corresponding to the current frame image, and the face area brightness corresponding to the result image includes:
determining a makeup brightness change value corresponding to the current frame image according to the first environment change brightness, the brightness of the face region corresponding to the initial frame image and the brightness of the face region corresponding to the current frame image;
determining a makeup brightness change value corresponding to the result image according to the second environment change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the result image;
and calculating the ratio of the makeup brightness change value corresponding to the current frame image to the makeup brightness change value corresponding to the result image to obtain the current makeup progress corresponding to the current frame image.
In some embodiments of the present application, the determining a makeup brightness change value corresponding to the current frame image according to the first environment change brightness, the face area brightness corresponding to the initial frame image, and the face area brightness corresponding to the current frame image includes:
calculating the difference value between the brightness of the face area corresponding to the current frame image and the brightness of the face area corresponding to the initial frame image to obtain the total brightness change value corresponding to the current frame image;
and calculating a difference value between the total brightness change value and the first environment change brightness to obtain a makeup brightness change value corresponding to the current frame image.
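Putting the brightness quantities of this embodiment together, a sketch of the progress computation could read as follows; the division-by-zero guard and the absence of clamping are choices made for the example, not part of the embodiment.

```python
def foundation_progress(init_whole, init_face, cur_whole, cur_face, res_whole, res_face):
    """init_/cur_/res_: grey-level means of the whole image (whole) and of the
    face region (face) for the initial frame, the current frame and the result image."""
    env_init = init_whole - init_face                 # ambient brightness of the initial frame
    env_cur = cur_whole - cur_face                    # ambient brightness of the current frame
    env_res = res_whole - res_face                    # ambient brightness of the result image

    first_env_change = abs(env_cur - env_init)        # first environment change brightness
    second_env_change = abs(env_res - env_init)       # second environment change brightness

    # makeup brightness change = total face brightness change minus the environment change
    makeup_change_cur = (cur_face - init_face) - first_env_change
    makeup_change_res = (res_face - init_face) - second_env_change
    if makeup_change_res == 0:
        return 1.0                                    # guard: result identical to the initial frame
    return makeup_change_cur / makeup_change_res
```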
In some embodiments of the present application, the method further comprises:
if the first environment change brightness is larger than a preset threshold value, determining the makeup progress corresponding to the previous frame of image as the current makeup progress corresponding to the current frame of image;
and sending first prompt information to the terminal of the user, wherein the first prompt information is used for prompting the user to make up in the brightness environment corresponding to the initial frame image.
In some embodiments of the present application, the specific makeup includes concealer makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises:
Respectively acquiring the facial flaw information corresponding to the initial frame image and the current frame image;
calculating a face flaw difference value between the current frame image and the initial frame image according to the face flaw information corresponding to the initial frame image and the face flaw information corresponding to the current frame image;
if the facial flaw difference value is larger than a preset threshold value, calculating the current makeup progress corresponding to the current frame image according to the facial flaw difference value and facial flaw information corresponding to the initial frame image;
if the difference value of the facial flaws is smaller than or equal to the preset threshold value, obtaining a result image after the user finishes concealing and making up, and determining the current making-up progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image.
In some embodiments of the present application, the facial blemish information includes a blemish category and a corresponding blemish number; the calculating a difference value of the facial flaws between the current frame image and the initial frame image according to the facial flaw information corresponding to the initial frame image and the facial flaw information corresponding to the current frame image includes:
Respectively calculating the difference between the flaw number corresponding to the initial frame image and the flaw number corresponding to the current frame image under each flaw type;
and calculating the sum of the difference values corresponding to each defect type, and taking the obtained sum as the difference value of the facial defects between the current frame image and the initial frame image.
In some embodiments of the present application, the calculating a current makeup progress corresponding to the current frame image according to the facial defect difference value and the facial defect information corresponding to the initial frame image includes:
calculating the sum of the flaw numbers corresponding to all flaw categories in the facial flaw information corresponding to the initial frame image to obtain the total flaw number;
and calculating the ratio of the difference value of the facial flaws to the total number of the flaws, and taking the ratio as the current makeup progress corresponding to the current frame image.
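For the branch in which the facial flaw difference exceeds the preset threshold, the computation reduces to simple counting. A sketch, assuming the facial flaw information is held as per-category counts:

```python
def concealer_progress(init_flaws, cur_flaws):
    """init_flaws / cur_flaws: dicts mapping blemish category to blemish count
    for the initial frame image and the current frame image."""
    # facial flaw difference: summed per-category drop in blemish count
    flaw_difference = sum(init_flaws[c] - cur_flaws.get(c, 0) for c in init_flaws)
    total_flaws = sum(init_flaws.values())
    if total_flaws == 0:
        return 1.0                                    # nothing to conceal in the initial frame
    return flaw_difference / total_flaws
```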
In some embodiments of the application, the obtaining a result image after the user finishes concealing and making up, and determining a current make-up progress corresponding to the current frame image according to the initial frame image, the result image, and the current frame image includes:
according to the initial frame image, simulating and generating a result image after the user finishes concealing and making up;
Respectively obtaining face area images corresponding to the initial frame image, the result image and the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the face area images corresponding to the initial frame image, the result image and the current frame image respectively.
In some embodiments of the present application, the determining, according to the face area images corresponding to the initial frame image, the result image, and the current frame image, a current makeup progress corresponding to the current frame image includes:
respectively converting the face region images corresponding to the initial frame image, the result image and the current frame image into images containing saturation channels in an HLS color space;
respectively calculating smoothing factors corresponding to the respective face region images of the converted initial frame image, the converted result image and the converted current frame image through a preset filtering algorithm;
and determining the current makeup progress corresponding to the current frame image according to the smoothing factors corresponding to the initial frame image, the result image and the current frame image respectively.
In some embodiments of the present application, the determining a current makeup progress corresponding to the current frame image according to respective smoothing factors corresponding to the initial frame image, the result image, and the current frame image includes:
Calculating a first difference value between the smoothing factor corresponding to the current frame image and the smoothing factor corresponding to the initial frame image;
calculating a second difference value between the smoothing factor corresponding to the result image and the smoothing factor corresponding to the initial frame image;
and calculating the ratio of the first difference value to the second difference value, and taking the ratio as the current makeup progress corresponding to the current frame image.
In some embodiments of the present application, the obtaining facial defect information corresponding to each of the initial frame image and the current frame image respectively includes:
respectively acquiring face area images corresponding to the initial frame image and the current frame image;
and respectively detecting the number of flaws corresponding to each flaw category in the face area images corresponding to the initial frame image and the current frame image through a preset skin detection model to obtain the face flaw information corresponding to the initial frame image and the current frame image.
In some embodiments of the present application, the obtaining of the face region image corresponding to the initial frame image includes:
performing rotation correction on the initial frame image and the first face key point according to the first face key point corresponding to the initial frame image;
Intercepting an image containing a face area from the corrected initial frame image according to the corrected first face key point;
and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
In some embodiments of the present application, the performing rotation correction on the initial frame image and the first face keypoints according to the first face keypoints includes:
respectively determining a left eye center coordinate and a right eye center coordinate according to a left eye key point and a right eye key point which are included in the first face key point;
determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate;
and performing rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
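A possible realisation of the rotation correction with OpenCV, assuming the rotation centre is taken as the midpoint between the two eye centres (the embodiment only states that the centre is determined from the eye centre coordinates) and that the key points are given as an (N, 2) array:

```python
import cv2
import numpy as np

def rotation_correct(frame, keypoints, left_eye_idx, right_eye_idx):
    """keypoints: (N, 2) array of first face key points; *_eye_idx: index lists
    of the left-eye and right-eye key points (layout assumed)."""
    left = keypoints[left_eye_idx].mean(axis=0)        # left eye centre coordinate
    right = keypoints[right_eye_idx].mean(axis=0)      # right eye centre coordinate
    angle = np.degrees(np.arctan2(right[1] - left[1], right[0] - left[0]))
    center = (float((left[0] + right[0]) / 2), float((left[1] + right[1]) / 2))
    matrix = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = frame.shape[:2]
    corrected = cv2.warpAffine(frame, matrix, (w, h))
    ones = np.ones((keypoints.shape[0], 1))
    corrected_pts = np.hstack([keypoints, ones]) @ matrix.T   # rotate the key points as well
    return corrected, corrected_pts
```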
In some embodiments of the present application, the intercepting an image including a face region from the corrected initial frame image according to the corrected first face key point includes:
and according to the corrected first face key point, carrying out image interception on a face area contained in the corrected initial frame image.
In some embodiments of the present application, the performing, according to the corrected first face keypoint, image capture on a face region included in the corrected initial frame image includes:
determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key points;
determining an intercepting frame corresponding to the face area in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value;
and intercepting an image containing the face area from the corrected initial frame image according to the intercepting frame.
In some embodiments of the present application, the method further comprises:
amplifying the intercepting frame by a preset multiple;
and according to the amplified intercepting frame, intercepting an image containing the face region from the corrected initial frame image.
In some embodiments of the present application, the method further comprises:
and carrying out scaling translation processing on the corrected key points of the first face according to the size of the image containing the face area and the preset size.
In some embodiments of the present application, the method further comprises:
detecting whether the initial frame image and the current frame image only contain face images of the same user;
if yes, executing the operation of determining the current makeup progress of the specific makeup by the user;
and if not, sending prompt information to the terminal of the user, wherein the prompt information is used for prompting the user to keep that only the face of the same user appears in the real-time makeup video.
An embodiment of a second aspect of the present application provides a makeup progress detection device including:
the video acquisition module is used for acquiring a real-time makeup video of a user for making up a specific makeup currently;
and the makeup progress determining module is used for determining the current makeup progress of the specific makeup performed by the user according to the initial frame image and the current frame image of the real-time makeup video.
Embodiments of the third aspect of the present application provide an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method of the first aspect.
An embodiment of a fourth aspect of the present application provides a computer-readable storage medium having a computer program stored thereon, the program being executable by a processor to implement the method of the first aspect.
The technical scheme provided in the embodiment of the application at least has the following technical effects or advantages:
in the embodiment of the application, the current frame image of the user's makeup process is compared with the initial frame image to determine the makeup progress. The progress can be detected through image processing alone, the detection accuracy is high, and the user's progress can be tracked in real time for highlight, contouring, blush, foundation, concealer, eye shadow, eyeliner, eyebrow and similar makeup. No deep learning model is needed, so the calculation amount is small and the cost is low; the processing pressure on the server is reduced, the efficiency of makeup progress detection is improved, and the real-time requirement of makeup progress detection can be met.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings.
In the drawings:
fig. 1 is a flowchart illustrating a method for detecting a progress of makeup according to an embodiment of the present application;
fig. 2 is a flowchart illustrating a makeup progress detection method for detecting makeup such as highlight and contouring according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a display interface displayed by a client for a user to select a target makeup area according to an embodiment of the application;
FIG. 4 is a schematic diagram illustrating solving for the rotation angle of an image provided by an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating two coordinate system transformations provided by an embodiment of the present application;
fig. 6 is a schematic block flow chart illustrating a makeup progress detection method for detecting makeup such as highlight and contouring according to an embodiment of the present application;
fig. 7 is a flowchart illustrating a makeup progress detection method for detecting makeup such as blush according to an embodiment of the present disclosure;
FIG. 8 illustrates another schematic view of a display interface displayed by a client for a user to select a makeup area provided in an embodiment of the present application;
fig. 9 is a block flow diagram illustrating a method for detecting a progress of makeup such as blush according to an embodiment of the present disclosure;
Fig. 10 is a flowchart illustrating a makeup progress detection method for detecting eyeliner makeup according to an embodiment of the present application;
fig. 11 is a schematic block flow chart illustrating a method for detecting a makeup progress of an eyeliner according to an embodiment of the present disclosure;
fig. 12 is a flowchart illustrating a makeup progress detection method for detecting an eye shadow makeup according to an embodiment of the present application;
fig. 13 is a schematic block flow chart illustrating a makeup progress detection method for detecting eye shadow makeup provided in an embodiment of the present application;
fig. 14 is a schematic structural view illustrating a makeup progress detection device for detecting an eye shadow makeup provided in an embodiment of the present application;
FIG. 15 is a flowchart illustrating a method for detecting a cosmetic progress for detecting a makeup of an eyebrow according to an embodiment of the present application;
fig. 16 is a block flow diagram illustrating a makeup progress detection method for detecting the makeup of eyebrows according to an embodiment of the present application;
fig. 17 is a schematic structural view illustrating a makeup progress detection apparatus for detecting a makeup of eyebrows according to an embodiment of the present application;
fig. 18 is a flowchart illustrating a makeup progress detection method for detecting makeup of foundation, loose powder, etc., according to an embodiment of the present application;
Fig. 19 is another flowchart illustrating a makeup progress detection method for detecting makeup of foundation, loose powder, etc., according to an embodiment of the present application;
fig. 20 is a block flow diagram illustrating a makeup progress detection method for detecting makeup of foundation, loose powder, and the like according to an embodiment of the present application;
fig. 21 is a flowchart showing a makeup progress detection method for detecting a concealer cosmetic provided in an embodiment of the present application;
fig. 22 is a block flow diagram illustrating a makeup progress detection method for detecting concealer makeup according to an embodiment of the present application;
FIG. 23 is a flow chart illustrating a method of cosmetic color identification provided in an embodiment of the present application;
fig. 24 is a schematic view showing a configuration of a makeup color recognition apparatus according to an embodiment of the present application;
FIG. 25 is a flow chart of a method of facial blemish removal image processing provided by an embodiment of the present application;
FIG. 26 (a) shows a face image of a user, and FIG. 26 (b) shows the corresponding blemish texture map of the face image shown in FIG. 26 (a);
fig. 27 is a schematic structural diagram of an image processing apparatus for removing facial blemishes according to an embodiment of the present application;
Fig. 28 is a schematic structural view illustrating a makeup progress detection apparatus according to an embodiment of the present application;
fig. 29 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 30 is a schematic diagram of a storage medium provided by an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which this application belongs.
A makeup progress detection method, a makeup progress detection device, a makeup progress detection apparatus, and a storage medium according to embodiments of the present application will be described below with reference to the accompanying drawings.
At present, some virtual makeup try-on functions exist in the related art. They can be applied at sales counters or in mobile phone application software, use face recognition technology to provide virtual try-on services for users, and can fit and display various makeup looks on the face in real time. Face skin detection services also exist, but such services can only help users choose cosmetics or skin-care plans suitable for themselves. Based on these services, users can be helped to select highlight/contouring products suitable for them, but the makeup progress cannot be displayed and the user's real-time makeup needs cannot be met. The related art also offers functions such as virtual makeup try-on, skin color detection and personalized product recommendation based on deep learning models, all of which require collecting a large number of face pictures in advance to train the models. However, face pictures are the user's private data, and it is difficult to collect them at scale. Model training also consumes a large amount of computing resources, so the cost is high. Moreover, model accuracy trades off against real-time performance: makeup progress detection has to capture the user's facial information in real time to determine the current makeup progress, so the real-time requirement is high, and a deep learning model fast enough to meet it tends to have low detection accuracy.
Based on this, the embodiment of the present application provides a makeup progress detection method which compares the current frame image of the user's makeup process with the initial frame image (i.e., the first frame image) to determine the makeup progress. The progress can be detected through image processing alone, the detection accuracy is high, and the user's progress can be tracked in real time for highlight, contouring, blush, foundation, concealer, eye shadow, eyeliner, eyebrow and similar makeup. No deep learning model is needed, so the calculation amount is small and the cost is low; the processing pressure on the server is reduced, the efficiency of makeup progress detection is improved, and the real-time requirement of makeup progress detection can be met.
Referring to fig. 1, the method specifically includes the following steps:
step 101: and acquiring a real-time makeup video of the current specific makeup of the user.
Step 102: and determining the current makeup progress of the user for making up a specific makeup look according to the initial frame image and the current frame image of the real-time makeup video.
The specific makeup may be highlight makeup, blush makeup, foundation makeup, concealer makeup, eye shadow makeup, eyeliner makeup, eyebrow makeup, or the like. The process of detecting the makeup progress for different makeup looks is described in detail below.
Example one
The embodiment of the application provides a makeup progress detection method, which is used for detecting the makeup progress corresponding to highlight makeup, contouring makeup, or any other makeup look that produces a change in brightness. Referring to fig. 2, this embodiment specifically includes the following steps:
step 201: and acquiring at least one target makeup area corresponding to the specific makeup and a real-time makeup video of the user currently making up the specific makeup.
The execution subject of the embodiment of the application is the server. A client matched with the makeup progress detection service provided by the server is installed on the user's terminal, such as a mobile phone or a computer. When the user needs to use the makeup progress detection service, the user opens the client on the terminal, and the client displays all target makeup areas corresponding to the specific makeup; the specific makeup may include highlight makeup, contouring makeup, and the like. The target makeup areas may include a forehead area, a nose bridge area, a nose tip area, a left cheek area, a right cheek area, a chin area, and the like. The user selects, from the displayed target makeup areas, one or more areas where he or she needs to apply the specific makeup.
As an example, the client may display all target makeup areas corresponding to a particular makeup in the form of text selections. As shown in fig. 3, the display interface includes text selection items and a submit button corresponding to a plurality of target makeup areas, and a user may click the text selection item selecting a desired target makeup area and then click the submit button. And after detecting a submission instruction triggered by the submission key, the client acquires one or more target makeup areas selected by the user from the display interface.
As another example, the client may display a facial image on which all target makeup areas corresponding to a particular makeup are identified. The user selects a desired target makeup area from the displayed face image by a single-click operation. The display interface displayed by the client side can comprise a face image and a submit key, each target makeup area corresponding to a specific makeup can be circled in the face image through a solid line, and after a user clicks the required target makeup area, the contour line of the selected target makeup area can be displayed as a preset color (such as red or yellow), or all the selected target makeup areas can be displayed as the preset color. And clicking a submit button after the user selects the needed target makeup area. And after the client detects a submission instruction triggered by the submission key, acquiring one or more target makeup areas selected by the user from the displayed face image.
After the terminal of the user obtains the one or more target makeup areas selected by the user through any one of the above manners, the terminal sends area identification information of the one or more target makeup areas to the server, where the area identification information may include identification information such as a name or a number of each target makeup area selected by the user. And the server receives the area identification information sent by the terminal of the user, and determines at least one target makeup area corresponding to the specific makeup selected by the user according to the area identification information.
By the mode, the user can self-define and select the area needing special makeup, and the individual requirements of different users on special makeup such as highlight, face repair and the like can be met.
In other embodiments of the present application, instead of selecting a target makeup area by the user himself, a plurality of target makeup areas corresponding to a specific makeup may be configured in advance in the server. Therefore, when the user needs to detect the makeup progress, the target makeup area does not need to be acquired from the terminal of the user, the bandwidth is saved, the user operation is simplified, and the processing time is shortened.
The display interface of the client is also provided with a video uploading interface. When it is detected that the user clicks the video uploading interface, the camera device of the terminal is called to shoot the makeup video of the user, and during shooting the user performs the makeup operation of the specific makeup in the target makeup areas of the face. The user's terminal transmits the shot makeup video to the server in the form of a video stream, and the server receives each frame image of the makeup video transmitted by the user's terminal.
In the embodiment of the application, the server takes the received first frame image as an initial frame image, and compares the current makeup progress of the specific makeup corresponding to each frame image received subsequently with the initial frame image as a reference. Since the processing manner of each subsequent frame of image is the same, the embodiment of the present application explains the process of cosmetic progress detection by taking the current frame of image received at the current time as an example.
After the server obtains at least one target makeup area corresponding to the specific makeup and the initial frame image and the current frame image of the makeup video of the user through the step, the current makeup progress of the user is determined through the following operations of the steps 202 and 203.
Step 202: according to the target makeup area, a first target area image corresponding to the specific makeup is obtained from the initial frame image, and a second target area image corresponding to the specific makeup is obtained from the current frame image.
The server specifically obtains a first target area image corresponding to the initial frame image through the following operations in steps S1 to S3, including:
s1: and detecting a first face key point corresponding to the initial frame image.
The server is configured with a pre-trained detection model for detecting the face key points, and the detection model provides interface services for detecting the face key points. After the server acquires the initial frame image of the user makeup video, the server calls an interface service for detecting the face key points, and all face key points of the user face in the initial frame image are identified through a detection model. In order to distinguish from the face key points corresponding to the current frame image, all the face key points corresponding to the initial frame image are referred to as first face key points in the embodiment of the present application. And all the face key points corresponding to the current frame image are called second face key points.
The identified key points of the human face comprise key points on the face contour of the user and key points of the mouth, the nose, the eyes, the eyebrows and other parts. The number of the identified face key points can be 106.
S2: and acquiring a face region image corresponding to the initial frame image according to the first face key point.
The server specifically obtains the face region image corresponding to the initial frame image through the following operations in steps S20 to S22, including:
s20: and performing rotation correction on the initial frame image and the first face key point according to the first face key point.
Because a user cannot guarantee that the pose angle of the face is the same in every frame when shooting a makeup video with the terminal, in order to improve the accuracy of the comparison between the current frame image and the initial frame image, the face in each frame image needs to be rotationally corrected so that, after correction, the line connecting the two eyes lies on the same horizontal line in every frame. This ensures that the pose angles of the face are consistent across frames and avoids large makeup progress detection errors caused by differing pose angles.
Specifically, the left-eye central coordinate and the right-eye central coordinate are respectively determined according to the left-eye key point and the right-eye key point included in the first face key point. And determining all the left eye key points of the left eye region and all the right eye key points of the right eye region from the first face key points. And averaging the determined abscissa of all the left-eye key points, averaging the ordinate of all the left-eye key points, forming a coordinate by the average of the abscissa and the average of the ordinate corresponding to the left eye, and determining the coordinate as the central coordinate of the left eye. The right eye center coordinates are determined in the same manner.
Then, a rotation angle and a rotation center point coordinate corresponding to the initial frame image are determined according to the left-eye center coordinate and the right-eye center coordinate. As shown in fig. 4, the horizontal difference dx and the vertical difference dy between the left-eye center coordinate and the right-eye center coordinate are calculated, as well as the length d of the line connecting the two eye centers. The included angle θ between the line connecting the two eyes and the horizontal direction is calculated from the line length d, the horizontal difference dx and the vertical difference dy; this included angle θ is the rotation angle corresponding to the initial frame image. The coordinate of the center point of the line connecting the two eyes is then calculated from the left-eye and right-eye center coordinates; this center point coordinate is the rotation center point coordinate corresponding to the initial frame image.
Rotation correction is then performed on the initial frame image and the first face key points according to the calculated rotation angle and rotation center point coordinate. Specifically, the rotation angle and the rotation center point coordinate are input into a preset function for calculating the rotation matrix of a picture; the preset function may be the function cv2.getRotationMatrix2D() in OpenCV. The rotation matrix corresponding to the initial frame image is obtained by calling this preset function. The corrected initial frame image is then obtained by applying the rotation matrix to the initial frame image; this correction operation can be completed by calling the function cv2.warpAffine() in OpenCV.
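As an illustrative sketch only (Python with OpenCV and NumPy is assumed; the function name and the eye key-point index lists are hypothetical and not taken from the original text), the rotation correction described above might look like this:

```python
import cv2
import numpy as np

def rotation_correct(image, landmarks, left_eye_idx, right_eye_idx):
    # Eye centers: mean coordinates of the left-eye and right-eye key points.
    left_center = landmarks[left_eye_idx].mean(axis=0)
    right_center = landmarks[right_eye_idx].mean(axis=0)

    dx, dy = right_center - left_center            # horizontal / vertical differences
    angle = np.degrees(np.arctan2(dy, dx))         # angle between the eye line and the horizontal
    center = (float((left_center[0] + right_center[0]) / 2),
              float((left_center[1] + right_center[1]) / 2))

    # Rotation matrix about the midpoint of the eye line, then warp the frame.
    rot_mat = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image.shape[:2]
    corrected = cv2.warpAffine(image, rot_mat, (w, h))
    return corrected, angle, center
```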
For the first face key points, each first face key point needs to be corrected one by one to correspond to the corrected initial frame image. When the first face key points are corrected one by one, two times of coordinate system conversion are required, the coordinate system with the upper left corner of the initial frame image as the origin is converted into the coordinate system with the lower left corner as the origin for the first time, and the coordinate system with the lower left corner as the origin is further converted into the coordinate system with the rotation center point coordinate as the origin for the second time, as shown in fig. 5. After two times of coordinate system conversion, the following formula (1) conversion is carried out on each first face key point, and the rotation correction of the first face key points can be completed.
$$x = x_0\cos\theta + y_0\sin\theta,\qquad y = -x_0\sin\theta + y_0\cos\theta \tag{1}$$

In formula (1), $x_0$ and $y_0$ are respectively the abscissa and ordinate of a first face key point before rotation correction, $x$ and $y$ are respectively its abscissa and ordinate after rotation correction, and $\theta$ is the rotation angle.
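A minimal sketch of the key-point correction (same Python/NumPy setting as above; the function and variable names are hypothetical, and the final conversion back to image coordinates is added here for completeness) that applies the two coordinate-system conversions and formula (1) could be:

```python
import numpy as np

def rotate_keypoints(points, angle_deg, center, img_h):
    """Apply the two coordinate-system conversions and formula (1) to each key point.

    `points` is an (N, 2) array in the original image coordinates (origin at the
    top-left corner, y pointing down); `center` is the rotation center point in the
    same coordinates; `img_h` is the image height.
    """
    theta = np.radians(angle_deg)
    cx, cy = center

    # 1st conversion: top-left origin -> bottom-left origin (flip the y axis).
    pts = points.astype(np.float64).copy()
    pts[:, 1] = img_h - pts[:, 1]
    cy_flipped = img_h - cy

    # 2nd conversion: translate so the rotation center point becomes the origin.
    pts[:, 0] -= cx
    pts[:, 1] -= cy_flipped

    # Formula (1): rotate by theta about the origin.
    x0, y0 = pts[:, 0], pts[:, 1]
    x = x0 * np.cos(theta) + y0 * np.sin(theta)
    y = -x0 * np.sin(theta) + y0 * np.cos(theta)

    # Convert back to image coordinates (top-left origin).
    x += cx
    y = img_h - (y + cy_flipped)
    return np.stack([x, y], axis=1)
```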
The corrected initial frame image and the first face key point are based on the whole image, and the whole image not only includes the face information of the user, but also includes other redundant image information, so that the face region of the corrected image needs to be cropped in the following step S21.
S21: and according to the corrected first face key point, intercepting an image containing a face area from the corrected initial frame image.
Firstly, a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value are determined from the corrected first face key points. An intercepting frame corresponding to the face area in the corrected initial frame image is then determined according to these four values. Specifically, the minimum abscissa value and the minimum ordinate value are combined into a coordinate point, which is taken as the top-left vertex of the intercepting frame corresponding to the face area. The maximum abscissa value and the maximum ordinate value are combined into another coordinate point, which is taken as the bottom-right vertex of the intercepting frame. The position of the intercepting frame in the corrected initial frame image is determined from the top-left vertex and the bottom-right vertex, and the image within the intercepting frame is intercepted from the corrected initial frame image, i.e., the image containing the face area is intercepted.
In other embodiments of the present application, in order to ensure that all face areas of the user are intercepted, and avoid the occurrence of a situation where the subsequent makeup progress detection error is large due to incomplete interception, the intercepting frame may be further enlarged by a preset multiple, where the preset multiple may be 1.15 or 1.25, and the like. The embodiment of the application does not limit the specific value of the preset multiple, and the preset multiple can be set according to requirements in practical application. And after amplifying the interception frame to the periphery by a preset multiple, intercepting the image in the amplified interception frame from the corrected initial frame image, thereby intercepting the image containing the complete face area of the user.
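The cropping with an enlarged intercepting frame might be sketched as follows (the enlargement factor 1.25 is one of the example values mentioned above; the function and variable names are assumptions of this illustration):

```python
import numpy as np

def crop_face_region(image, keypoints, scale=1.25):
    """Crop the face region bounded by the corrected key points, enlarged by `scale`."""
    x_min, y_min = keypoints.min(axis=0)
    x_max, y_max = keypoints.max(axis=0)

    # Enlarge the intercepting frame around its center by the preset multiple.
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    half_w = (x_max - x_min) / 2 * scale
    half_h = (y_max - y_min) / 2 * scale

    h, w = image.shape[:2]
    x1, y1 = max(int(cx - half_w), 0), max(int(cy - half_h), 0)
    x2, y2 = min(int(cx + half_w), w), min(int(cy + half_h), h)
    return image[y1:y2, x1:x2], (x1, y1)   # cropped face image and crop-box origin
```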
S22: and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
After the image containing the face area of the user is intercepted from the initial frame image in the mode, the image containing the face area is zoomed to the preset size, and the face area image corresponding to the initial frame image is obtained. The predetermined size may be 390 × 390, 400 × 400, or the like. The embodiment of the application does not limit the specific value of the preset dimension, and the specific value can be set according to requirements in practical application.
In order to adapt the first face key point to the zoomed face region image, after the captured image containing the face region is zoomed to a preset size, the corrected first face key point is zoomed and translated according to the size of the image containing the face region before zooming and the preset size. Specifically, according to the size of an image including a face region before zooming and a preset size to which the image needs zooming, the translation direction and the translation distance of each first face key point are determined, then, according to the translation direction and the translation distance corresponding to each first face key point, translation operation is respectively carried out on each first face key point, and the coordinates of each translated first face key point are recorded.
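A possible sketch of scaling the cropped image to the preset size together with the accompanying translation and scaling of the key points (Python/OpenCV assumed; the function name, parameter names and the 390 × 390 size are illustrative):

```python
import cv2
import numpy as np

def resize_face_and_keypoints(face_img, keypoints, crop_origin, dst_size=(390, 390)):
    """Scale the cropped face image to the preset size and map the key points along with it.

    `crop_origin` is the top-left corner of the intercepting frame in the corrected
    frame, so key points are first translated into crop coordinates and then scaled.
    """
    h, w = face_img.shape[:2]
    sx, sy = dst_size[0] / w, dst_size[1] / h

    resized = cv2.resize(face_img, dst_size)
    kp = keypoints.astype(np.float64).copy()
    kp[:, 0] = (kp[:, 0] - crop_origin[0]) * sx   # translate, then scale x
    kp[:, 1] = (kp[:, 1] - crop_origin[1]) * sy   # translate, then scale y
    return resized, kp
```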
The face region image is obtained from the initial frame image in the above manner, the first face key point is adapted to the obtained face region image through operations such as rotation correction and translation scaling, and then the image region corresponding to the target makeup region is extracted from the face region image in the following manner of step S3.
In other embodiments of the present application, before step S3 is executed, gaussian filtering may be performed on the face region image to remove noise in the face region image. Specifically, according to a gaussian kernel with a preset size, gaussian filtering processing is performed on a face region image corresponding to an initial frame image.
The Gaussian kernel of the Gaussian filter is a key parameter of the Gaussian filter processing, if the Gaussian kernel is too small, a good filtering effect cannot be achieved, and if the Gaussian kernel is too large, although noise information in an image can be filtered, useful information in the image can be smoothed. In the embodiment of the present application, a gaussian kernel with a predetermined size is selected, and the predetermined size may be 9 × 9. In addition, the other group of parameters sigmaX and sigmaY of the Gaussian filter function are set to be 0, and after Gaussian filtering, image information is smoother, so that the accuracy of subsequently obtaining the makeup progress is improved.
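Under these settings the Gaussian smoothing is a single OpenCV call, sketched here for illustration (the variable name is an assumption):

```python
import cv2

# 9x9 Gaussian kernel; passing 0 for sigma lets OpenCV derive the standard
# deviation from the kernel size, as described above.
face_img_smoothed = cv2.GaussianBlur(face_img, (9, 9), 0)
```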
The face area image is obtained in the above manner, or after the face area image is subjected to gaussian filtering processing, the target area image corresponding to the specific makeup is extracted from the face area image in step S3.
S3: and extracting a first target area image corresponding to the specific makeup from the face area image according to the first face key points and the target makeup area.
A specific makeup such as highlight or contouring is applied to fixed areas of the face; for example, highlight or contouring is generally performed on specific areas such as the forehead area, left cheek area, right cheek area, nose bridge area, nose tip area or chin area. Therefore, the specific areas that need the specific makeup can be directly extracted from the face region image, which avoids interference from invalid areas in the makeup progress detection of the specific makeup and improves the accuracy of the detection.
The server obtains the first target area image by specifically performing the following operations of steps S30 to S32, including:
s30: and determining one or more target key points on the region outline corresponding to the target makeup region in the face region image from the first face key points.
The method comprises the steps of firstly determining an area position corresponding to a target makeup area from a face area image, and then determining one or more first face key points located in the area position from the first face key points. And determining the determined one or more first face key points as one or more target key points on the area outline corresponding to the target makeup area.
For each target makeup area corresponding to the specific makeup obtained in step 201, the target key points corresponding to each target makeup area are determined respectively according to the above manner.
S31: and generating a mask image corresponding to the face region image according to the target key points corresponding to the target makeup region.
Since one or more target makeup areas corresponding to a specific makeup are obtained in step 201, the specific operation in generating a mask image corresponding to a face area image is different for different target makeup areas. For each target makeup area corresponding to a specific makeup, it is first determined whether the number of target key points corresponding to the target makeup area is plural, and usually, the number of target key points corresponding to the target makeup areas such as a forehead area, a left cheek area, a right cheek area, a nose bridge area, and a chin area is plural. And for the target makeup areas, determining each edge coordinate of the target makeup area in the face area image according to each target key point corresponding to the target makeup area. And shifting the coordinates of the target key points to the left and right or up and down by partial pixels according to a preset shifting rule, so as to obtain edge coordinates corresponding to the target makeup area.
The method and the device for processing the key point on the target makeup area have the advantages that the preset offset rules corresponding to different target makeup areas are configured in the server in advance, and the offset directions and the offset pixel numbers of the target key points corresponding to the target makeup areas are specified in the preset offset rules.
For example, for the forehead area, the corresponding target key points include face key points of the face contour at both sides of the forehead and face key points at the hairline, the preset offset rule corresponding to the forehead area may specify that the target key points at the hairline are offset downward, the target key points at the left side of the forehead are offset rightward, the target key points at the right side of the forehead are offset leftward, and the number of pixels specifying that the target key points are offset is 4 pixels. And offsetting each target key point corresponding to the forehead area according to the preset offset rule, wherein a coordinate point after each target key point is offset is an edge coordinate corresponding to the forehead area.
For another example, for the nose bridge region, the corresponding target key points may include a plurality of face key points vertically arranged on the nose bridge, the preset offset rule corresponding to the nose bridge region may specify that each target key point vertically arranged is offset to the right and left by a certain number of pixels, and may specify that the number of pixels offset by the target key point is 3 pixels or 4 pixels, etc. And offsetting each target key point corresponding to the nose bridge region according to the preset offset rule, wherein a coordinate point after each target key point is offset is an edge coordinate corresponding to the nose bridge region.
After the edge coordinates corresponding to the target makeup area including the target key points are obtained in the above mode, the pixel values of all pixel points in the area defined by each edge coordinate are modified into preset values through the area filling function, and the mask area corresponding to the target makeup area is obtained. Wherein the preset value may be 255.
If the number of target key points corresponding to a certain target makeup area is judged to be one (this is usually the case for a target makeup area such as the nose tip area), an elliptical area of a preset size is drawn with that target key point as the center. The pixel values of all pixel points in the elliptical area are modified to the preset value to obtain the mask area corresponding to that target makeup area.
After the mask area corresponding to each target makeup area corresponding to the specific makeup is obtained in the above manner, the pixel values of all pixel points outside all the mask areas are modified to be zero, and then the mask image corresponding to the face area image is obtained.
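A sketch of the mask construction (assuming the edge coordinates have already been obtained by offsetting the target key points according to the preset offset rules; the ellipse axes, function name and parameter names are illustrative assumptions, not values from the original text):

```python
import cv2
import numpy as np

def build_makeup_mask(face_shape, region_polygons, single_points,
                      ellipse_axes=(20, 15), fill_value=255):
    """Build the mask image for the target makeup areas.

    `region_polygons` is a list of (K, 2) arrays of already-offset edge coordinates
    (one polygon per multi-key-point target area); `single_points` is a list of
    (x, y) centers for single-key-point areas such as the nose tip.
    """
    mask = np.zeros(face_shape[:2], dtype=np.uint8)
    for poly in region_polygons:
        cv2.fillPoly(mask, [poly.astype(np.int32)], fill_value)      # fill each target area
    for (x, y) in single_points:
        cv2.ellipse(mask, (int(x), int(y)), ellipse_axes, 0, 0, 360,
                    fill_value, thickness=-1)                         # elliptical area for a single key point
    return mask
```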
S32: and performing AND operation on the mask image and the face area image to obtain a first target area image corresponding to the specific makeup.
And after the mask image corresponding to the face region image is obtained, performing and operation on the mask image and the face region image, namely performing and operation on pixel values of pixel points with the same coordinates in the mask image and the face region image respectively. Because only the pixel values of the pixel points in the target makeup area corresponding to the specific makeup in the mask image are preset values, the pixel values of the pixel points at other positions are all zero. Therefore, after the and operation is performed, the pixel values of the pixel points at the positions corresponding to the target makeup area in the obtained first target area image are not zero, and the pixel values of the pixel points at other positions are all zero.
In the embodiment of the application, the pixel value of the pixel point in the target makeup area in the mask image is 255, and after the and operation is performed on the mask image and the face area image, the obtained pixel value of the pixel point in the target makeup area in the first target area image is the pixel value of the pixel point in the target makeup area in the face area image. The method is equivalent to the method that the image of the target makeup area corresponding to the specific makeup is extracted from the image of the human face area.
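The AND operation itself can be sketched with OpenCV's bitwise AND (an assumption of this illustration; `mask` is the single-channel image built above, with value 255 inside the target makeup areas):

```python
import cv2

# Pixels outside the mask become zero; pixels inside keep their original values.
first_target_area_img = cv2.bitwise_and(face_img, face_img, mask=mask)
```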
The color space of the first target area image obtained above from the initial frame image is the RGB color space. Highlight mainly brightens the skin color, while contouring mainly darkens it to form shadows and make the facial features more three-dimensional, so for specific makeup such as highlight and contouring the change in brightness is what matters. The RGB color space is the most common color space in daily life; it represents color as a linear combination of red, green and blue components. Image information captured in a natural environment is sensitive to brightness, and the RGB components all change accordingly when the brightness of a picture changes. The HSV color space is composed of three components, namely Hue, Saturation and Value; if the brightness of a picture changes, only the Value (brightness) component changes, while the Hue and Saturation components do not change significantly.
Therefore, after the first target area image corresponding to the initial frame image is obtained, the color space of the first target area image is converted from the RGB color space to the HSV color space. And then separating a channel component corresponding to the specific makeup from the HSV color space of the converted first target area image to obtain a single-channel first target area image only containing the channel component.
For example, the channel component corresponding to a specific makeup such as highlight or contouring is the Value (luminance) component, while the channel component corresponding to other specific makeup that changes the color of the face is the Hue component, and so on.
Converting the first target area image to the HSV color space and separating out the channel component corresponding to the specific makeup through channel separation makes it convenient to judge whether the makeup process of the specific makeup has been completed in the target makeup area.
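An illustrative sketch of the color space conversion and channel separation (OpenCV loads frames in BGR order, so the BGR-to-HSV conversion code is used here; variable names are assumptions):

```python
import cv2

hsv = cv2.cvtColor(first_target_area_img, cv2.COLOR_BGR2HSV)
h_channel, s_channel, v_channel = cv2.split(hsv)
# For highlight/contouring the Value (brightness) channel is kept;
# for makeup that changes face color, the Hue channel would be kept instead.
component = v_channel
```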
For the current frame image, the processing is the same as for the initial frame image: the second face key points corresponding to the current frame image are detected, the face area image corresponding to the current frame image is intercepted according to the second face key points, and a mask image corresponding to the current frame image is generated according to the second face key points. A second target area image corresponding to the specific makeup is extracted from the face area image corresponding to the current frame image according to the mask image. The second target area image is converted into the HSV color space, and the channel component corresponding to the specific makeup is separated out. Details of the processing operations for the current frame image are the same as those for the initial frame image and are not repeated here.
After the first target area image corresponding to the initial frame image and the second target area image corresponding to the current frame image are obtained in the above manner, the makeup progress corresponding to the current frame image is determined in the following manner.
Step 203: and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
Specifically, the absolute values of the differences of the channel components corresponding to the pixel points with the same position in the first target area image and the second target area image are calculated respectively. For example, if the specific makeup is high makeup or cosmetic makeup, the absolute value of the difference in luminance components between pixels having the same coordinates in the first target area image and the second target area image is calculated.
The area of the region where the specific makeup has been finished is determined according to the absolute value of the difference corresponding to each pixel point. Specifically, the number of pixel points whose absolute difference value satisfies the preset makeup finishing condition corresponding to the specific makeup is counted. When the specific makeup is highlight, the preset makeup finishing condition is that the absolute value of the difference corresponding to the pixel point is greater than a first preset threshold, which may be 11 or 12. When the specific makeup is contouring, the preset makeup finishing condition is that the absolute value of the difference corresponding to the pixel point is smaller than a second preset threshold, which may be 7 or 8.
And determining the counted number of the pixel points meeting the preset makeup finishing condition as the area of the area where the specific makeup is finished. And counting the total number of all pixel points in all target makeup areas in the first target area image or the second target area image, and determining the total number as the total area corresponding to all the target makeup areas. And then calculating the ratio of the area where the specific makeup is finished to the total area corresponding to the target makeup area, and determining the ratio as the current makeup progress of the specific makeup corresponding to the user.
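Putting the difference, counting and ratio steps together, one possible sketch of the progress computation is the following (the threshold values follow the examples given in the text; the function and variable names are assumptions):

```python
import numpy as np

def makeup_progress(comp_init, comp_cur, mask, threshold=11, highlight=True):
    """Ratio of the finished area to the total area of the target makeup areas.

    `comp_init` / `comp_cur` are the single-channel components of the first and
    second target area images; `mask` marks the target makeup areas.
    """
    diff = np.abs(comp_cur.astype(np.int32) - comp_init.astype(np.int32))
    in_region = mask > 0
    if highlight:
        finished = np.logical_and(in_region, diff > threshold)   # brightened pixels count as finished
    else:
        finished = np.logical_and(in_region, diff < threshold)   # contouring condition as stated in the text
    total = in_region.sum()
    return finished.sum() / total if total > 0 else 0.0
```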
In other embodiments of the present application, in order to further improve the accuracy of the makeup progress detection, the target makeup areas in the first target area image and the second target area image are further aligned. Specifically, binarization processing is performed on a first target area image and a second target area image of a single channel only containing the channel components, that is, values of the channel components corresponding to pixel points in a target makeup area in the first target area image and the second target area image are both modified to 1, and values of the channel components of pixel points at other positions are both modified to 0. And obtaining a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image through binarization processing.
The first binarization mask image and the second binarization mask image are subjected to an AND operation, i.e., the AND operation is performed on pixel points at the same positions in the two binarization mask images, to obtain an intersection area mask image. The area in which the channel components of the pixel points in the intersection area mask image are not zero is the target makeup area that coincides in both the first target area image and the second target area image.
The face region image corresponding to the initial frame image and the face region image corresponding to the current frame image are obtained through the operation of step 202. Performing AND operation on the mask image of the intersected area and the face area image corresponding to the initial frame image to obtain a new first target area image corresponding to the initial frame image; and performing AND operation on the mask image of the intersected area and the face area image corresponding to the current frame image to obtain a new second target area image corresponding to the current frame image.
Because the mask image of the intersected area contains the target makeup area overlapped in the initial frame image and the current frame image, the new first target area image and the new second target area image are respectively extracted from the initial frame image and the current frame image through the mask image of the intersected area according to the mode, so that the positions of the target makeup areas in the new first target area image and the new second target area image are completely consistent, the makeup progress is determined by subsequently comparing the change of the target makeup area in the current frame image and the target makeup area in the initial frame image, the comparison area is ensured to be completely consistent, and the accuracy of the makeup progress detection is greatly improved.
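A sketch of this alignment via the intersection area mask (variable names are assumptions; the binarization treats nonzero channel components as 1, as described above):

```python
import cv2
import numpy as np

# Binarize the single-channel target area images, intersect them, and re-extract
# aligned target areas from both face region images.
bin_init = (comp_init > 0).astype(np.uint8)
bin_cur = (comp_cur > 0).astype(np.uint8)
intersection_mask = cv2.bitwise_and(bin_init, bin_cur) * 255

new_first_target = cv2.bitwise_and(face_img_init, face_img_init, mask=intersection_mask)
new_second_target = cv2.bitwise_and(face_img_cur, face_img_cur, mask=intersection_mask)
```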
After aligning the target makeup areas in the initial frame image and the current frame image in the above manner to obtain a new first target area image and a new second target area image, the current makeup progress corresponding to the current frame image is determined again through the operation of step 203.
After the current makeup progress is determined in any mode, the server sends the current makeup progress to the terminal of the user. And after receiving the current makeup progress, the terminal of the user displays the current makeup progress. The current makeup progress may be a ratio or a percentage. The terminal may display the current makeup progress in the form of a progress bar.
During the user's makeup process, the makeup progress detection method provided by the embodiment of the present application detects, in real time, the makeup progress of each frame image after the first frame image relative to the first frame image, and displays the detected progress to the user, so that the user can intuitively see his or her own makeup progress, improving makeup efficiency.
In order to facilitate understanding of the methods provided by the embodiments of the present application, reference is made to the following description taken in conjunction with the accompanying drawings. As shown in fig. 6, the server may mainly include six modules: face correction and cropping, Gaussian filtering, target makeup area matting, HSV color space conversion, target makeup area alignment, and makeup progress calculation. The face correction and cropping module corrects and crops the face area of the initial frame image and the current frame image acquired by the server, to obtain the face area image corresponding to the initial frame image and the face area image corresponding to the current frame image. The Gaussian filtering module performs Gaussian smoothing on the face area images obtained by the face correction and cropping module to remove noise. The target makeup area matting module extracts the target makeup areas from the Gaussian-filtered face area image corresponding to the initial frame image according to the first face key point information corresponding to the initial frame image, to obtain the corresponding first target area image; similarly, it extracts the second target area image corresponding to the current frame image. The HSV color space conversion module converts the RGB color spaces of the first and second target area images obtained by the matting module into the HSV color space. The target makeup area alignment module performs a pixel alignment operation on the single-channel first and second target area images that contain only the channel component corresponding to the specific makeup after HSV conversion. The makeup progress calculation module performs a difference calculation on the aligned single-channel first and second target area images to obtain the percentage of the makeup progress.
In the embodiment of the application, the face key points are utilized to correct and cut the face area of the user in the video frame, so that the accuracy of identifying the face area in the video frame is improved. And based on the key points of the human face, the target makeup area is determined from the image of the human face area, and the target makeup area in the initial frame image and the target makeup area in the current frame image are subjected to pixel alignment, so that the accuracy of identifying the target makeup area of the user face in each frame image is improved. By adopting Gaussian smoothing and HSV color space conversion, the difference of the target makeup area in the initial frame image and the current frame image on the channel component corresponding to the specific makeup can be acquired clearly subsequently, and the precision of makeup progress detection is improved. In addition, a deep learning mode is not adopted, a large amount of data does not need to be collected in advance, and the detection result is returned to the user through the capture of the real-time picture of the makeup of the user and the calculation of the server side. Compared with a deep learning model reasoning scheme, the method and the system consume less calculation cost in an algorithm processing link, and reduce the processing pressure of the server.
Example two
The embodiment of the application provides a makeup progress detection method, which is used for detecting the makeup progress corresponding to color makeup such as blush, or to special-field color makeup such as Beijing opera facial makeup. Referring to fig. 7, this embodiment specifically includes the following steps:
Step 301: the method comprises the steps of obtaining at least one target makeup area, and obtaining an initial frame image and a current frame image in a real-time makeup video of a user for making up a specific makeup currently.
The execution subject of the embodiment of the application is the server. And a client matched with the makeup progress detection service provided by the server is installed on a terminal of a user, such as a mobile phone or a computer. When a user needs to use the makeup progress detection service, the user opens the client on the terminal, and the client displays a plurality of target makeup areas corresponding to preset types of makeup, such as a plurality of target makeup areas corresponding to blush. The displayed target makeup area may be classified according to the face area, such as a nose area, two cheek areas, a chin area, and the like. Each region category may include contours of multiple target makeup regions of different shapes and/or sizes. The user selects one or more target makeup areas that the user needs to make up from the displayed plurality of target makeup areas. And the client sends the target makeup area selected by the user to the server.
As an example, as shown in fig. 8, in the display interface including the outlines of the target makeup areas corresponding to the nose area, the cheek areas on both sides, and the chin area, the user may select a face area that needs to be made up by himself or herself and an outline of the makeup on the selected face area from among a plurality of outlines corresponding to the respective areas, and after the selection, click the confirmation key to submit the outline of the target makeup area selected by himself or herself. The client detects one or more target makeup areas submitted by the user and sends the target makeup areas to the server.
As another example, in the embodiment of the application, a plurality of makeup style maps may further be generated based on a preset standard face image, where each makeup style map includes the outlines of one or more target makeup areas. The preset standard face image is a face image with no occlusion on the face, clear facial features, and the line connecting the two eyes parallel to the horizontal line. The interface displayed by the client may display a plurality of makeup style maps at the same time, and the user selects one makeup style map from them. The client sends the makeup style map selected by the user to the server. The server receives the makeup style map sent by the client and acquires the one or more target makeup areas selected by the user from it.
By any mode, the user can select the target makeup area needing makeup by self, and the personalized makeup requirements of different users on preset types of makeup such as blush can be met.
In other embodiments of the present application, instead of selecting a target makeup area by the user, a fixed target makeup area may be preset in the server, and the position and shape of the target makeup area may be set. After the user opens the client, the client prompts the user to make up at the parts corresponding to the target make-up areas set by the server. When receiving a request of a user for detecting the makeup progress, the server directly acquires one or more preset target makeup areas from the local configuration file.
The target makeup area is configured in the server in advance, when the user needs to detect the makeup progress, the target makeup area does not need to be obtained from the terminal of the user, the bandwidth is saved, the user operation is simplified, and the processing time is shortened.
The display interface of the client is also provided with a video uploading interface, when the fact that the user clicks the video uploading interface is detected, a camera device of the terminal is called to shoot the makeup video of the user, and the user carries out makeup operations of preset types such as blush in the target makeup area of the face in the shooting process. And the terminal of the user transmits the shot makeup video to the server in a video stream mode. And the server receives each frame of image of the makeup video transmitted by the terminal of the user.
In other embodiments of the present application, after obtaining an initial frame image and a current frame image of a makeup video of a user, a server further detects whether both the initial frame image and the current frame image only contain a face image of the same user. Firstly, whether an initial frame image and a current frame image both contain only one face image is detected, and if the initial frame image and/or the current frame image contain a plurality of face images or the initial frame image and/or the current frame image do not contain the face images, prompt information is sent to a terminal of a user. And the terminal of the user receives and displays the prompt information to prompt the user to keep that only the face of the same user appears in the makeup video. For example, the hint information may be "please keep only the face of the same person appearing within the shot".
If it is detected that both the initial frame image and the current frame image only contain one face image, whether the face image in the initial frame image and the face image in the current frame image belong to the same user is further judged. Specifically, the face feature information corresponding to the face image in the initial frame image may be extracted through a face recognition technique, the face feature information corresponding to the face image in the current frame image may be extracted, the similarity of the face feature information extracted from the two frame images may be calculated, if the calculated similarity is greater than or equal to a set value, it is determined that the faces in the initial frame image and the current frame image belong to the same user, and then the current makeup progress corresponding to the current frame image may be determined through the following operations of steps 302 and 303. If the calculated similarity is smaller than the set value, determining that the faces in the initial frame image and the current frame image belong to different users, and sending prompt information to the terminal of the user. And the terminal of the user receives and displays the prompt information to prompt the user to keep that only the face of the same user appears in the makeup video.
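The same-user check could, for example, be a similarity comparison between face feature vectors. The sketch below assumes that some external face recognition model produces the feature vectors and uses cosine similarity with an illustrative threshold; neither the model nor the set value is specified in the original text:

```python
import numpy as np

def same_user(feat_init, feat_cur, threshold=0.8):
    """Compare face feature vectors extracted from the initial and current frames."""
    sim = np.dot(feat_init, feat_cur) / (
        np.linalg.norm(feat_init) * np.linalg.norm(feat_cur) + 1e-8)
    return sim >= threshold   # True: same user; False: prompt the user
```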
In the embodiment of the application, the server takes the received first frame image as an initial frame image, and compares the current makeup progress of the specific makeup corresponding to each frame image received subsequently with the initial frame image as a reference. Since the processing manner of each subsequent frame of image is the same, the embodiment of the present application explains the process of cosmetic progress detection by taking the current frame of image received at the current time as an example.
After the server obtains at least one target makeup area and the initial frame image and the current frame image of the user makeup video through the present step, the current makeup progress of the user is determined through the following operations of steps 302 and 303.
Step 302: and generating a makeup mask image according to the obtained target makeup area.
Specifically, the outline of each target makeup area is drawn in a preset blank face image according to the position and the shape of each target makeup area. The preset blank face image may be formed by removing pixels from the preset standard face image. After the outline of each target makeup area is drawn in a preset blank face image, pixel filling is carried out in each drawn outline, pixel points with the same pixel value are filled in the outline of the same target makeup area, and the pixel values of the pixel points filled in different target makeup areas are different. And taking the image after the filling operation as a cosmetic mask image.
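An illustrative sketch of generating the makeup mask image, filling each target makeup area with a distinct pixel value (the function name and the use of cv2.fillPoly are assumptions of this sketch):

```python
import cv2
import numpy as np

def build_makeup_mask_image(blank_face_shape, target_area_contours):
    """Fill each selected target makeup area on a blank face image.

    `target_area_contours` is a list of (K, 2) contour arrays; area i is filled
    with pixel value i so that different target makeup areas can be told apart later.
    """
    mask = np.zeros(blank_face_shape[:2], dtype=np.uint8)
    for i, contour in enumerate(target_area_contours, start=1):
        cv2.fillPoly(mask, [contour.astype(np.int32)], color=i)
    return mask
```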
Step 303: and determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the initial frame image and the current frame image.
Firstly, taking a makeup mask image as a reference, and respectively acquiring a first target area image for makeup from an initial frame image and a second target area image for makeup from a current frame image. Namely, the makeup mask image is taken as a mask, and the image of the target makeup area which needs to be made up by the user is respectively intercepted from the initial frame image and the current frame image. And then determining the current makeup progress corresponding to the current frame image according to the intercepted first target area image and second target area image.
Similar to the operation of step 102 in the first embodiment, the process of obtaining the first target area image from the initial frame image first detects a first face key point corresponding to the initial frame image through step S1. And obtaining a face region image corresponding to the initial frame image according to the first face key point through the operation of the step S2. The specific operation process of obtaining the face region image corresponding to the initial frame image may refer to the related description in the first embodiment, and is not described herein again. The process of acquiring the second target area image from the current frame image is the same as the process of acquiring the first target area image from the initial frame image.
Then, with the cosmetic mask image generated in step 302 as a reference, a first target area image to be made up is obtained from the face area image corresponding to the initial frame image. The face-painting such as blush is a way of applying a face to a fixed area of the face, such as a specific area of the nose, cheek areas on both sides, chin area, etc. Therefore, the specific regions needing to be made up can be directly extracted from the face region image, interference of the invalid region on makeup progress detection is avoided, and accuracy of the makeup progress detection is improved.
The server obtains the first target area image by specifically performing the following operations of steps S40 to S42, including:
s40: and respectively converting the makeup mask image and the face region image into binary images.
S41: and performing AND operation on the binary image corresponding to the cosmetic mask image and the binary image corresponding to the face region image to obtain a first mask image corresponding to the intersection region of the cosmetic mask image and the face region image.
The AND operation is performed on the pixel values of pixel points with the same coordinates in the binarized image corresponding to the makeup mask image and the binarized image corresponding to the face region image. Since the pixel values of the pixel points in the target makeup areas of the makeup mask image are not zero while those in other areas are all zero, the first mask image obtained by this operation is equivalent to cutting each target makeup area out of the face area image corresponding to the initial frame image.
In other embodiments of the present application, since the makeup mask image is generated based on the preset standard face image, the target makeup area in the makeup mask image may not completely coincide with the area actually made up by the user in the initial frame image, thereby affecting the accuracy of the makeup progress detection. Therefore, before the and operation is performed on the binary image corresponding to the makeup mask image and the binary image corresponding to the face region image, the alignment operation can be performed on the target makeup region in the makeup mask image and the corresponding region in the initial frame image.
Specifically, one or more first positioning points located on the outline of each target makeup area in the makeup mask map are determined according to the standard face key points corresponding to the makeup mask map. The standard face key points corresponding to the makeup mask map are the standard face key points corresponding to the preset standard face image. For any target makeup area in the makeup mask map, it is first determined whether the outline of the target makeup area contains a standard face key point; if so, the standard face key point on the outline is determined as a first positioning point corresponding to the target makeup area. If not, a first positioning point on the outline of the target makeup area is generated from the standard face key points around the target makeup area by means of a linear transformation. Specifically, the first positioning point can be obtained by translating the surrounding standard face key points upward, downward, leftward or rightward.
For example, for the nose region, the key point located at the nose tip can be moved leftward by a certain pixel distance to obtain a point located on the left nose wing, and moved rightward by a certain pixel distance to obtain a point located on the right nose wing. The nose tip key point, the point on the left nose wing and the point on the right nose wing are taken as the three first positioning points corresponding to the nose region.
In this embodiment of the application, the number of the first positioning points corresponding to each target makeup area may be a preset number, and the preset number may be 3 or 4, and the like.
After the first positioning point corresponding to each target makeup area in the makeup mask image is obtained in the above manner, the second positioning point corresponding to each first positioning point is determined from the initial frame image according to the first face key point corresponding to the initial frame image. Because the standard face key point corresponding to the makeup mask image and the first face key point corresponding to the initial frame image are obtained through the same detection model, the key points at different positions have respective numbers. Therefore, for the first positioning point belonging to the standard human face key points, the first human face key points with the same number as the standard human face key points corresponding to the first positioning point are determined from the first human face key points corresponding to the initial frame image, and the determined first human face key points are used as the second positioning points corresponding to the first positioning point. And for a first positioning point obtained by linear transformation of the standard human face key points, determining a first human face key point corresponding to the first positioning point from first human face key points corresponding to the initial frame image, and determining a point obtained by the same linear transformation of the first human face key point as a second positioning point corresponding to the first positioning point.
After the second positioning point corresponding to each first positioning point is determined in the above manner, the makeup mask map is stretched, and each first positioning point is stretched to the position corresponding to each corresponding second positioning point, that is, the position of each first positioning point in the makeup mask map after stretching is the same as the position of the corresponding second positioning point.
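One possible realization of this stretching is sketched below. It assumes an affine transform estimated from the positioning-point pairs, which is only one way to move each first positioning point onto its second positioning point; the original text does not commit to a specific warping method, and all names here are illustrative:

```python
import cv2
import numpy as np

def align_mask_to_face(mask, first_points, second_points, face_shape):
    """Stretch the makeup mask map so each first positioning point lands on its
    corresponding second positioning point in the initial frame."""
    src = np.asarray(first_points, dtype=np.float32)
    dst = np.asarray(second_points, dtype=np.float32)
    if len(src) == 3:
        warp = cv2.getAffineTransform(src, dst)        # exactly determined by 3 pairs
    else:
        warp, _ = cv2.estimateAffine2D(src, dst)        # least-squares fit for more pairs
    h, w = face_shape[:2]
    return cv2.warpAffine(mask, warp, (w, h), flags=cv2.INTER_NEAREST)
```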
By means of the mode, the target makeup area in the makeup mask image can be aligned with the area actually made up by the user in the initial frame image, so that the first makeup target area image can be accurately extracted from the initial frame image through the makeup mask image, and accuracy of makeup progress detection is improved.
After aligning the cosmetic mask image with the initial frame image, obtaining a first mask image corresponding to an intersection region between the cosmetic mask image and the face region image of the initial frame image through the operation of step S41, and then deducting a first target region image corresponding to the initial frame image through the method of step S42.
S42: and calculating the first mask image and the face area image corresponding to the initial frame image to obtain a first target area image corresponding to the initial frame image.
Because the first mask image is a binary image, and operation is carried out on the first mask image and the face area image corresponding to the initial frame image, and images of various target makeup areas are intercepted from the face area image corresponding to the initial frame image, so that the first target area image corresponding to the initial frame image is obtained.
In other embodiments of the present application, since each target makeup area in the cosmetic mask pattern is discontinuous, the cosmetic mask pattern may be further split into a plurality of sub-mask patterns, where the target makeup areas included in each sub-mask pattern are different. And then, acquiring a first target area image from the face area image corresponding to the initial frame image by using the split sub-mask image. Specifically, the following steps S43 to S47 may be implemented, including:
s43: and splitting the cosmetic mask pattern into a plurality of sub-mask patterns, wherein each sub-mask pattern comprises at least one target cosmetic area.
The makeup mask image comprises a plurality of target makeup areas which are not communicated with each other, and the target makeup areas which are not communicated with each other are split to obtain a plurality of sub-mask images, wherein each sub-mask image only comprises one target makeup area or more than one target makeup area. The target makeup areas included in the sub-mask images are different from each other, and except that the pixel values of the pixel points in the target makeup areas in the sub-mask images are not zero, the pixel values of the pixel points in other areas are all zero.
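The splitting into sub-mask maps can be sketched with connected-component analysis (an assumption of this illustration; the function name is hypothetical):

```python
import cv2
import numpy as np

def split_into_sub_masks(makeup_mask):
    """Split the makeup mask into sub-mask maps, one per disconnected target makeup area."""
    binary = (makeup_mask > 0).astype(np.uint8)
    num_labels, labels = cv2.connectedComponents(binary)
    sub_masks = []
    for label in range(1, num_labels):                 # label 0 is the background
        sub = np.where(labels == label, makeup_mask, 0).astype(makeup_mask.dtype)
        sub_masks.append(sub)
    return sub_masks
```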
S44: and respectively converting each sub-mask image and the face region image into a binary image.
S45: and respectively carrying out AND operation on the binary image corresponding to each sub-mask image and the binary image corresponding to the face region image to obtain the sub-mask image corresponding to each sub-mask image.
And for any sub-mask image, performing AND operation on pixel values of pixel points with the same coordinates in the binary image of the sub-mask image and the binary image corresponding to the face region image. Because only the pixel values of the pixel points in the target makeup area in the sub-makeup mask image are not zero, the pixel points in other areas are all zero. Therefore, the sub-mask image obtained by the operation corresponds to a target makeup area corresponding to the sub-mask image which is cut out from the face area image corresponding to the initial frame image.
In other embodiments of the present application, since the makeup mask image is generated based on the preset standard face image and the sub-mask images are split from it, the target makeup area in a sub-mask image may not completely coincide with the area actually made up by the user in the initial frame image, which affects the accuracy of the makeup progress detection. Therefore, before the AND operation is performed on the binarized image corresponding to a sub-mask image and the binarized image corresponding to the face region image, an alignment operation can be performed on the target makeup area in the sub-mask image and the corresponding region in the initial frame image.
Specifically, one or more first positioning points on the outline of the target makeup area in the sub-mask image are determined according to the standard face key points corresponding to the makeup mask image. A second positioning point corresponding to each first positioning point is determined from the initial frame image according to the first face key points corresponding to the initial frame image. The sub-mask image is then stretched so that each first positioning point moves to the position of its corresponding second positioning point; that is, after stretching, the position of each first positioning point in the sub-mask image is the same as that of the corresponding second positioning point.
In this way, the target makeup area in each sub-mask image can be aligned with the area actually made up by the user in the initial frame image, so that the first target area image can be accurately extracted from the initial frame image through the sub-mask images, improving the accuracy of makeup progress detection. Splitting the makeup mask image into a plurality of sub-mask images and aligning each sub-mask image with the initial frame image separately, as described above, gives higher alignment accuracy than aligning the whole makeup mask image with the initial frame image directly.
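The stretching described above can be approximated, for example, by a warp estimated from the positioning-point pairs, as in the sketch below. Fitting an affine transform is an assumption of this sketch (it needs at least three point pairs); the application does not limit the form of the stretching.

```python
import cv2
import numpy as np

def align_mask_to_face(sub_mask, first_points, second_points):
    """Stretch a (sub-)mask image so its first positioning points land on the second positioning points.

    first_points:  first positioning points on the target makeup area outline, shape (N, 2).
    second_points: corresponding second positioning points in the initial frame image, shape (N, 2).
    """
    src = np.asarray(first_points, dtype=np.float32)
    dst = np.asarray(second_points, dtype=np.float32)
    matrix, _ = cv2.estimateAffine2D(src, dst)         # least-squares fit over the point pairs
    if matrix is None:                                  # degenerate configuration: keep the mask unchanged
        return sub_mask.copy()
    h, w = sub_mask.shape[:2]
    return cv2.warpAffine(sub_mask, matrix, (w, h), flags=cv2.INTER_NEAREST)
```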
S46: perform an AND operation on each of the mask images obtained in step S45 and the face area image corresponding to the initial frame image, to obtain a plurality of sub-target area images corresponding to the initial frame image.
S47: combine the plurality of sub-target area images into the first target area image corresponding to the initial frame image.
For the current frame image, the second target area image can be obtained in the same manner. That is, the face area image corresponding to the current frame image is converted into a binary image, and an AND operation is performed on the binary image corresponding to the makeup mask image and the binary image corresponding to the face area image of the current frame image, to obtain a second mask image corresponding to the intersection region between the makeup mask image and the face area image of the current frame image. An AND operation is then performed on the second mask image and the face area image corresponding to the current frame image, to obtain the second target area image corresponding to the current frame image. Alternatively, an AND operation is performed on the binarized image corresponding to each sub-mask image and the binarized image corresponding to the face area image of the current frame image, to obtain, for each sub-mask image, a mask image corresponding to its intersection region with the face area image of the current frame image. These mask images are then ANDed with the face area image corresponding to the current frame image, and the resulting sub-target area images are combined into the second target area image corresponding to the current frame image.
In other embodiments of the present application, it is considered that the edge of the target makeup area may not have a clear outline in an actual makeup scene; for example, in a blush scene the color becomes lighter towards the edge, so that the blush looks natural rather than abrupt. Therefore, after the first target area image and the second target area image are obtained through the above embodiments, boundary erosion processing is further performed on the target makeup areas in the first target area image and the second target area image respectively, so that the boundary of the target makeup area is blurred, the target makeup areas in the two images are closer to the real makeup range, and the accuracy of makeup progress detection is further improved.
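A minimal sketch of the boundary erosion (the kernel size and iteration count are assumed example values):

```python
import cv2
import numpy as np

def erode_target_area(target_area_img, target_mask, kernel_size=5, iterations=1):
    """Blur out the fuzzy border of the target makeup area by shrinking its mask."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    eroded_mask = cv2.erode(target_mask, kernel, iterations=iterations)
    # Keep only the pixels that survive the erosion.
    eroded_area = cv2.bitwise_and(target_area_img, target_area_img, mask=eroded_mask)
    return eroded_area, eroded_mask
```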
The color spaces of the first target area image corresponding to the initial frame image and the second target area image corresponding to the current frame image obtained in the above manner are both RGB color spaces. The embodiment of the application determines in advance, through a large number of experiments, the influence of preset types of makeup such as blush on each channel component of the color space, and finds that the differences in influence across the color channels of the RGB color space are small. The HSV color space consists of three components, hue (Hue), saturation (Saturation) and value (Value); when one of these components changes, the values of the other two do not change obviously, and, unlike the RGB color space, a single channel component can be separated out. Experiments determine which of the value, hue and saturation components is most affected by a preset type of makeup, and the most affected channel component is configured in the server as the preset single-channel component corresponding to that preset type of makeup. For a preset type of makeup such as blush, the corresponding preset single-channel component may be the brightness (Value) component.
After the first target area image corresponding to the initial frame image and the second target area image corresponding to the current frame image are obtained in any of the above manners, both images are converted from the RGB color space to the HSV color space. The preset single-channel component is separated from the HSV color space of the converted first target area image to obtain a first target area image that only contains the preset single-channel component, and likewise for the converted second target area image to obtain a second target area image that only contains the preset single-channel component.
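As a sketch of the conversion and channel separation (OpenCV stores images in BGR order and indexes HSV channels as H=0, S=1, V=2; mapping blush to the brightness/Value channel follows the example in the text, and the dictionary itself is an assumption):

```python
import cv2

PRESET_CHANNEL = {"blush": 2}   # assumed mapping: blush -> brightness (V) channel

def to_preset_channel(target_area_bgr, makeup_type="blush"):
    """Convert a target area image to HSV and keep only the preset single-channel component."""
    hsv = cv2.cvtColor(target_area_bgr, cv2.COLOR_BGR2HSV)
    return hsv[:, :, PRESET_CHANNEL[makeup_type]]
```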
And then determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
Specifically, the absolute values of the differences of the channel components corresponding to the pixel points with the same position in the first target area image and the second target area image are calculated respectively. For example, if the preset type of makeup is blush, the absolute value of the difference in luminance components between pixel points having the same coordinates in the converted first target area image and second target area image is calculated.
The area of the region where the specific makeup has been finished is then determined according to the absolute value of the difference corresponding to each pixel. Specifically, the number of pixels whose absolute difference satisfies the preset makeup completion condition is counted. The preset makeup completion condition is that the absolute value of the difference corresponding to the pixel is greater than a first preset threshold, which may be, for example, 7 or 8.
The counted number of pixels that satisfy the preset makeup completion condition is determined as the area of the region where the specific makeup has been finished. The total number of pixels in all target makeup areas in the first target area image or the second target area image is counted and taken as the total area of the target makeup areas. The ratio of the area of the finished region to the total area of the target makeup areas is then calculated and determined as the user's current makeup progress for the specific makeup. In other words, the ratio of the counted number of pixels to the total number of pixels in all target makeup areas of the first target area image gives the current makeup progress corresponding to the current frame image.
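Putting the previous steps together, a minimal sketch of the progress calculation (the threshold value 7 follows the example above; treating every non-zero pixel of the mask as belonging to a target makeup area is an assumption of the sketch):

```python
import numpy as np

def current_progress(first_channel, second_channel, target_mask, threshold=7):
    """Ratio of finished pixels to all pixels of the target makeup areas."""
    in_area = target_mask > 0
    diff = np.abs(first_channel.astype(np.int16) - second_channel.astype(np.int16))
    finished = np.count_nonzero(in_area & (diff > threshold))   # pixels meeting the completion condition
    total = np.count_nonzero(in_area)                           # total pixels of the target makeup areas
    return finished / total if total else 0.0
```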
In other embodiments of the present application, in order to further improve the accuracy of the makeup progress detection, the target makeup areas in the first target area image and the second target area image are further aligned. Specifically, binarization processing is performed on a first target area image and a second target area image which only contain the preset single-channel components, namely, values of the preset single-channel components corresponding to pixel points in target makeup areas of the first target area image and the second target area image are modified to be 1, and values of the preset single-channel components of pixel points at other positions are modified to be 0. And obtaining a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image through binarization processing.
An AND operation is performed on the first binarization mask image and the second binarization mask image, that is, on the pixels at the same positions in the two mask images, to obtain a second mask image corresponding to the intersection region of the first target area image and the second target area image. The region where the preset single-channel component of the pixels in the second mask image is non-zero is the target makeup area shared by the first target area image and the second target area image.
The face region image corresponding to the initial frame image and the face region image corresponding to the current frame image are obtained through the operation of step 303. An AND operation is performed on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target area image corresponding to the initial frame image; an AND operation is performed on the second mask image and the face region image corresponding to the current frame image to obtain a new second target area image corresponding to the current frame image.
Because the second mask image contains the target makeup area that the initial frame image and the current frame image have in common, extracting the new first target area image and the new second target area image from the initial frame image and the current frame image through the second mask image in the above manner makes the positions of the target makeup areas in the two new images completely consistent. When the makeup progress is subsequently determined by comparing the change of the target makeup area between the current frame image and the initial frame image, the compared areas are therefore completely consistent, which greatly improves the accuracy of makeup progress detection.
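A sketch of this alignment step (again only an illustration; the intersection is taken over the single-channel target images obtained earlier, and the function and variable names are assumptions):

```python
import cv2

def align_target_areas(first_channel, second_channel, face_initial, face_current):
    """Keep only the target makeup pixels present in both frames, then re-extract them."""
    _, first_bin = cv2.threshold(first_channel, 0, 255, cv2.THRESH_BINARY)
    _, second_bin = cv2.threshold(second_channel, 0, 255, cv2.THRESH_BINARY)

    # Intersection of the two target makeup areas.
    common_mask = cv2.bitwise_and(first_bin, second_bin)

    new_first = cv2.bitwise_and(face_initial, face_initial, mask=common_mask)
    new_second = cv2.bitwise_and(face_current, face_current, mask=common_mask)
    return new_first, new_second
```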
After aligning the target makeup areas in the initial frame image and the current frame image in the above manner to obtain a new first target area image and a new second target area image, determining the current makeup progress corresponding to the current frame image again through the operation of step 303.
After the current makeup progress is determined in any mode, the server sends the current makeup progress to the terminal of the user. And after the terminal of the user receives the current makeup progress, displaying the current makeup progress. The current makeup progress may be a ratio or a percentage. The terminal may display the current makeup progress in the form of a progress bar.
In the process of the user making up, the makeup progress detection method provided by the embodiment of the application detects, in real time, the makeup progress of each frame image after the first frame image relative to the first frame image, and displays the detected makeup progress to the user, so that the user can intuitively see their own makeup progress, which improves makeup efficiency.
In order to facilitate understanding of the methods provided by the embodiments of the present application, reference is made to the following description taken in conjunction with the accompanying drawings. As shown in fig. 9, according to the initial frame image and its corresponding first face key points, and the current frame image and its corresponding second face key points, the faces in the initial frame image and the current frame image are respectively aligned and cropped, and the two cropped face region images are then smoothed and denoised through the Laplacian algorithm. The makeup mask image is then aligned with the two face region images respectively, and the first target area image and the second target area image are extracted from the two face region images according to the makeup mask image. Boundary erosion processing is performed on the first target area image and the second target area image, which are then converted into images that only contain the preset single-channel component of the HSV color space. The first target area image and the second target area image are aligned once more, and the current makeup progress is then calculated from them.
In the embodiment of the application, the face key points are used to correct and crop the user's face region in the video frames, which improves the accuracy of face region identification. The target makeup area is determined from the face region image based on the face key points, and the target makeup areas in the initial frame image and the current frame image are pixel-aligned, which improves the accuracy of target makeup area identification. Aligning the target makeup areas in the initial frame image and the current frame image reduces errors caused by position differences of the target makeup areas. When the target makeup area is extracted, the disconnected target makeup areas can be computed separately, which increases the accuracy of the obtained target makeup area. Aligning the target makeup area in the makeup mask image with the target makeup area in the face region image ensures that the extracted target makeup areas lie within the face region image and do not exceed the face boundary. In addition, no deep learning is used and no large amount of data needs to be collected in advance; the real-time makeup picture of the user is captured, the calculation is performed on the server side, and the detection result is returned to the user. Compared with a deep learning model inference scheme, the algorithm processing consumes less computing cost and reduces the processing pressure on the server.
Example three
The embodiment of the application provides a makeup progress detection method, which is used for a makeup progress corresponding to an eyeliner makeup. Referring to fig. 10, this embodiment specifically includes the following steps:
step 401: the method comprises the steps of obtaining an initial frame image and a current frame image in a real-time makeup video of a user for making up a specific makeup currently, and obtaining a makeup mask image corresponding to the initial frame image and the current frame image.
The execution subject of the embodiment of the application is the server. A client matched with the makeup progress detection service provided by the server is installed on the user's terminal, such as a mobile phone or a computer. When the user needs to use the makeup progress detection service, the user opens the client on the terminal, and the client displays a plurality of eye line pattern diagrams. The eye line pattern diagrams are made based on a preset standard face image, which is a face image in which the face is not occluded, the five sense organs are clear, and the line connecting the two eyes is parallel to the horizontal line. Each eye line pattern diagram corresponds to the eye line effect of a different eye line shape, such as the eye line effect of a round eye, of a drooping eye, of an upturned eye, and so on. Considering that the user's eyes are open most of the time during eye line makeup, the eye state of the face in the preset standard face image is the open-eye state, and the eye line pattern diagrams are made using the preset standard face image in the open-eye state.
In the interface it displays, the client can show a plurality of eye line pattern diagrams at the same time, and the user selects one of them. The client sends the eye line pattern diagram selected by the user to the server, and the server receives it.
The display interface of the client is also provided with a video uploading interface. When it is detected that the user clicks the video uploading interface, the camera device of the terminal is called to shoot a makeup video of the user, and the user performs the eye line makeup operation on their own face during shooting. The user's terminal transmits the shot makeup video to the server as a video stream, and the server receives each frame image of the makeup video transmitted by the user's terminal.
In the embodiment of the application, the server takes the received first frame image as an initial frame image, and compares the current makeup progress of the specific makeup corresponding to each frame image received subsequently with the initial frame image as a reference. Since the processing manner of each subsequent frame of image is the same, the embodiment of the present application explains the process of cosmetic progress detection by taking the current frame of image received at the current time as an example.
In other embodiments of the present application, after obtaining an initial frame image and a current frame image of a makeup video of a user, a server further detects whether both the initial frame image and the current frame image only contain a face image of the same user. Firstly, whether an initial frame image and a current frame image only contain one face image is detected, and if the initial frame image and/or the current frame image contain a plurality of face images or the initial frame image and/or the current frame image do not contain the face images, prompt information is sent to a terminal of a user. And the terminal of the user receives and displays the prompt information to prompt the user to keep that only the face of the same user appears in the makeup video. For example, the hint information may be "please keep only the face of the same person appearing within the shot".
If it is detected that both the initial frame image and the current frame image only contain one face image, whether the face image in the initial frame image and the face image in the current frame image belong to the same user is further judged. Specifically, the face feature information corresponding to the face image in the initial frame image and the face feature information corresponding to the face image in the current frame image can be extracted through a face recognition technology, the similarity of the face feature information extracted from the two frame images is calculated, and if the calculated similarity is greater than or equal to a set value, it is determined that the faces in the initial frame image and the current frame image belong to the same user. If the calculated similarity is smaller than the set value, determining that the faces in the initial frame image and the current frame image belong to different users, and sending prompt information to the terminal of the user. And the terminal of the user receives and displays the prompt information to prompt the user to keep that only the face of the same user appears in the makeup video.
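As a sketch of the same-user check described above (cosine similarity and the threshold of 0.6 are assumptions; the application only requires some similarity between the two face feature vectors to be compared against a set value, and the feature extractor itself is outside this sketch):

```python
import numpy as np

def same_user(feat_initial, feat_current, threshold=0.6):
    """Decide whether two face feature vectors belong to the same user.

    feat_initial / feat_current: feature vectors extracted from the initial frame image
    and the current frame image by any face recognition model.
    threshold: the 'set value' mentioned above; 0.6 is only an assumed example.
    """
    a = np.asarray(feat_initial, dtype=np.float32)
    b = np.asarray(feat_current, dtype=np.float32)
    similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return similarity >= threshold
```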
After the server obtains the eye line pattern diagram selected by the user, and obtains the initial frame image and the current frame image of the user's makeup process, the server can directly determine the eye line pattern diagram selected by the user as the makeup mask image corresponding to both the initial frame image and the current frame image.
Alternatively, in other embodiments, when the eye line pattern diagrams are made, an open-eye pattern diagram corresponding to the open-eye state and a closed-eye pattern diagram corresponding to the closed-eye state are made for the eye line effect of each eye line shape, and the open-eye and closed-eye pattern diagrams corresponding to each eye line pattern are configured in the server in advance. The eye line pattern diagrams displayed in the client interface may all be open-eye pattern diagrams or all be closed-eye pattern diagrams. After the user selects the required eye line pattern diagram from those displayed, the client sends it to the server. The server then determines the makeup mask image corresponding to the initial frame image and the makeup mask image corresponding to the current frame image according to the eye line pattern diagram selected by the user and the eye states of the user in the initial frame image and the current frame image.
For the initial frame image, firstly, the texture characteristics of the eye region of the human face in the initial frame image are analyzed through image processing, and whether the eye state of the user in the initial frame image is the open eye state or not is determined. If so, acquiring an eye opening pattern map corresponding to the eye line pattern map selected by the user from a plurality of groups of pre-configured eye opening pattern maps and eye closing pattern maps according to the eye line pattern map selected by the user, and determining the eye opening pattern map as a makeup mask map corresponding to the initial frame image. If the eye state of the user in the initial frame image is determined to be the eye closing state, acquiring a eye closing pattern diagram corresponding to the eye line pattern diagram selected by the user from a plurality of groups of eye opening pattern diagrams and eye closing pattern diagrams configured in advance according to the eye line pattern diagram selected by the user, and determining the eye closing pattern diagram as a makeup mask diagram corresponding to the initial frame image.
For the current frame image, the operation is the same as that of the initial frame image, and the cosmetic mask image corresponding to the current frame image is determined according to the mode.
The makeup mask image corresponding to the initial frame image and the makeup mask image corresponding to the current frame image are thus determined separately according to the eye states of the user in the initial frame image and the current frame image. Therefore, the eye state of the initial frame image is consistent with that of its makeup mask image, and the eye state of the current frame image is consistent with that of its makeup mask image. The accuracy is then higher when the eye line makeup area is subsequently extracted according to the makeup mask image, errors caused by inconsistency of the eye states in the initial frame image and the current frame image are eliminated, and the accuracy of eye line makeup progress detection is improved.
After the server obtains, through this step, the initial frame image of the user's makeup and its corresponding makeup mask image, and the current frame image of the user's makeup and its corresponding makeup mask image, the server determines the user's current makeup progress through the following operations of steps 402 and 403.
Step 402: and according to the initial frame image, simulating to generate a result image after the eye line makeup is finished.
And rendering the effect of eye line makeup on the initial frame image by using a 3D rendering technology to obtain a result image.
Step 403: and determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the result image, the initial frame image and the current frame image.
The eyeliner make-up area generally includes the upper eyelid, lower eyelid, the tail of the eye, and the like. Therefore, the specific areas needing to be made up can be directly extracted from the face area image, interference of the invalid area on the eye line makeup progress detection is avoided, and the accuracy of the eye line makeup progress detection is improved.
Firstly, the first target area image of the eye line makeup is obtained from the initial frame image according to the makeup mask image corresponding to the initial frame image, and the second target area image of the eye line makeup is obtained from the current frame image according to the makeup mask image corresponding to the current frame image. Since the result image is generated on the basis of the initial frame image, the makeup mask image corresponding to the initial frame image can be used to extract the eye line makeup area from the result image: the third target area image of the eye line makeup is obtained from the result image according to the makeup mask image corresponding to the initial frame image. The current makeup progress corresponding to the current frame image is then determined according to the extracted first target area image, second target area image and third target area image.
Similar to the operation of step 102 in the first embodiment, to obtain the first target area image from the initial frame image, the first face key points corresponding to the initial frame image are first detected through step S1, and the face region image corresponding to the initial frame image is then obtained from the first face key points through the operation of step S2. For the specific process of obtaining the face region image corresponding to the initial frame image, reference may be made to the relevant description in the first embodiment, which is not repeated here. Then, according to the makeup mask image corresponding to the initial frame image, the first target area image of the eye line makeup is obtained from the face region image corresponding to the initial frame image; the specific process may adopt the operations of steps S30 to S32 in embodiment two, or the operations of steps S33 to S37 in embodiment two, which are not repeated here.
For the result image and the current frame image, the second target area image corresponding to the current frame image and the third target area image corresponding to the result image can be obtained respectively in the same manner as for the initial frame.
The first target area image corresponding to the initial frame image, the second target area image corresponding to the current frame image and the third target area image corresponding to the result image are obtained in the above manner, and the color spaces of the images are RGB color spaces. According to the embodiment of the application, the influence of the eyeliner makeup on each channel component of the color space is determined in advance through a large number of tests, and the influence difference on each color channel in the RGB color space is found to be small. And the HLS color space is composed of three components of Hue, saturation and Light, and it is found through experiments that the eye line makeup can cause the Saturation component of the HLS color space to change obviously.
After the first target area image corresponding to the initial frame image, the second target area image corresponding to the current frame image and the third target area image corresponding to the result image are obtained in any one of the above manners, the first target area image, the second target area image and the third target area image are all converted from the RGB color space to the HLS color space. And separating a saturation channel from the HLS color space of the converted first target area image to obtain a first target area image only containing the saturation channel. And separating a saturation channel from the HLS color space of the converted second target area image to obtain a second target area image only containing the saturation channel. And separating a saturation channel from the HLS color space of the converted third target area image to obtain a third target area image only containing the saturation channel.
And then determining the current makeup progress corresponding to the current frame image according to the converted first target area image, second target area image and third target area image.
Specifically, a first average pixel value corresponding to the converted first target area image, a second average pixel value corresponding to the second target area image, and a third average pixel value corresponding to the third target area image are calculated, respectively. The average pixel value is an average value of saturation components of all pixel points in the eye line makeup area in the image.
A first difference between the second average pixel value and the first average pixel value is calculated; this first difference represents the change in saturation of the eye line makeup area in the current frame image relative to the initial frame image, which is caused by the eye line makeup operation performed so far and corresponds to the current frame image.
A second difference between the third average pixel value and the first average pixel value is calculated; this second difference represents the change in saturation of the eye line makeup area in the result image relative to the initial frame image, that is, the change produced by a finished eye line makeup.
The ratio of the first difference to the second difference is calculated to obtain the current makeup progress corresponding to the current frame image; that is, the ratio of the saturation change caused by the eye line makeup performed so far to the saturation change caused by a completed eye line makeup is taken as the current makeup progress.
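A minimal sketch of this saturation-based progress calculation (OpenCV orders the HLS channels as H=0, L=1, S=2; the mask marking the eye line makeup area and the clamping of the ratio to [0, 1] are assumptions of the sketch):

```python
import cv2
import numpy as np

def eyeliner_progress(initial_bgr, current_bgr, result_bgr, eyeliner_mask):
    """Progress = (saturation change so far) / (saturation change of the finished look)."""
    if np.count_nonzero(eyeliner_mask) == 0:
        return 0.0

    def mean_saturation(bgr):
        hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)
        return float(hls[:, :, 2][eyeliner_mask > 0].mean())

    first_avg = mean_saturation(initial_bgr)    # initial frame image
    second_avg = mean_saturation(current_bgr)   # current frame image
    third_avg = mean_saturation(result_bgr)     # rendered result image (finished eye line)

    denominator = third_avg - first_avg
    if abs(denominator) < 1e-6:
        return 0.0
    return float(np.clip((second_avg - first_avg) / denominator, 0.0, 1.0))
```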
In other embodiments of the present application, in order to further improve the accuracy of the eye-line makeup progress detection, a first target area image corresponding to the initial frame image and a second target area image corresponding to the current frame image are also aligned; and aligning the first target area image corresponding to the initial frame image and the third target area image corresponding to the result image.
Since the operation of aligning the first target area image with the second target area image is the same as the operation of aligning the first target area image with the third target area image, only the alignment of the first and second target area images is described in detail in this embodiment.
Specifically, binarization processing is respectively carried out on a first target area image and a second target area image which only comprise saturation channels, namely, the values of saturation components corresponding to pixel points in eye line makeup areas of the first target area image and the second target area image are both modified into 1, and the values of saturation components of pixel points at other positions are both modified into 0. And obtaining a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image through binarization processing.
An AND operation is performed on the first binarization mask image and the second binarization mask image, that is, on the pixels at the same positions in the two mask images, to obtain a second mask image corresponding to the intersection region of the first target area image and the second target area image. The region where the saturation component of the pixels in the second mask image is non-zero is the eye line makeup area shared by the first target area image and the second target area image.
The face region image corresponding to the initial frame image and the face region image corresponding to the current frame image are obtained through the operation of step 403. An AND operation is performed on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target area image corresponding to the initial frame image; an AND operation is performed on the second mask image and the face region image corresponding to the current frame image to obtain a new second target area image corresponding to the current frame image.
Because the second mask image contains the eye line makeup area that the initial frame image and the current frame image have in common, extracting the new first target area image and the new second target area image from the initial frame image and the current frame image through the second mask image in the above manner makes the positions of the eye line makeup areas in the two new images completely consistent; that is, the eye line makeup areas in the initial frame image and the current frame image are aligned, which can improve the accuracy of eye line makeup progress detection.
Similarly, the first target area image and the third target area image are aligned in the above manner, so that the positions of the eye line makeup areas in the new first target area image and the new third target area image are completely consistent; that is, the eye line makeup areas in the initial frame image and the result image are aligned, which can improve the accuracy of eye line makeup progress detection.
After obtaining the new first target area image, the new second target area image and the new third target area image in the above manner, the current makeup progress corresponding to the current frame image is determined again through the operation of the above step 403.
After the current makeup progress is determined in any mode, the server sends the current makeup progress to the terminal of the user. And after the terminal of the user receives the current makeup progress, displaying the current makeup progress. The current makeup progress may be a ratio or a percentage. The terminal may display the current makeup progress in the form of a progress bar.
In the process of the user making up, the makeup progress detection method provided by the embodiment of the application detects, in real time, the makeup progress of each frame image after the first frame image relative to the first frame image, and displays the detected makeup progress to the user, so that the user can intuitively see their own eye line makeup progress, which improves makeup efficiency.
In order to facilitate understanding of the methods provided by the embodiments of the present application, reference is made to the following description taken in conjunction with the accompanying drawings. As shown in fig. 11, according to the initial frame image and the corresponding first face key point, the current frame image and the corresponding second face key point, and the result image and the corresponding third face key point, the faces in the initial frame image, the result image, and the current frame image are respectively corrected and cut, and then the three cut face region images are subjected to smooth denoising through the laplacian algorithm. And then aligning the makeup mask image with the three face region images respectively, and deducting a first target region image corresponding to the initial frame image, a second target region image corresponding to the current frame image and a third target region image corresponding to the result image according to the makeup mask image respectively. And then converting the first target area image, the second target area image and the third target area image into an image only containing a saturation channel in an HLS color space. Calculating a first average pixel value, a second average pixel value and a third average pixel value corresponding to the converted first target area image, the converted second target area image and the converted third target area image respectively, calculating a first difference value between the second average pixel value and the first average pixel value and a second difference value between the third average pixel value and the first average pixel value, and calculating a ratio of the first difference value to the second difference value to obtain the current makeup progress.
In the embodiment of the application, a current frame image and an initial frame image of a user makeup process are obtained, and a result image for finishing eye line makeup is rendered on the basis of the initial frame image. Determining a saturation change value of the eye line makeup area from the current frame image to the initial frame image, determining a saturation change value of the eye line makeup area from the result image to the initial frame image, and calculating a ratio of the saturation change value corresponding to the current frame image to the saturation change value corresponding to the result image, namely obtaining the current makeup progress of eye line makeup. The eye line makeup progress can be accurately detected only through image processing without adopting a deep learning model, the calculation amount is small, the cost is low, the processing pressure of a server is reduced, the eye line makeup progress detection efficiency is improved, and the real-time requirement of eye line makeup progress detection can be met.
Furthermore, the face key points are used to correct and crop the user's face region in the video frames, improving the accuracy of face region recognition. The eye line makeup areas in the initial frame image and the current frame image, and in the initial frame image and the result image, are aligned, reducing errors caused by position differences of the eye line makeup areas. When the eye line makeup areas are extracted, the disconnected eye line makeup areas can be computed separately, improving the accuracy of the obtained eye line makeup areas. Aligning the eye line makeup area in the makeup mask image with the eye line makeup area in the face region image ensures the accuracy of the extracted eye line makeup area.
Example four
1. A makeup progress detection method comprising:
acquiring an eye shadow mask image and an initial frame image and a current frame image in a real-time makeup video for a user to make up a specific makeup currently;
according to each target makeup area of eye shadow makeup, respectively splitting a makeup mask image corresponding to each target makeup area from the eye shadow mask image;
and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and the makeup mask image corresponding to each target makeup area.
2. According to 1, the determining a current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and a makeup mask image corresponding to each target makeup area includes:
respectively taking the makeup mask image corresponding to each target makeup area as a reference, and acquiring a first target area image corresponding to each target makeup area from the initial frame image;
respectively taking the makeup mask image corresponding to each target makeup area as a reference, and acquiring a second target area image corresponding to each target makeup area from the current frame image;
And determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each target makeup area.
3. According to 2, the determining a current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each target makeup area includes:
respectively converting a first target area image and a second target area image corresponding to each target makeup area into images containing preset single-channel components in an HLS color space;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each converted target makeup area.
4. According to 3, the determining a current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each converted target makeup area includes:
calculating difference absolute values of the preset single-channel components corresponding to pixel points with the same position in a first target area image and a second target area image which correspond to the same target makeup area after conversion respectively;
Counting the number of pixel points of which the absolute value of the difference value corresponding to each target makeup area meets a preset makeup completion condition;
respectively calculating the ratio of the number of the pixel points corresponding to each target makeup area to the total number of the pixel points in the corresponding target makeup area to obtain the makeup progress corresponding to each target makeup area;
and calculating the current makeup progress corresponding to the current frame image according to the makeup progress corresponding to each target makeup area and the preset weight corresponding to each target makeup area.
5. According to 2, with reference to the makeup mask image corresponding to each target makeup area, respectively, obtaining a first target area image corresponding to each target makeup area from the initial frame image, including:
detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key point;
and respectively taking the makeup mask image corresponding to each target makeup area as a reference, and respectively acquiring a first target area image corresponding to each target makeup area from the face area image.
6. According to 5, respectively acquiring a first target area image corresponding to each target makeup area from the face area image by taking the makeup mask image corresponding to each target makeup area as a reference, including:
Respectively converting a makeup mask image corresponding to a first target makeup area and the face area image into binary images; the first target makeup area is any one of the target makeup areas;
performing AND operation on the binary image corresponding to the cosmetic mask image and the binary image corresponding to the face region image to obtain a first mask image corresponding to an intersection region of the cosmetic mask image and the face region image;
and performing an AND operation on the first mask image and the face area image corresponding to the initial frame image to obtain a first target area image.
7. According to 6, before performing and operation on the binarized image corresponding to the cosmetic mask image and the binarized image corresponding to the face region image, the method further includes:
determining one or more first positioning points, which are positioned on the outline of a target makeup area included by the makeup mask map, in the makeup mask map according to the standard human face key points corresponding to the makeup mask map;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
And stretching the makeup mask image, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
8. According to 1, before the step of splitting the makeup mask pattern corresponding to each target makeup area from the eye shadow mask pattern according to each target makeup area of eye shadow makeup, the method further includes:
determining one or more first positioning points on the outline of each makeup area in the eye shadow mask image according to the standard human face key points corresponding to the eye shadow mask image;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the eye shadow mask image to stretch each first positioning point to a position corresponding to each corresponding second positioning point.
9. According to 5, the obtaining of the face region image corresponding to the initial frame image according to the first face key point includes:
performing rotation correction on the initial frame image and the first face key point according to the first face key point corresponding to the initial frame image;
According to the corrected first face key point, intercepting an image containing a face region from the corrected initial frame image;
and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
10. According to 9, the performing rotation rectification on the initial frame image and the first face keypoint according to the first face keypoint includes:
respectively determining a left eye center coordinate and a right eye center coordinate according to a left eye key point and a right eye key point which are included in the first face key point;
determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate;
and performing rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
11. According to 9, the capturing an image including a face region from the corrected initial frame image according to the corrected first face key point includes:
and according to the corrected first face key point, carrying out image interception on a face area contained in the corrected initial frame image.
12. According to 11, the image capturing, according to the corrected first face keypoint, a face region included in the corrected initial frame image includes:
determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key points;
determining an intercepting frame corresponding to the face area in the initial frame image after correction according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value;
and intercepting an image containing the face area from the corrected initial frame image according to the intercepting frame.
13. According to 12, the method further comprises:
amplifying the intercepting frame by a preset multiple;
and according to the amplified intercepting frame, intercepting an image containing the face region from the corrected initial frame image.
14. According to 9, the method further comprises:
and carrying out scaling translation processing on the corrected key points of the first face according to the size of the image containing the face area and the preset size.
15. According to 1, the method further comprises:
detecting whether the initial frame image and the current frame image only contain face images of the same user;
If yes, executing the operation of determining the current makeup progress of the specific makeup for the user;
and if not, sending prompt information to the terminal of the user, wherein the prompt information is used for prompting the user to keep that only the face of the same user appears in the real-time makeup video.
16. A makeup progress detection device comprising:
the system comprises an acquisition module, a makeup processing module and a makeup processing module, wherein the acquisition module is used for acquiring an eye shadow mask image and an initial frame image and a current frame image in a real-time makeup video for a user to make up a specific makeup currently;
the splitting module is used for splitting a makeup mask image corresponding to each target makeup area from the eye shadow mask image according to each target makeup area of eye shadow makeup;
and the makeup progress determining module is used for determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and the makeup mask image corresponding to each target makeup area.
17. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the computer program to implement the method of any one of claims 1-15.
18. A computer-readable storage medium, on which a computer program is stored, which program is executed by a processor to implement the method of any one of claims 1-15.
The embodiment of the application provides a makeup progress detection method, which is used for a makeup progress corresponding to an eye shadow makeup. Referring to fig. 12, this embodiment specifically includes the following steps:
step 501: and acquiring an eye shadow mask image, and an initial frame image and a current frame image in a real-time makeup video for a user to make up a specific makeup currently.
The execution subject of the embodiment of the application is the server. And a client matched with the makeup progress detection service provided by the server is installed on a terminal of a user, such as a mobile phone or a computer. When a user needs to use the makeup progress detection service, the user opens the client on the terminal, the client displays a plurality of eye shadow mask images, the eye shadow mask images are manufactured based on a preset standard face image, and the preset standard face image is a face image with a face not shielded, five sense organs are clear, and a two-eye connecting line is parallel to a horizontal line. Each eye shadow mask pattern corresponds to a different eye shadow cosmetic effect.
The client displays an interface in which a plurality of eye shadow mask patterns can be simultaneously displayed, and the user selects one of the displayed plurality of eye shadow mask patterns. The client sends the eye shadow mask image selected by the user to the server, and the server receives the eye shadow mask image sent by the client.
The display interface of the client is also provided with a video uploading interface, when the fact that the user clicks the video uploading interface is detected, the camera device of the terminal is called to shoot the real-time makeup video of the user, and the user carries out eye shadow makeup operation on the face of the user in the shooting process. And the terminal of the user transmits the shot real-time makeup video to the server in a video streaming mode. The server receives each frame image of the real-time makeup video transmitted by the terminal of the user.
In the embodiment of the application, the server takes the received first frame image as an initial frame image, and compares the current makeup progress of the specific makeup corresponding to each frame image received subsequently with the initial frame image as a reference. Since the processing mode of each subsequent frame image is the same, the embodiment of the present application explains the process of cosmetic progress detection by taking the current frame image received at the current time as an example.
In other embodiments of the present application, after obtaining the initial frame image and the current frame image of the real-time makeup video of the user, the server further detects whether both the initial frame image and the current frame image only contain the face image of the same user. Firstly, whether an initial frame image and a current frame image both contain only one face image is detected, and if the initial frame image and/or the current frame image contain a plurality of face images or the initial frame image and/or the current frame image do not contain the face images, prompt information is sent to a terminal of a user. And the terminal of the user receives and displays the prompt information to prompt the user to keep that only the face of the same user appears in the real-time makeup video. For example, the hint information may be "please keep only the face of the same person appearing within the shot".
If it is detected that both the initial frame image and the current frame image only contain one face image, whether the face image in the initial frame image and the face image in the current frame image belong to the same user is further judged. Specifically, the face feature information corresponding to the face image in the initial frame image and the face feature information corresponding to the face image in the current frame image can be extracted through a face recognition technology, the similarity of the face feature information extracted from the two frame images is calculated, and if the calculated similarity is greater than or equal to a set value, it is determined that the faces in the initial frame image and the current frame image belong to the same user. If the calculated similarity is smaller than the set value, determining that the faces in the initial frame image and the current frame image belong to different users, and sending prompt information to the terminal of the user. And the terminal of the user receives and displays the prompt information to prompt the user to keep that only the face of the same user appears in the real-time makeup video.
After the server obtains the initial frame image, the current frame image and the eye shadow mask image of the user for makeup through the present step, the current makeup progress of the user is determined through the following operations of steps 502 and 503.
Step 502: according to each target makeup area of the eye shadow makeup, the makeup mask image corresponding to each target makeup area is respectively split from the eye shadow mask image.
The eye shadow makeup mainly involves operations such as blending eye shadow over a large area of the upper eyelid, brightening the middle of the upper eyelid, applying shadow to the "lying silkworm" area under the lower eyelid, and brightening the inner corner of the eye. The large-area blending on the upper eyelid and the brightening of the middle of the upper eyelid are both performed on the upper eyelid, so their application areas overlap. The eye shadow makeup is only complete after all of these areas have been made up. Therefore, the eye shadow mask image obtained in step 501 needs to be split, according to each target makeup area of the eye shadow, into a makeup mask image corresponding to each target makeup area: for example, a makeup mask image for the large-area blending region of the upper eyelid, one for the middle of the upper eyelid, one for the "lying silkworm" area of the lower eyelid, and one for the inner corner of the eye.
Step 503: and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and the makeup mask image corresponding to each target makeup area.
Firstly, a makeup mask image corresponding to each target makeup area is taken as a reference, and a first target area image corresponding to each target makeup area is obtained from an initial frame image. And respectively taking the makeup mask image corresponding to each target makeup area as a reference, and acquiring a second target area image corresponding to each target makeup area from the current frame image. And then determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each target makeup area.
The acquisition processes of the first target area image and the second target area image corresponding to each target makeup area are the same. The embodiment of the present application takes the acquisition process of the first target area image corresponding to a target makeup area as an example for detailed description. The server obtains the first target area image corresponding to the target makeup area specifically through the following operations in steps S1 to S3, including:
S1: And detecting a first face key point corresponding to the initial frame image.
The server is configured with a pre-trained detection model for detecting the face key points, and the detection model provides interface services for detecting the face key points. After the server acquires the initial frame image of the user makeup video, the server calls an interface service for detecting the face key points, and all face key points of the user face in the initial frame image are identified through a detection model. In order to distinguish from the face key points corresponding to the current frame image, all the face key points corresponding to the initial frame image are referred to as first face key points in the embodiment of the application. And all the face key points corresponding to the current frame image are called second face key points.
The identified key points of the human face comprise key points on the face contour of the user and key points of the mouth, the nose, the eyes, the eyebrows and other parts. The number of the identified face key points can be 106.
S2: and acquiring a face region image corresponding to the initial frame image according to the first face key point.
The server specifically obtains a face region image corresponding to the initial frame image through the following operations in steps S20 to S22, including:
S20: And according to the first face key point, performing rotation correction on the initial frame image and the first face key point.
Because a user cannot ensure that the pose angle of the face is the same in every frame image when shooting a makeup video through a terminal, in order to improve the accuracy of the comparison between the current frame image and the initial frame image, the face in each frame image needs to be rotationally corrected so that the line connecting the two eyes of the face in every corrected frame image lies on the same horizontal line. This ensures that the pose angles of the face in the frame images are the same and avoids large makeup progress detection errors caused by different pose angles.
Specifically, the left-eye central coordinate and the right-eye central coordinate are respectively determined according to the left-eye key point and the right-eye key point included in the first face key point. And determining all the left eye key points of the left eye region and all the right eye key points of the right eye region from the first face key points. And averaging the determined abscissa of all the left-eye key points, averaging the ordinate of all the left-eye key points, forming a coordinate by the average of the abscissa and the average of the ordinate corresponding to the left eye, and determining the coordinate as the center coordinate of the left eye. The right eye center coordinates are determined in the same manner.
And then, according to the left eye center coordinate and the right eye center coordinate, determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image. As shown in fig. 4, the horizontal difference dx and the vertical difference dy of the left-eye center coordinate and the right-eye center coordinate are calculated, and the link length d between the left-eye center coordinate and the right-eye center coordinate is calculated. And calculating an included angle theta between the two-eye connecting line and the horizontal direction according to the length d of the two-eye connecting line, the horizontal difference value dx and the vertical difference value dy, wherein the included angle theta is the rotating angle corresponding to the initial frame image. And then calculating the coordinate of the central point of the connecting line of the two eyes according to the central coordinate of the left eye and the central coordinate of the right eye, wherein the coordinate of the central point is the coordinate of the rotating central point corresponding to the initial frame image.
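A minimal Python/NumPy sketch of the eye-center, rotation-angle and rotation-center computation described above; the key-point index lists for the 106-point layout are assumed to be known, and arctan2 is used in place of the explicit d, dx, dy derivation, which yields the same angle.

    import numpy as np

    def eye_centers_and_rotation(landmarks, left_eye_idx, right_eye_idx):
        """landmarks: (N, 2) array of first face key points (x, y).

        left_eye_idx / right_eye_idx are the index lists of the left-eye and
        right-eye key points in the 106-point layout (assumed known here).
        """
        left_center = landmarks[left_eye_idx].mean(axis=0)    # average x and y of the left-eye key points
        right_center = landmarks[right_eye_idx].mean(axis=0)  # average x and y of the right-eye key points
        dx, dy = right_center - left_center                   # horizontal / vertical difference
        theta = np.degrees(np.arctan2(dy, dx))                # angle between the eye line and the horizontal
        center = (left_center + right_center) / 2.0           # rotation center: midpoint of the eye line
        return left_center, right_center, theta, (float(center[0]), float(center[1]))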
And performing rotation correction on the initial frame image and the first face key point according to the calculated rotation angle and the rotation center point coordinate. Specifically, the rotation angle and the rotation center point coordinate are input into a preset function for calculating a rotation matrix of the picture, where the preset function may be the function cv2.getRotationMatrix2D() in OpenCV. The rotation matrix corresponding to the initial frame image is obtained by calling the preset function. Then the product of the initial frame image and the rotation matrix is calculated to obtain the corrected initial frame image. The operation of correcting the initial frame image by using the rotation matrix can also be completed by calling the function cv2.warpAffine() in OpenCV.
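A short OpenCV sketch of the rotation correction of the frame itself, using the cv2.getRotationMatrix2D() and cv2.warpAffine() calls mentioned above; the function name rotate_frame is illustrative only.

    import cv2

    def rotate_frame(image, theta_deg, center_xy):
        """Rotate the initial frame so that the eye line becomes horizontal."""
        # rotation matrix about the midpoint of the two eye centers
        rot_mat = cv2.getRotationMatrix2D(center_xy, theta_deg, 1.0)
        h, w = image.shape[:2]
        # apply the affine rotation to the whole frame
        corrected = cv2.warpAffine(image, rot_mat, (w, h))
        return corrected, rot_mat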
For the first face key points, each first face key point needs to be corrected one by one to correspond to the corrected initial frame image. When the first face key points are corrected one by one, two times of coordinate system conversion are required, the coordinate system with the upper left corner of the initial frame image as the origin is converted into the coordinate system with the lower left corner as the origin for the first time, and the coordinate system with the lower left corner as the origin is further converted into the coordinate system with the rotation center point coordinate as the origin for the second time, as shown in fig. 5. After two times of coordinate system conversion, the following formula (1) conversion is carried out on each first face key point, and the rotation correction of the first face key points can be completed.
x = x0·cos θ + y0·sin θ
y = y0·cos θ − x0·sin θ        (1)
In formula (1), x0 and y0 are respectively the abscissa and ordinate of a first face key point before rotation correction, x and y are respectively the abscissa and ordinate of the first face key point after rotation correction, and θ is the rotation angle.
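Rather than hand-coding the two coordinate-system conversions and formula (1), an equivalent sketch can reuse the rotation matrix already computed for the image and apply it to the key points; this is an illustrative shortcut, not the literal procedure of the embodiment.

    import numpy as np

    def rotate_keypoints(landmarks, rot_mat):
        """Apply the image rotation matrix to the (N, 2) key-point array.

        This is equivalent to expressing each point relative to the rotation
        center, rotating it by theta, and mapping it back to image coordinates.
        """
        ones = np.ones((landmarks.shape[0], 1))
        homogeneous = np.hstack([landmarks, ones])   # (N, 3) homogeneous coordinates
        return homogeneous @ rot_mat.T               # (N, 2) corrected key points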
The corrected initial frame image and the first face key point are based on the entire image, and the entire image includes not only the face information of the user but also other redundant image information, so that the face region of the corrected image needs to be clipped in the following step S21.
S21: and according to the corrected first face key point, intercepting an image containing a face area from the corrected initial frame image.
Firstly, a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value are determined from the corrected first face key points. Then, an intercepting frame corresponding to the face area in the corrected initial frame image is determined according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value. Specifically, the minimum abscissa value and the minimum ordinate value are combined into a coordinate point, and this coordinate point is used as the top left corner vertex of the intercepting frame corresponding to the face area. The maximum abscissa value and the maximum ordinate value are combined into another coordinate point, and this coordinate point is used as the lower right corner vertex of the intercepting frame corresponding to the face area. The position of the intercepting frame in the corrected initial frame image is determined according to the top left corner vertex and the lower right corner vertex, and the image within the intercepting frame is cut out from the corrected initial frame image, namely the image containing the face area is obtained.
In other embodiments of the present application, in order to ensure that all face areas of the user are intercepted, and avoid the occurrence of a situation where the subsequent makeup progress detection error is large due to incomplete interception, the intercepting frame may be further enlarged by a preset multiple, where the preset multiple may be 1.15 or 1.25, and the like. The embodiment of the application does not limit the specific value of the preset multiple, and the preset multiple can be set according to requirements in practical application. And after amplifying the interception frame to the periphery by a preset multiple, intercepting the image in the amplified interception frame from the corrected initial frame image, thereby intercepting the image containing the complete face area of the user.
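An illustrative sketch of the intercepting-frame construction and enlargement; the 1.15 multiple is one of the example values given above, and enlarging symmetrically about the box center is an assumption of this sketch.

    import numpy as np

    def crop_face_region(image, landmarks, scale=1.15):
        """Cut the face area out of the corrected frame, enlarging the
        intercepting frame by a preset multiple so the whole face is kept."""
        h, w = image.shape[:2]
        x_min, y_min = landmarks.min(axis=0)          # top left corner of the raw box
        x_max, y_max = landmarks.max(axis=0)          # lower right corner of the raw box
        cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
        half_w = (x_max - x_min) * scale / 2.0
        half_h = (y_max - y_min) * scale / 2.0
        x0, y0 = int(max(cx - half_w, 0)), int(max(cy - half_h, 0))
        x1, y1 = int(min(cx + half_w, w)), int(min(cy + half_h, h))
        return image[y0:y1, x0:x1], (x0, y0)          # cropped image and crop origin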
S22: and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
After the image containing the face area of the user is intercepted from the initial frame image in the mode, the image containing the face area is zoomed to the preset size, and the face area image corresponding to the initial frame image is obtained. The predetermined size may be 390 × 390, 400 × 400, or the like. The embodiment of the application does not limit the specific value of the preset dimension, and the specific value can be set according to requirements in practical application.
In order to adapt the first face key point to the zoomed face region image, the captured image including the face region is zoomed to a preset size, and then the corrected first face key point is subjected to zooming translation according to the size of the image including the face region before zooming and the preset size. Specifically, the translation direction and the translation distance of each first face key point are determined according to the size of the image containing the face area before the zooming and the preset size to which the image needs to be zoomed, then, the translation operation is respectively carried out on each first face key point according to the translation direction and the translation distance corresponding to each first face key point, and the coordinates of each first face key point after the translation are recorded.
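A sketch of scaling the intercepted face image to the preset size and adapting the key points accordingly; the 390 value is one of the example sizes above, and crop_origin is assumed to be the top left corner of the intercepting frame.

    import cv2

    PRESET_SIZE = 390  # assumed preset size; 390 x 390 is one of the examples given

    def resize_face_region(face_img, landmarks, crop_origin, size=PRESET_SIZE):
        """Scale the cropped face area to the preset size and move the key points
        with it, so they stay aligned with the resized face region image.

        landmarks: (N, 2) NumPy array of corrected first face key points.
        """
        h, w = face_img.shape[:2]
        resized = cv2.resize(face_img, (size, size))
        x0, y0 = crop_origin
        # translate key points into the crop's coordinate system, then scale
        adapted = (landmarks - [x0, y0]) * [size / w, size / h]
        return resized, adapted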
The face region image is obtained from the initial frame image in the above manner, the first face key point is adapted to the obtained face region image through operations such as rotation correction and translation scaling, and then the image region corresponding to the target makeup region is extracted from the face region image in the following manner of step S3.
In other embodiments of the present application, before step S3 is executed, gaussian filtering may be performed on the face region image to remove noise in the face region image. Specifically, according to a gaussian kernel with a preset size, gaussian filtering processing is performed on a face region image corresponding to an initial frame image.
The Gaussian kernel of the Gaussian filter is a key parameter of the Gaussian filter processing, if the Gaussian kernel is too small, a good filtering effect cannot be achieved, and if the Gaussian kernel is too large, although noise information in an image can be filtered, useful information in the image can be smoothed. In the embodiment of the present application, a gaussian kernel with a predetermined size is selected, and the predetermined size may be 9 × 9. In addition, the other group of parameters sigmaX and sigmaY of the Gaussian filter function are set to be 0, and after Gaussian filtering, image information is smoother, so that the accuracy of subsequently acquiring the makeup progress is improved.
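The Gaussian filtering step described above, sketched with OpenCV; the 9 x 9 kernel and sigmaX = sigmaY = 0 follow the values given in the text.

    import cv2

    def denoise_face_region(face_img):
        """Gaussian filtering with a 9 x 9 kernel; setting sigmaX = sigmaY = 0
        lets OpenCV derive the sigma values from the kernel size."""
        return cv2.GaussianBlur(face_img, (9, 9), sigmaX=0, sigmaY=0)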
The face area image is obtained in the above manner, or after the face area image is subjected to gaussian filtering processing, a first target area image corresponding to the target makeup area is extracted from the face area image through step S3.
S3: and taking the makeup mask image corresponding to the target makeup area as a reference, and extracting a first target area image corresponding to the target makeup area from the face area image.
Because the first target area image corresponding to the target makeup area is cut out directly from the face area image, interference from other areas with the makeup progress detection of the target makeup area can be avoided. In particular, mutual interference between overlapping target makeup areas can be avoided, which improves the accuracy of eye shadow makeup progress detection.
The server obtains the first target area image by specifically performing the following operations of steps S40 to S42, including:
S40: And respectively converting the makeup mask image and the face region image corresponding to the target makeup region into binary images.
S41: and computing the binary image corresponding to the cosmetic mask image and the binary image corresponding to the face region image to obtain a first mask image corresponding to an intersection region of the cosmetic mask image and the face region image.
Specifically, an AND operation is performed on the pixel values of the pixel points with the same coordinates in the binary image corresponding to the makeup mask image and the binary image corresponding to the face region image. Because the pixel values of the pixel points inside the target makeup area of the makeup mask image are non-zero while the pixel values of the pixel points in all other areas are zero, the first mask image obtained by the AND operation is equivalent to cutting the target makeup area out of the face area image corresponding to the initial frame image.
In other embodiments of the present application, since the makeup mask image is generated based on the preset standard face image, the target makeup area in the makeup mask image may not completely coincide with the area actually made up by the user in the initial frame image, thereby affecting the accuracy of the makeup progress detection. Therefore, before the and operation is performed on the binary image corresponding to the cosmetic mask image and the binary image corresponding to the face region image, the alignment operation can be performed on the target cosmetic area in the cosmetic mask image and the corresponding area in the initial frame image.
Specifically, one or more first positioning points, located on the outline of the target makeup area, in the makeup mask map are determined according to the standard human face key points corresponding to the makeup mask map. The standard face key points corresponding to the makeup mask image are the standard face key points corresponding to the preset standard face image. Firstly, it is determined whether the outline of the target makeup area contains a standard face key point; if so, the standard face key point on the outline is determined as a first positioning point corresponding to the target makeup area. If not, a first positioning point is generated on the outline of the target makeup area from the standard face key points around the target makeup area by means of a linear transformation. Specifically, the first positioning point can be obtained by performing translation operations such as moving upward, downward, left or right on the surrounding standard face key points.
In this embodiment of the application, the number of the first positioning points corresponding to the target makeup area may be a preset number, and the preset number may be 3 or 4.
After the first positioning points corresponding to the target makeup area in the makeup mask image are obtained in the mode, the second positioning points corresponding to each first positioning point are determined from the initial frame image according to the first face key points corresponding to the initial frame image. Because the standard face key point corresponding to the makeup mask image and the first face key point corresponding to the initial frame image are obtained through the same detection model, the key points at different positions have respective numbers. Therefore, for the first positioning point belonging to the standard human face key points, the first human face key points with the same number as the standard human face key points corresponding to the first positioning point are determined from the first human face key points corresponding to the initial frame image, and the determined first human face key points are used as the second positioning points corresponding to the first positioning point. And for a first positioning point obtained by linear transformation by using the standard face key points, determining a first face key point corresponding to the first positioning point from first face key points corresponding to the initial frame image, and determining a point obtained by performing the same linear transformation on the first face key point as a second positioning point corresponding to the first positioning point.
After the second positioning point corresponding to each first positioning point is determined in the above manner, the makeup mask map is stretched, and each first positioning point is stretched to a position corresponding to each corresponding second positioning point, that is, the position of each first positioning point in the makeup mask map after stretching is the same as the position of the corresponding second positioning point.
By means of the mode, the target makeup area in the makeup mask image can be aligned with the area actually made up by the user in the initial frame image, so that the first target area image corresponding to the target makeup area can be accurately extracted from the initial frame image through the makeup mask image, and accuracy of makeup progress detection is improved.
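One possible realization of the stretching described above is to fit an affine transformation from the first positioning points to the second positioning points; the affine fit is an assumption of this sketch, since the embodiment only requires that each first positioning point be moved onto its corresponding second positioning point.

    import cv2
    import numpy as np

    def align_mask_to_frame(mask, first_points, second_points):
        """Stretch the makeup mask image so that every first positioning point
        lands on its corresponding second positioning point.

        first_points / second_points: (K, 2) point arrays with K >= 3 (e.g. 3 or 4).
        """
        src = np.asarray(first_points, dtype=np.float32)
        dst = np.asarray(second_points, dtype=np.float32)
        matrix, _ = cv2.estimateAffine2D(src, dst)   # least-squares affine fit
        h, w = mask.shape[:2]
        return cv2.warpAffine(mask, matrix, (w, h))  # aligned makeup mask image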
After aligning the makeup mask image corresponding to the target makeup area with the initial frame image, a first mask image corresponding to an intersection area between the makeup mask image and the face area image of the initial frame image is obtained through the operation of step S41, and then a first target area image corresponding to the target makeup area is deducted through the method of step S42.
S42: and operating the face area image corresponding to the first mask image and the initial frame image to obtain a first target area image corresponding to the target makeup area.
Because the first mask image is a binary image, performing an AND operation on the first mask image and the face region image corresponding to the initial frame image cuts the image of the target makeup region out of the face region image corresponding to the initial frame image, thereby obtaining the first target region image corresponding to the target makeup region.
The first target area image and the second target area image corresponding to each target makeup area may be obtained through the operations of steps S40 to S42 described above.
In other embodiments of the present application, it is considered that the edge of the target makeup area may not have a clear outline in an actual makeup scene; for example, in an eye shadow makeup scene the color becomes lighter towards the edge, so that the eye shadow looks natural rather than abrupt. Therefore, after the first target area image and the second target area image are obtained through the above embodiment, boundary erosion processing is further performed on the target makeup areas in the first target area image and the second target area image respectively, so that the boundary of the target makeup area is blurred, the target makeup areas in the first target area image and the second target area image are closer to the real makeup range, and the accuracy of makeup progress detection is further improved.
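A sketch of the boundary erosion processing; the kernel size and number of iterations are assumed values, since the text does not fix them.

    import cv2
    import numpy as np

    def erode_region_boundary(region_img, kernel_size=5, iterations=1):
        """Erode the boundary of the target makeup area so it better matches
        the soft, faded edge of a real eye shadow application."""
        kernel = np.ones((kernel_size, kernel_size), np.uint8)
        return cv2.erode(region_img, kernel, iterations=iterations)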
The color spaces of the first target area image and the second target area image corresponding to each target makeup area are RGB color spaces. In the embodiment of the application, the influence of eye shadow makeup on each channel component of the color space is determined in advance through a large number of tests, and it is found that the difference in influence on each color channel of the RGB color space is small. The HLS color space, in contrast, is composed of three components, hue (Hue), lightness (brightness) and saturation (Saturation), and it has been found through experiments that eye shadow makeup causes a significant change in the lightness component of the HLS color space.
Therefore, after the first target area image and the second target area image corresponding to each target makeup area are obtained by any one of the above methods, the first target area image and the second target area image corresponding to each target makeup area are converted from the RGB color space to the HLS color space. And separating preset single-channel components from the HLS color space in each converted first target area image and each converted second target area image to obtain a first target area image and a second target area image which only contain the preset single-channel components in the HLS color space corresponding to each target makeup area. The preset single-channel component may be a luminance component.
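A sketch of the color-space conversion and channel separation; OpenCV stores HLS channels in the order H, L, S, so index 1 is the lightness component used as the preset single-channel component.

    import cv2

    def lightness_channel(region_img_bgr):
        """Convert an extracted target area image from the RGB family of color
        spaces to HLS and keep only the lightness (L) component."""
        hls = cv2.cvtColor(region_img_bgr, cv2.COLOR_BGR2HLS)
        return cv2.split(hls)[1]   # channel order in OpenCV is H, L, S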
And then determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each converted target makeup area.
Specifically, for the converted first target area image and second target area image corresponding to the same target makeup area, the absolute value of the difference of the preset single-channel component is calculated for each pair of pixel points at the same position. For example, assuming that the target makeup area is the lying silkworm area, the absolute value of the difference between the luminance components of the pixel points with the same coordinates in the first target area image corresponding to the lying silkworm area in the initial frame image and the second target area image corresponding to the lying silkworm area in the current frame image is calculated.
And counting the number of pixel points of which the absolute value of the difference value corresponding to each target makeup area meets the preset makeup finishing condition corresponding to eye shadow makeup. The preset makeup finishing condition is that the absolute value of the difference value corresponding to the pixel point is greater than a first preset threshold value corresponding to the eye shadow makeup, and the first preset threshold value can be 11 or 12.
And counting the total number of all pixel points in the first target area image or the second target area image corresponding to each target makeup area. And then, for each target makeup area, respectively calculating the ratio of the number of the pixel points of which the statistical difference absolute value meets the preset makeup finishing condition to the total number of the pixel points in the target makeup area, and respectively obtaining the makeup progress corresponding to each target makeup area. And then, calculating the current makeup progress corresponding to the current frame image according to the makeup progress corresponding to each target makeup area and the preset weight corresponding to each target makeup area.
The sum of the preset weights corresponding to the target makeup areas is 1, and the preset weights corresponding to the individual target makeup areas can be the same or different. For example, if there are 4 target makeup areas, namely the upper eyelid, the middle part of the upper eyelid, the lying silkworm part and the eye head part, the preset weights corresponding to the 4 target makeup areas may all be 0.25. The embodiment of the application does not limit the value of the preset weight of each target makeup area, and the values can be set according to requirements in practical applications.
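A sketch of the per-area and overall progress computation described above; the first preset threshold of 11 is one of the example values given, and the weights are assumed to sum to 1.

    import cv2
    import numpy as np

    FIRST_THRESHOLD = 11  # first preset threshold for eye shadow; 11 or 12 in the text

    def region_progress(l_initial, l_current, region_mask):
        """Progress of one target makeup area: share of its pixels whose lightness
        changed by more than the first preset threshold."""
        diff = cv2.absdiff(l_initial, l_current)          # per-pixel absolute difference
        inside = region_mask > 0                          # pixels of the target makeup area
        done = np.count_nonzero(diff[inside] > FIRST_THRESHOLD)
        total = np.count_nonzero(inside)
        return done / total if total else 0.0

    def overall_progress(region_progresses, weights):
        """Weighted sum over all target makeup areas; the weights sum to 1
        (e.g. 0.25 each for the four eye shadow areas)."""
        return sum(p * w for p, w in zip(region_progresses, weights))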
In other embodiments of the present application, in order to further improve the accuracy of the cosmetic progress detection, the target cosmetic areas in the first target area image and the second target area image corresponding to the same target cosmetic area are further aligned.
Specifically, binarization processing is performed on a first target area image and a second target area image which correspond to the same target makeup area and only contain the preset single-channel component, that is, the values of the preset single-channel component corresponding to the pixel points in the target makeup area in the first target area image and the second target area image are both modified to 1, and the values of the preset single-channel components of the pixel points at other positions are both modified to 0. And obtaining a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image through binarization processing.
And performing an AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image corresponding to the target makeup area. The AND operation is performed on the pixel points at the same positions in the first binarization mask image and the second binarization mask image respectively to obtain the second mask image of the intersection area. The area in which the preset single-channel component of the pixel points in the second mask image is non-zero is the target makeup area where the first target area image and the second target area image overlap.
And obtaining a face region image corresponding to the initial frame image and a face region image corresponding to the current frame image through the operation of the previous step. Performing AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the target makeup region; and performing AND operation on the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the target makeup region.
In other embodiments of the present application, an and operation may also be performed by using the second mask image and the first target area image corresponding to the target makeup area after the boundary erosion, so as to obtain a new first target area image corresponding to the target makeup area. And performing and operation on the second mask image and the second target area image corresponding to the target makeup area after the boundary corrosion to obtain a new second target area image corresponding to the target makeup area.
Because the second mask image contains the overlapped area of the first target area image and the second target area image corresponding to the target makeup area, the new first target area image and the new second target area image are obtained through the second mask image according to the method, so that the positions of the target makeup areas in the new first target area image and the new second target area image are completely consistent, the makeup progress is determined according to the change of the completely aligned target makeup areas on the preset single-channel component, the comparison area is ensured to be completely consistent, and the accuracy of the makeup progress detection is greatly improved.
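A sketch of the realignment through the second mask image; thresholding the lightness images at zero is an assumption used here to recover the binarization masks.

    import cv2

    def realign_target_regions(face_initial, face_current, l_initial, l_current):
        """Keep only the pixels where the two target areas overlap, so the initial
        and current frames are compared over exactly the same region."""
        _, bin_a = cv2.threshold(l_initial, 0, 255, cv2.THRESH_BINARY)
        _, bin_b = cv2.threshold(l_current, 0, 255, cv2.THRESH_BINARY)
        second_mask = cv2.bitwise_and(bin_a, bin_b)       # intersection of both areas
        new_first = cv2.bitwise_and(face_initial, face_initial, mask=second_mask)
        new_second = cv2.bitwise_and(face_current, face_current, mask=second_mask)
        return new_first, new_second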
After the target makeup areas in the initial frame image and the current frame image are aligned in any of the above manners to obtain a new first target area image and a new second target area image, the makeup progress corresponding to the target makeup area is determined again through the operation of step 503. And for each other target makeup area, aligning each other target makeup area in the initial frame image and each other target makeup area in the current frame image according to the mode, and respectively obtaining the makeup progress corresponding to each other target makeup area. And then calculating the current makeup progress corresponding to the current frame image according to the obtained makeup progress of each target makeup area.
After the current makeup progress is determined in any mode, the server sends the current makeup progress to the terminal of the user. And after receiving the current makeup progress, the terminal of the user displays the current makeup progress. The current makeup progress may be a ratio or a percentage. The terminal may display the current makeup progress in the form of a progress bar.
In the process of making up for a user, the making-up progress detection method provided by the embodiment of the application detects the making-up progress of each frame of image behind the first frame of image relative to the first frame of image in real time, and displays the detected making-up progress to the user, so that the user can visually see the own making-up progress, and the making-up efficiency is improved.
In order to facilitate understanding of the methods provided by the embodiments of the present application, reference is made to the following description taken in conjunction with the accompanying drawings. As shown in fig. 13, the face in the initial frame image and the face in the current frame image are respectively corrected and clipped according to the initial frame image and the corresponding first face key point, and the current frame image and the corresponding second face key point. And splitting the eye shadow mask image into a makeup mask image corresponding to each target makeup area. And then aligning the makeup mask image corresponding to each target makeup area with the face area images corresponding to the initial frame image and the current frame image respectively. And deducting a first target area image corresponding to each target makeup area from the initial frame image by taking the makeup mask image corresponding to each target makeup area as a reference, and deducting a second target area image corresponding to each target makeup area from the current frame image by taking the makeup mask image corresponding to each target makeup area as a reference. And then converting the first target area image and the second target area image corresponding to each target makeup area into an image only containing a preset single-channel component in the HLS color space. And calculating the absolute value of the difference value of the preset single-channel components corresponding to the pixel points with the same position in the first target area image and the second target area image which correspond to the same target makeup area after conversion. And counting the number of pixel points of which the absolute value of the difference value corresponding to each target makeup area meets the preset makeup completion condition. And respectively calculating the ratio of the number of the pixel points corresponding to each target makeup area to the total number of the pixel points corresponding to the target makeup area to obtain the makeup progress corresponding to each target makeup area. And calculating the current makeup progress corresponding to the current frame image according to the makeup progress corresponding to each target makeup area and the preset weight corresponding to each target makeup area.
In the embodiment of the application, the face key points are utilized to correct and cut the face area of the user in the video frame, so that the accuracy of face area recognition is improved. According to the characteristic that the eye shadow makeup areas are overlapped, the eye shadow mask image is split into the makeup mask images corresponding to the target makeup areas, and the makeup progress detection is respectively carried out on the target makeup areas, so that the accuracy of the makeup progress detection is improved. And aligning the target makeup area in each makeup mask image with the target makeup area in the face area image in the video frame respectively to ensure that each makeup mask image is consistent with the positions of the corresponding target makeup areas in the initial frame image and the current frame image. And respectively deducting a first target area image and a second target area image corresponding to each target makeup area from the initial frame image and the current frame image through the aligned makeup mask images. Furthermore, the first target area image and the second target area image corresponding to the same target makeup area are aligned again, and errors caused by position information during comparison are reduced. And moreover, a deep learning mode is not adopted, a large amount of data does not need to be collected in advance, and the detection result is returned to the user through the capture of the real-time picture of the makeup of the user and the calculation of the server side. Compared with a deep learning model reasoning scheme, the method and the system consume less calculation cost in an algorithm processing link, and reduce the processing pressure of the server.
The embodiment of the application further provides a makeup progress detection device, and the device is used for executing the makeup progress detection method to detect the progress of eye shadow makeup in real time. Referring to fig. 14, the apparatus specifically includes:
an obtaining module 1401, configured to obtain an eye shadow mask image and an initial frame image and a current frame image in a real-time makeup video in which a user currently performs a specific makeup;
a splitting module 1402, configured to split a makeup mask map corresponding to each target makeup area from the eye shadow mask map according to each target makeup area of eye shadow makeup;
a makeup progress determining module 1403, configured to determine a current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image, and the makeup mask map corresponding to each target makeup area.
A makeup progress determining module 1403, configured to obtain, from the initial frame image, a first target area image corresponding to each target makeup area by using the makeup mask image corresponding to each target makeup area as a reference; respectively taking the makeup mask image corresponding to each target makeup area as a reference, and acquiring a second target area image corresponding to each target makeup area from the current frame image; and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each target makeup area.
A makeup progress determining module 1403, configured to convert the first target area image and the second target area image corresponding to each target makeup area into images including a preset single channel component in the HLS color space, respectively; and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each converted target makeup area.
A makeup progress determining module 1403, configured to calculate difference absolute values of preset single-channel components corresponding to pixel points with the same position in the first target area image and the second target area image corresponding to the same target makeup area after conversion, respectively; counting the number of pixel points of which the absolute value of the difference value corresponding to each target makeup area meets a preset makeup completion condition; respectively calculating the ratio of the number of pixel points corresponding to each target makeup area to the total number of pixel points corresponding to the target makeup area to obtain the makeup progress corresponding to each target makeup area; and calculating the current makeup progress corresponding to the current frame image according to the makeup progress corresponding to each target makeup area and the preset weight corresponding to each target makeup area.
A makeup progress determining module 1403, configured to detect a first face key point corresponding to the initial frame image; acquiring a face region image corresponding to the initial frame image according to the first face key point; and respectively taking the makeup mask image corresponding to each target makeup area as a reference, and respectively acquiring a first target area image corresponding to each target makeup area from the face area image.
A makeup progress determining module 1403, configured to convert the makeup mask image and the face area image corresponding to the first target makeup area into binary images, respectively; the first target makeup area is any one of the target makeup areas; performing AND operation on the binary image corresponding to the cosmetic mask image and the binary image corresponding to the face region image to obtain a first mask image corresponding to an intersection region of the cosmetic mask image and the face region image; and computing the first mask image and the face area image corresponding to the initial frame image to obtain a first target area image.
The makeup progress determining module 1403 is further configured to determine, according to the standard face key points corresponding to the makeup mask map, one or more first positioning points located on the contour of the target makeup area included in the makeup mask map; determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points; and stretching the makeup mask image, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
The makeup progress determining module 1403 is further configured to determine one or more first positioning points, located on the outline of each makeup area, in the eye shadow mask map according to the standard face key points corresponding to the eye shadow mask map; determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points; and stretching the eye shadow mask image, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
A makeup progress determining module 1403, configured to perform rotation correction on the initial frame image and the first face key point according to the first face key point corresponding to the initial frame image; according to the corrected first face key point, an image containing a face area is intercepted from the corrected initial frame image; and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
A makeup progress determining module 1403, configured to determine a left-eye central coordinate and a right-eye central coordinate according to the left-eye key point and the right-eye key point included in the first face key point; determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate; and performing rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
A makeup progress determining module 1403, configured to perform image capture on the face region included in the corrected initial frame image according to the corrected first face key point.
A makeup progress determination module 1403, configured to determine a minimum abscissa value, a minimum ordinate value, a maximum abscissa value, and a maximum ordinate value from the corrected first face key point; determining an intercepting frame corresponding to a face area in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value; and intercepting an image containing a face region from the corrected initial frame image according to the interception frame.
The makeup progress determination module 1403 is further configured to amplify the capture frame by a preset multiple; and intercepting an image containing a face region from the corrected initial frame image according to the amplified intercepting frame.
The makeup progress determining module 1403 is further configured to perform scaling and translation processing on the corrected first face key points according to the size of the image including the face area and a preset size.
The device also includes: the face detection module is used for detecting whether the initial frame image and the current frame image only contain the face image of the same user; if yes, executing the operation of determining the current makeup progress of the specific makeup by the user; if not, sending prompt information to a terminal of the user, wherein the prompt information is used for prompting the user to keep that only the face of the same user appears in the real-time makeup video.
The makeup progress detection device provided by the above embodiment of the application and the makeup progress detection method provided by the embodiment of the application have the same beneficial effects as the method adopted, operated or realized by the stored application program.
EXAMPLE five
1. A makeup progress detection method comprising:
acquiring an initial frame image and a current frame image in a real-time makeup video of a user for making up a specific makeup currently;
Acquiring a first target area image corresponding to eyebrows from the initial frame image, and acquiring a second target area image corresponding to the eyebrows from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
2. According to 1, the acquiring a first target area image corresponding to an eyebrow from the initial frame image includes:
detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key point;
and acquiring a first target area image corresponding to eyebrows from the face area image according to the eyebrow key points included in the first face key points.
3. According to 2, the step of intercepting a first target area image corresponding to eyebrows from the face area image according to eyebrow key points included in the first face key points comprises the steps of:
interpolating eyebrow key points between the eyebrows and the eyebrow peaks included in the first face key points to obtain a plurality of interpolation points;
intercepting all eyebrow key points between the eyebrows and the eyebrow peaks and a closed area formed by connecting the interpolation points from the face area image to obtain partial eyebrow images between the eyebrows and the eyebrow peaks;
Intercepting a closed region formed by connecting all eyebrow key points between the eyebrow peak and the eyebrow tail from the face region image to obtain a partial eyebrow image between the eyebrow peak and the eyebrow tail;
and splicing the partial eyebrow images between the eyebrow head and the eyebrow peak and the partial eyebrow images between the eyebrow peak and the eyebrow tail into a first target area image corresponding to the eyebrows.
4. According to 1, the determining a current makeup progress corresponding to the current frame image according to the first target area image and the second target area image includes:
respectively converting the first target area image and the second target area image into images containing preset single-channel components in HSV color space;
and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
5. According to 4, the determining a current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image includes:
calculating difference absolute values of the preset single-channel components corresponding to pixel points with the same position in the converted first target area image and the converted second target area image respectively;
Counting the number of pixel points of which the corresponding absolute values of the differences meet preset makeup completion conditions;
and calculating the ratio of the counted pixel point number to the total number of pixel points in all the target makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
6. According to 1, before determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image, the method further includes:
respectively carrying out binarization processing on the first target area image and the second target area image to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image;
performing and operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image;
acquiring a face region image corresponding to the initial frame image and a face region image corresponding to the current frame image;
performing and operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image;
And calculating the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the current frame image.
7. According to 1, before determining the current makeup progress corresponding to the current frame image, the method further includes:
and respectively carrying out boundary corrosion treatment on the makeup areas in the first target area image and the second target area image.
8. According to 2, the obtaining of the face region image corresponding to the initial frame image according to the first face key point includes:
according to a first face key point corresponding to the initial frame image, performing rotation correction on the initial frame image and the first face key point;
according to the corrected first face key point, intercepting an image containing a face region from the corrected initial frame image;
and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
9. According to 8, the performing rotation correction on the initial frame image and the first face key point according to the first face key point includes:
respectively determining a left eye center coordinate and a right eye center coordinate according to a left eye key point and a right eye key point which are included in the first face key point;
Determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate;
and performing rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
10. According to 8, the capturing an image including a face region from the initial frame image after rectification according to the rectified first face keypoint includes:
and according to the corrected key points of the first face, carrying out image interception on a face area contained in the corrected initial frame image.
11. According to 10, the image capturing, according to the corrected first face key point, a face region included in the corrected initial frame image includes:
determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key points;
determining an intercepting frame corresponding to the face area in the initial frame image after correction according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value;
And intercepting an image containing the face area from the corrected initial frame image according to the interception frame.
12. According to 11, the method further comprises:
amplifying the intercepting frame by preset times;
and intercepting an image containing the face region from the corrected initial frame image according to the amplified interception frame.
13. According to 8, the method further comprises:
and carrying out scaling translation processing on the corrected key points of the first face according to the size of the image containing the face area and the preset size.
14. According to any one of 1-13, the method further comprises:
detecting whether the initial frame image and the current frame image only contain face images of the same user;
if yes, executing the operation of determining the current makeup progress of the specific makeup for the user;
if not, sending prompt information to the terminal of the user, wherein the prompt information is used for prompting the user to keep that only the face of the same user appears in the real-time makeup video.
15. A makeup progress detection device comprising:
the video acquisition module is used for acquiring an initial frame image and a current frame image in a real-time makeup video for a user to make up a specific makeup at present;
A target area acquisition module, configured to acquire a first target area image corresponding to an eyebrow from the initial frame image, and acquire a second target area image corresponding to the eyebrow from the current frame image;
and the progress determining module is used for determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
16. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the computer program to implement the method of any one of 1-14.
17. A computer-readable storage medium, on which a computer program is stored, which program is executed by a processor to implement the method of any one of 1-14.
The embodiment of the application provides a makeup progress detection method, which is used for detecting the makeup progress corresponding to eyebrow makeup. Referring to fig. 15, this embodiment specifically includes the following steps:
step 601: the method comprises the steps of obtaining an initial frame image and a current frame image in a real-time makeup video of a user currently carrying out specific makeup.
The execution subject of the embodiment of the application is the server. And a client matched with the makeup progress detection service provided by the server is installed on a terminal of a user, such as a mobile phone or a computer. When a user needs to use the makeup progress detection service, the user opens the client on the terminal, a video uploading interface is arranged in a display interface of the client, when the user clicks the video uploading interface, a camera device of the terminal is called to shoot a makeup video of the user, and the user carries out eyebrow makeup operation on the face of the user in the shooting process. And the terminal of the user transmits the shot makeup video to the server in a video stream mode. The server receives each frame image of the makeup video transmitted by the user's terminal.
In the embodiment of the application, the server takes the received first frame image as an initial frame image, and compares the current makeup progress of the specific makeup corresponding to each frame image received subsequently with the initial frame image as a reference. Since the processing mode of each subsequent frame image is the same, the embodiment of the present application explains the process of cosmetic progress detection by taking the current frame image received at the current time as an example.
In other embodiments of the present application, after obtaining an initial frame image and a current frame image of a makeup video of a user, a server further detects whether both the initial frame image and the current frame image only contain a face image of the same user. Firstly, whether an initial frame image and a current frame image only contain one face image is detected, and if the initial frame image and/or the current frame image contain a plurality of face images or the initial frame image and/or the current frame image do not contain the face images, prompt information is sent to a terminal of a user. And the terminal of the user receives and displays the prompt information to prompt the user to keep that only the face of the same user appears in the makeup video. For example, the hint information may be "please keep only the face of the same person appearing within the shot".
If it is detected that both the initial frame image and the current frame image only contain one face image, whether the face image in the initial frame image and the face image in the current frame image belong to the same user is further judged. Specifically, the face feature information corresponding to the face image in the initial frame image and the face feature information corresponding to the face image in the current frame image can be extracted through a face recognition technology, the similarity of the face feature information extracted from the two frame images is calculated, and if the calculated similarity is larger than or equal to a set value, it is determined that the faces in the initial frame image and the current frame image belong to the same user. If the calculated similarity is smaller than the set value, determining that the faces in the initial frame image and the current frame image belong to different users, and sending prompt information to the terminal of the user. And the terminal of the user receives and displays the prompt information to prompt the user to keep that only the face of the same user appears in the makeup video.
After the server obtains the initial frame image and the current frame image of the user's makeup through this step, the server determines the current makeup progress of the user through the following operations of steps 602 and 603.
Step 602: and acquiring a first target area image corresponding to the eyebrow from the initial frame image, and acquiring a second target area image corresponding to the eyebrow from the current frame image.
The process of acquiring the first target area image is the same as the process of acquiring the second target area image. The embodiment of the present application will be described in detail by taking an example of an acquisition process of the first target area image. The server obtains the first target area image from the initial frame image specifically by the operations of steps S5 to S7 below.
S5: and detecting a first face key point corresponding to the initial frame image.
The server is configured with a pre-trained detection model for detecting the face key points, and the detection model provides interface services for detecting the face key points. After the server acquires the initial frame image of the user makeup video, the server calls an interface service for detecting the face key points, and all face key points of the user face in the initial frame image are identified through a detection model. In order to distinguish from the face key points corresponding to the current frame image, all the face key points corresponding to the initial frame image are referred to as first face key points in the embodiment of the application. All face key points corresponding to the current frame image are called second face key points.
The identified key points of the human face comprise key points on the face contour of the user and key points of the mouth, the nose, the eyes, the eyebrows and other parts. The number of face key points identified may be 106.
S6: and acquiring a face region image corresponding to the initial frame image according to the first face key point.
The server specifically obtains a face region image corresponding to the initial frame image through the following operations in steps S60 to S62, including:
s60: and performing rotation correction on the initial frame image and the first face key point according to the first face key point.
When a user shoots a makeup video through a terminal, the pose angle of the face cannot be guaranteed to be the same in every frame. To improve the accuracy of the comparison between the current frame image and the initial frame image, the face in each frame image therefore needs to be rotationally corrected so that, after correction, the line connecting the two eyes lies on the same horizontal line in every frame. This keeps the face pose angle consistent across frames and avoids large makeup progress detection errors caused by differing pose angles.
Specifically, the left-eye central coordinate and the right-eye central coordinate are respectively determined according to the left-eye key point and the right-eye key point included in the first face key point. And determining all the left eye key points of the left eye region and all the right eye key points of the right eye region from the first face key points. And averaging the determined abscissa of all the left-eye key points, averaging the ordinate of all the left-eye key points, forming a coordinate by the average of the abscissa and the average of the ordinate corresponding to the left eye, and determining the coordinate as the center coordinate of the left eye. The right eye center coordinates are determined in the same manner.
And then, according to the left eye center coordinate and the right eye center coordinate, determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image. As shown in fig. 4, a horizontal difference dx and a vertical difference dy between the left-eye center coordinate and the right-eye center coordinate are calculated, and a length d of a link between the left-eye center coordinate and the right-eye center coordinate is calculated. And calculating an included angle theta between the two-eye connecting line and the horizontal direction according to the length d of the two-eye connecting line, the horizontal difference value dx and the vertical difference value dy, wherein the included angle theta is the rotating angle corresponding to the initial frame image. And then calculating the coordinate of the central point of the connecting line of the two eyes according to the central coordinate of the left eye and the central coordinate of the right eye, wherein the coordinate of the central point is the coordinate of the rotating central point corresponding to the initial frame image.
And performing rotation correction on the initial frame image and the first face key points according to the calculated rotation angle and rotation center point coordinate. Specifically, the rotation angle and the rotation center point coordinate are input into a preset function for calculating the rotation matrix of a picture, where the preset function may be cv2.getRotationMatrix2D() in OpenCV. The rotation matrix corresponding to the initial frame image is obtained by calling this preset function. The product of the initial frame image and the rotation matrix is then calculated to obtain the corrected initial frame image; this correction of the initial frame image with the rotation matrix can be completed by calling cv2.warpAffine() in OpenCV.
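As a rough illustration of the image rotation step, the following Python/OpenCV sketch computes the eye centers, the rotation angle and center point, and applies cv2.getRotationMatrix2D() and cv2.warpAffine(). The eye key point indices and the use of arctan2 (equivalent to deriving θ from d, dx and dy) are assumptions.

```python
import cv2
import numpy as np

def rotate_face_upright(image, landmarks, left_eye_idx, right_eye_idx):
    """Rotate the frame so the line between the eye centers is horizontal.
    landmarks: (N, 2) array of face key points; *_eye_idx: index lists of the
    eye key points (the exact indices depend on the 106-point layout used)."""
    left_center = landmarks[left_eye_idx].mean(axis=0)    # average of left-eye key points
    right_center = landmarks[right_eye_idx].mean(axis=0)  # average of right-eye key points
    dx, dy = right_center - left_center
    angle = np.degrees(np.arctan2(dy, dx))                # angle between the eye line and horizontal
    center = (float((left_center[0] + right_center[0]) / 2.0),
              float((left_center[1] + right_center[1]) / 2.0))  # rotation center point
    rot_mat = cv2.getRotationMatrix2D(center, angle, 1.0)
    corrected = cv2.warpAffine(image, rot_mat, (image.shape[1], image.shape[0]))
    return corrected, rot_mat, angle, center
```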
For the first face key points, each first face key point needs to be corrected one by one to correspond to the corrected initial frame image. When the first face key points are corrected one by one, two times of coordinate system conversion are required, the coordinate system with the upper left corner of the initial frame image as the origin is converted into the coordinate system with the lower left corner as the origin for the first time, and the coordinate system with the lower left corner as the origin is further converted into the coordinate system with the rotation center point coordinate as the origin for the second time, as shown in fig. 5. After two times of coordinate system conversion, the following formula (1) conversion is carried out on each first face key point, and the rotation correction of the first face key points can be completed.
x = x0·cos θ + y0·sin θ
y = −x0·sin θ + y0·cos θ        (1)

In formula (1), x0 and y0 are respectively the abscissa and ordinate of a first face key point before rotation correction, x and y are respectively the abscissa and ordinate of the first face key point after rotation correction, and θ is the rotation angle.
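A sketch of applying formula (1) to the key points is given below. The two coordinate-system conversions are folded into the coordinate arithmetic, and the sign of the rotation depends on how the angle is defined, so this is illustrative rather than the exact transform of the embodiment.

```python
import numpy as np

def rotate_keypoints(landmarks, angle_deg, center, image_height):
    """Rotate the key points to match the corrected image (sketch of formula (1)).
    Coordinates are flipped to a bottom-left origin, shifted so the rotation
    center is the origin, rotated, then mapped back to image coordinates."""
    theta = np.radians(angle_deg)
    cx, cy = center
    pts = landmarks.astype(np.float64).copy()
    # image coordinates (origin top-left, y down) -> origin at rotation center, y up
    x0 = pts[:, 0] - cx
    y0 = cy - pts[:, 1]
    # rotation by theta as in formula (1)
    x = x0 * np.cos(theta) + y0 * np.sin(theta)
    y = -x0 * np.sin(theta) + y0 * np.cos(theta)
    # map back to image coordinates
    pts[:, 0] = x + cx
    pts[:, 1] = cy - y
    return pts
```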
The corrected initial frame image and the first face key point are based on the entire image, and the entire image includes not only the face information of the user but also other redundant image information, so that the face region of the corrected image needs to be clipped in the following step S61.
S61: and according to the corrected first face key point, intercepting an image containing a face area from the corrected initial frame image.
Firstly, a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value are determined from the corrected first face key points. An intercepting frame corresponding to the face area is then determined in the corrected initial frame image according to these four values. Specifically, the minimum abscissa value and the minimum ordinate value are combined into a coordinate point, which is used as the top left corner vertex of the intercepting frame corresponding to the face area; the maximum abscissa value and the maximum ordinate value are combined into another coordinate point, which is used as the bottom right corner vertex of the intercepting frame. The position of the intercepting frame in the corrected initial frame image is determined from the top left corner vertex and the bottom right corner vertex, and the image inside the intercepting frame is intercepted from the corrected initial frame image, that is, the image containing the face area is intercepted.
In other embodiments of the present application, in order to ensure that all face areas of the user are intercepted, and avoid the occurrence of a situation where the subsequent makeup progress detection error is large due to incomplete interception, the intercepting frame may be further enlarged by a preset multiple, where the preset multiple may be 1.15 or 1.25, and the like. The embodiment of the application does not limit the specific value of the preset multiple, and the preset multiple can be set according to requirements in practical application. And after amplifying the interception frame to the periphery by a preset multiple, intercepting the image in the amplified interception frame from the corrected initial frame image, thereby intercepting the image containing the complete face area of the user.
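A possible implementation of the bounding-box interception with the preset enlargement multiple might look like the sketch below. The 1.25 multiple is one of the example values, and enlarging the box around its center is an assumption about how "amplifying the interception frame to the periphery" is realized.

```python
import numpy as np

def crop_face_region(corrected_image, corrected_landmarks, enlarge=1.25):
    """Cut out the face region using the bounding box of the corrected key points,
    enlarged by a preset multiple; returns the crop and its top-left origin."""
    xs, ys = corrected_landmarks[:, 0], corrected_landmarks[:, 1]
    x_min, y_min, x_max, y_max = xs.min(), ys.min(), xs.max(), ys.max()
    # enlarge the interception frame around its center by the preset multiple
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half_w = (x_max - x_min) / 2.0 * enlarge
    half_h = (y_max - y_min) / 2.0 * enlarge
    h, w = corrected_image.shape[:2]
    x1, y1 = max(int(cx - half_w), 0), max(int(cy - half_h), 0)
    x2, y2 = min(int(cx + half_w), w), min(int(cy + half_h), h)
    return corrected_image[y1:y2, x1:x2], (x1, y1)
```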
S62: and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
After the image containing the face area of the user is intercepted from the initial frame image in the mode, the image containing the face area is zoomed to the preset size, and the face area image corresponding to the initial frame image is obtained. The predetermined size may be 390 × 390, 400 × 400, or the like. The embodiment of the application does not limit the specific value of the preset dimension, and the specific value can be set according to requirements in practical application.
In order to adapt the first face key point to the zoomed face region image, after the captured image containing the face region is zoomed to a preset size, the corrected first face key point is zoomed and translated according to the size of the image containing the face region before zooming and the preset size. Specifically, according to the size of an image including a face region before zooming and a preset size to which the image needs zooming, the translation direction and the translation distance of each first face key point are determined, then, according to the translation direction and the translation distance corresponding to each first face key point, translation operation is respectively carried out on each first face key point, and the coordinates of each translated first face key point are recorded.
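The scaling and translation of the key points can be sketched as follows, assuming the crop origin from the previous step is known; the preset size 390 × 390 is one of the example values.

```python
import cv2
import numpy as np

PRESET_SIZE = 390  # one of the example preset sizes (390 x 390)

def resize_face_and_keypoints(face_crop, keypoints, crop_origin):
    """Zoom the cropped face image to the preset size and move the corrected key
    points into the same coordinate frame: translate by the crop origin, then scale."""
    h, w = face_crop.shape[:2]
    resized = cv2.resize(face_crop, (PRESET_SIZE, PRESET_SIZE))
    pts = keypoints.astype(np.float64).copy()
    pts[:, 0] = (pts[:, 0] - crop_origin[0]) * PRESET_SIZE / w   # translate then scale x
    pts[:, 1] = (pts[:, 1] - crop_origin[1]) * PRESET_SIZE / h   # translate then scale y
    return resized, pts
```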
The face region image is obtained from the initial frame image in the above manner, the first face key point is adapted to the obtained face region image through operations such as rotation correction and translation scaling, and then the first target region image corresponding to the eyebrow is extracted from the face region image in the following manner of step S7.
In other embodiments of the present application, before performing step S7, a gaussian filtering process may be performed on the face region image to remove noise in the face region image. Specifically, according to a gaussian kernel with a preset size, gaussian filtering processing is performed on a face region image corresponding to an initial frame image.
The Gaussian kernel is a key parameter of the Gaussian filtering processing: if the kernel is too small, a good filtering effect cannot be achieved; if it is too large, noise in the image is filtered out but useful information is also smoothed away. In the embodiment of the present application, a Gaussian kernel with a preset size is selected, where the preset size may be 9 × 9. In addition, the sigmaX and sigmaY parameters of the Gaussian filter function are both set to 0. After Gaussian filtering, the image information is smoother, which improves the accuracy of the subsequently obtained makeup progress.
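With OpenCV, this filtering step can be written in one line; face_region_image is assumed to be the face region image obtained in the previous steps.

```python
import cv2

# 9x9 is the preset kernel size mentioned above; sigmaX and sigmaY are set to 0,
# so OpenCV derives the sigma values from the kernel size
denoised_face = cv2.GaussianBlur(face_region_image, (9, 9), sigmaX=0, sigmaY=0)
```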
The face region image is obtained in the above manner, or after the face region image is subjected to gaussian filtering processing, the first target region image corresponding to the eyebrow is extracted from the face region image corresponding to the initial frame image in step S7.
S7: and extracting a first target area image corresponding to eyebrows from the face area image corresponding to the initial frame image according to the eyebrow key points included in the first face key points.
When progress detection is carried out for eyebrow makeup, the image of the area where the eyebrows are located needs to be extracted, so that other areas do not influence the eyebrow makeup progress detection. After the eyebrow area has been extracted, only the eyebrow area is operated on, which reduces the amount of computation and at the same time improves accuracy.
The obtained first face key points include a plurality of eyebrow key points, for example 18 eyebrow key points, distributed along the eyebrow outline at different positions from the eyebrow head to the eyebrow tail. In order to improve the accuracy of extracting the first target region image corresponding to the eyebrow, the embodiment of the application obtains more points located on the eyebrow outline by means of linear interpolation, so that the image is extracted according to more points. Since the eyebrow tail is pointed, linear interpolation is not convenient to carry out there. Therefore, the process of extracting the first target area image corresponding to the eyebrow is divided into two parts: one part is the section from the eyebrow head to the eyebrow peak, where more points are obtained by linear interpolation before extracting the image; the other part is the section from the eyebrow peak to the eyebrow tail, where the currently obtained eyebrow key points of that section are used directly to extract the image.
Specifically, linear interpolation is performed on eyebrow key points between the eyebrow and the eyebrow peak included in the first face key points to obtain a plurality of interpolation points. And sequentially connecting all eyebrow key points and the obtained plurality of interpolation points between the eyebrows in the face region image corresponding to the initial frame image along the eyebrow contour line to obtain a closed region, wherein the closed region encloses partial eyebrow regions from the eyebrows to the eyebrow peak section. And intercepting the image of the closed area from the face area image corresponding to the initial frame image to obtain a partial eyebrow image between the eyebrow and the eyebrow peak.
And sequentially connecting all eyebrow key points between the eyebrow peak and the eyebrow tail in the face region image corresponding to the initial frame image along the eyebrow contour line to obtain a closed region, wherein partial eyebrow regions from the eyebrow peak to the eyebrow tail are encircled by the closed region. And intercepting the image of the closed region from the face region image corresponding to the initial frame image to obtain a partial eyebrow image between the eyebrow peak and the eyebrow tail.
And splicing a part of eyebrow images between the eyebrow head and the eyebrow peak and a part of eyebrow images between the eyebrow peak and the eyebrow tail into a first target area image corresponding to the eyebrows.
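A hedged sketch of this two-part eyebrow extraction is given below. The interpolation density, the use of cv2.fillPoly to form the closed regions, and the bitwise OR used to splice the two partial eyebrow images are implementation assumptions.

```python
import cv2
import numpy as np

def extract_eyebrow_image(face_image, brow_pts_head_to_peak, brow_pts_peak_to_tail, n_interp=5):
    """Cut out the eyebrow region in two parts: the head-to-peak segment is densified by
    linear interpolation, the peak-to-tail segment uses the detected key points directly.
    Both point arrays are assumed ordered along the eyebrow contour, shape (K, 2)."""
    def densify(points, n):
        # insert n linearly interpolated points between each pair of neighbouring key points
        dense = []
        for p, q in zip(points[:-1], points[1:]):
            for t in np.linspace(0.0, 1.0, n, endpoint=False):
                dense.append(p + t * (q - p))
        dense.append(points[-1])
        return np.array(dense)

    def cut(points):
        # intercept the closed region enclosed by the connected points
        mask = np.zeros(face_image.shape[:2], dtype=np.uint8)
        cv2.fillPoly(mask, [np.round(points).astype(np.int32)], 255)
        return cv2.bitwise_and(face_image, face_image, mask=mask)

    part_head = cut(densify(brow_pts_head_to_peak, n_interp))  # eyebrow head to eyebrow peak
    part_tail = cut(brow_pts_peak_to_tail)                     # eyebrow peak to eyebrow tail
    return cv2.bitwise_or(part_head, part_tail)                # splice the two partial images
```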
For the current frame image, the second target area image corresponding to the eyebrow is obtained from the current frame image according to the operations of the above steps S5-S7.
In other embodiments of the present application, it is considered that in an actual makeup scene the edge of the eyebrow makeup may not have a clear outline; the boundary is usually blurred rather than abrupt. Therefore, after the first target area image and the second target area image are obtained through the above embodiment, boundary erosion processing is further performed on the eyebrow areas in the first target area image and the second target area image respectively, so that the boundary of the target makeup area for eyebrow makeup is blurred, further improving the accuracy of the makeup progress detection.
Step 603: and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
The first target area image corresponding to the eyebrow in the initial frame image and the second target area image corresponding to the eyebrow in the current frame image obtained in the above manner are both in the RGB color space. In the embodiment of the application, the influence of eyebrow makeup on each channel component of the color space is determined in advance through a large number of tests, and it is found that the differences in influence among the color channels of the RGB color space are small, so the RGB channels are not well suited for distinguishing the makeup change. The HSV color space is composed of three components, namely hue, saturation and value (brightness), and when one component changes, the values of the other two components do not change obviously. Experiments determine which of brightness, hue and saturation is most influenced by eyebrow makeup, and the channel component with the greatest influence is configured in the server as the preset single-channel component corresponding to this preset type of makeup. For eyebrow makeup, the corresponding preset single-channel component may be the brightness component.
After the first target area image corresponding to the eyebrow in the initial frame image and the second target area image corresponding to the eyebrow in the current frame image are obtained in the above manner, both the first target area image and the second target area image are converted into HSV color space by RGB color space. And separating the preset single-channel component from the HSV color space of the converted first target area image to obtain the first target area image only containing the preset single-channel component. And separating a preset single-channel component from the HSV color space of the converted second target area image to obtain a second target area image only containing the preset single-channel component.
And then determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
Specifically, the absolute values of the differences of the channel components corresponding to the pixel points with the same position in the converted first target area image and the converted second target area image are calculated respectively. For example, the absolute value of the difference in luminance components between pixel points having the same coordinates in the first target region image and the second target region image after conversion is calculated. And counting the number of pixel points of which the corresponding absolute value of the difference value meets the preset makeup finishing condition. The preset makeup completing condition is that the absolute value of the difference value corresponding to the pixel point is greater than a first preset threshold, and the first preset threshold can be 7 or 8.
And counting the total number of all pixel points in the eyebrow area in the first target area image or the second target area image. And then calculating the ratio of the number of the pixel points meeting the preset makeup finishing condition to the total number of the pixel points in the eyebrow area, and determining the ratio as the current makeup progress.
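Putting the conversion, the per-pixel difference and the ratio together, a minimal sketch is shown below. It assumes OpenCV's BGR channel order for the input images, the V (brightness) channel as the preset single-channel component, and a boolean mask marking the eyebrow region.

```python
import cv2
import numpy as np

def eyebrow_makeup_progress(first_target, second_target, brow_mask, threshold=7):
    """Compare the brightness channel of the two eyebrow images and count the pixels
    whose change exceeds the first preset threshold (7 is one of the example values).
    brow_mask is a boolean array marking the eyebrow region (total pixel count)."""
    v_init = cv2.cvtColor(first_target, cv2.COLOR_BGR2HSV)[:, :, 2].astype(np.int32)
    v_cur = cv2.cvtColor(second_target, cv2.COLOR_BGR2HSV)[:, :, 2].astype(np.int32)
    diff = np.abs(v_cur - v_init)
    finished = np.count_nonzero((diff > threshold) & brow_mask)  # pixels meeting the completion condition
    total = np.count_nonzero(brow_mask)                          # all pixels in the eyebrow region
    return finished / total if total else 0.0
```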
In other embodiments of the present application, in order to further improve the accuracy of the cosmetic progress detection, the eyebrow regions in the first target region image and the second target region image are further aligned. Specifically, binarization processing is performed on a first target area image and a second target area image which only contain the preset single-channel components, namely, values of the preset single-channel components corresponding to pixel points in target makeup areas of the first target area image and the second target area image are modified to be 1, and values of the preset single-channel components of pixel points at other positions are modified to be 0. And obtaining a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image through binarization processing.
An AND operation is performed on the first binarization mask image and the second binarization mask image, that is, pixel points at the same positions in the two mask images are ANDed, to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image. The area where the preset single-channel component of the pixel points in the second mask image is non-zero is the target makeup area where the first target area image and the second target area image overlap.
The face region image corresponding to the initial frame image and the face region image corresponding to the current frame image have been obtained through the operations of the previous steps. An AND operation is performed on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the eyebrow in the initial frame image; an AND operation is likewise performed on the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the eyebrow in the current frame image.
In other embodiments of the present application, an and operation may be further performed on the second mask image and the first target area image corresponding to the eyebrow after the boundary erosion processing, so as to obtain a new first target area image corresponding to the eyebrow. And performing AND operation on the second mask image and the second target area image corresponding to the eyebrow after the boundary erosion processing to obtain a new second target area image corresponding to the eyebrow.
Because the second mask image contains the target makeup area overlapped in the initial frame image and the current frame image, the new first target area image and the new second target area image are respectively scratched out through the second mask image according to the mode, so that the positions of the target makeup areas in the new first target area image and the new second target area image are completely consistent, the makeup progress is determined by subsequently comparing the change of the target makeup area in the current frame image and the target makeup area in the initial frame image, the comparison areas are ensured to be completely consistent, and the accuracy of the makeup progress detection is greatly improved.
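A sketch of this binarization and AND-based alignment is shown below, assuming the non-makeup pixels of the target area images are zero so that binarization reduces to a non-zero test.

```python
import cv2
import numpy as np

def align_target_regions(first_target, second_target, face_init, face_cur):
    """Binarize both eyebrow images, intersect the masks with an AND operation, and use
    the intersection (the second mask image) to cut matching eyebrow regions out of the
    two face region images, so the compared areas coincide exactly."""
    mask1 = (cv2.cvtColor(first_target, cv2.COLOR_BGR2GRAY) > 0).astype(np.uint8) * 255
    mask2 = (cv2.cvtColor(second_target, cv2.COLOR_BGR2GRAY) > 0).astype(np.uint8) * 255
    second_mask = cv2.bitwise_and(mask1, mask2)                   # intersection of the two regions
    new_first = cv2.bitwise_and(face_init, face_init, mask=second_mask)
    new_second = cv2.bitwise_and(face_cur, face_cur, mask=second_mask)
    return new_first, new_second
```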
After aligning the target makeup areas in the initial frame image and the current frame image in the above manner to obtain a new first target area image and a new second target area image, determining the current makeup progress corresponding to the current frame image again through the operation of the above step 603.
After the current makeup progress is determined by any one of the above modes, the server sends the current makeup progress to the terminal of the user. And after receiving the current makeup progress, the terminal of the user displays the current makeup progress. The current makeup progress may be a ratio or a percentage. The terminal may display the current makeup progress in the form of a progress bar.
In the process of making up for a user, the making-up progress detection method provided by the embodiment of the application detects the making-up progress of each frame of image behind the first frame of image relative to the first frame of image in real time, and displays the detected making-up progress to the user, so that the user can visually see the own making-up progress, and the making-up efficiency is improved.
In order to facilitate understanding of the methods provided by the embodiments of the present application, reference is made to the following description taken in conjunction with the accompanying drawings. As shown in fig. 16, according to the initial frame image and the first face key point corresponding thereto, and the current frame image and the second face key point corresponding thereto, the faces in the initial frame image and the current frame image are aligned and cut, respectively, and then the two cut face region images are smoothed and denoised by the laplacian algorithm. And then respectively deducting a first target area image and a second target area image corresponding to eyebrows from the two face area images. And carrying out boundary corrosion treatment on the first target area image and the second target area image. And then converting the first target area image and the second target area image into an image which only contains preset single-channel components in an HSV color space. And aligning the first target area image and the second target area image again, and then calculating the current makeup progress according to the first target area image and the second target area image.
In the embodiment of the application, the face key points are used to correct and cut the face area of the user in the video frame, which improves the accuracy of face area identification. The target area image corresponding to the eyebrow is extracted from the face area image based on the face key points, and pixel alignment is performed on the target area images corresponding to the initial frame image and the current frame image respectively, which improves the accuracy of the target area image corresponding to the eyebrow. Aligning the target makeup areas in the initial frame image and the current frame image reduces errors caused by position differences of the target makeup areas. When the eyebrow area is extracted, a segmented interpolation algorithm is introduced, so that the extracted eyebrow area is more coherent and accurate. In addition, no deep learning is used and no large amount of data needs to be collected in advance; the detection result is returned to the user through capture of the real-time picture of the user's makeup and computation on the server side. Compared with a deep learning model inference scheme, the method and the system consume less computation in the algorithm processing link and reduce the processing pressure of the server.
The embodiment of the application also provides a makeup progress detection device, and the device is used for executing the makeup progress detection method for detecting the makeup progress of the eyebrows. Referring to fig. 17, the apparatus specifically includes:
The video acquiring module 1601 is used for acquiring an initial frame image and a current frame image in a real-time makeup video for a user to make up a specific makeup at present;
a target region obtaining module 1602, configured to obtain a first target region image corresponding to an eyebrow from an initial frame image, and obtain a second target region image corresponding to the eyebrow from a current frame image;
the progress determining module 1603 is configured to determine a current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
A target region obtaining module 1602, configured to detect a first face key point corresponding to an initial frame image; acquiring a face region image corresponding to the initial frame image according to the first face key point; and acquiring a first target area image corresponding to eyebrows from the face area image according to the eyebrow key points included in the first face key points.
A target region obtaining module 1602, configured to interpolate eyebrow key points between the eyebrows and the eyebrows included in the first face key point to obtain multiple interpolation points; intercepting all eyebrow key points and a closed area formed by connecting a plurality of interpolation points between the eyebrows and the eyebrow peaks from the face area image to obtain a partial eyebrow image between the eyebrows and the eyebrow peaks; intercepting a closed region formed by connecting all eyebrow key points between an eyebrow peak and an eyebrow tail from a face region image to obtain a partial eyebrow image between the eyebrow peak and the eyebrow tail; and splicing the partial eyebrow images between the eyebrow head and the eyebrow peak and the partial eyebrow images between the eyebrow peak and the eyebrow tail into a first target area image corresponding to the eyebrows.
A progress determining module 1603, configured to respectively convert the first target area image and the second target area image into images including preset single channel components in HSV color space; and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
The progress determining module 1603 is configured to calculate absolute difference values of preset single-channel components corresponding to pixels with the same position in the converted first target area image and the converted second target area image respectively; counting the number of pixel points of which the corresponding absolute values of the differences meet preset makeup completion conditions; and calculating the ratio of the counted number of the pixel points to the total number of the pixel points in all the target makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
The progress determining module 1603 is further configured to perform binarization processing on the first target area image and the second target area image respectively to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image; performing AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image; acquiring a face region image corresponding to an initial frame image and a face region image corresponding to a current frame image; performing AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image; and performing AND operation on the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the current frame image.
The device also includes: and the boundary corrosion module is used for performing boundary corrosion treatment on the makeup areas in the first target area image and the second target area image respectively.
A target region obtaining module 1602, configured to perform rotation correction on the initial frame image and the first face key point according to the first face key point corresponding to the initial frame image; according to the corrected first face key point, an image containing a face area is intercepted from the corrected initial frame image; and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
A target area obtaining module 1602, configured to determine a left-eye central coordinate and a right-eye central coordinate according to a left-eye key point and a right-eye key point included in the first face key point; determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate; and performing rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
A target region obtaining module 1602, configured to perform image capture on a face region included in the corrected initial frame image according to the corrected first face key point.
A target region obtaining module 1602, configured to determine a minimum abscissa value, a minimum ordinate value, a maximum abscissa value, and a maximum ordinate value from the corrected first face key points; determining an intercepting frame corresponding to a face area in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value; and intercepting an image containing a face region from the corrected initial frame image according to the interception frame.
The target area obtaining module 1602 is further configured to amplify the capture frame by a preset multiple; and according to the amplified intercepting frame, intercepting an image containing a human face region from the corrected initial frame image.
The target area obtaining module 1602 is further configured to perform scaling translation processing on the corrected first face key points according to the size of the image including the face area and a preset size.
The device also includes: the face detection module is used for detecting whether the initial frame image and the current frame image only contain the face image of the same user; if yes, executing the operation of determining the current makeup progress of the specific makeup by the user; if not, sending prompt information to a terminal of the user, wherein the prompt information is used for prompting the user to keep that only the face of the same user appears in the real-time makeup video.
The makeup progress detection device provided by the embodiment of the application and the makeup progress detection method provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as methods adopted, operated or realized by application programs stored by the device.
EXAMPLE six
The embodiment of the application provides a makeup progress detection method for detecting the progress of makeup that is applied over the whole face, such as foundation and loose powder. Referring to fig. 18, this embodiment specifically includes the following steps:
step 701: the method comprises the steps of obtaining an initial frame image and a current frame image in a real-time makeup video of a user for making up a specific makeup currently.
The operation of step 701 is the same as the operation of step 601 in the fifth embodiment, and is not repeated herein.
Step 702: and simulating to generate a result image after finishing the specific makeup according to the initial frame image.
The effect of the finished specific makeup is rendered on the initial frame image by using a 3D rendering technology to obtain a result image. For example, a result image showing the finished foundation makeup is rendered on the initial frame image by the 3D rendering technique.
Step 703: and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image.
The server specifically determines the current makeup progress through the following operations of steps S8-S10, including:
s8: and respectively obtaining the integral image brightness corresponding to the initial frame image, the result image and the current frame image.
For an initial frame image, firstly converting the initial frame image into a gray image, then calculating the gray average value of all pixel points in the gray image corresponding to the converted initial frame image, and determining the calculated gray average value as the brightness of the whole image corresponding to the initial frame image.
And calculating the gray average value of all pixel points in the gray image corresponding to the result image according to the same mode for the result image and the current frame image to obtain the overall image brightness corresponding to the result image. And calculating the gray average value of all pixel points in the gray image corresponding to the current frame image to obtain the overall image brightness corresponding to the current frame image.
The overall image brightness includes the brightness of the face region in the image and the ambient brightness of the background where the face is located.
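Both brightness values reduce to a mean gray level; a minimal sketch is below (the face region brightness of step S9 is the same computation applied to the cropped face region image).

```python
import cv2
import numpy as np

def overall_brightness(image):
    """Overall image brightness: mean gray value of all pixels after grayscale conversion."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return float(np.mean(gray))
```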
S9: and respectively obtaining the brightness of the face region corresponding to the initial frame image, the result image and the current frame image.
The server specifically acquires the brightness of the face region corresponding to each image through the following operations in steps S91 to S93, and specifically includes:
S91: and respectively acquiring the face region images corresponding to the initial frame image, the result image and the current frame image.
The specific operation processes for obtaining the face region images corresponding to the initial frame image, the result image and the current frame image are the same, and reference may be made to the processes of steps S5 to S6 in the fifth embodiment, which is not described herein again.
S92: and respectively converting the face region images corresponding to the initial frame image, the result image and the current frame image into face gray level images.
S93: and respectively calculating the gray average value of pixel points in the face gray images corresponding to the initial frame image, the result image and the current frame image to obtain the face region brightness corresponding to the initial frame image, the result image and the current frame image.
And converting the face region image corresponding to the initial frame image into a face gray image, calculating the gray average value of all pixel points in the face gray image, and determining the gray average value as the face region brightness corresponding to the initial frame image. And respectively calculating the brightness of the face region corresponding to the result image and the brightness of the face region corresponding to the current frame image in the same manner.
The steps S8 and S9 may be executed in parallel or in series, and the execution order of the steps S8 and S9 is not limited in the embodiment of the present application. The overall image brightness and the face region brightness corresponding to the initial frame image, the overall image brightness and the face region brightness corresponding to the result image, and the overall image brightness and the face region brightness corresponding to the current frame image are respectively obtained through the above steps S8 and S9, and then the current makeup progress is determined through the following operation of step S10.
S10: and determining the current makeup progress corresponding to the current frame image according to the overall image brightness and the face area brightness corresponding to the initial frame image, the result image and the current frame image respectively.
Because the user applies the target type of makeup in an environment with a certain brightness, the brightness of the user's face superposes the effect of the makeup and the effect of the ambient light. The effect of the ambient light therefore needs to be eliminated when determining the current makeup progress, so that only the brightness change caused by the makeup is considered in the face image, which ensures the accuracy of the obtained current makeup progress.
And determining first environment change brightness corresponding to the current frame image according to the overall image brightness and the face region brightness corresponding to the initial frame image and the overall image brightness and the face region brightness corresponding to the current frame image for the environment change between the current frame image and the initial frame image. Specifically, a difference between the brightness of the whole image corresponding to the initial frame image and the brightness of the face region corresponding to the initial frame image is calculated, and the difference is the brightness of all background portions in the initial frame image and is referred to as the ambient brightness of the initial frame image. And calculating the difference value between the brightness of the whole image corresponding to the current frame image and the brightness of the face region corresponding to the current frame image, wherein the difference value is the brightness of all background parts in the current frame image and is called the ambient brightness of the current frame image. And calculating the absolute value of the difference between the ambient brightness of the current frame image and the ambient brightness of the initial frame image, and determining the absolute value of the difference as the first ambient change brightness corresponding to the current frame image. The first environment change brightness represents the change condition of the environment brightness between the current frame image and the initial frame image.
Since the result image is obtained by rendering the makeup effect of the target type on the basis of the initial frame image, it may be affected by the rendering operation, so that the brightness of the background portion in the result image may not coincide with the brightness of the background portion in the initial frame image; that is, there may be an ambient brightness change between the result image and the initial frame image. Therefore, the embodiment of the application also determines the second environment change brightness corresponding to the result image according to the overall image brightness and the face area brightness corresponding to the initial frame image and the overall image brightness and the face area brightness corresponding to the result image. Specifically, the difference between the overall image brightness corresponding to the result image and the face region brightness corresponding to the result image is calculated; this difference is the brightness of all background portions in the result image and is called the ambient brightness of the result image. The absolute value of the difference between the ambient brightness of the result image and the ambient brightness of the initial frame image is then calculated and determined as the second environment change brightness corresponding to the result image. The second environment change brightness represents the change of the ambient brightness between the result image and the initial frame image.
After the first environment change brightness and the second environment change brightness are obtained in the above manner, the current makeup progress corresponding to the current frame image is determined according to the first environment change brightness, the second environment change brightness, the face region brightness corresponding to the initial frame image, the face region brightness corresponding to the current frame image, and the face region brightness corresponding to the result image.
Firstly, determining a makeup brightness change value corresponding to a current frame image according to first environment change brightness, face area brightness corresponding to an initial frame image and face area brightness corresponding to the current frame image. Specifically, a difference value between the brightness of the face region corresponding to the current frame image and the brightness of the face region corresponding to the initial frame image is calculated to obtain a total brightness change value corresponding to the current frame image, where the total brightness change value includes brightness change caused by makeup effect and brightness change caused by ambient light change. And calculating the difference value between the total brightness change value and the first environment change brightness to obtain a makeup brightness change value corresponding to the current frame image.
The total brightness change value from the current frame image to the face area of the reference frame image is calculated through the method, the ambient brightness change from the current frame image to the reference frame image is deducted from the total brightness change value, the obtained makeup brightness change value is closer to the actual brightness change value formed through the makeup operation of the target type, and the accuracy is high.
And then determining a makeup brightness change value corresponding to the result image according to the second environment change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the result image. Specifically, a difference value between the brightness of the face region corresponding to the result image and the brightness of the face region corresponding to the initial frame image is calculated to obtain a total brightness change value corresponding to the result image, where the total brightness change value includes brightness changes caused by completing makeup of the target type and brightness changes formed on the background portion by the rendering operation used for generating the result image. And calculating the difference value between the total brightness change value and the second environment change brightness to obtain a makeup brightness change value corresponding to the result image.
The total brightness change value from the result image to the face area of the reference frame image is calculated through the method, the environment brightness change caused by the rendering operation on the background part of the result image is deducted from the total brightness change value, the obtained makeup brightness change value is closer to the actual brightness change value formed by finishing the makeup operation of the target type, and the accuracy is high.
And finally, calculating the ratio of the makeup brightness change value corresponding to the current frame image to the makeup brightness change value corresponding to the result image to obtain the current makeup progress corresponding to the current frame image.
The ratio of the makeup brightness change value caused by makeup in the current frame image to the makeup brightness change value caused by makeup completion in the result image is calculated, so that the current makeup progress corresponding to the current frame image can be accurately obtained. In the process, the influence of the change of the ambient brightness is eliminated, and the accuracy of the cosmetic progress detection is greatly improved.
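Collecting the quantities of steps S8-S10, the ratio can be computed as in the following sketch; the guard against large ambient brightness changes described below is omitted here.

```python
def foundation_progress(init_overall, init_face, cur_overall, cur_face, res_overall, res_face):
    """Brightness-based progress for whole-face makeup, a sketch of steps S8-S10.
    Arguments are the overall-image and face-region brightness of the initial frame,
    the current frame, and the rendered result image."""
    env_init = init_overall - init_face              # background (ambient) brightness of the initial frame
    env_cur = cur_overall - cur_face                 # background brightness of the current frame
    env_res = res_overall - res_face                 # background brightness of the result image
    first_env_change = abs(env_cur - env_init)       # ambient change, current frame vs initial frame
    second_env_change = abs(env_res - env_init)      # ambient change, result image vs initial frame
    cur_makeup_change = (cur_face - init_face) - first_env_change   # brightness change due to makeup so far
    res_makeup_change = (res_face - init_face) - second_env_change  # brightness change when makeup is finished
    return cur_makeup_change / res_makeup_change if res_makeup_change else 0.0
```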
In other embodiments of the present application, the ambient brightness may change significantly during the application of makeup by the user, such as by suddenly brightening or suddenly darkening. When the ambient brightness changes greatly, the change of the brightness of the face area is greatly influenced, so that the accuracy of the makeup progress detection is reduced. Therefore, after the first environment change brightness from the current frame image to the reference frame image is obtained in the above manner, the first environment change brightness is compared with a preset threshold, where the preset threshold may be 50 or 60, and the specific value of the preset threshold is not limited in the embodiment of the present application, and may be set according to requirements in practical applications.
If the first environmental change brightness is smaller than or equal to the preset threshold value through comparison, it is determined that the environmental brightness change between the current frame image and the reference frame image is not large, and the current makeup progress corresponding to the current frame image is continuously determined according to the mode.
If the first environmental change brightness is larger than the preset threshold value through comparison, the environmental brightness change between the current frame image and the reference frame image is large, and the makeup progress corresponding to the previous frame image is directly determined as the current makeup progress corresponding to the current frame image. And sending the first prompt information to a terminal of the user, and receiving and displaying the first prompt information by the terminal of the user so as to prompt the user to make up in the brightness environment corresponding to the initial frame image.
Therefore, under the condition that the environmental brightness changes greatly, the makeup progress corresponding to the previous frame of image is directly used as the current makeup progress, and the calculation resource is saved. And the condition that the detected makeup progress is suddenly and greatly increased or reduced is avoided, and the stability and the accuracy of the makeup progress detection are improved.
After the current makeup progress is determined by any one of the above modes, the server sends the current makeup progress to the terminal of the user. And after the terminal of the user receives the current makeup progress, displaying the current makeup progress. The current makeup progress may be a ratio or a percentage. The terminal may display the current makeup progress in the form of a progress bar.
In the process of making up for a user, the making-up progress detection method provided by the embodiment of the application detects the making-up progress of each frame of image behind the first frame of image relative to the first frame of image in real time, and displays the detected making-up progress to the user, so that the user can visually see the own making-up progress, and the making-up efficiency is improved.
In order to facilitate understanding of the methods provided by the embodiments of the present application, reference is made to the following description taken in conjunction with the accompanying drawings. As shown in fig. 19, A1: and acquiring an initial frame image and a current frame image in the real-time makeup video of the user. A2: and rendering the makeup effect of the target type on the initial frame image to obtain a result image. A3: respectively obtaining the overall image brightness corresponding to the initial frame image, the result image and the current frame image, and then executing the steps A6 and A7 in parallel. A4: and respectively acquiring the face area images corresponding to the initial frame image, the result image and the current frame image. A5: respectively obtaining the face region brightness of the face region image corresponding to the initial frame image, the result image and the current frame image, and then executing the steps A6 and A7 in parallel. A6: and determining first environment change brightness corresponding to the current frame image according to the overall image brightness and the face region brightness corresponding to the initial frame image and the overall image brightness and the face region brightness corresponding to the current frame image, and then executing the steps A8 and A9 in parallel. A7: and determining the second environment change brightness corresponding to the result image according to the overall image brightness and the face region brightness corresponding to the initial frame image and the overall image brightness and the face region brightness corresponding to the result image, and then executing the steps A8 and A9 in parallel. A8: and determining a makeup brightness change value corresponding to the current frame image according to the first environment change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the current frame image, and then executing the step A10. A9: and determining a makeup brightness change value corresponding to the result image according to the second environment change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the result image. A10: and calculating the ratio of the makeup brightness change value corresponding to the current frame image to the makeup brightness change value corresponding to the result image to obtain the current makeup progress corresponding to the current frame image.
As shown in fig. 20, the overall image brightness corresponding to each of the three images is calculated according to the reference frame image and the corresponding first face key point thereof, the current frame image and the corresponding second face key point thereof, and the result image and the corresponding third face key point thereof. And then respectively correcting and cutting the human face in the three images according to the human face key point information corresponding to each image to respectively obtain human face area images corresponding to the reference frame image, the current frame image and the result image. And respectively calculating the brightness of the face regions corresponding to the three face region images obtained by cutting. And calculating the first environment change brightness corresponding to the current frame image according to the overall image brightness and the face region brightness corresponding to each image. And if the first environment change brightness is larger than the preset threshold value, prompting the user to return to the environment with the brightness corresponding to the initial frame image for makeup. If the first environment change brightness is smaller than or equal to the preset threshold value, the brightness difference between the brightness of the face area of the current frame image and the brightness of the face area of the initial frame image is calculated, and the difference between the brightness difference and the first environment change brightness is calculated. And calculating the brightness difference between the brightness of the face area of the result image and the brightness of the face area of the initial frame image, and calculating the difference between the brightness difference and the second environment change brightness. And calculating the ratio of the difference value corresponding to the current frame image to the difference value corresponding to the result image to obtain the current makeup progress.
In the embodiment of the application, on the basis of the initial frame image of the makeup process of the user, a result image after makeup is completed is generated in a simulated mode. The method has the advantages that the current makeup progress is determined according to the brightness change of the face area in the current frame image, the initial frame image and the result image, the influence of the environment brightness change is deducted from the brightness change of the face area, the accuracy of the makeup progress detection is greatly improved, a deep learning model is not adopted, a large amount of data does not need to be collected in advance, the calculation amount is small, the cost is low, the processing pressure of a server is reduced, the efficiency of the makeup progress detection is improved, and the real-time requirement of the makeup progress detection can be met.
EXAMPLE seven
The embodiment of the application provides a makeup progress detection method for detecting the progress of concealer makeup, that is, makeup that covers facial blemishes. Referring to fig. 21, this embodiment specifically includes the following steps:
step 801: the method comprises the steps of obtaining an initial frame image and a current frame image in a real-time makeup video of a user currently carrying out specific makeup.
The operation of step 801 is the same as the operation of step 601 in the fifth embodiment, and is not repeated herein.
Step 802: and respectively acquiring the facial flaw information corresponding to the initial frame image and the current frame image.
Firstly, respectively obtaining the face region images corresponding to the initial frame image and the current frame image. The specific operation processes for obtaining the respective face region images corresponding to the initial frame image and the current frame image are the same, and reference may be made to the processes in steps S5 to S6 in the fifth embodiment, which are not described herein again.
And then detecting the number of flaws corresponding to each flaw type in the face area image corresponding to the initial frame image through a preset skin detection model, and taking each detected flaw type and the number of flaws corresponding to the detected flaw type as face flaw information corresponding to the initial frame image. Similarly, the number of flaws corresponding to each flaw category in the face area image corresponding to the current frame image is detected through a preset skin detection model, and face flaw information corresponding to the current frame image is obtained.
The preset skin detection model is obtained by training the neural network model through a large number of face images in advance, and can identify and classify flaws such as pockmarks, spots and wrinkles of the face in the face images. The blemish category comprises one or more blemishes such as pox, spots, wrinkles, etc. And identifying the number of acnes, the number of spots, the number of wrinkles and the like in the face area image corresponding to the initial frame image and the current frame image through a preset skin detection model.
Step 803: and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and the respective corresponding face flaw information.
The server specifically determines the current makeup progress through the following operations of steps B1-B3, including:
b1: and calculating the difference value of the facial flaws between the current frame image and the initial frame image according to the facial flaw information corresponding to the initial frame image and the facial flaw information corresponding to the current frame image.
The facial defect information includes defect types and corresponding defect numbers, and the defect types include one or more of pox, spots, wrinkles and other defects. And respectively calculating the difference between the number of the flaws corresponding to the initial frame image and the number of the flaws corresponding to the current frame image under each flaw type, then calculating the sum of the differences corresponding to each flaw type, and taking the obtained sum as the difference value of the facial flaws between the current frame image and the initial frame image.
For example, assume that the flaw categories include two categories, namely pox and spots. The facial flaw information corresponding to the initial frame image includes a pox number of 5 and a spot number of 4, and the facial flaw information corresponding to the current frame image includes a pox number of 3 and a spot number of 1. The difference in pox between the initial frame image and the current frame image is 2 and the difference in spots is 3. The sum of the pox difference and the spot difference is 5, so the facial flaw difference value between the current frame image and the initial frame image is 5.
The facial flaw difference value can reflect the difference between the current frame image and the initial frame image in terms of skin smoothness and fineness, and this difference is caused by various factors such as flaw-covering (concealer) makeup, light changes and shooting angle changes. Through a large number of tests, the embodiment of the application determined that when the facial flaw difference value is larger than a certain value, the difference can be regarded as mainly caused by concealer makeup; this value is configured in the server as the preset threshold value, and the preset threshold value may be 4 or 5.
After the face defect difference value is calculated in the above mode, the face defect difference value is compared with a preset threshold value, if the face defect difference value is larger than the preset threshold value, it is indicated that the face change caused by concealer makeup is obvious at present, and the current makeup progress is determined through the operation of the step B2. If the face defect difference value is smaller than or equal to the preset threshold value, it is indicated that the face change caused by concealer makeup is small, and then the current makeup progress is determined through the operation of the step B3.
B2: and if the face flaw difference value is larger than the preset threshold value, calculating the current makeup progress corresponding to the current frame image according to the face flaw difference value and the face flaw information corresponding to the initial frame image.
And calculating the sum of the flaw numbers corresponding to all flaw categories in the facial flaw information corresponding to the initial frame image to obtain the total flaw number. And calculating the ratio of the difference value of the facial flaws to the total number of the flaws, and taking the ratio as the current makeup progress corresponding to the current frame image.
For example, assume that the facial flaw information corresponding to the initial frame image includes a pox number of 5 and a spot number of 4, and the facial flaw information corresponding to the current frame image includes a pox number of 3 and a spot number of 1. The facial flaw difference value between the current frame image and the initial frame image is 5, the total number of flaws corresponding to the initial frame image is 9, and the current makeup progress corresponding to the current frame image is 5/9.
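As an illustration, steps B1 and B2 can be sketched in Python as follows, assuming the skin detection model returns the number of flaws per category as a dictionary; the category names are illustrative.

# Sketch of steps B1-B2 under the assumption that flaw counts come back as a dict.
def flaw_difference(initial_flaws, current_flaws):
    """Sum, over all categories, of the reduction in flaw count."""
    return sum(initial_flaws[c] - current_flaws.get(c, 0) for c in initial_flaws)

def concealer_progress(initial_flaws, current_flaws):
    diff = flaw_difference(initial_flaws, current_flaws)
    total = sum(initial_flaws.values())     # total flaw number of the initial frame
    return diff / total if total else 0.0

# Numbers from the example above: 5 pox / 4 spots initially, 3 / 1 now.
initial = {"pox": 5, "spot": 4}
current = {"pox": 3, "spot": 1}
print(flaw_difference(initial, current))     # 5
print(concealer_progress(initial, current))  # 5/9 ≈ 0.56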
When the facial flaw difference value between the current frame image and the initial frame image is larger than the preset threshold value, namely when the facial difference is mainly caused by concealer makeup, the ratio between the facial flaw difference value and the total number of flaws corresponding to the initial frame image is directly used as the current makeup progress. The makeup progress determination process is simple, the calculation amount is small, the current makeup progress can be rapidly determined, the efficiency is high, and the real-time requirement of concealer makeup progress detection can be met.
After the current makeup progress is determined by the method, the server sends the current makeup progress to the terminal of the user. And after receiving the current makeup progress, the terminal of the user displays the current makeup progress. The current makeup progress may be a ratio or a percentage. The terminal may display the current makeup progress in the form of a progress bar.
B3: and if the difference value of the face flaws is smaller than or equal to a preset threshold value, acquiring a result image after the makeup of the face flaws is finished by the user, and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image.
When the difference value of the facial flaws is judged to be smaller than or equal to the preset threshold value, the difference between the current frame image and the initial frame image is considered to be small, and if the current makeup progress is determined by directly utilizing the difference value of the facial flaws and the total number of flaws of the initial frame image, the error is large. Therefore, in this case, the current makeup progress is determined without using the difference value of the facial flaws and the total number of flaws in the initial frame image, but the result image after the user finishes covering makeup is obtained first, and then the current makeup progress corresponding to the current frame image is determined according to the initial frame image, the result image and the current frame image.
Firstly, a 3D rendering technology is utilized to render the effect of concealing and making up on an initial frame image, and a result image is obtained. Before generating the result image in the embodiment of the application, the initial frame image may be rotated and corrected to make the two-eye connection line in the image parallel to the horizontal line, and then the result image after completing the concealing and making up is rendered on the basis of the rotated and corrected initial frame image, so that the two-eye connection line of the human face in the result image is also parallel to the horizontal line, and thus, the rotation correction of the result image is not needed, and the operation amount is saved.
After the result image after covering and makeup is obtained, a third face key point corresponding to the result image is detected through a detection model for detecting the face key point, and according to the third face key point, the face region image is intercepted from the result image in the manner of the steps S5-S6 in the fifth embodiment, and the specific intercepting process is not repeated here.
Obtaining a face area image corresponding to the initial frame image, a face area image corresponding to the current frame image and a face area image corresponding to the result image, and then determining the current makeup progress according to the face area images corresponding to the initial frame image, the current frame image and the result image.
The face region images obtained for the initial frame image, the current frame image and the result image are all in the RGB color space. The influence of concealer makeup on each channel of the color space was determined in advance through a large number of tests, and the differences in its influence on the individual color channels of the RGB color space were found to be small. The HLS color space, by contrast, is composed of three components, Hue, Lightness and Saturation, and tests have shown that concealer makeup causes significant changes in the Saturation component of the HLS color space. Therefore, the face region images corresponding to the initial frame image, the result image and the current frame image are respectively converted from the RGB color space to the HLS color space. The saturation channel is then separated from the HLS color space of each converted image, so that the face region images corresponding to the initial frame image, the result image and the current frame image are each reduced to an image containing only the saturation channel of the HLS color space.
After the conversion, respectively calculating the smoothing factors corresponding to the respective face region images of the initial frame image, the result image and the current frame image after the conversion by a preset filtering algorithm. The preset filtering algorithm may be a laplacian algorithm or a gaussian filtering algorithm. Taking a gaussian filtering algorithm as an example, the smoothing factor corresponding to each face region image may be calculated according to a gaussian kernel with a preset size. The preset size may be 7 × 7, and the preset size may also be other values, and the embodiment of the present application does not limit a specific value of the preset size.
After the smoothing factors corresponding to the initial frame image, the result image and the current frame image are obtained, the current makeup progress corresponding to the current frame image is determined according to the smoothing factors. Specifically, a first difference between the smoothing factor corresponding to the current frame image and the smoothing factor corresponding to the initial frame image is calculated, and a second difference between the smoothing factor corresponding to the result image and the smoothing factor corresponding to the initial frame image is calculated. And calculating the ratio of the first difference value to the second difference value, and taking the ratio as the current makeup progress corresponding to the current frame image.
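A minimal sketch of this branch is given below, assuming the face region images are OpenCV BGR arrays. Because the exact definition of the smoothing factor is not spelled out here, the residual between the saturation channel and its Gaussian-blurred version is used as one plausible reading; the 7 × 7 kernel follows the preset size mentioned above.

import cv2
import numpy as np

def smoothing_factor(face_bgr, ksize=7):
    """One possible 'smoothing factor': how far the saturation channel is from its
    Gaussian-blurred version (smoother, concealed skin gives a smaller residual).
    This definition is an assumption, not taken from the text."""
    hls = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HLS)
    sat = hls[:, :, 2].astype(np.float32)               # saturation channel of HLS
    blurred = cv2.GaussianBlur(sat, (ksize, ksize), 0)  # preset 7x7 Gaussian kernel
    return float(np.mean(np.abs(sat - blurred)))

def concealer_progress_by_smoothing(face_init, face_cur, face_res):
    s_init = smoothing_factor(face_init)
    s_cur = smoothing_factor(face_cur)
    s_res = smoothing_factor(face_res)
    denom = s_res - s_init                               # second difference value
    return (s_cur - s_init) / denom if denom else 0.0    # first difference / second difference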
After the current makeup progress is determined in the above mode, the server sends the current makeup progress to the terminal of the user. And after receiving the current makeup progress, the terminal of the user displays the current makeup progress. The current makeup progress may be a ratio or a percentage. The terminal may display the current makeup progress in the form of a progress bar.
In the process of the user making up, the makeup progress detection method provided by the embodiment of the application detects, in real time, the makeup progress of each frame image after the initial frame image relative to the initial frame image, and displays the detected makeup progress to the user, so that the user can intuitively see his or her own concealer makeup progress, which improves makeup efficiency.
In order to facilitate understanding of the methods provided by the embodiments of the present application, reference is made to the following description taken in conjunction with the accompanying drawings. As shown in fig. 22, the faces in the initial frame image and the current frame image are respectively corrected and clipped according to the initial frame image and the corresponding first face key point thereof, and the current frame image and the corresponding second face key point thereof, and then the skin detection model is called to respectively detect the respective face defect information corresponding to the initial frame image and the current frame image. And calculating a face defect difference value between the current frame image and the initial frame image based on the face defect information of the current frame image and the initial frame image. If the facial flaw difference value is larger than a preset threshold value, calculating the ratio of the facial flaw difference value to the total number of flaws corresponding to the initial frame image to obtain the current makeup progress. If the difference value of the facial flaws is smaller than or equal to a preset threshold value, rendering on the initial frame image to generate a result image after the flaws are covered and made up, detecting a third face key point corresponding to the result image, and correcting and cutting the face in the result image according to the third face key point. And converting the respective corresponding human face area images of the initial frame image, the current frame image and the result image into an image which only contains a saturation channel in an HLS color space. And respectively calculating smoothing factors corresponding to the three converted face region images through a preset filtering algorithm. And calculating a first difference value between the smoothing factor corresponding to the current frame image and the smoothing factor corresponding to the initial frame image. And calculating a second difference value between the smoothing factor corresponding to the result image and the smoothing factor corresponding to the initial frame image. And calculating the ratio of the first difference value to the second difference value, and taking the ratio as the current makeup progress corresponding to the current frame image.
In the embodiment of the application, a current frame image and an initial frame image of a user makeup process are obtained, a face flaw difference value from the current frame image to the initial frame image is determined, and when the face flaw difference value is larger than a preset threshold value, a ratio between the face flaw difference value and a total number of flaws corresponding to the initial frame image is calculated, so that a current makeup progress of concealing makeup is obtained. When the difference value of the facial flaws is smaller than or equal to a preset threshold value, simulating a result image for finishing concealing and making up on the basis of the initial frame image, respectively determining smoothing factors corresponding to the initial frame image, the result image and the current frame image, calculating the difference value of the smoothing factors between the current frame image and the initial frame image, and calculating the difference value of the smoothing factors between the result image and the initial frame image, wherein the ratio of the two difference values is the current making up progress.
Therefore, when the difference value of the facial flaws between the current frame image and the initial frame image is larger than the preset threshold value, the cosmetic progress determination process is simple, the calculation amount is small, the current cosmetic progress can be determined quickly, the efficiency is high, and the real-time requirement of the flaw covering cosmetic progress detection can be met. When the difference value of the facial flaws is smaller than or equal to the preset threshold value, a filtering algorithm is introduced to calculate the smoothing factors corresponding to the images, the makeup progress is determined according to the change condition of the smoothing factors, more accurate calculation can be performed on finer changes, and the situation that the makeup progress is suddenly increased in a short time is avoided.
Furthermore, the embodiment of the application corrects and cuts the face area of the user in the video frame by using the key points of the face, so that the accuracy of identifying the face area is improved.
The concealer progress can be accurately detected only through image processing, the calculation amount is small, the cost is low, the processing pressure of a server is reduced, the concealer progress detection efficiency is improved, the real-time requirement of concealer progress detection can be met, and the dependence of an algorithm on hardware resources and the input cost of manpower are reduced.
Example eight
1. A method of cosmetic color identification, the method comprising:
acquiring a makeup image of a user;
identifying the tone category to which the color of the preset part of the face of the user belongs in the makeup image;
and determining the makeup color tone of the user according to the color tone category corresponding to the preset part.
2. According to 1, identifying the tone category to which the color of the preset part of the user face in the makeup image belongs comprises:
detecting a face key point corresponding to the makeup image;
acquiring a face region image corresponding to the makeup image according to the face key points;
and identifying the tone category to which the color of the preset part of the user face in the face region image belongs.
3. According to 2, identifying the tone category to which the color of the preset part of the user face in the face region image belongs includes:
intercepting a target area image corresponding to a preset part from the face area image according to the face key point;
acquiring a pixel dominant color of the target area image;
and determining the tone category to which the color of the preset part belongs according to the dominant color of the pixel.
4. According to 3, the preset part comprises facial skin; according to the key points of the face, intercepting a target area image corresponding to a preset part from the face area image, wherein the target area image comprises the following steps:
intercepting a face image from the face region image according to the face key points;
and according to the key points of the human face, removing an eyebrow region, an eye region and a mouth region from the facial image to obtain a target region image corresponding to the facial skin.
5. According to 3, the preset part comprises an eye shadow part; according to the key points of the face, intercepting a target area image corresponding to a preset part from the face area image, wherein the target area image comprises the following steps:
according to the eye key points included by the face key points, intercepting an eye image from the face region image;
Performing image expansion processing on the eye image for preset times;
and according to the eye key points included in the face key points, matting out the eye areas from the eye image after the expansion processing to obtain a target area image corresponding to the eye shadow part.
6. According to 3, the preset part comprises a mouth; according to the key points of the face, intercepting a target area image corresponding to a preset part from the face area image, wherein the target area image comprises the following steps:
interpolating an upper lip upper edge key point, an upper lip lower edge key point, a lower lip upper edge key point and a lower lip lower edge key point which are included in the face key points respectively to obtain an upper lip upper edge interpolation point, an upper lip lower edge interpolation point, a lower lip upper edge interpolation point and a lower lip lower edge interpolation point;
intercepting an upper lip image from the face region image according to the upper lip upper edge key point, the upper lip upper edge interpolation point, the upper lip lower edge key point and the upper lip lower edge interpolation point;
intercepting a lower lip image from the face region image according to the lower lip upper edge key point, the lower lip upper edge interpolation point, the lower lip lower edge key point and the lower lip lower edge interpolation point;
And splicing the upper lip image and the lower lip image into a target area image corresponding to the mouth.
7. According to 3, determining the hue class to which the color of the preset part belongs according to the pixel dominant color, including:
converting the color space of the pixel dominant color into a preset color space corresponding to the preset part;
determining a hue interval to which each color channel value belongs according to each color channel value of the pixel dominant color in the preset color space;
and determining the tone category corresponding to the tone interval as the tone category to which the color of the preset part belongs.
8. According to 7, the preset site comprises at least one of facial skin, eye shadow site, mouth; the preset color space corresponding to the facial skin comprises an LAB color space, and the preset color space corresponding to the eye shadow part and the mouth part comprises an HSV color space.
9. According to any one of the claims 1 to 8, the preset parts comprise one or more parts of facial skin, eye shadow parts and mouth parts, and the makeup color tone of the user is determined according to the color tone category corresponding to the preset parts, and the method comprises the following steps:
determining a color coefficient corresponding to the tone category of each preset part;
Calculating a makeup color coefficient of the user according to a preset weight corresponding to each preset part and the color coefficient;
and determining the color tone category corresponding to the makeup color coefficient as the makeup color tone of the user.
10. According to any one of 2-8, acquiring a face region image corresponding to the makeup image according to the face key point, wherein the face region image comprises:
performing rotation correction on the makeup image and the face key points according to the face key points;
according to the corrected key points of the human face, an image containing a human face area is intercepted from the corrected makeup image;
and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the makeup image.
11. According to 10, according to the key points of the face, performing rotation correction on the makeup image and the key points of the face, including:
respectively determining a left eye center coordinate and a right eye center coordinate according to a left eye key point and a right eye key point which are included in the face key points;
determining a rotation angle and a rotation center point coordinate corresponding to the makeup image according to the left eye center coordinate and the right eye center coordinate;
And performing rotation correction on the makeup image and the key points of the human face according to the rotation angle and the coordinates of the rotation central point.
12. According to 10, the method for intercepting an image containing a face area from the corrected makeup image according to the corrected face key points comprises:
and according to the corrected key points of the face, carrying out image interception on a face area contained in the corrected makeup image.
13. According to 12, image capturing is performed on the face region included in the corrected makeup image according to the corrected face key point, and the image capturing includes:
determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected face key points;
determining a capturing frame corresponding to a face area in the corrected makeup image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value;
and according to the intercepting frame, intercepting an image containing the face area from the corrected makeup image.
14. According to 13, further comprising:
amplifying the intercepting frame by a preset multiple;
and according to the enlarged intercepting frame, intercepting an image containing the face area from the corrected makeup image.
15. According to 10, further comprising:
and carrying out scaling translation processing on the corrected key points of the human face according to the size of the image containing the human face area and the preset size.
16. According to any one of claims 1 to 8, further comprising:
detecting whether the makeup image at least comprises a complete face image;
if yes, executing the operation of identifying the tone category to which the color of the preset part of the face of the user belongs in the makeup image;
if not, sending prompt information to the terminal of the user, wherein the prompt information is used for prompting the user to provide a makeup image at least comprising a complete face image.
17. A makeup color recognition device, comprising:
the obtaining module is used for obtaining a makeup image of a user;
the recognition module is used for recognizing the tone category to which the color of the preset part of the face of the user belongs in the makeup image;
and the determining module is used for determining the makeup color tone of the user according to the color tone category corresponding to the preset part.
18. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the computer program to implement the method of any of claims 1-16.
19. A computer-readable storage medium, having stored thereon a computer program for execution by a processor to perform the method of any of claims 1-16.
The embodiment of the application provides a makeup color identification method, which is used for identifying the color tone category to which the color of a preset part of a user belongs in a makeup image and further determining the makeup color tone of the user according to the color tone category corresponding to the preset part. The makeup color tone of the user can be accurately identified only through image processing, a deep learning model is not needed for processing, the calculation amount is small, the cost is low, and the processing pressure of a server is reduced.
Referring to fig. 23, the method specifically includes the following steps:
step 901: a makeup image of a user is acquired.
The makeup image of the user comprises at least one face image. The makeup image may be a single image or any one of video frames in the makeup video of the user. The execution subject of the embodiment of the application is the server. The user terminal such as the mobile phone or the computer of the user is provided with a client matched with the service for identifying the makeup color provided by the server. The client is provided with an interface for submitting makeup images. When the user needs to identify the makeup tone corresponding to a certain makeup, the user submits a makeup image through the interface.
Specifically, when the client detects that the user clicks the interface, a makeup color recognition interface is displayed, which may include a photograph button and/or a local file upload interface. And if the client detects that the user clicks the shooting button, a camera on the user terminal is called to shoot the makeup image or the makeup video. And sending the shot makeup image or makeup video to a server. And if the client detects that the user clicks the local file uploading interface, displaying a local folder list so that the user can select a required makeup image or makeup video. And the client transmits the makeup image or the makeup video selected by the user to the server.
And the server receives the makeup image or the makeup video sent by the user terminal. And if the server receives the makeup video sent by the user terminal, taking the currently received current frame image as a makeup image to be identified.
In other embodiments of the present application, after obtaining the makeup image of the user, the server further detects whether the makeup image includes at least one complete face image. If the makeup image includes one or more complete face images, the method provided in this embodiment is used to identify the makeup tone corresponding to each complete face image in the makeup image. And if the makeup image is detected not to contain the face image or all the face images are incomplete, sending prompt information to a terminal of the user. The terminal of the user receives and displays the prompt message to prompt the user to provide a makeup image at least comprising a complete face image. For example, the prompt may be "please keep at least one complete face included in the makeup image".
After the server obtains the makeup image of the user through this step, the makeup tone for identifying the face of the user in the makeup image is determined through the operations of the following steps 902 and 903.
Step 902: and identifying the tone category to which the color of the preset part of the user face belongs in the makeup image.
The makeup image includes at least one complete face image, and since the process of identifying the makeup tone of each complete face image is the same, the embodiment of the present application takes the process of identifying the makeup tone of one complete face image as an example for explanation. And selecting any one complete face image from at least one complete face image included in the makeup images as the face of the user to be processed currently.
The server specifically identifies the hue category to which the color of the preset part of the face of the user belongs through the following operations in steps S1-S3, including:
s1: and detecting the key points of the face corresponding to the makeup image.
The server is configured with a pre-trained detection model for detecting the face key points, and the detection model provides interface services for detecting the face key points. After the server acquires the makeup image of the user, the server calls an interface service for face key point detection, and all face key points of the face of the user in the makeup image are identified through a detection model.
The identified key points of the human face comprise key points on the face contour of the user and key points of the mouth, the nose, the eyes, the eyebrows and other parts. The number of the identified face key points can be 106.
S2: and acquiring a face region image corresponding to the makeup image according to the face key points.
The server specifically obtains a face region image corresponding to the face of the current user through the following operations in steps S20 to S22, including:
s20: and performing rotation correction on the makeup image and the face key points according to the face key points.
Specifically, the left eye center coordinate and the right eye center coordinate are respectively determined according to the left eye key point and the right eye key point included in the face key point corresponding to the user face. And determining all the left eye key points of the left eye region and all the right eye key points of the right eye region from the face key points. And averaging the determined abscissa of all the left-eye key points, averaging the ordinate of all the left-eye key points, forming a coordinate by the average of the abscissa and the average of the ordinate corresponding to the left eye, and determining the coordinate as the center coordinate of the left eye. The right eye center coordinates are determined in the same manner.
And then, according to the left eye center coordinate and the right eye center coordinate, determining a rotation angle and a rotation center point coordinate corresponding to the makeup image. As shown in fig. 4, a horizontal difference dx and a vertical difference dy between the left-eye center coordinate and the right-eye center coordinate are calculated, and a length d of a link between the left-eye center coordinate and the right-eye center coordinate is calculated. And calculating an included angle theta between the two eye connecting lines and the horizontal direction according to the length d of the two eye connecting lines, the horizontal difference value dx and the vertical difference value dy, wherein the included angle theta is the rotation angle corresponding to the makeup image. And then calculating the coordinate of the central point of the connecting line of the two eyes according to the central coordinates of the left eye and the right eye, wherein the coordinate of the central point is the coordinate of the rotating central point corresponding to the makeup image.
And performing rotation correction on the makeup image and the key points of the human face according to the calculated rotation angle and the coordinates of the rotation central point. Specifically, the rotation angle and the rotation center point coordinate are input into a preset function used for calculating a rotation matrix of the picture, where the preset function may be the function cv2.getRotationMatrix2D() in OpenCV. And obtaining a rotation matrix corresponding to the makeup image by calling the preset function. And then calculating the product of the makeup image and the rotation matrix to obtain the corrected makeup image. The operation of correcting the makeup image by using the rotation matrix can also be completed by calling the function cv2.warpAffine() in OpenCV.
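For illustration, the eye-line leveling of step S20 can be sketched with OpenCV as follows; which key point indices belong to which eye depends on the detector, so they are passed in as assumed parameters.

import cv2
import numpy as np

def rotate_correct(image, landmarks, left_eye_idx, right_eye_idx):
    """Level the eye line of the face.  landmarks: (N, 2) float array of face key
    points; left_eye_idx / right_eye_idx: index lists of the left / right eye key points."""
    left_center = landmarks[left_eye_idx].mean(axis=0)    # average of the left-eye key points
    right_center = landmarks[right_eye_idx].mean(axis=0)  # average of the right-eye key points
    dx, dy = right_center - left_center
    angle = np.degrees(np.arctan2(dy, dx))                # angle between the eye line and the horizontal
    center = tuple(map(float, (left_center + right_center) / 2))  # midpoint of the eye line
    M = cv2.getRotationMatrix2D(center, angle, 1.0)       # rotation matrix of the makeup image
    corrected = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
    return corrected, M, angle, center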
For the face key points, each face key point needs to be corrected one by one so as to correspond to the corrected makeup image. When the key points of the face are corrected one by one, two times of coordinate system conversion are required, the coordinate system with the upper left corner of the makeup image as the origin is converted into the coordinate system with the lower left corner as the origin for the first time, and the coordinate system with the lower left corner as the origin is further converted into the coordinate system with the rotation center point coordinate as the origin for the second time, as shown in fig. 5. After two times of coordinate system conversion, the following formula (1) conversion is carried out on each key point of the face, and the rotation correction of the key points of the face can be completed.
x = x₀·cos θ + y₀·sin θ,    y = −x₀·sin θ + y₀·cos θ        (1)

In the formula (1), x₀ and y₀ are respectively the abscissa and the ordinate of a face key point before the rotation correction, x and y are respectively the abscissa and the ordinate of the face key point after the rotation correction, and θ is the rotation angle.
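The sketch below transcribes the two coordinate system conversions and formula (1) for a list of key points. The conversion back to image coordinates after the rotation is implied rather than spelled out in the text, and the sign convention of θ (it must match the direction in which the image itself was rotated) is an assumption here.

import numpy as np

def rotate_keypoints(landmarks, theta, center, image_height):
    """theta: rotation angle in radians; center: (cx, cy) rotation center point."""
    cx, cy = center
    corrected = []
    for x0, y0 in landmarks:
        # 1) top-left origin (y down) -> bottom-left origin (y up) -> rotation-center origin
        xr = x0 - cx
        yr = (image_height - y0) - (image_height - cy)
        # 2) formula (1)
        x = xr * np.cos(theta) + yr * np.sin(theta)
        y = -xr * np.sin(theta) + yr * np.cos(theta)
        # 3) back to the original top-left image coordinate system (added assumption)
        corrected.append((x + cx, image_height - (y + (image_height - cy))))
    return np.array(corrected)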
The corrected makeup image and the key points of the face are based on the whole image, and the whole image not only includes the face information of the user but also includes other redundant image information, so that the face region of the corrected image needs to be cut out through the following step S21.
S21: and according to the corrected key points of the face, intercepting an image containing a face area from the corrected makeup image.
And according to the corrected key points of the face, carrying out image interception on the area containing the face of the user in the corrected makeup image. Firstly, determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected face key points. And then determining a corresponding capture frame of the face area in the corrected makeup image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value. Specifically, the minimum abscissa value and the minimum ordinate value form a coordinate point, and the coordinate point is used as a top point of the top left corner of the capturing frame corresponding to the face area. And forming another coordinate point by using the maximum abscissa value and the maximum ordinate value, and taking the coordinate point as the top of the lower right corner of the capturing frame corresponding to the face region. And determining the position of a capturing frame in the corrected makeup image according to the top left corner vertex and the bottom right corner vertex, and capturing the image in the capturing frame from the corrected makeup image, namely capturing the image containing the face of the user.
In other embodiments of the present application, in order to ensure that all face areas of the user are intercepted, and avoid the occurrence of a situation where the subsequent makeup progress detection error is large due to incomplete interception, the intercepting frame may be further enlarged by a preset multiple, where the preset multiple may be 1.15 or 1.25, and the like. The embodiment of the application does not limit the specific value of the preset multiple, and the preset multiple can be set according to requirements in practical application. And after the interception frame is amplified by a preset multiple to the periphery, intercepting the image in the amplified interception frame from the corrected makeup image, thereby intercepting the image containing the complete face area of the user.
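A sketch of the capture frame construction and enlargement is given below. Enlarging the frame around its center and clipping it to the image borders are assumptions made for robustness; the text only states that the frame is enlarged by a preset multiple such as 1.15.

import numpy as np

def crop_face_region(image, landmarks, scale=1.15):
    """Bounding box of the corrected key points, enlarged by `scale` and clipped
    to the image borders; returns the crop and its top-left offset."""
    x_min, y_min = landmarks.min(axis=0)
    x_max, y_max = landmarks.max(axis=0)
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    half_w = (x_max - x_min) / 2 * scale
    half_h = (y_max - y_min) / 2 * scale
    h, w = image.shape[:2]
    x1, y1 = max(int(cx - half_w), 0), max(int(cy - half_h), 0)
    x2, y2 = min(int(cx + half_w), w), min(int(cy + half_h), h)
    return image[y1:y2, x1:x2], (x1, y1)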
S22: and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the makeup image.
After the image containing the face area of the user is intercepted from the makeup image in the mode, the image containing the face area is zoomed to the preset size, and the face area image corresponding to the makeup image is obtained. The predetermined size may be 390 × 390, 400 × 400, or the like. The embodiment of the application does not limit the specific value of the preset dimension, and the specific value can be set according to requirements in practical application.
In order to adapt the key points of the face to the zoomed face region image, the captured image containing the face region is zoomed to a preset size, and then the corrected key points of the face are zoomed and translated according to the size of the image containing the face region before zooming and the preset size. Specifically, the translation direction and the translation distance of each face key point are determined according to the size of the image containing the face area before the zooming and the preset size to which the image needs to be zoomed, then, the translation operation is respectively carried out on each face key point according to the translation direction and the translation distance corresponding to each face key point, and the coordinates of each face key point after the translation are recorded.
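The scaling and translation of the key points can be sketched as follows, assuming the top-left offset of the capture frame is known; mapping the points by translating first and then scaling is an assumption about the exact order of operations.

import cv2

def resize_face_region(face_img, landmarks, offset, target=390):
    """Scale the cropped face region to the preset size and move the key points
    into the same coordinate system (translate by the crop offset, then scale)."""
    h, w = face_img.shape[:2]
    resized = cv2.resize(face_img, (target, target))
    x_off, y_off = offset
    scaled_pts = [((x - x_off) * target / w, (y - y_off) * target / h)
                  for x, y in landmarks]
    return resized, scaled_pts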
The face region image is obtained from the makeup image in the mode, and the key points of the face are matched with the obtained face region image through operations such as rotation correction, translation scaling and the like. And then the hue class to which the color of the preset part of the user' S face belongs is identified through the operation of step S3.
S3: and identifying the tone category to which the color of the preset part of the user face in the face region image belongs.
Specifically, the following steps S30 to S32 are performed to identify the hue class corresponding to the preset portion, including:
S30: and intercepting a target area image corresponding to the preset part from the face area image according to the face key point.
In the embodiment of the present application, the preset portion may include one or more of facial skin, an eye shadow portion, a mouth portion, and the like.
For the facial skin, firstly, a face image is intercepted from the face region image according to all the face key points of the face of the user, so as to remove the hair and background regions. Then, the eyebrow regions are matted out from the face image based on all the eyebrow key points included in the face key points. The eye regions are matted out from the face image according to all the eye key points included in the face key points. The mouth region is matted out from the face image according to all the mouth key points included in the face key points. In this way, a target area image including only the facial skin is obtained, which facilitates the subsequent identification of the facial skin tone, eliminates the interference of the colors of other parts on the facial skin tone identification, and improves the accuracy of identifying the facial skin tone.
For the eye shadow part, firstly, an eye image is intercepted from the face region image according to the eye key points included in the face key points; the eye image comprises an upper eyelid region, a lower eyelid region and an eye region. Eye shadow makeup is located in the upper and lower eyelid areas, where the colors of the whites of the eyes and the eyeballs interfere with color recognition of the eye shadow, so the eye area needs to be removed. Firstly, image expansion (dilation) processing is performed on the obtained eye image a preset number of times, so that the eye image after the expansion processing includes the eye shadow part of the eye makeup. The preset number of times may be 3 or 4, etc. Then, according to the eye key points included in the face key points, the eye area is matted out from the eye image after the expansion processing, so as to obtain the target area image corresponding to the eye shadow part. This facilitates the subsequent identification of the eye shadow tone, eliminates the interference of the eye color on the eye shadow tone identification, and improves the accuracy of eye shadow tone identification.
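One mask-based reading of this eye shadow extraction is sketched below; the dilation kernel size is hypothetical, and representing the matting with masks rather than cropped sub-images is an implementation choice, not something stated in the text.

import cv2
import numpy as np

def eyeshadow_region(face_img, eye_points, dilate_times=3, ksize=15):
    """Dilate the eye contour so the mask reaches the eyelids, then subtract the
    eye itself, leaving only the eye-shadow area."""
    h, w = face_img.shape[:2]
    eye_mask = np.zeros((h, w), np.uint8)
    cv2.fillPoly(eye_mask, [np.int32(eye_points)], 255)       # eye contour polygon
    kernel = np.ones((ksize, ksize), np.uint8)
    dilated = cv2.dilate(eye_mask, kernel, iterations=dilate_times)
    shadow_mask = cv2.subtract(dilated, eye_mask)              # eyelid ring = dilated minus eye
    return cv2.bitwise_and(face_img, face_img, mask=shadow_mask)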
For the mouth, firstly, the key points of the upper lip upper edge, the upper lip lower edge, the lower lip upper edge and the lower lip lower edge are determined from the key points of the face corresponding to the face of the user. And performing linear interpolation on key points of the upper edge of the upper lip to obtain a plurality of upper edge interpolation points of the upper lip. And interpolating key points of the lower edge of the upper lip to obtain a plurality of interpolation points of the lower edge of the upper lip. And intercepting the upper lip image from the face region image according to the upper lip upper edge key point, the upper lip upper edge interpolation point, the upper lip lower edge key point and the upper lip lower edge interpolation point.
Specifically, the key points of the upper edges of the upper lips and the interpolation points of the upper edges of the upper lips are sequentially connected to obtain a smoother curve, and the curve is the boundary line of the upper edges of the upper lips. And the lower edge key points of the plurality of upper lips and the lower edge interpolation points of the plurality of upper lips are sequentially connected to obtain a smoother curve, and the curve is the boundary line of the lower edge of the upper lips. The area enclosed by the boundary line of the upper edge of the upper lip and the boundary line of the lower edge of the upper lip is the area of the upper lip. And intercepting the upper lip area from the face area image to obtain an upper lip image.
And performing linear interpolation on the key points of the upper edge of the lower lip to obtain a plurality of upper edge interpolation points of the lower lip. And performing linear interpolation on the lower lip lower edge key points to obtain a plurality of lower lip lower edge interpolation points. And intercepting the lower lip image from the face region image according to the lower lip upper edge key point, the lower lip upper edge interpolation point, the lower lip lower edge key point and the lower lip lower edge interpolation point.
Specifically, the key points of the upper edges of the plurality of lower lips and the interpolation points of the upper edges of the plurality of lower lips are sequentially connected to obtain a smoother curve, and the curve is the boundary line of the upper edge of the lower lips. And connecting the plurality of lower lip lower edge key points with the plurality of lower lip lower edge interpolation points in sequence to obtain a smoother curve, wherein the curve is the boundary line of the lower edge of the lower lip. The area enclosed by the boundary line of the upper edge of the lower lip and the boundary line of the lower edge of the lower lip is the lower lip area. And intercepting the lower lip area from the face area image to obtain a lower lip image.
And splicing the obtained upper lip image and the lower lip image into a target area image corresponding to the mouth. Therefore, the region between the upper lip and the lower lip can be removed, particularly, the image of the region in the oral cavity can be removed under the condition of opening the mouth, the region between the upper lip and the lower lip is prevented from influencing the recognition of the color tone of the mouth, and the accuracy of recognizing the color tone of the lipstick of the mouth is improved.
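The lip interpolation and stitching can be sketched as follows, assuming the four groups of lip edge key points are available as ordered point lists; the interpolation density is illustrative, and masks are again used as a stand-in for image cropping and splicing.

import cv2
import numpy as np

def densify(points, per_segment=5):
    """Linear interpolation between consecutive key points for a smoother contour."""
    pts = np.asarray(points, dtype=np.float32)
    dense = []
    for p, q in zip(pts[:-1], pts[1:]):
        for t in np.linspace(0.0, 1.0, per_segment, endpoint=False):
            dense.append(p + t * (q - p))
    dense.append(pts[-1])
    return np.array(dense)

def mouth_region(face_img, upper_top, upper_bottom, lower_top, lower_bottom):
    """Upper lip = area between the densified upper-lip edges; lower lip likewise;
    merging the two masks excludes the area between the lips (open mouth)."""
    h, w = face_img.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    upper = np.vstack([densify(upper_top), densify(upper_bottom)[::-1]])
    lower = np.vstack([densify(lower_top), densify(lower_bottom)[::-1]])
    cv2.fillPoly(mask, [np.int32(upper), np.int32(lower)], 255)
    return cv2.bitwise_and(face_img, face_img, mask=mask)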
The embodiment of the application does not specially limit the preset part; the preset part may also be another part, and the target area image corresponding to the preset part is extracted from the face area image according to the face key points, so that the color of the preset part is recognized from the target area image.
S31: and acquiring the pixel dominant color of the target area image.
After the target area image corresponding to the preset portion is obtained in step S30, dominant hue extraction is performed on the target area image, and the pixel dominant color of the target area image is determined. Specifically, the target area image read by OpenCV is converted from the BGRA color space into the RGBA color space conforming to the PIL reading format, then a get_palette(image) function is called to establish a palette, similar colors are gathered by using a median cut algorithm, and the pixel dominant color of the target area image is obtained.
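A sketch of the dominant color extraction is given below. The get_palette(image) function mentioned above appears to be a project-internal helper, so PIL's built-in median-cut quantizer is used here as a stand-in; in practice the matted-out (black) pixels would also need to be excluded before counting.

import cv2
from PIL import Image

def dominant_color(target_bgra):
    """Median-cut dominant colour of a target area image, returned as (R, G, B)."""
    rgba = cv2.cvtColor(target_bgra, cv2.COLOR_BGRA2RGBA)
    img = Image.fromarray(rgba).convert("RGB")
    quantized = img.quantize(colors=8)        # default quantize method for RGB images is median cut
    counts = quantized.getcolors()            # [(pixel_count, palette_index), ...]
    palette = quantized.getpalette()          # flat list [r, g, b, r, g, b, ...]
    _, idx = max(counts)                      # most frequent cluster
    return tuple(palette[idx * 3: idx * 3 + 3])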
S32: and determining the tone category to which the color of the preset part belongs according to the dominant color of the pixel of the target area image.
The pixel dominant color of the target area image comprises values of three color channels of R, G and B, and has respective characteristics for different preset part makeup colors, for example, the makeup color of facial skin may comprise different colors of pink, fair, natural, wheat, bronze, dark, and the like, and for the lighter color of pink, fair, natural, and the like, it is difficult to judge the color tone category corresponding to the pixel dominant color through RGB color space. While the color tones of lipstick on the mouth may include pink, red, orange, reddish brown, etc., the color tones with small difference like pink, red, etc. are difficult to distinguish by the RGB color space. Therefore, the preset color spaces corresponding to different preset parts are configured in advance, and different makeup colors of the preset parts can be distinguished more easily through the preset color spaces corresponding to the preset parts. In addition, a plurality of tone categories corresponding to different preset positions are configured in advance, and a tone interval corresponding to each tone category is configured. The hue interval corresponding to the hue category comprises the value intervals of all color channels of the colors belonging to the hue category in the corresponding preset color space.
After the primary color of the pixel of the target area image is obtained in step S31, the color space of the primary color of the pixel of the target area image is RGB converted into the preset color space corresponding to the preset portion. And then determining the hue interval to which each color channel value belongs according to each color channel value of the pixel dominant color in the preset color space. And determining the tone category corresponding to the tone interval as the tone category to which the color of the preset part belongs.
In the embodiment of the application, the preset part comprises at least one of facial skin, an eye shadow part and a mouth; the preset color space corresponding to the facial skin may include an LAB color space, which is composed of one luminance channel and two color channels, each color being represented by three values L, a, B. Where L denotes luminance, a denotes a component from green to red, and B denotes a component from blue to yellow. Lighter colors in the LAB color space can also be accurately distinguished. The preset color space corresponding to the eye shadow part and the mouth part can comprise an HSV color space, and colors with smaller hue difference under the HSV color space can be accurately distinguished.
Therefore, in this step, for the facial skin region, the pixel dominant color of the target region image corresponding to the facial skin is converted from the RGB color space to the LAB color space. And determining a hue interval to which the L, A and B channel values belong according to the L, A and B channel values of the pixel dominant colors corresponding to the facial skin in the LAB color space. And determining the tone category corresponding to the tone interval as the tone category to which the skin color of the face of the user belongs.
And for the eye shadow part, converting the pixel dominant color of the target area image corresponding to the eye shadow part from the RGB color space to the HSV color space. And determining the hue interval to which the H, S and V channel values belong according to the H, S and V channel values of the pixel main color corresponding to the eye shadow part in the HSV color space. And determining the tone category corresponding to the tone interval as the tone category to which the eye shadow color of the face of the user belongs.
For the mouth, converting the pixel dominant color of the target area image corresponding to the mouth from the RGB color space to the HSV color space. And determining the hue interval to which the H, S and V channel values belong according to the H, S and V channel values of the pixel dominant color corresponding to the mouth in the HSV color space. And determining the tone category corresponding to the hue interval as the tone category to which the lipstick color of the face of the user belongs.
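The hue category lookup can be sketched as follows. The interval table is pre-configured in the server and is not reproduced in the text, so it is passed in here as an assumed parameter, and the part names are illustrative.

import cv2
import numpy as np

def hue_category(dominant_rgb, part, intervals):
    """intervals: {part: [(category, (low, high)), ...]} with per-channel ranges
    expressed in that part's preset colour space."""
    pixel = np.uint8([[dominant_rgb]])                         # 1x1 image for cv2.cvtColor
    if part == "facial_skin":
        channels = cv2.cvtColor(pixel, cv2.COLOR_RGB2LAB)[0, 0]   # LAB for facial skin
    else:                                                      # eye shadow part / mouth
        channels = cv2.cvtColor(pixel, cv2.COLOR_RGB2HSV)[0, 0]   # HSV for eye shadow and lipstick
    for category, (low, high) in intervals[part]:
        if all(l <= c <= h for c, l, h in zip(channels, low, high)):
            return category
    return None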
Step 903: and determining the makeup color tone of the user according to the color tone category corresponding to the preset part.
In the embodiment of the present application, the predetermined portion is one or more portions, and the color coefficient corresponding to each hue category is pre-configured, for example, the color coefficient corresponding to fair hue may be 0.2, and the color coefficient corresponding to brown orange hue may be 0.3, and so on. After the hue class corresponding to each preset part is identified through the steps, the color coefficient corresponding to the hue class of each preset part is respectively determined. And calculating the makeup color coefficient of the user according to the preset weight and the color coefficient corresponding to each preset part. Specifically, products of preset weights and color coefficients corresponding to each preset part are respectively calculated, then the products corresponding to each preset part are summed, and the obtained sum value is used as the makeup color coefficient of the user.
The embodiment of the application also configures color coefficient sections corresponding to the color tone types of different makeup in advance. After the makeup color coefficient of the user is determined in the above way, the color coefficient section to which the makeup color coefficient belongs is determined, and the color tone category corresponding to the color coefficient section is determined as the makeup color tone of the user. The makeup tone may be considered as a dominant tone of the makeup of the face of the user.
As an example, the preset portions include the facial skin, the eye shadow portion and the mouth. After the skin tone of the facial skin, the eye shadow tone of the eye shadow portion and the lipstick tone of the mouth are recognized, the color coefficient corresponding to the skin tone, the color coefficient corresponding to the eye shadow tone and the color coefficient corresponding to the lipstick tone are determined. Assuming that the color coefficient corresponding to the "fair" skin tone is 0.2, the color coefficient corresponding to the "brown orange" eye shadow tone is 0.3, the color coefficient corresponding to the "nude pink" lipstick tone is 0.5, and the preset weights of the facial skin, the eye shadow portion and the mouth are all 0.33, then 0.33 × 0.2 + 0.33 × 0.3 + 0.33 × 0.5 = 0.33 is calculated, that is, the makeup color coefficient of the user is 0.33. If the color coefficient interval corresponding to the nude pink tone is 0-10, the makeup tone of the user is determined to be the nude pink tone.
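The weighted calculation in this example can be reproduced as follows; the coefficient and weight values, as well as the part and tone names, are the illustrative ones assumed above.

def makeup_color_coefficient(part_tones, tone_coeff, part_weight):
    """Weighted sum of the colour coefficients of the recognised tones."""
    return sum(part_weight[p] * tone_coeff[t] for p, t in part_tones.items())

tone_coeff = {"fair": 0.2, "brown orange": 0.3, "nude pink": 0.5}
part_weight = {"facial_skin": 0.33, "eye_shadow": 0.33, "mouth": 0.33}
tones = {"facial_skin": "fair", "eye_shadow": "brown orange", "mouth": "nude pink"}
print(makeup_color_coefficient(tones, tone_coeff, part_weight))  # 0.33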
As another example, the preset portion in the embodiment of the present application may include only the facial skin and the mouth, and the makeup color tone of the user is identified by combining the facial skin color and the lipstick color. Alternatively, the preset portion may include only the facial skin and the eye shadow portion, and the makeup color tone of the user is identified by combining the facial skin color and the eye shadow color. Alternatively, the preset portion may include only the eye shadow portion and the mouth, and the makeup color tone of the user is identified by combining the lipstick color and the eye shadow color. Alternatively, the preset portion may include only the facial skin, and the makeup color tone of the user is identified by the facial skin color. Alternatively, the preset portion may include only the eye shadow portion, and the makeup color tone of the user is identified by the eye shadow color. Alternatively, the preset portion may include only the mouth, and the makeup color tone of the user is identified by the lipstick color.
In the embodiment of the application, the color tone category to which the color of the preset part of the user belongs in the makeup image is identified, and then the makeup color tone of the user is determined according to the color tone category corresponding to the preset part. The method includes extracting color tone of at least one part of a user's face, and automatically recognizing the user's makeup color tone using the extracted color tone. Furthermore, through the makeup combination of one or more dimensions such as skin color, eye shadow, lipstick and the like, only the color information of the part with larger influence on the makeup color is extracted, the influence of other makeup effects on the makeup color tone is eliminated, and the uniformity and the accuracy of the identification of the makeup color tone are increased. And the makeup color tone of the user can be accurately identified only through image processing, a deep learning model is not required for processing, the calculation amount is small, the cost is low, the processing pressure of a server is reduced, and the dependence of an algorithm on hardware resources and the input cost of manpower are reduced.
The embodiment of the application also provides a makeup color recognition device which is used for executing the makeup color recognition method provided by any one of the embodiments. As shown in fig. 24, the apparatus includes:
an obtaining module 100, configured to obtain a makeup image of a user;
the recognition module 200 is used for recognizing the tone category to which the color of the preset part of the user face belongs in the makeup image;
the determining module 300 is configured to determine a makeup color tone of the user according to the color tone category corresponding to the preset portion.
The recognition module 200 is used for detecting face key points corresponding to the makeup images; acquiring a face region image corresponding to the makeup image according to the face key points; and identifying the tone category to which the color of the preset part of the user face in the face region image belongs.
The recognition module 200 is configured to intercept a target area image corresponding to a preset part from the face area image according to the face key point; acquiring the pixel dominant color of a target area image; and determining the tone category to which the color of the preset part belongs according to the dominant color of the pixel.
The predetermined location comprises facial skin; the recognition module 200 is configured to intercept a face image from the face region image according to the face key point; and according to the key points of the human face, removing the eyebrow region, the eye region and the mouth region from the facial image to obtain a target region image corresponding to the facial skin.
The preset part comprises an eye shadow part; the recognition module 200 is configured to intercept an eye image from the face region image according to eye key points included in the face key points; perform image dilation on the eye image a preset number of times; and remove, according to the eye key points, the eye region from the dilated eye image to obtain a target region image corresponding to the eye shadow part.
The preset part comprises a mouth part; the identification module 200 is configured to interpolate upper lip upper edge key points, upper lip lower edge key points, lower lip upper edge key points, and lower lip lower edge key points included in the key points of the human face, respectively, to obtain upper lip upper edge interpolation points, upper lip lower edge interpolation points, lower lip upper edge interpolation points, and lower lip lower edge interpolation points; intercepting an upper lip image from the face region image according to the upper lip upper edge key point, the upper lip upper edge interpolation point, the upper lip lower edge key point and the upper lip lower edge interpolation point; intercepting a lower lip image from the face region image according to the lower lip upper edge key point, the lower lip upper edge interpolation point, the lower lip lower edge key point and the lower lip lower edge interpolation point; and splicing the upper lip image and the lower lip image into a target area image corresponding to the mouth.
A determining module 300, configured to convert a color space of a dominant color of a pixel into a preset color space corresponding to a preset location; determining a hue interval to which each color channel value belongs according to each color channel value of the pixel dominant color in a preset color space; and determining the tone category corresponding to the tone interval as the tone category to which the color of the preset part belongs.
The preset part comprises at least one of facial skin, eye shadow part and mouth; the preset color space corresponding to the facial skin comprises an LAB color space, and the preset color space corresponding to the eye shadow part and the mouth part comprises an HSV color space.
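As an illustration of the above, a minimal Python/OpenCV sketch of mapping the pixel dominant color of a target area to a tone category follows: the facial-skin dominant color is converted to the LAB color space and the lipstick dominant color to the HSV color space, and each channel value is compared against tone intervals. The interval boundaries and the warm/neutral/cool category names are illustrative assumptions, not values specified by this application.

```python
import cv2
import numpy as np

def dominant_color(region_bgr, mask):
    """Mean BGR color of the masked (non-zero) pixels, returned as a 1x1 patch."""
    pixels = region_bgr[mask > 0]
    return pixels.mean(axis=0).astype(np.uint8).reshape(1, 1, 3)

def skin_tone_category(dominant_bgr):
    """Map the dominant skin color to a tone category in LAB space.
    The interval boundaries below are illustrative assumptions."""
    lab = cv2.cvtColor(dominant_bgr, cv2.COLOR_BGR2LAB)[0, 0]
    b = int(lab[2])           # b channel: yellow-blue axis (offset by 128 in OpenCV)
    if b >= 150:              # strong yellow component -> warm
        return "warm"
    if b <= 135:              # weak yellow component -> cool
        return "cool"
    return "neutral"

def lip_tone_category(dominant_bgr):
    """Map the dominant lipstick color to a tone category in HSV space.
    Hue intervals are illustrative assumptions (OpenCV hue range: 0..179)."""
    h = int(cv2.cvtColor(dominant_bgr, cv2.COLOR_BGR2HSV)[0, 0, 0])
    if h < 10 or h >= 170:    # reds and oranges
        return "warm"
    if 120 <= h < 170:        # purples and blue-toned pinks
        return "cool"
    return "neutral"
```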
The preset parts comprise one or more parts of facial skin, eye shadow parts and mouths, and the determining module 300 is used for determining the color coefficient corresponding to the tone category of each preset part; calculating a makeup color coefficient of the user according to the preset weight and the color coefficient corresponding to each preset part; and determining the tone category corresponding to the makeup color coefficient as the makeup tone of the user.
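A hedged sketch of the weighted combination just described is given below; the numeric color coefficients assigned to each tone category, the preset per-part weights, and the decision thresholds are all illustrative assumptions.

```python
# Map each tone category to a numeric color coefficient (illustrative values).
TONE_COEFFICIENT = {"cool": -1.0, "neutral": 0.0, "warm": 1.0}

# Preset weights per part (illustrative; normalized over the parts actually used).
PART_WEIGHT = {"facial_skin": 0.5, "eye_shadow": 0.2, "mouth": 0.3}

def makeup_tone(part_tones):
    """part_tones: dict such as {"facial_skin": "warm", "mouth": "cool"}.
    Returns the overall makeup tone from the weighted color coefficient."""
    parts = list(part_tones)
    total_weight = sum(PART_WEIGHT[p] for p in parts)
    coeff = sum(PART_WEIGHT[p] / total_weight * TONE_COEFFICIENT[part_tones[p]]
                for p in parts)
    if coeff > 0.25:          # decision thresholds are assumptions
        return "warm"
    if coeff < -0.25:
        return "cool"
    return "neutral"

print(makeup_tone({"facial_skin": "warm", "mouth": "warm", "eye_shadow": "neutral"}))
```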
The recognition module 200 is used for performing rotation correction on the makeup image and the key points of the face according to the key points of the face; according to the corrected key points of the face, an image containing a face area is intercepted from the corrected makeup image; and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the makeup image.
The recognition module 200 is configured to determine a left-eye center coordinate and a right-eye center coordinate according to a left-eye key point and a right-eye key point included in the face key point; determining a rotation angle and a rotation center point coordinate corresponding to the makeup image according to the left eye center coordinate and the right eye center coordinate; and performing rotation correction on the makeup image and the key points of the face according to the rotation angle and the coordinates of the rotation center point.
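The rotation correction described above may be sketched as follows with OpenCV; the landmark index sets used to select the left-eye and right-eye key points depend on the face key point model and are therefore assumptions.

```python
import cv2
import numpy as np

def rotate_to_horizontal(image_bgr, landmarks, left_eye_idx, right_eye_idx):
    """Rotate the makeup image (and its key points) so the eye line becomes horizontal.
    landmarks: (N, 2) float array; *_idx: index lists of the eye key points."""
    left_center = landmarks[left_eye_idx].mean(axis=0)
    right_center = landmarks[right_eye_idx].mean(axis=0)
    dy = right_center[1] - left_center[1]
    dx = right_center[0] - left_center[0]
    angle = float(np.degrees(np.arctan2(dy, dx)))          # rotation angle
    center = (left_center + right_center) / 2              # rotation center point
    m = cv2.getRotationMatrix2D((float(center[0]), float(center[1])), angle, 1.0)
    h, w = image_bgr.shape[:2]
    rotated = cv2.warpAffine(image_bgr, m, (w, h))
    ones = np.ones((landmarks.shape[0], 1))
    rotated_pts = np.hstack([landmarks, ones]) @ m.T       # rotate the key points too
    return rotated, rotated_pts
```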
And the recognition module 200 is configured to perform image interception on a face region included in the corrected makeup image according to the corrected face key point.
The recognition module 200 is configured to determine a minimum abscissa value, a minimum ordinate value, a maximum abscissa value, and a maximum ordinate value from the corrected face key points; determining an intercepting frame corresponding to a face area in the corrected makeup image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value; and according to the intercepting frame, intercepting an image containing a face area from the corrected makeup image.
The recognition module 200 is further configured to amplify the capture frame by a preset multiple; and intercepting an image containing a face area from the corrected makeup image according to the amplified intercepting frame.
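A sketch of the interception frame construction and enlargement described in the two preceding paragraphs follows; the enlargement multiple of 1.2 and the final preset size of 512x512 are assumptions chosen for illustration.

```python
import numpy as np

def face_crop_box(landmarks, img_w, img_h, scale=1.2):
    """Bounding box of the corrected key points, enlarged by a preset multiple
    and clipped to the image, returned as (x0, y0, x1, y1)."""
    x_min, y_min = landmarks.min(axis=0)
    x_max, y_max = landmarks.max(axis=0)
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    half_w = (x_max - x_min) * scale / 2
    half_h = (y_max - y_min) * scale / 2
    x0, y0 = max(0, int(cx - half_w)), max(0, int(cy - half_h))
    x1, y1 = min(img_w, int(cx + half_w)), min(img_h, int(cy + half_h))
    return x0, y0, x1, y1

# Usage: crop = image[y0:y1, x0:x1], then resize the crop to the preset size,
# e.g. cv2.resize(crop, (512, 512)), and scale/translate the key points accordingly.
```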
The recognition module 200 is further configured to perform scaling and translation processing on the corrected key points of the face according to the size of the image including the face region and a preset size.
The device also includes: the complete face detection module is used for detecting whether the makeup image at least comprises a complete face image; if yes, performing operation of identifying the tone category to which the color of the preset part of the user face in the makeup image belongs; if not, sending prompt information to a terminal of the user, wherein the prompt information is used for prompting the user to provide a makeup image at least comprising a complete face image.
The makeup color recognition device provided by the embodiment of the application and the makeup color recognition method provided by the embodiment of the application have the same beneficial effects as the method adopted, operated or realized by the stored application program.
The embodiment of the application also provides electronic equipment for implementing the cosmetic color identification method. A schematic diagram of an electronic device provided by some embodiments of the present application is shown. The electronic device includes: the system comprises a processor, a memory, a bus and a communication interface, wherein the processor, the communication interface and the memory are connected through the bus; the memory is stored with a computer program which can run on the processor, and the processor executes the makeup color identification method provided by any one of the previous embodiments when running the computer program.
The Memory may include a Random Access Memory (RAM) and a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the apparatus and at least one other network element is realized through at least one communication interface (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, and the like can be used.
The bus may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory is used for storing a program, and the processor executes the program after receiving an execution instruction, and the method for identifying the makeup color disclosed by any one of the embodiments of the application can be applied to or realized by the processor.
The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and combines hardware thereof to complete the steps of the method.
The electronic equipment provided by the embodiment of the application and the method for identifying the makeup color provided by the embodiment of the application have the same beneficial effects as the method adopted, operated or realized by the electronic equipment.
The embodiment of the present application further provides a computer-readable storage medium corresponding to the method for identifying makeup color provided by the foregoing embodiment, wherein the computer-readable storage medium is an optical disc, and a computer program (i.e., a program product) is stored on the optical disc, and when the computer program is executed by a processor, the computer program executes the method for identifying makeup color provided by any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memories (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above-mentioned embodiments of the present application has the same beneficial effects as the method adopted, operated or implemented by the application program stored in the computer-readable storage medium, based on the same inventive concept as the method for identifying the makeup color provided by the embodiments of the present application.
Example nine
1. An image processing method comprising:
detecting key point coordinates corresponding to the face flaws in the face image of the user;
generating a flaw texture mapping according to the key point coordinates corresponding to the facial flaws and a preset material image;
and generating a concealed face image corresponding to the face image according to the face image and the defective texture mapping.
2. According to 1, generating a flaw texture map according to the key point coordinates corresponding to the facial flaws and a preset material image, and the method comprises the following steps:
acquiring a blank texture image corresponding to the face image;
according to the key point coordinates corresponding to the facial flaws, positioning positions corresponding to the facial flaws on the blank texture image;
and mapping the preset material image at a position corresponding to the facial flaw on the blank texture image to obtain a flaw texture mapping.
3. According to 1, generating a concealed face image corresponding to the face image according to the face image and the defective texture map comprises:
blurring the face image to obtain a buffing image corresponding to the face image;
and according to the flaw texture mapping, carrying out image fusion on a flaw area in the face image and a flaw area with the same position in the buffing image to obtain a concealed face image corresponding to the face image.
4. According to 3, according to the flaw texture mapping, image fusion is carried out on the flaw area in the face image and the flaw area with the same position in the buffing image, and the method comprises the following steps:
obtaining the transparency A value of a pixel point at the coordinate of a first key point from the flaw texture mapping; the first key point coordinate is any one of the key point coordinates corresponding to the facial blemish;
and according to the transparency A value, carrying out fusion processing on the pixel point at the first key point coordinate in the face image and the pixel point at the first key point coordinate in the buffing image.
5. According to 4, according to the transparency A value, the fusion processing is carried out on the pixel point at the first key point coordinate in the face image and the pixel point at the first key point coordinate in the buffing image, and the fusion processing comprises the following steps:
acquiring a first RGB color value of a pixel point at the first key point coordinate from the face image, and acquiring a second RGB color value of the pixel point at the first key point coordinate from the buffing image;
calculating a fusion pixel value obtained by fusing a pixel point at the first key point coordinate in the face image and a pixel point at the first key point coordinate in the buffing image according to the first RGB color value, the second RGB color value and the transparency A value;
And resetting the current pixel value of the pixel point at the first key point coordinate in the face image as the fusion pixel value.
6. According to 5, calculating a fusion pixel value after the pixel point at the first key point coordinate in the face image is fused with the pixel point at the first key point coordinate in the buffing image according to the first RGB color value, the second RGB color value and the transparency A value, and including:
calculating a first product between the first RGB color value and the transparency A value and a second product between the second RGB color value and the transparency A value; calculating an average of the first product and the second product;
and determining the average value as a fusion pixel value after the pixel point at the first key point coordinate in the face image is fused with the pixel point at the first key point coordinate in the buffing image.
7. According to any one of 1 to 6, the preset material image is an image with gradually changed transparency, and the transparency of the pixel points is gradually decreased from the edge of the preset material image to the center of the preset material image.
8. According to any one of 1 to 6, further comprising:
detecting whether the face image contains a face area;
If yes, executing the operation of detecting the key point coordinates corresponding to the facial flaws in the facial image of the user;
if not, sending prompt information to the terminal of the user, wherein the prompt information is used for prompting the user that the face image needs to contain a face area.
9. An image processing apparatus comprising:
the detection module is used for detecting the coordinates of key points corresponding to the facial flaws in the facial image of the user;
the first generation module is used for generating a flaw texture mapping according to the key point coordinates corresponding to the facial flaws and a preset material image;
and the second generation module is used for generating a concealing face image corresponding to the face image according to the face image and the flaw texture mapping.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the computer program to implement the method of any of claims 1-8.
11. A computer-readable storage medium having stored thereon a computer program for execution by a processor to perform the method of any of claims 1-8.
At present, the demand for retouching functions keeps growing; for example, after taking a picture, users usually want blemishes such as pockmarks and spots on the face to be removed by a retouching function. In the related art, the user is usually required to blur the defective area in the face image pixel by pixel, so the user has to locate the blemish in the image and manually trigger the blurring, which is both inaccurate and inefficient.
The embodiment of the application provides an image processing method, which is used for automatically identifying facial flaws such as pox and spots in a face image and performing material mapping on the facial flaws in a blank texture image corresponding to the face image to obtain a flaw texture mapping. And automatically generating a concealing face image with a concealing effect based on the flaw texture mapping. Therefore, the user does not need to manually optimize each pixel, the processing performance is greatly improved, and the effect of concealing the face image in real time can be obtained.
Referring to fig. 25, the method specifically includes the steps of:
step 1001: and detecting the coordinates of key points corresponding to the facial flaws in the facial image of the user.
The execution subject in the embodiment of the present application is a server or any other terminal capable of providing a blemish-concealing retouching function.
When the user needs to use the concealing and retouching function, the user sends the face image to be retouched to the server. Specifically, a client matched with the concealing and retouching function provided by the server may be installed on a user terminal such as the user's mobile phone or computer, and an interface for submitting the image to be processed is provided in the client. When the client detects that the user clicks the interface, a retouching interface is displayed, which may include a shooting button and/or a local file uploading interface. When the client detects that the user clicks the shooting button, the camera on the user terminal is called to shoot the face image of the user. When the client detects that the user clicks the local file uploading interface, the local folder directory is displayed so that the user can select the facial image to be uploaded from it.
The client shoots the face image of the user through the camera or receives the face image uploaded by the user, and then sends the face image to the server. The server receives a face image of a user.
In other embodiments of the present application, after obtaining the face image of the user, the server further detects whether the face image includes a face area. If the face image includes a face region, performing concealing processing on the facial flaws existing in the face region according to the method provided by the embodiment. And if the face image is detected not to contain the face area, sending prompt information to a terminal of the user. And the terminal of the user receives and displays the prompt information to prompt the user that the face image needs to contain the face area. For example, the prompt information may be "please provide a face image containing a face region".
After the face image containing the face area is obtained in the above manner, whether facial flaws such as acne or spots exist in the face image is detected through a preset skin detection model. If it is detected that the face image contains no facial flaws, the face image is returned to the user terminal directly, or information prompting that no facial flaws exist is returned to the user terminal.
If the face image is detected to include at least one facial flaw through a preset skin detection model, identifying and recording the key point coordinates corresponding to each facial flaw, wherein the key point coordinates corresponding to the facial flaws include the vertex coordinates and the texture coordinates of the facial flaws.
Step 1002: and generating a flaw texture mapping according to the key point coordinates corresponding to the facial flaws and a preset material image.
Firstly, a blank texture image corresponding to the face image of the user is obtained. The colored texture of the face image may be replaced with a blank texture to obtain the blank texture image; that is, the positions corresponding to the human face in the blank texture map are left blank.
Then, according to the detected key point coordinates corresponding to the facial flaws, the positions corresponding to the facial flaws are located on the blank texture image. The preset material image is mapped, using the rendering technology of a GPU (Graphics Processing Unit), at the positions on the blank texture image corresponding to the facial flaws, to obtain a flaw texture map.
The preset material image may be any color such as red, yellow, green, and the like. The preset material image is an image with gradually changed transparency, and the transparency of the pixel points is decreased from the edge of the preset material image to the center of the preset material image. The preset material image is made by imitating the characteristics of blemishes such as pimples, color spots and the like, most of the facial blemishes are in the shapes of circles, ellipses and the like, the edge color is lighter, and the color is darker towards the center. Therefore, the preset material image in the embodiment of the present application is set to be a circular or elliptical image, and the transparency decreases from the edge to the center, that is, the more transparent the closer to the edge, the more opaque the closer to the center. Therefore, the preset material image is more fit with the characteristics of real facial flaws, the facial flaws in the facial image are concealed by utilizing the preset material image subsequently, the accuracy is higher, and the concealing effect is better.
And pasting the preset material image at the position of the key point coordinate corresponding to the facial defect of the user in the blank texture map to obtain a defect texture map. The defect texture map includes both the position information of the facial defect and the transparency distribution information of the preset material image.
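A possible sketch of the preset material image and the flaw texture map follows: a circular patch whose opacity rises from the edge toward the center (i.e. whose transparency decreases toward the center) is generated and pasted onto a blank RGBA texture at every flaw key point. The patch size, its color, and the rule for overlapping patches are assumptions made for illustration.

```python
import numpy as np

def make_material(size=33, color=(0, 0, 255)):
    """Circular material patch (BGRA, float32): transparent at the edge, opaque at the center."""
    r = size // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    dist = np.sqrt(xx ** 2 + yy ** 2) / r          # 0 at the center, 1 at the edge
    alpha = np.clip(1.0 - dist, 0.0, 1.0)          # opacity falls toward the edge
    patch = np.zeros((size, size, 4), dtype=np.float32)
    patch[..., :3] = color
    patch[..., 3] = alpha
    return patch

def paste_flaws(texture_h, texture_w, flaw_points, material):
    """Blank RGBA texture with the material pasted at every flaw key point (x, y)."""
    tex = np.zeros((texture_h, texture_w, 4), dtype=np.float32)
    r = material.shape[0] // 2
    for (x, y) in flaw_points:
        y0, y1 = max(0, y - r), min(texture_h, y + r + 1)
        x0, x1 = max(0, x - r), min(texture_w, x + r + 1)
        sub = material[(y0 - y + r):(y1 - y + r), (x0 - x + r):(x1 - x + r)]
        region = tex[y0:y1, x0:x1]
        mask = sub[..., 3] > region[..., 3]        # keep the larger alpha where patches overlap
        region[mask] = sub[mask]
    return tex
```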
As shown in fig. 26, (a) is a face image of a user, and (b) is a defective texture map corresponding to the face image shown in (a). (b) The circular spot image in the figure is the preset material image, and it can be seen from the figure that the edge color of the preset material image is lighter, the transparency is higher, and the deeper the color goes to the center, the lower the transparency.
Step 1003: and generating a concealer face image corresponding to the face image according to the face image and the flaw texture mapping.
The server specifically generates a concealer face image through the following operations in steps S1 and S2, and specifically includes:
s1: and carrying out fuzzy processing on the face image to obtain a buffing image corresponding to the face image.
The face image is subjected to fuzzy processing through a preset fuzzy algorithm, and the preset fuzzy algorithm can be a mean value fuzzy algorithm, a Gaussian fuzzy algorithm and the like. According to the embodiment of the application, only the face area in the face image can be subjected to fuzzy processing, the face area is firstly identified from the face image, then the full-face skin grinding processing is carried out on the face area through a preset fuzzy algorithm, and a corresponding skin grinding image is obtained.
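A minimal sketch of step S1 follows, assuming a Gaussian blur restricted to the face area by a binary face mask; the kernel size and sigma are illustrative values.

```python
import cv2
import numpy as np

def buffing_image(face_bgr, face_mask, ksize=15, sigma=8):
    """Skin-smoothing ("buffing") image: blur only the face area, keep the background.
    face_mask: uint8 mask, 255 inside the face region, 0 elsewhere."""
    blurred = cv2.GaussianBlur(face_bgr, (ksize, ksize), sigma)
    inside_face = (face_mask > 0)[..., None]
    return np.where(inside_face, blurred, face_bgr)
```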
S2: and according to the flaw texture mapping, carrying out image fusion on a flaw area in the face image and a flaw area with the same position in the buffing image to obtain a concealed face image corresponding to the face image.
According to the method and the device, only aiming at the defect area where the detected facial defect is located in the face image, the defect area in the face image and the defect area with the same position in the buffing image are subjected to image fusion, the non-defect area in the face image is not processed, and the original image of the non-defect area in the face image is kept unchanged. And when fusing the flaw area, fusing images based on the position of the facial flaw provided in the flaw texture map and the transparency distribution condition of the preset material image.
Since the fusion process corresponding to each pixel point of the defective region is the same, the embodiment of the present application only uses one pixel point as an example for detailed description.
Specifically, any one of the keypoint coordinates corresponding to the facial flaw is referred to as a first keypoint coordinate. In the embodiments of the present application, the color space of each image is RGBA, where a is also called Alpha and is a transparency parameter. And obtaining the transparency A value of the pixel point at the first key point coordinate from the flaw texture mapping. And then according to the transparency A value, carrying out fusion processing on the pixel point at the first key point coordinate in the face image and the pixel point at the first key point coordinate in the buffing image.
Specifically, a first RGB color value of a pixel point at a first key point coordinate is obtained from the face image, and a second RGB color value of the pixel point at the first key point coordinate is obtained from the buffing image. And calculating a fusion pixel value after the pixel point at the first key point coordinate in the face image is fused with the pixel point at the first key point coordinate in the buffing image according to the first RGB color value, the second RGB color value and the transparency A value. And resetting the current pixel value of the pixel point at the first key point coordinate in the face image as a fusion pixel value.
In the process of calculating the fused pixel value corresponding to the first key point coordinate, calculating a first product between the first RGB color value and the transparency A value and a second product between the second RGB color value and the transparency A value; an average of the first product and the second product is calculated. And determining the average value as a fusion pixel value after the pixel point at the first key point coordinate in the face image and the pixel point at the first key point coordinate in the buffing image are fused.
And for each key point coordinate corresponding to the detected facial flaw, respectively calculating a fusion pixel value obtained by fusing a pixel point in the face image corresponding to each key point coordinate with a pixel point in the buffing image according to the mode. And then, respectively replacing the current pixel value of the pixel point at the coordinate of each key point in the face image with the corresponding fusion pixel value, thereby completing the fusion between the flaw area in the face image and the flaw area at the corresponding position in the buffing image in the face image.
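The per-pixel fusion above can be written vectorially as in the sketch below. It assumes the transparency A value stored in the flaw texture map has been normalized to [0, 1], and it applies the formula described above, fused = (C_face * A + C_buff * A) / 2, only where A is non-zero; all other pixels keep the original face image.

```python
import numpy as np

def conceal(face_bgr, buff_bgr, flaw_texture_bgra):
    """Blend face image and buffing image at flaw pixels using the material alpha."""
    face = face_bgr.astype(np.float32)
    buff = buff_bgr.astype(np.float32)
    alpha = flaw_texture_bgra[..., 3:4].astype(np.float32)   # A value, assumed in [0, 1]
    fused = (face * alpha + buff * alpha) / 2.0               # average of the two products
    flaw = (alpha > 0)[..., 0]                                # flaw area only
    out = face.copy()
    out[flaw] = fused[flaw]
    return np.clip(out, 0, 255).astype(np.uint8)
```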
In other embodiments of the present application, the face image may include face areas of a plurality of users, and each face area may be concealed in parallel or in series by the concealing processing method provided in the embodiments of the present application.
After the face area in the face image is subjected to concealing processing by the above mode to obtain a concealed face image corresponding to the face image, the concealed face image can be returned to the user terminal. And the user terminal receives and displays the concealer face image.
In the embodiment of the application, the facial flaws in the face image are automatically identified, and material mapping is performed on the facial flaws in the blank texture image corresponding to the face image to obtain the flaw texture mapping. And carrying out full face buffing on the face image to obtain a buffing image. And based on the flaw texture mapping, carrying out image fusion on the flaw area in the face image and the flaw area with the same position in the buffing image, and automatically generating a concealing face image with a concealing effect. The method has the advantages that the user does not need to manually optimize each pixel, the preset material image is pasted at the position corresponding to the face flaw in the blank texture image corresponding to the face image, the transparency of the preset material image is gradually decreased from the edge to the center, and the actual characteristics of the face flaws such as pox, spots and the like are met. Flaw areas in the face image and the buffing image are fused based on the flaw texture mapping, so that a more natural flaw removing effect can be obtained. And the whole process is automatically carried out without manual intervention, so that the processing performance is greatly improved, and the effect of concealing the face image in real time can be obtained.
The embodiment of the application also provides an image processing device, which is used for executing the image processing method provided by any one of the embodiments. As shown in fig. 27, the apparatus includes:
the detection module 400 is configured to detect a key point coordinate corresponding to a facial defect in a facial image of a user;
the first generation module 500 is configured to generate a flaw texture map according to the coordinates of the key points corresponding to the facial flaw and a preset material image;
the second generating module 600 is configured to generate a concealed face image corresponding to the face image according to the face image and the defective texture map.
The first generation module 500 is configured to obtain a blank texture image corresponding to the face image; locate, according to the key point coordinates corresponding to the facial flaws, the positions corresponding to the facial flaws on the blank texture image; and map the preset material image at the positions on the blank texture image corresponding to the facial flaws to obtain a flaw texture map.
The second generating module 600 is configured to perform a blurring process on the face image to obtain a buffing image corresponding to the face image; and according to the flaw texture mapping, carrying out image fusion on a flaw area in the face image and a flaw area with the same position in the buffing image to obtain a concealed face image corresponding to the face image.
A second generating module 600, configured to obtain a transparency a value of a pixel point at the first key point coordinate from the defect texture map; the first key point coordinate is any one of the key point coordinates corresponding to the facial flaws; and according to the transparency A value, carrying out fusion processing on the pixel point at the first key point coordinate in the face image and the pixel point at the first key point coordinate in the buffing image.
The second generating module 600 is configured to obtain a first RGB color value of a pixel point at the first key point coordinate from the face image, and obtain a second RGB color value of a pixel point at the first key point coordinate from the buffing image; calculating a fusion pixel value after the pixel point at the first key point coordinate in the face image is fused with the pixel point at the first key point coordinate in the buffing image according to the first RGB color value, the second RGB color value and the transparency A value; and resetting the current pixel value of the pixel point at the first key point coordinate in the face image as a fusion pixel value.
A second generation module 600, configured to calculate a first product between the first RGB color value and the transparency a value and a second product between the second RGB color value and the transparency a value; calculating an average of the first product and the second product; and determining the average value as a fusion pixel value after the pixel point at the first key point coordinate in the face image is fused with the pixel point at the first key point coordinate in the buffing image.
The preset material image is an image with gradually changed transparency, and the transparency of the pixel points is decreased from the edge of the preset material image to the center of the preset material image.
The device includes: the face detection module is used for detecting whether the face image contains a face area; if so, executing the operation of detecting the key point coordinates corresponding to the facial flaws in the facial image of the user; if not, sending prompt information to a terminal of the user, wherein the prompt information is used for prompting that the face image of the user needs to contain a face area.
The image processing apparatus provided by the above embodiment of the present application and the image processing method provided by the embodiment of the present application have the same advantages as the method adopted, run or implemented by the application program stored in the image processing apparatus.
The embodiment of the application also provides electronic equipment for executing the image processing method. A schematic diagram of an electronic device provided by some embodiments of the present application is shown. The electronic device includes: the system comprises a processor, a memory, a bus and a communication interface, wherein the processor, the communication interface and the memory are connected through the bus; the memory stores a computer program that can be executed on the processor, and the processor executes the image processing method provided by any one of the foregoing embodiments when executing the computer program.
The Memory may include a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the apparatus and at least one other network element is realized through at least one communication interface (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, and the like can be used.
The bus may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The image processing method disclosed in any of the foregoing embodiments of the present application may be applied to a processor, or implemented by a processor.
The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in ram, flash, rom, prom, or eprom, registers, etc. as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and combines hardware thereof to complete the steps of the method.
The electronic device provided by the embodiment of the application and the image processing method provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as the method adopted, operated or realized by the electronic device.
The embodiment of the present application further provides a computer-readable storage medium corresponding to the image processing method provided in the foregoing embodiment, where the illustrated computer-readable storage medium is an optical disc, and a computer program (i.e., a program product) is stored on the optical disc, and when the computer program is executed by a processor, the computer program will execute the image processing method provided in any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memories (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above-mentioned embodiment of the present application and the image processing method provided by the embodiment of the present application have the same beneficial effects as the method adopted, run or implemented by the application program stored in the computer-readable storage medium.
Example ten
The embodiment of the application also provides a makeup progress detection device, which is used for executing the makeup progress detection method provided by any one of the embodiments. As shown in fig. 28, the apparatus includes:
the video acquisition module 700 is used for acquiring a real-time makeup video of a user currently performing a specific makeup;
the makeup progress determination module 800 is configured to determine a current makeup progress of the user for making up a specific makeup look according to the initial frame image and the current frame image of the real-time makeup video.
The specific makeup includes highlight makeup or contouring makeup; the makeup progress determination module 800 is configured to acquire at least one target makeup area corresponding to the specific makeup; acquire, according to the target makeup area, a first target area image corresponding to the specific makeup from the initial frame image, and a second target area image corresponding to the specific makeup from the current frame image; and determine the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
A makeup progress determining module 800, configured to detect a first face key point corresponding to the initial frame image; acquiring a face region image corresponding to the initial frame image according to the first face key point; and acquiring a first target area image corresponding to the specific makeup from the face area image according to the first face key point and the target makeup area.
A makeup progress determination module 800, configured to determine, from the first face key points, one or more target key points located on an area contour corresponding to a target makeup area in the face area image; generating a mask image corresponding to the face region image according to the target key points corresponding to the target makeup region; and the mask image and the face area image are subjected to AND operation to obtain a first target area image corresponding to the specific makeup.
The makeup progress determining module 800 is configured to determine, according to a plurality of target key points, each edge coordinate of the target makeup area in the face area image if the number of the target key points corresponding to the target makeup area is multiple; modifying the pixel values of all pixel points in the area defined by the edge coordinates into preset values to obtain a mask area corresponding to the target makeup area; if the number of target key points corresponding to the target makeup area is one, drawing an elliptical area with a preset size by taking the target key points as the center, and modifying the pixel values of all pixel points in the elliptical area to preset values to obtain a mask area corresponding to the target makeup area; and modifying the pixel values of all pixel points outside the mask area to be zero to obtain a mask image corresponding to the face area image.
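A sketch of the mask generation just described follows, using OpenCV polygon filling when several target key points are available and an ellipse of preset size around a single key point (the ellipse axes are an assumption).

```python
import cv2
import numpy as np

def makeup_mask(face_shape, target_points, ellipse_axes=(30, 20)):
    """Mask image for one target makeup area: pixels inside the area are set to 255
    (the preset value), pixels outside stay zero."""
    mask = np.zeros(face_shape[:2], dtype=np.uint8)
    pts = np.asarray(target_points, dtype=np.int32)
    if len(pts) > 1:
        cv2.fillPoly(mask, [pts], 255)                        # region enclosed by edge coordinates
    else:
        center = (int(pts[0][0]), int(pts[0][1]))
        cv2.ellipse(mask, center, ellipse_axes, 0, 0, 360, 255, -1)
    return mask

# The first target area image is then the AND of the mask and the face region image:
# target = cv2.bitwise_and(face_region, face_region, mask=mask)
```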
Specific makeup includes blush makeup; a makeup progress determination module 800 for obtaining at least one target makeup area corresponding to a specific makeup; generating a makeup mask image according to the target makeup area; and determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the initial frame image and the current frame image.
A makeup progress determining module 800, configured to obtain a first target area image for makeup from the initial frame image and obtain a second target area image for makeup from the current frame image with reference to the makeup mask image; and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
Specific makeup includes eyeliner makeup; a makeup progress determining module 800, configured to obtain a makeup mask map corresponding to the initial frame image and the current frame image; according to the initial frame image, simulating to generate a result image after the eye line is made up; and determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the result image, the initial frame image and the current frame image.
A makeup progress determining module 800, configured to obtain a first target area image to be made up from the initial frame image by using a makeup mask image corresponding to the initial frame image as a reference; acquiring a second target area image to be made up from the current frame image according to the cosmetic mask image corresponding to the current frame image; acquiring a third target area image of eye line makeup according to the result image; and determining the current makeup progress corresponding to the current frame image according to the first target area image, the second target area image and the third target area image.
A makeup progress determination module 800, configured to convert the first target area image, the second target area image, and the third target area image into images including a saturation channel in the HLS color space, respectively; and determining the current makeup progress corresponding to the current frame image according to the converted first target area image, second target area image and third target area image.
A makeup progress determining module 800, configured to calculate a first average pixel value corresponding to the converted first target area image, a second average pixel value corresponding to the second target area image, and a third average pixel value corresponding to the third target area image, respectively; calculating a first difference between the second average pixel value and the first average pixel value, and calculating a second difference between the third average pixel value and the first average pixel value; and calculating the ratio of the first difference value to the second difference value to obtain the current makeup progress corresponding to the current frame image.
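The eyeliner progress computation above may be sketched as follows; the inputs are assumed to be aligned BGR crops of the same eyeliner region, and the resulting ratio is clipped to [0, 1].

```python
import cv2
import numpy as np

def eyeliner_progress(init_area, cur_area, result_area):
    """Progress = (mean_cur - mean_init) / (mean_result - mean_init),
    computed on the saturation channel of the HLS color space."""
    def mean_saturation(bgr):
        hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)
        return float(hls[..., 2].mean())        # OpenCV HLS channel order: H, L, S
    m_init = mean_saturation(init_area)         # first average pixel value
    m_cur = mean_saturation(cur_area)           # second average pixel value
    m_res = mean_saturation(result_area)        # third average pixel value
    if abs(m_res - m_init) < 1e-6:              # guard against division by zero
        return 1.0
    return float(np.clip((m_cur - m_init) / (m_res - m_init), 0.0, 1.0))
```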
A makeup progress determination module 800 for performing alignment processing on the first target area image and the second target area image; and carrying out alignment processing on the first target area image and the third target area image.
A makeup progress determining module 800, configured to perform binarization processing on the first target area image and the second target area image, respectively, to obtain a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image; and the first binarization mask image and the second binarization mask image are subjected to AND operation to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image.
A makeup progress determination module 800, configured to obtain a face region image corresponding to the initial frame image and a face region image corresponding to the result image; performing AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image; and performing AND operation on the second mask image and the face region image corresponding to the result image to obtain a new second target region image corresponding to the result image.
A makeup progress determination module 800 for obtaining an eye-line style diagram selected by a user; if the eye state of the user in the initial frame image is the eye opening state, acquiring an eye opening pattern image corresponding to the eye liner pattern image; determining the eye opening pattern image as a cosmetic mask image corresponding to the initial frame image; and if the eye state of the user in the initial frame image is the eye closing state, acquiring a eye closing pattern image corresponding to the eye line pattern image, and determining the eye closing pattern image as a makeup mask image corresponding to the initial frame image.
The special makeup includes eye shadow makeup; a makeup progress determination module 800 for obtaining an eye shadow mask map; according to each target makeup area of the eye shadow makeup, splitting a makeup mask image corresponding to each target makeup area from the eye shadow mask image respectively; and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and the makeup mask image corresponding to each target makeup area.
A makeup progress determining module 800, configured to obtain, from the initial frame image, a first target area image corresponding to each target makeup area with reference to a makeup mask image corresponding to each target makeup area; respectively taking the makeup mask image corresponding to each target makeup area as a reference, and acquiring a second target area image corresponding to each target makeup area from the current frame image; and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each target makeup area.
A makeup progress determining module 800, configured to convert the first target area image and the second target area image corresponding to each target makeup area into images including a preset single-channel component in the HLS color space, respectively; and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each converted target makeup area.
A makeup progress determining module 800, configured to calculate absolute values of differences between preset single-channel components corresponding to pixels with the same position in a first target area image and a second target area image corresponding to the same target makeup area after conversion, respectively; counting the number of pixel points of which the absolute value of the difference value corresponding to each target makeup area meets a preset makeup completion condition; respectively calculating the ratio of the number of the pixel points corresponding to each target makeup area to the total number of the pixel points corresponding to the target makeup area to obtain a makeup progress corresponding to each target makeup area; and calculating the current makeup progress corresponding to the current frame image according to the makeup progress corresponding to each target makeup area and the preset weight corresponding to each target makeup area.
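A sketch of the eye shadow progress computation described above follows. The preset single channel is taken here to be the HLS saturation channel and the completion condition is an absolute difference of at least 10; both choices, like the equal default weights, are assumptions.

```python
import cv2
import numpy as np

def eyeshadow_progress(areas, threshold=10, weights=None):
    """areas: list of (init_crop, cur_crop) BGR pairs, one per target makeup area.
    A pixel counts as made up when the single-channel difference reaches the threshold."""
    n = len(areas)
    weights = weights or [1.0 / n] * n
    progress = 0.0
    for (init_crop, cur_crop), w in zip(areas, weights):
        s0 = cv2.cvtColor(init_crop, cv2.COLOR_BGR2HLS)[..., 2].astype(np.int16)
        s1 = cv2.cvtColor(cur_crop, cv2.COLOR_BGR2HLS)[..., 2].astype(np.int16)
        done = np.abs(s1 - s0) >= threshold       # pixels meeting the completion condition
        progress += w * done.sum() / done.size    # per-area ratio, weighted
    return float(progress)
```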
A makeup progress determination module 800, configured to detect a first face key point corresponding to an initial frame image; acquiring a face region image corresponding to the initial frame image according to the first face key point; and taking the makeup mask image as a reference, and acquiring a first target area image for makeup from the face area image.
A makeup progress determination module 800, configured to convert the makeup mask image and the face region image into binary images, respectively; performing AND operation on the binary image corresponding to the cosmetic mask image and the binary image corresponding to the face region image to obtain a first mask image corresponding to an intersection region of the cosmetic mask image and the face region image; and calculating the first mask image and the face area image corresponding to the initial frame image to obtain a first target area image.
The makeup progress determining module 800 is configured to determine one or more first positioning points located on the outline of each makeup area in the makeup mask map according to the standard face key points corresponding to the makeup mask map; determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points; and stretching the makeup mask image, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
A makeup progress determination module 800, configured to split the makeup mask map into a plurality of sub-mask maps, where each sub-mask map includes at least one target makeup area; respectively converting each sub-mask image and the face region image into binary images; respectively carrying out AND operation on the binarization image corresponding to each sub-mask image and the binarization image corresponding to the face region image to obtain the sub-mask image corresponding to each sub-mask image; respectively carrying out AND operation on each sub-mask image and the face region image corresponding to the initial frame image to obtain a plurality of sub-target region images corresponding to the initial frame image; and combining the plurality of sub-target area images into a first target area image corresponding to the initial frame image.
A makeup progress determining module 800, configured to determine, according to a standard face key point corresponding to a makeup mask map, one or more first positioning points located on an outline of a target makeup area in a first sub-mask map, where the first sub-mask map is any one of multiple sub-mask maps; determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points; and stretching the first sub-mask graph, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
The special makeup includes eyebrow makeup; a makeup progress determining module 800, configured to obtain a first target area image corresponding to an eyebrow from an initial frame image, and obtain a second target area image corresponding to the eyebrow from a current frame image; and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
A makeup progress determination module 800, configured to detect a first face key point corresponding to an initial frame image; acquiring a face region image corresponding to the initial frame image according to the first face key point; and acquiring a first target area image corresponding to eyebrows from the face area image according to the eyebrow key points included in the first face key points.
The makeup progress determining module 800 is configured to interpolate the eyebrow key points between the eyebrow head and the eyebrow peak included in the first face key points to obtain a plurality of interpolation points; intercept, from the face area image, the closed area formed by connecting all eyebrow key points and the plurality of interpolation points between the eyebrow head and the eyebrow peak, to obtain a partial eyebrow image between the eyebrow head and the eyebrow peak; intercept, from the face area image, the closed area formed by connecting all eyebrow key points between the eyebrow peak and the eyebrow tail, to obtain a partial eyebrow image between the eyebrow peak and the eyebrow tail; and splice the partial eyebrow image between the eyebrow head and the eyebrow peak and the partial eyebrow image between the eyebrow peak and the eyebrow tail into a first target area image corresponding to the eyebrows.
A makeup progress determining module 800, configured to convert the first target area image and the second target area image into images including a preset single channel component in an HSV color space, respectively; and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
A makeup progress determining module 800, configured to calculate absolute values of differences between preset single-channel components corresponding to pixels with the same position in the converted first target area image and second target area image, respectively; counting the number of pixel points of which the corresponding absolute values of the differences meet preset makeup completion conditions; and calculating the ratio of the counted pixel point number to the total number of the pixel points in all the target makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
A makeup progress determining module 800, configured to perform binarization processing on the first target area image and the second target area image, respectively, to obtain a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image; performing AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image; acquiring a face region image corresponding to an initial frame image and a face region image corresponding to a current frame image; performing AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image; and the second mask image and the face area image corresponding to the current frame image are subjected to AND operation to obtain a new second target area image corresponding to the current frame image.
And a makeup progress determining module 800, configured to perform boundary erosion processing on the makeup areas in the first target area image and the second target area image, respectively.
Specific makeup includes foundation makeup; a makeup progress determination module 800 for generating a result image after finishing a specific makeup by simulation based on the initial frame image; respectively obtaining integral image brightness corresponding to the initial frame image, the result image and the current frame image; respectively obtaining the brightness of the face area corresponding to the initial frame image, the result image and the current frame image; and determining the current makeup progress corresponding to the current frame image according to the overall image brightness and the face area brightness corresponding to the initial frame image, the result image and the current frame image respectively.
A makeup progress determination module 800 for converting the initial frame image, the result image, and the current frame image into grayscale images, respectively; respectively calculating gray average values of pixel points in gray images corresponding to the initial frame image, the result image and the current frame image after conversion; and respectively determining the gray average values corresponding to the initial frame image, the result image and the current frame image as the overall image brightness corresponding to the initial frame image, the result image and the current frame image.
A makeup progress determination module 800, configured to obtain face region images corresponding to the initial frame image, the result image, and the current frame image, respectively; respectively converting the face area images corresponding to the initial frame image, the result image and the current frame image into face gray level images; and respectively calculating the gray average value of pixel points in the face gray images corresponding to the initial frame image, the result image and the current frame image to obtain the face region brightness corresponding to the initial frame image, the result image and the current frame image.
A makeup progress determination module 800, configured to determine a first environment change brightness corresponding to the current frame image according to an overall image brightness and a face region brightness corresponding to the initial frame image and an overall image brightness and a face region brightness corresponding to the current frame image; determining second environment change brightness corresponding to the result image according to the overall image brightness and the face region brightness corresponding to the initial frame image and the overall image brightness and the face region brightness corresponding to the result image; and determining the current makeup progress corresponding to the current frame image according to the first environment change brightness, the second environment change brightness, the face area brightness corresponding to the initial frame image, the face area brightness corresponding to the current frame image and the face area brightness corresponding to the result image.
A makeup progress determination module 800, configured to calculate a difference between the overall image brightness corresponding to the initial frame image and the face area brightness corresponding to the initial frame image, to obtain the ambient brightness of the initial frame image; calculate the difference between the overall image brightness corresponding to the current frame image and the face area brightness corresponding to the current frame image to obtain the ambient brightness of the current frame image; and determine the absolute value of the difference between the ambient brightness of the current frame image and the ambient brightness of the initial frame image as the first environment change brightness corresponding to the current frame image.
A makeup progress determination module 800, configured to determine a makeup brightness change value corresponding to the current frame image according to the first environment change brightness, the face area brightness corresponding to the initial frame image, and the face area brightness corresponding to the current frame image; determining a makeup brightness change value corresponding to the result image according to the second environment change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the result image; and calculating the ratio of the makeup brightness change value corresponding to the current frame image to the makeup brightness change value corresponding to the result image to obtain the current makeup progress corresponding to the current frame image.
A makeup progress determination module 800, configured to calculate a difference between the brightness of the face region corresponding to the current frame image and the brightness of the face region corresponding to the initial frame image, so as to obtain a total brightness change value corresponding to the current frame image; and calculating the difference between the total brightness change value and the first environment change brightness to obtain a makeup brightness change value corresponding to the current frame image.
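Putting the brightness terms of the preceding paragraphs together, a hedged numeric sketch of the foundation makeup progress is shown below; the inputs are the six brightness values defined above for the initial frame, the current frame and the simulated result image.

```python
def foundation_progress(init_overall, init_face, cur_overall, cur_face,
                        res_overall, res_face):
    """Hedged sketch of the brightness-based foundation makeup progress."""
    # Ambient brightness = overall image brightness minus face region brightness.
    env_init = init_overall - init_face
    env_cur = cur_overall - cur_face
    env_res = res_overall - res_face

    first_env_change = abs(env_cur - env_init)    # current frame vs. initial frame
    second_env_change = abs(env_res - env_init)   # result image vs. initial frame

    # Makeup brightness change = total face brightness change minus ambient change.
    cur_makeup_change = (cur_face - init_face) - first_env_change
    res_makeup_change = (res_face - init_face) - second_env_change
    if res_makeup_change == 0:
        return 0.0
    return cur_makeup_change / res_makeup_change
```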
A makeup progress determining module 800, configured to determine the makeup progress corresponding to the previous frame image as the current makeup progress corresponding to the current frame image if the first environment change brightness is greater than a preset threshold; and send first prompt information to a terminal of the user, wherein the first prompt information is used for prompting the user to make up in the brightness environment corresponding to the initial frame image.
Specific makeup includes concealer makeup; a makeup progress determining module 800, configured to obtain respective facial flaw information corresponding to the initial frame image and the current frame image; calculating a face flaw difference value between the current frame image and the initial frame image according to the face flaw information corresponding to the initial frame image and the face flaw information corresponding to the current frame image; if the face flaw difference value is larger than the preset threshold value, calculating the current makeup progress corresponding to the current frame image according to the face flaw difference value and the face flaw information corresponding to the initial frame image; and if the difference value of the face flaws is smaller than or equal to a preset threshold value, acquiring a result image after the makeup of the face flaws is finished by the user, and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image.
A makeup progress determination module 800, configured to calculate a difference between the number of defects corresponding to the initial frame image and the number of defects corresponding to the current frame image in each defect type; and calculating the sum of the difference values corresponding to each defect type, and taking the obtained sum as the difference value of the facial defects between the current frame image and the initial frame image.
A makeup progress determination module 800, configured to calculate a sum of defect numbers corresponding to defect categories in the facial defect information corresponding to the initial frame image, to obtain a total defect number; and calculating the ratio of the difference value of the facial flaws to the total number of the flaws, and taking the ratio as the current makeup progress corresponding to the current frame image.
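The defect-count branch of the concealer progress may be sketched as follows; the category names and the dictionary representation of the facial defect information are hypothetical.

```python
def concealer_progress_from_defects(init_defects, cur_defects):
    """Hedged sketch: progress from the drop in detected facial defect counts.

    init_defects / cur_defects: e.g. {"acne": 12, "spot": 7} for the initial and
    current frames (hypothetical categories).
    """
    # Facial defect difference: summed per-category drop in defect counts.
    defect_diff = sum(init_defects.get(cat, 0) - cur_defects.get(cat, 0)
                      for cat in init_defects)

    # Total number of defects detected in the initial frame.
    total = sum(init_defects.values())
    if total == 0:
        return 0.0
    return defect_diff / total
```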
A makeup progress determination module 800, configured to generate, according to the initial frame image, a result image after the user completes concealer makeup in a simulated manner; respectively acquiring face area images corresponding to an initial frame image, a result image and a current frame image; and determining the current makeup progress corresponding to the current frame image according to the face area images corresponding to the initial frame image, the result image and the current frame image.
A makeup progress determining module 800, configured to convert the respective face region images corresponding to the initial frame image, the result image, and the current frame image into an image including a saturation channel in an HLS color space; respectively calculating smoothing factors corresponding to the respective face region images of the converted initial frame image, the converted result image and the converted current frame image through a preset filtering algorithm; and determining the current makeup progress corresponding to the current frame image according to the smoothing factors corresponding to the initial frame image, the result image and the current frame image.
A makeup progress determining module 800, configured to calculate a first difference between a smoothing factor corresponding to the current frame image and a smoothing factor corresponding to the initial frame image; calculating a second difference value between the smoothing factor corresponding to the result image and the smoothing factor corresponding to the initial frame image; and calculating the ratio of the first difference value to the second difference value, and taking the ratio as the current makeup progress corresponding to the current frame image.
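The smoothing-factor branch reduces to a ratio of differences, sketched below; how the smoothing factor itself is obtained (the preset filtering algorithm applied to the HLS saturation channel) is left as an assumption here.

```python
def concealer_progress_from_smoothness(s_init, s_res, s_cur):
    """Hedged sketch: ratio of smoothing-factor changes.

    s_init, s_res, s_cur: smoothing factors of the initial frame, the simulated
    result image and the current frame (assumed to be scalars).
    """
    first_diff = s_cur - s_init    # current frame vs. initial frame
    second_diff = s_res - s_init   # result image vs. initial frame
    if second_diff == 0:
        return 0.0
    return first_diff / second_diff
```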
A makeup progress determination module 800, configured to obtain face region images corresponding to an initial frame image and a current frame image, respectively; and respectively detecting the number of flaws corresponding to each flaw type in the face area image corresponding to the initial frame image and the current frame image through a preset skin detection model to obtain the face flaw information corresponding to the initial frame image and the current frame image.
A makeup progress determining module 800, configured to perform rotation correction on the initial frame image and the first face key point according to the first face key point corresponding to the initial frame image; according to the corrected first face key point, an image containing a face area is intercepted from the corrected initial frame image; and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
The makeup progress determining module 800 is configured to determine a left-eye central coordinate and a right-eye central coordinate according to a left-eye key point and a right-eye key point included in the first face key point; determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate; and performing rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
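A rotation-correction sketch using the eye key points is shown below; the key-point index lists are assumptions about the particular landmark scheme, and OpenCV/NumPy are assumed.

```python
import cv2
import numpy as np

def rotation_correct(image, keypoints, left_eye_idx, right_eye_idx):
    """Hedged sketch: level the eyes, then rotate image and key points together.

    keypoints: (N, 2) array of detected face key points.
    left_eye_idx / right_eye_idx: hypothetical index lists of the eye landmarks.
    """
    left_center = keypoints[left_eye_idx].mean(axis=0)
    right_center = keypoints[right_eye_idx].mean(axis=0)

    # Rotation angle from the line joining the two eye centers.
    dx, dy = right_center - left_center
    angle = float(np.degrees(np.arctan2(dy, dx)))

    # Rotation center point: midpoint between the two eye centers.
    cx, cy = (left_center + right_center) / 2.0
    M = cv2.getRotationMatrix2D((float(cx), float(cy)), angle, 1.0)

    h, w = image.shape[:2]
    rotated = cv2.warpAffine(image, M, (w, h))

    # Apply the same affine transform to the key points so they stay aligned.
    pts = np.hstack([keypoints, np.ones((len(keypoints), 1))])
    rotated_pts = pts @ M.T
    return rotated, rotated_pts
```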
And a makeup progress determining module 800, configured to perform image interception on a face region included in the corrected initial frame image according to the corrected first face key point.
A makeup progress determination module 800 for determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value, and a maximum ordinate value from the corrected first face key points; determining an intercepting frame corresponding to the face area in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value; and according to the intercepting frame, intercepting an image containing a human face region from the corrected initial frame image.
The makeup progress determining module 800 is further configured to amplify the capture frame by a preset multiple; and according to the amplified intercepting frame, intercepting an image containing a human face region from the corrected initial frame image.
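A sketch of this crop-and-resize step follows; the enlargement multiple and the output size are illustrative values only.

```python
import cv2

def crop_face_region(image, keypoints, scale=1.2, out_size=(256, 256)):
    """Hedged sketch: bounding box from corrected key points, enlarged, cropped, resized."""
    x_min, y_min = keypoints.min(axis=0)
    x_max, y_max = keypoints.max(axis=0)

    # Enlarge the intercepting frame around its center by the preset multiple.
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half_w = (x_max - x_min) * scale / 2.0
    half_h = (y_max - y_min) * scale / 2.0

    h, w = image.shape[:2]
    x0, y0 = max(int(cx - half_w), 0), max(int(cy - half_h), 0)
    x1, y1 = min(int(cx + half_w), w), min(int(cy + half_h), h)

    face = image[y0:y1, x0:x1]
    return cv2.resize(face, out_size)
```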
The makeup progress determining module 800 is further configured to perform scaling translation processing on the corrected first face key points according to the size of the image including the face area and a preset size.
The device also includes: the face detection module is used for detecting whether the initial frame image and the current frame image only contain the face image of the same user; if yes, performing the operation of determining the current makeup progress of the specific makeup by the user; if not, sending prompt information to the terminal of the user, wherein the prompt information is used for prompting the user to keep that only the face of the same user appears in the real-time makeup video.
The makeup progress detection device provided by the embodiment of the application and the makeup progress detection method provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as methods adopted, operated or realized by application programs stored by the device.
The embodiment of the application also provides electronic equipment for executing the makeup progress detection method. Referring to fig. 29, a schematic diagram of an electronic device provided in some embodiments of the present application is shown. As shown in fig. 29, the electronic apparatus 11 includes: a processor 1100, a memory 1101, a bus 1102 and a communication interface 1103, the processor 1100, the communication interface 1103 and the memory 1101 being connected by the bus 1102; the memory 1101 stores a computer program operable on the processor 1100, and the processor 1100 executes the computer program to perform the method for detecting progress of makeup according to any one of the embodiments of the present application.
The memory 1101 may include a Random Access Memory (RAM) and a non-volatile memory, such as at least one disk memory. The communication connection between this apparatus and at least one other network element is implemented through at least one communication interface 1103 (which may be wired or wireless), over the Internet, a wide area network, a local area network, a metropolitan area network, or the like.
Bus 1102 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 1101 is used for storing a program, and the processor 1100 executes the program after receiving an execution instruction, and the method for detecting a progress of makeup disclosed in any one of the embodiments of the present application may be applied to the processor 1100, or implemented by the processor 1100.
The processor 1100 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 1100 or by instructions in the form of software. The processor 1100 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, which may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or an EPROM, or a register. The storage medium is located in the memory 1101, and the processor 1100 reads the information in the memory 1101 and completes the steps of the above method in combination with its hardware.
The electronic equipment provided by the embodiment of the application and the cosmetic progress detection method provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as the method adopted, operated or realized by the electronic equipment.
Referring to fig. 30, a computer-readable storage medium is shown as an optical disc 30 on which a computer program (i.e., a program product) is stored; when executed by a processor, the computer program performs the makeup progress detection method provided by any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memories (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above-mentioned embodiment of the present application and the cosmetic progress detection method provided by the embodiment of the present application have the same beneficial effects as the method adopted, operated or implemented by the application program stored in the computer-readable storage medium.
It should be noted that:
in the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. However, this manner of disclosure should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Moreover, those of skill in the art will understand that although some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (55)

1. A makeup progress detection method characterized by comprising:
acquiring a real-time makeup video of a user for performing a specific makeup currently;
determining the current makeup progress of the user for making up the specific makeup according to the initial frame image and the current frame image of the real-time makeup video;
wherein, according to the initial frame image and the current frame image of the real-time makeup video, determining the current makeup progress of the user for the specific makeup, comprises:
the specific makeup comprises at least one of high-gloss makeup, blush makeup, eyeliner makeup, eyeshadow makeup or eyebrow makeup; a first target area image corresponding to the specific makeup is obtained from the initial frame image, and a second target area image corresponding to the specific makeup is obtained from the current frame image; acquiring the first target area image and the second target area image of a single channel containing only a channel component corresponding to the specific makeup; respectively carrying out binarization processing on the first target area image and the second target area image of a single channel only containing the channel component to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image; performing an AND operation on the first binarization mask image and the second binarization mask image to obtain an intersection area mask image; respectively generating a new first target area image and a new second target area image based on the intersection area mask image; and determining the current makeup progress of the user for the specific makeup according to the new first target area image and the new second target area image.
2. The method of claim 1, wherein the specific makeup includes a high-gloss makeup or a contour makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises:
acquiring at least one target makeup area corresponding to the specific makeup;
according to the target makeup area, acquiring a first target area image corresponding to the specific makeup from the initial frame image, and acquiring a second target area image corresponding to the specific makeup from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
3. The method as claimed in claim 2, wherein the obtaining a first target area image corresponding to the specific makeup from the initial frame image according to the target makeup area comprises:
detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key point;
and acquiring a first target area image corresponding to the specific makeup from the face area image according to the first face key point and the target makeup area.
4. The method according to claim 3, wherein the extracting a first target area image corresponding to the specific makeup from the face area image according to the first face key point and the target makeup area comprises:
determining one or more target key points on a region contour corresponding to the target makeup region in the face region image from the first face key points;
generating a mask image corresponding to the face region image according to the target key points corresponding to the target makeup region;
and performing an AND operation on the mask image and the face area image to obtain a first target area image corresponding to the specific makeup.
5. The method according to claim 4, wherein the generating a mask image corresponding to the face region image according to the target key point corresponding to the target makeup region comprises:
if a plurality of target key points corresponding to the target makeup area exist, determining each edge coordinate of the target makeup area in the face area image according to the target key points; modifying the pixel values of all pixel points in the area defined by the edge coordinates into preset values to obtain a mask area corresponding to the target makeup area;
If the number of target key points corresponding to the target makeup area is one, drawing an elliptical area with a preset size by taking the target key points as the center, and modifying the pixel values of all pixel points in the elliptical area to preset values to obtain a mask area corresponding to the target makeup area;
and modifying the pixel values of all the pixel points outside the mask area to be zero to obtain a mask image corresponding to the face area image.
6. The method of claim 1, wherein the specific makeup includes a blush makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises:
acquiring at least one target makeup area corresponding to the specific makeup;
generating a makeup mask image according to the target makeup area;
and determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the initial frame image and the current frame image.
7. The method of claim 6, wherein determining a current makeup progress corresponding to the current frame image according to the makeup mask map, the initial frame image, and the current frame image comprises:
Taking the makeup mask image as a reference, acquiring a first target area image for makeup from the initial frame image, and acquiring a second target area image for makeup from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
8. The method of claim 1, wherein the specific makeup includes eyeliner makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises:
acquiring a makeup mask image corresponding to the initial frame image and the current frame image;
according to the initial frame image, simulating to generate a result image after the eye line is made up;
and determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the result image, the initial frame image and the current frame image.
9. The method of claim 8, wherein the determining a current makeup progress corresponding to the current frame image according to the makeup mask map, the result image, the initial frame image, and the current frame image comprises:
Taking a makeup mask image corresponding to the initial frame image as a reference, and acquiring a first target area image for makeup from the initial frame image;
acquiring a second target area image for makeup from the current frame image according to the makeup mask image corresponding to the current frame image;
acquiring a third target area image of the eye line makeup according to the result image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image, the second target area image and the third target area image.
10. The method according to claim 9, wherein the determining a current makeup progress corresponding to the current frame image according to the first target area image, the second target area image and the third target area image comprises:
respectively converting the first target area image, the second target area image and the third target area image into images containing saturation channels in an HLS color space;
and determining the current makeup progress corresponding to the current frame image according to the converted first target area image, the converted second target area image and the converted third target area image.
11. The method according to claim 10, wherein the determining a current makeup progress corresponding to the current frame image according to the converted first target area image, second target area image and third target area image comprises:
calculating a first average pixel value corresponding to the first target area image, a second average pixel value corresponding to the second target area image and a third average pixel value corresponding to the third target area image after conversion respectively;
calculating a first difference between a second average pixel value and the first average pixel value, and calculating a second difference between the third average pixel value and the first average pixel value;
and calculating the ratio of the first difference value to the second difference value to obtain the current makeup progress corresponding to the current frame image.
12. The method according to any one of claims 9 to 11, wherein before determining the current makeup progress corresponding to the current frame image according to the first target area image, the second target area image, and the third target area image, the method further comprises:
aligning the first target area image and the second target area image;
And carrying out alignment processing on the first target area image and the third target area image.
13. The method of claim 12, wherein said aligning the first target area image and the second target area image comprises:
respectively carrying out binarization processing on the first target area image and the second target area image to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image;
and performing an AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image.
14. The method of claim 13, wherein said aligning the first target area image and the second target area image further comprises:
acquiring a face region image corresponding to the initial frame image and a face region image corresponding to the result image;
performing an AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image;
and performing an AND operation on the second mask image and the face area image corresponding to the current frame image to obtain a new second target area image corresponding to the result image.
15. The method according to any one of claims 8-11, wherein the obtaining of the cosmetic mask map corresponding to the initial frame image and the current frame image comprises:
acquiring an eye line style graph selected by a user;
if the eye state of the user in the initial frame image is the eye opening state, acquiring an eye opening pattern image corresponding to the eye line pattern image; determining the eye opening pattern image as a cosmetic mask image corresponding to the initial frame image;
if the eye state of the user in the initial frame image is the eye closing state, acquiring a eye closing pattern image corresponding to the eye line pattern image, and determining the eye closing pattern image as a makeup masking image corresponding to the initial frame image.
16. The method of claim 1, wherein the specific makeup includes eye shadow makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises the following steps:
Acquiring an eye shadow mask image;
according to each target makeup area of eye shadow makeup, respectively splitting a makeup mask image corresponding to each target makeup area from the eye shadow mask image;
and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and the makeup mask image corresponding to each target makeup area.
17. The method of claim 16, wherein the determining a current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and a makeup mask map corresponding to each of the target makeup areas comprises:
respectively taking the makeup mask image corresponding to each target makeup area as a reference, and acquiring a first target area image corresponding to each target makeup area from the initial frame image;
respectively taking the makeup mask image corresponding to each target makeup area as a reference, and acquiring a second target area image corresponding to each target makeup area from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each target makeup area.
18. The method according to claim 17, wherein the determining a current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each of the target makeup areas comprises:
respectively converting a first target area image and a second target area image corresponding to each target makeup area into images containing preset single-channel components in an HLS color space;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each converted target makeup area.
19. The method according to claim 18, wherein the determining a current makeup progress corresponding to the current frame image according to the first target area image and the second target area image corresponding to each of the converted target makeup areas comprises:
calculating difference absolute values of the preset single-channel components corresponding to pixel points with the same position in a first target area image and a second target area image which correspond to the same target makeup area after conversion respectively;
counting the number of pixel points of which the absolute value of the difference value corresponding to each target makeup area meets a preset makeup completion condition;
Respectively calculating the ratio of the number of the pixel points corresponding to each target makeup area to the total number of the pixel points corresponding to the target makeup area to obtain a makeup progress corresponding to each target makeup area;
and calculating the current makeup progress corresponding to the current frame image according to the makeup progress corresponding to each target makeup area and the preset weight corresponding to each target makeup area.
20. The method according to any one of claims 7, 9 and 17, wherein the obtaining a first target area image from the initial frame image with reference to a cosmetic mask image comprises:
detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key point;
and taking the makeup mask image as a reference, and acquiring a first target area image for makeup from the face area image.
21. The method according to claim 20, wherein the obtaining a first target area image from the face area image by taking the cosmetic mask image as a reference comprises:
respectively converting the makeup mask image and the face region image into binary images;
Performing AND operation on the binary image corresponding to the cosmetic mask image and the binary image corresponding to the face region image to obtain a first mask image corresponding to an intersection region of the cosmetic mask image and the face region image;
and performing an AND operation on the first mask image and the face area image corresponding to the initial frame image to obtain a first target area image.
22. The method according to claim 21, wherein before performing and operation on the binarized image corresponding to the cosmetic mask image and the binarized image corresponding to the face region image, the method further comprises:
determining one or more first positioning points on the outline of each makeup area in the makeup mask image according to the standard human face key points corresponding to the makeup mask image;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the makeup mask image, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
23. The method according to claim 20, wherein the obtaining a first target area image from the face area image by taking the cosmetic mask image as a reference comprises:
Splitting the cosmetic mask map into a plurality of sub-mask maps, wherein each sub-mask map comprises at least one target cosmetic area;
respectively converting each sub-mask image and the face region image into a binary image;
respectively performing AND operation on the binary image corresponding to each sub-mask image and the binary image corresponding to the face region image to obtain the sub-mask image corresponding to each sub-mask image;
respectively performing AND operation on each sub-mask image and the face region image corresponding to the initial frame image to obtain a plurality of sub-target region images corresponding to the initial frame image;
and combining the plurality of sub-target area images into a first target area image corresponding to the initial frame image.
24. The method according to claim 23, wherein before performing and operation on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face region image, the method further comprises:
determining one or more first positioning points on the outline of a target makeup area in a first sub-mask map according to the standard face key points corresponding to the makeup mask map, wherein the first sub-mask map is any one of the sub-mask maps;
Determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the first sub-mask map, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
25. The method of claim 1, wherein the specific makeup includes an eyebrow makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises:
acquiring a first target area image corresponding to eyebrow from the initial frame image, and acquiring a second target area image corresponding to eyebrow from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
26. The method of claim 25, wherein obtaining a first target area image corresponding to an eyebrow from the initial frame image comprises:
detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key point;
And acquiring a first target area image corresponding to eyebrows from the face area image according to the eyebrow key points included in the first face key points.
27. The method according to claim 26, wherein the intercepting a first target area image corresponding to eyebrows from the face area image according to eyebrow key points included in the first face key points comprises:
interpolating eyebrow key points between the eyebrows and the eyebrow peaks included in the first face key points to obtain a plurality of interpolation points;
intercepting all eyebrow key points between the eyebrows and the eyebrow peaks and a closed area formed by connecting the interpolation points from the face area image to obtain partial eyebrow images between the eyebrows and the eyebrow peaks;
intercepting a closed region formed by connecting all eyebrow key points between the eyebrow peak and the eyebrow tail from the face region image to obtain a partial eyebrow image between the eyebrow peak and the eyebrow tail;
and splicing the partial eyebrow images between the eyebrow head and the eyebrow peak and the partial eyebrow images between the eyebrow peak and the eyebrow tail into a first target area image corresponding to the eyebrows.
28. The method according to any one of claims 2, 7 and 25, wherein the determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image comprises:
Respectively converting the first target area image and the second target area image into images containing preset single-channel components in an HSV color space;
and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
29. The method of claim 28, wherein determining a current makeup progress corresponding to the current frame image according to the converted first target area image and the converted second target area image comprises:
calculating difference absolute values of the preset single-channel components corresponding to pixel points with the same position in the converted first target area image and the converted second target area image respectively;
counting the number of pixel points of which the corresponding absolute values of the differences meet preset makeup completion conditions;
and calculating the ratio of the counted pixel point number to the total number of pixel points in all the target makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
30. The method according to any one of claims 2, 7 and 25, wherein before determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image, the method further comprises:
Respectively carrying out binarization processing on the first target area image and the second target area image to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image;
performing an AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image;
acquiring a face region image corresponding to the initial frame image and a face region image corresponding to the current frame image;
performing an AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image;
and performing an AND operation on the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the current frame image.
31. The method according to any one of claims 7, 17 and 25, wherein before determining the current makeup progress corresponding to the current frame image, the method further comprises:
And respectively carrying out boundary corrosion treatment on the makeup areas in the first target area image and the second target area image.
32. The method of claim 1, wherein the specific makeup includes a foundation makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises:
simulating and generating a result image after finishing the specific makeup according to the initial frame image;
respectively obtaining integral image brightness corresponding to the initial frame image, the result image and the current frame image;
respectively obtaining the brightness of the face area corresponding to the initial frame image, the result image and the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the overall image brightness and the face area brightness corresponding to the initial frame image, the result image and the current frame image respectively.
33. The method of claim 32, wherein said obtaining overall image brightness corresponding to said initial frame image, said result image and said current frame image respectively comprises:
Respectively converting the initial frame image, the result image and the current frame image into gray level images;
respectively calculating gray average values of pixel points in gray images corresponding to the initial frame image, the result image and the current frame image after conversion;
and respectively determining the gray average values corresponding to the initial frame image, the result image and the current frame image as the integral image brightness corresponding to the initial frame image, the result image and the current frame image.
34. The method of claim 32, wherein the obtaining the face region brightness corresponding to the initial frame image, the result image and the current frame image respectively comprises:
respectively obtaining face area images corresponding to the initial frame image, the result image and the current frame image;
respectively converting the face area images corresponding to the initial frame image, the result image and the current frame image into face gray level images;
and respectively calculating the gray average value of pixel points in the face gray images corresponding to the initial frame image, the result image and the current frame image to obtain the face region brightness corresponding to the initial frame image, the result image and the current frame image.
35. The method of any one of claims 32-34, wherein determining the current makeup progress corresponding to the current frame image based on the overall image brightness and the face region brightness corresponding to the initial frame image, the result image, and the current frame image comprises:
determining a first environment change brightness corresponding to the current frame image according to the overall image brightness and the face region brightness corresponding to the initial frame image and the overall image brightness and the face region brightness corresponding to the current frame image;
determining second environment change brightness corresponding to the result image according to the overall image brightness and the face region brightness corresponding to the initial frame image and the overall image brightness and the face region brightness corresponding to the result image;
and determining the current makeup progress corresponding to the current frame image according to the first environment change brightness, the second environment change brightness, the face area brightness corresponding to the initial frame image, the face area brightness corresponding to the current frame image and the face area brightness corresponding to the result image.
36. The method of claim 35, wherein determining the first ambient variation brightness corresponding to the current frame image according to the overall image brightness and the face region brightness corresponding to the initial frame image and the overall image brightness and the face region brightness corresponding to the current frame image comprises:
Calculating the difference value between the brightness of the whole image corresponding to the initial frame image and the brightness of the face area corresponding to the initial frame image to obtain the ambient brightness of the initial frame image;
calculating the difference value between the brightness of the whole image corresponding to the current frame image and the brightness of the face area corresponding to the current frame image to obtain the ambient brightness of the current frame image;
and determining the absolute value of the difference between the ambient brightness of the current frame image and the ambient brightness of the initial frame image as a first ambient variation brightness corresponding to the current frame image.
37. The method of claim 35, wherein determining the current makeup progress corresponding to the current frame image according to the first environment change brightness, the second environment change brightness, the brightness of the face region corresponding to the initial frame image, the brightness of the face region corresponding to the current frame image, and the brightness of the face region corresponding to the result image comprises:
determining a makeup brightness change value corresponding to the current frame image according to the first environment change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the current frame image;
Determining a makeup brightness change value corresponding to the result image according to the second environment change brightness, the face area brightness corresponding to the initial frame image and the face area brightness corresponding to the result image;
and calculating the ratio of the makeup brightness change value corresponding to the current frame image to the makeup brightness change value corresponding to the result image to obtain the current makeup progress corresponding to the current frame image.
38. The method of claim 37, wherein determining a makeup brightness variation value corresponding to the current frame image according to the first environment variation brightness, the brightness of the face region corresponding to the initial frame image, and the brightness of the face region corresponding to the current frame image comprises:
calculating the difference value between the brightness of the face area corresponding to the current frame image and the brightness of the face area corresponding to the initial frame image to obtain the total brightness change value corresponding to the current frame image;
and calculating a difference value between the total brightness change value and the first environment change brightness to obtain a makeup brightness change value corresponding to the current frame image.
39. The method of claim 35, further comprising:
If the first environment change brightness is larger than a preset threshold value, determining the makeup progress corresponding to the previous frame of image as the current makeup progress corresponding to the current frame of image;
and sending first prompt information to the terminal of the user, wherein the first prompt information is used for prompting the user to make up in the brightness environment corresponding to the initial frame image.
40. The method of claim 1, wherein the specific makeup includes concealer makeup; the determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video comprises:
respectively acquiring the facial flaw information corresponding to the initial frame image and the current frame image;
calculating a face flaw difference value between the current frame image and the initial frame image according to the face flaw information corresponding to the initial frame image and the face flaw information corresponding to the current frame image;
if the face flaw difference value is larger than a preset threshold value, calculating the current makeup progress corresponding to the current frame image according to the face flaw difference value and the face flaw information corresponding to the initial frame image;
If the difference value of the face flaws is smaller than or equal to the preset threshold value, obtaining a result image after the user finishes concealing and making up, and determining the current making-up progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image.
41. The method of claim 40, wherein the facial blemish information comprises a blemish category and a corresponding blemish number; the calculating a difference value of the facial flaws between the current frame image and the initial frame image according to the facial flaw information corresponding to the initial frame image and the facial flaw information corresponding to the current frame image includes:
respectively calculating the difference value between the flaw number corresponding to the initial frame image and the flaw number corresponding to the current frame image in each flaw type;
and calculating the sum of the difference values corresponding to each defect type, and taking the obtained sum as the difference value of the facial defects between the current frame image and the initial frame image.
42. The method of claim 40, wherein calculating the current makeup progress corresponding to the current frame image according to the facial defect difference value and the facial defect information corresponding to the initial frame image comprises:
Calculating the sum of the flaw numbers corresponding to all flaw categories in the facial flaw information corresponding to the initial frame image to obtain the total flaw number;
and calculating the ratio of the difference value of the facial flaws to the total number of the flaws, and taking the ratio as the current makeup progress corresponding to the current frame image.
43. The method of claim 40, wherein the obtaining a result image after the user finishes concealer dressing, and determining a current dressing progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image comprises:
according to the initial frame image, simulating and generating a result image after the user finishes concealing and making up;
respectively obtaining face area images corresponding to the initial frame image, the result image and the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the face area images corresponding to the initial frame image, the result image and the current frame image respectively.
44. The method of claim 43, wherein the determining a current makeup progress corresponding to the current frame image according to the respective face region images corresponding to the initial frame image, the result image and the current frame image comprises:
Respectively converting the face region images corresponding to the initial frame image, the result image and the current frame image into images containing saturation channels in an HLS color space;
respectively calculating smoothing factors corresponding to the respective face region images of the converted initial frame image, the converted result image and the converted current frame image through a preset filtering algorithm;
and determining the current makeup progress corresponding to the current frame image according to the smoothing factors corresponding to the initial frame image, the result image and the current frame image respectively.
45. The method of claim 44, wherein determining a current makeup progress corresponding to the current frame image based on respective smoothing factors corresponding to the initial frame image, the result image, and the current frame image comprises:
calculating a first difference value between the smoothing factor corresponding to the current frame image and the smoothing factor corresponding to the initial frame image;
calculating a second difference value between the smoothing factor corresponding to the result image and the smoothing factor corresponding to the initial frame image;
and calculating the ratio of the first difference value to the second difference value, and taking the ratio as the current makeup progress corresponding to the current frame image.
46. The method of claim 40, wherein the obtaining facial defect information corresponding to each of the initial frame image and the current frame image comprises:
respectively acquiring face area images corresponding to the initial frame image and the current frame image;
and respectively detecting the number of flaws corresponding to each flaw category in the face area images corresponding to the initial frame image and the current frame image through a preset skin detection model to obtain the face flaw information corresponding to the initial frame image and the current frame image.
47. The method according to any one of claims 3, 26, 34, 43 and 46, wherein the obtaining of the face region image corresponding to the initial frame image comprises:
performing rotation correction on the initial frame image and the first face key point according to the first face key point corresponding to the initial frame image;
according to the corrected first face key point, intercepting an image containing a face region from the corrected initial frame image;
and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
48. The method of claim 47, wherein said performing rotation correction on the initial frame image and the first face keypoints according to the first face keypoints corresponding to the initial frame image comprises:
respectively determining a left eye center coordinate and a right eye center coordinate according to a left eye key point and a right eye key point which are included in the first face key point;
determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate;
and performing rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
49. The method according to claim 47, wherein said extracting an image containing a face region from said corrected initial frame image according to said corrected first face key point comprises:
determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key points;
determining a crop box corresponding to the face region in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value;
and cropping an image containing the face region from the corrected initial frame image according to the crop box.
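Claim 49 describes an axis-aligned bounding box over the corrected key points. A minimal sketch:

    import numpy as np

    def face_crop_box(keypoints):
        x_min, y_min = keypoints.min(axis=0)
        x_max, y_max = keypoints.max(axis=0)
        return int(x_min), int(y_min), int(x_max), int(y_max)

    def crop_face(image, box):
        x_min, y_min, x_max, y_max = box
        return image[y_min:y_max, x_min:x_max]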
50. The method of claim 49, further comprising:
enlarging the crop box by a preset multiple;
and cropping an image containing the face region from the corrected initial frame image according to the enlarged crop box.
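For claim 50, the crop box can be grown about its centre so the crop keeps forehead and chin context; the factor of 1.2 and the clamping to the image bounds are assumptions, since the claim only says "a preset multiple":

    def enlarge_box(box, image_shape, factor=1.2):
        x_min, y_min, x_max, y_max = box
        cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
        half_w = (x_max - x_min) * factor / 2.0
        half_h = (y_max - y_min) * factor / 2.0
        h, w = image_shape[:2]
        return (max(int(cx - half_w), 0), max(int(cy - half_h), 0),
                min(int(cx + half_w), w), min(int(cy + half_h), h))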
51. The method of claim 47, further comprising:
and scaling and translating the corrected first face key points according to the size of the image containing the face region and the preset size.
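A short sketch of the scaling and translation in claim 51: once the crop has been resized to the preset size, the key points are shifted into the crop's coordinate frame and scaled by the same ratios so they still index the same facial positions:

    import numpy as np

    def rescale_keypoints(keypoints, box, preset_size):
        x_min, y_min, x_max, y_max = box
        crop_w, crop_h = x_max - x_min, y_max - y_min
        target_w, target_h = preset_size
        shifted = keypoints - np.array([x_min, y_min], dtype=np.float32)
        return shifted * np.array([target_w / crop_w, target_h / crop_h], dtype=np.float32)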
52. The method of claim 1, further comprising:
detecting whether the initial frame image and the current frame image each contain only the face image of the same user;
if so, performing the operation of determining the current makeup progress of the user's specific makeup;
if not, sending prompt information to the user's terminal, the prompt information prompting the user to ensure that only the face of the same user appears in the real-time makeup video.
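Claim 52's guard can be sketched as follows; detect_faces and face_embedding are hypothetical stand-ins for whatever detector and recogniser the system uses, and the 0.6 cosine-similarity threshold is an assumed value:

    import numpy as np

    def same_single_user(initial_frame, current_frame, detect_faces, face_embedding, threshold=0.6):
        faces_a = detect_faces(initial_frame)
        faces_b = detect_faces(current_frame)
        if len(faces_a) != 1 or len(faces_b) != 1:
            return False  # zero or multiple faces: prompt the user instead
        emb_a, emb_b = face_embedding(faces_a[0]), face_embedding(faces_b[0])
        cos_sim = float(np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
        return cos_sim >= threshold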
53. A makeup progress detection device characterized by comprising:
the video acquisition module is used for acquiring a real-time makeup video of a user for currently making up a specific makeup;
the makeup progress determining module is used for determining the current makeup progress of the user for the specific makeup according to the initial frame image and the current frame image of the real-time makeup video;
the makeup progress determining module is specifically configured to: when the specific makeup is at least one of a highlight makeup, a contouring makeup, a blush makeup, an eyeliner makeup, an eye shadow makeup and an eyebrow makeup, obtain a first target area image corresponding to the specific makeup from the initial frame image and a second target area image corresponding to the specific makeup from the current frame image; obtain single-channel versions of the first target area image and the second target area image containing only the channel component corresponding to the specific makeup; perform binarization processing on the single-channel first target area image and second target area image respectively to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image; perform an AND operation on the first binarization mask image and the second binarization mask image to obtain an intersection area mask image; generate a new first target area image and a new second target area image based on the intersection area mask image; and determine the current makeup progress of the user for the specific makeup according to the new first target area image and the new second target area image.
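The mask-intersection step in claim 53 maps directly onto OpenCV primitives. In the sketch below, the channel index and Otsu thresholding stand in for the unspecified "channel component corresponding to the specific makeup" and "binarization processing":

    import cv2

    def masked_target_pair(first_target, second_target, channel=2):
        # Keep only the channel associated with the makeup (e.g. red for blush).
        ch1 = first_target[:, :, channel]
        ch2 = second_target[:, :, channel]
        _, mask1 = cv2.threshold(ch1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        _, mask2 = cv2.threshold(ch2, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Pixels that belong to the target area in both the initial and current frame.
        intersection = cv2.bitwise_and(mask1, mask2)
        new_first = cv2.bitwise_and(first_target, first_target, mask=intersection)
        new_second = cv2.bitwise_and(second_target, second_target, mask=intersection)
        return new_first, new_second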
54. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method of any of claims 1-52.
55. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-52.
CN202111015242.4A 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium Active CN114155569B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202111308461.1A CN115731142A (en) 2021-08-31 2021-08-31 Image processing method, device, equipment and storage medium
CN202111308470.0A CN115761827A (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium
CN202111015242.4A CN114155569B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium
CN202111308454.1A CN115937919A (en) 2021-08-31 2021-08-31 Method, device and equipment for identifying makeup color and storage medium
CN202111306864.2A CN115731591A (en) 2021-08-31 2021-08-31 Method, device and equipment for detecting makeup progress and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111015242.4A CN114155569B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Related Child Applications (4)

Application Number Title Priority Date Filing Date
CN202111306864.2A Division CN115731591A (en) 2021-08-31 2021-08-31 Method, device and equipment for detecting makeup progress and storage medium
CN202111308470.0A Division CN115761827A (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium
CN202111308454.1A Division CN115937919A (en) 2021-08-31 2021-08-31 Method, device and equipment for identifying makeup color and storage medium
CN202111308461.1A Division CN115731142A (en) 2021-08-31 2021-08-31 Image processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114155569A CN114155569A (en) 2022-03-08
CN114155569B true CN114155569B (en) 2022-11-04

Family

ID=80461794

Family Applications (5)

Application Number Title Priority Date Filing Date
CN202111308454.1A Pending CN115937919A (en) 2021-08-31 2021-08-31 Method, device and equipment for identifying makeup color and storage medium
CN202111308461.1A Pending CN115731142A (en) 2021-08-31 2021-08-31 Image processing method, device, equipment and storage medium
CN202111306864.2A Pending CN115731591A (en) 2021-08-31 2021-08-31 Method, device and equipment for detecting makeup progress and storage medium
CN202111308470.0A Pending CN115761827A (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium
CN202111015242.4A Active CN114155569B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Family Applications Before (4)

Application Number Title Priority Date Filing Date
CN202111308454.1A Pending CN115937919A (en) 2021-08-31 2021-08-31 Method, device and equipment for identifying makeup color and storage medium
CN202111308461.1A Pending CN115731142A (en) 2021-08-31 2021-08-31 Image processing method, device, equipment and storage medium
CN202111306864.2A Pending CN115731591A (en) 2021-08-31 2021-08-31 Method, device and equipment for detecting makeup progress and storage medium
CN202111308470.0A Pending CN115761827A (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (5) CN115937919A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117078675B (en) * 2023-10-16 2024-02-06 太和康美(北京)中医研究院有限公司 Cosmetic efficacy evaluation method, device, equipment and medium based on image analysis

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805090A (en) * 2018-06-14 2018-11-13 广东工业大学 A kind of virtual examination cosmetic method based on Plane Gridding Model
CN109584180A (en) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN110008813A (en) * 2019-01-24 2019-07-12 阿里巴巴集团控股有限公司 Face identification method and system based on In vivo detection technology
CN111066060A (en) * 2017-07-13 2020-04-24 资生堂美洲公司 Virtual face makeup removal and simulation, fast face detection, and landmark tracking
CN111291642A (en) * 2020-01-20 2020-06-16 深圳市商汤科技有限公司 Dressing method, dressing device, electronic equipment and storage medium
CN111651040A (en) * 2020-05-27 2020-09-11 华为技术有限公司 Interaction method of electronic equipment for skin detection and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105278376A (en) * 2015-10-16 2016-01-27 珠海格力电器股份有限公司 Use method of device using human face identification technology and device
CN107820027A (en) * 2017-11-02 2018-03-20 北京奇虎科技有限公司 Video personage dresss up method, apparatus, computing device and computer-readable storage medium
CN107969058A (en) * 2017-12-29 2018-04-27 上海斐讯数据通信技术有限公司 A kind of intelligence dressing table and control method
CN110111245B (en) * 2019-05-13 2023-12-08 Oppo广东移动通信有限公司 Image processing method, device, terminal and computer readable storage medium


Also Published As

Publication number Publication date
CN115731591A (en) 2023-03-03
CN115731142A (en) 2023-03-03
CN115937919A (en) 2023-04-07
CN114155569A (en) 2022-03-08
CN115761827A (en) 2023-03-07

Similar Documents

Publication Publication Date Title
CN109690617B (en) System and method for digital cosmetic mirror
US9142054B2 (en) System and method for changing hair color in digital images
US9691136B2 (en) Eye beautification under inaccurate localization
JP3779570B2 (en) Makeup simulation apparatus, makeup simulation control method, and computer-readable recording medium recording makeup simulation program
JP2020526809A (en) Virtual face makeup removal, fast face detection and landmark tracking
CN110390632B (en) Image processing method and device based on dressing template, storage medium and terminal
CN108447017A (en) Face virtual face-lifting method and device
TW202234341A (en) Image processing method and device, electronic equipment and storage medium
JP2010507854A (en) Method and apparatus for virtual simulation of video image sequence
CN111445410A (en) Texture enhancement method, device and equipment based on texture image and storage medium
TW200416622A (en) Method and system for enhancing portrait images that are processed in a batch mode
CN106326823B (en) Method and system for obtaining head portrait in picture
CN108463823A (en) A kind of method for reconstructing, device and the terminal of user's Hair model
JP2024500896A (en) Methods, systems and methods for generating 3D head deformation models
CN116997933A (en) Method and system for constructing facial position map
JP2024503794A (en) Method, system and computer program for extracting color from two-dimensional (2D) facial images
CN113344836A (en) Face image processing method and device, computer readable storage medium and terminal
CN114155569B (en) Cosmetic progress detection method, device, equipment and storage medium
CN113344837A (en) Face image processing method and device, computer readable storage medium and terminal
US10909351B2 (en) Method of improving image analysis
CN113837019A (en) Cosmetic progress detection method, device, equipment and storage medium
CN113837020B (en) Cosmetic progress detection method, device, equipment and storage medium
CN110544200A (en) method for realizing mouth interchange between human and cat in video
CN113837017B (en) Cosmetic progress detection method, device, equipment and storage medium
CN114565506B (en) Image color migration method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant