CN115761827A - Cosmetic progress detection method, device, equipment and storage medium

Info

Publication number: CN115761827A
Application number: CN202111308470.0A
Authority: CN (China)
Prior art keywords: image, target area, frame image, face, eyebrow
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 刘聪, 苗锋, 张梦洁
Current assignee: Soyoung Technology Beijing Co Ltd
Original assignee: Soyoung Technology Beijing Co Ltd
Priority: CN202111308470.0A
Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures

Abstract

The application provides a makeup progress detection method, device, equipment and storage medium. The method comprises the following steps: acquiring an initial frame image and a current frame image from a real-time makeup video of a user currently applying a specific makeup; acquiring a first target area image corresponding to the eyebrows from the initial frame image, and acquiring a second target area image corresponding to the eyebrows from the current frame image; and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image. The target area image corresponding to the eyebrows is cut out of the face area image, and the target makeup areas in the initial frame image and the current frame image are aligned, reducing errors caused by positional differences between the target makeup areas. No deep learning is used, so no large dataset needs to be collected in advance: a real-time picture of the user's makeup is captured, and the detection result is returned to the user after computation on the server side, consuming less computing cost and reducing the processing pressure on the server.

Description

Cosmetic progress detection method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a makeup progress detection method, device, equipment and storage medium.
Background
Making up has become an essential part of many people's daily life, and the makeup process involves numerous steps. Feeding the makeup progress back to the user in real time would greatly reduce the effort makeup demands of the user and save the user's makeup time.
At present, the related art provides functions such as virtual makeup try-on, skin color detection and personalized product recommendation based on deep learning models, all of which require collecting a large number of face pictures in advance to train the models.
However, face pictures are the user's private data, and it is difficult to collect them at scale. Model training also consumes a large amount of computing resources, which is costly. Moreover, a model's accuracy trades off against its real-time performance: makeup progress detection must capture the user's facial information in real time to determine the current makeup progress, so the real-time requirement is high, and deep learning models that can meet it do not achieve high detection accuracy.
Disclosure of Invention
The application provides a makeup progress detection method, device, equipment and storage medium, in which target area images corresponding to the eyebrows are cut out of an initial frame image and a current frame image respectively, and the eyebrow makeup progress is determined from the cut-out target area images. No deep learning is used, so no large dataset needs to be collected in advance: a real-time picture of the user's makeup is captured, and the detection result is returned to the user after computation on the server side, consuming less computing cost and reducing the processing pressure on the server.
The embodiment of the first aspect of the application provides a makeup progress detection method, which comprises the following steps:
acquiring an initial frame image and a current frame image from a real-time makeup video of a user currently applying a specific makeup;
acquiring a first target area image corresponding to the eyebrows from the initial frame image, and acquiring a second target area image corresponding to the eyebrows from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
In some embodiments of the present application, the obtaining a first target area image corresponding to an eyebrow from the initial frame image includes:
detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key point;
and acquiring a first target area image corresponding to eyebrows from the face area image according to the eyebrow key points included in the first face key points.
In some embodiments of the present application, the intercepting, from the face region image according to the eyebrow key points included in the first face key points, of a first target region image corresponding to the eyebrows includes:
interpolating the eyebrow key points between the eyebrow head and the eyebrow peak included in the first face key points to obtain a plurality of interpolation points;
intercepting, from the face area image, the closed area formed by connecting all eyebrow key points between the eyebrow head and the eyebrow peak together with the interpolation points, to obtain a partial eyebrow image between the eyebrow head and the eyebrow peak;
intercepting, from the face area image, the closed area formed by connecting all eyebrow key points between the eyebrow peak and the eyebrow tail, to obtain a partial eyebrow image between the eyebrow peak and the eyebrow tail;
and splicing the partial eyebrow image between the eyebrow head and the eyebrow peak and the partial eyebrow image between the eyebrow peak and the eyebrow tail into the first target area image corresponding to the eyebrows.
In some embodiments of the application, the determining, according to the first target area image and the second target area image, a current makeup progress corresponding to the current frame image includes:
respectively converting the first target area image and the second target area image into images containing preset single-channel components in an HSV color space;
and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
In some embodiments of the present application, the determining a current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image includes:
calculating, for pixel points at the same position in the converted first target area image and the converted second target area image, the absolute value of the difference between the corresponding preset single-channel components;
counting the number of pixel points whose absolute difference meets a preset makeup completion condition;
and calculating the ratio of the counted number of pixel points to the total number of pixel points in all target makeup areas in the first target area image, to obtain the current makeup progress corresponding to the current frame image.
In some embodiments of the present application, before determining, according to the first target area image and the second target area image, a current makeup progress corresponding to the current frame image, the method further includes:
respectively performing binarization processing on the first target area image and the second target area image to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image;
performing an AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image;
acquiring the face region image corresponding to the initial frame image and the face region image corresponding to the current frame image;
performing an AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image;
and performing an AND operation on the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the current frame image.
In some embodiments of the application, before determining the current makeup progress corresponding to the current frame image, the method further includes:
and respectively performing boundary erosion processing on the makeup areas in the first target area image and the second target area image.
In some embodiments of the present application, the obtaining, according to the first face key point, a face region image corresponding to the initial frame image includes:
performing rotation correction on the initial frame image and the first face key point according to the first face key point corresponding to the initial frame image;
according to the corrected first face key point, intercepting an image containing a face region from the corrected initial frame image;
and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
In some embodiments of the present application, the performing rotation correction on the initial frame image and the first face keypoints according to the first face keypoints includes:
respectively determining a left eye center coordinate and a right eye center coordinate according to a left eye key point and a right eye key point which are included in the first face key point;
determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate;
and performing rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
In some embodiments of the present application, the intercepting of an image containing a face region from the corrected initial frame image according to the corrected first face key points comprises:
and according to the corrected first face key point, carrying out image interception on a face area contained in the corrected initial frame image.
In some embodiments of the present application, the performing, according to the corrected first face keypoint, image capture on a face region included in the corrected initial frame image includes:
determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key points;
determining an intercepting frame corresponding to the face area in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value;
and intercepting an image containing the face area from the corrected initial frame image according to the interception frame.
In some embodiments of the present application, the method further comprises:
amplifying the intercepting frame by a preset multiple;
and according to the amplified intercepting frame, intercepting an image containing the face region from the corrected initial frame image.
In some embodiments of the present application, the method further comprises:
and performing scaling and translation processing on the corrected first face key points according to the size of the image containing the face area and the preset size.
In some embodiments of the present application, the method further comprises:
detecting whether the initial frame image and the current frame image only contain face images of the same user;
if yes, executing the operation of determining the current makeup progress of the specific makeup for the user;
and if not, sending prompt information to the terminal of the user, wherein the prompt information is used for prompting the user to keep that only the face of the same user appears in the real-time makeup video.
An embodiment of a second aspect of the present application provides a makeup progress detection device including:
the video acquisition module is used for acquiring an initial frame image and a current frame image from a real-time makeup video of a user currently applying a specific makeup;
the target area acquisition module is used for acquiring a first target area image corresponding to the eyebrow from the initial frame image and acquiring a second target area image corresponding to the eyebrow from the current frame image;
and the progress determining module is used for determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
Embodiments of the third aspect of the present application provide an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method of the first aspect.
An embodiment of a fourth aspect of the present application provides a computer-readable storage medium having a computer program stored thereon, the program being executable by a processor to implement the method of the first aspect.
The technical scheme provided in the embodiment of the application at least has the following technical effects or advantages:
in the embodiments of the application, the face key points are used to correct and crop the user's face area in the video frame, improving the accuracy of face area recognition. The target area image corresponding to the eyebrows is cut from the face area image based on the face key points, and the target area images corresponding to the initial frame image and the current frame image are pixel-aligned, improving the accuracy of the eyebrow target area image. The target makeup areas in the initial frame image and the current frame image are aligned, reducing errors caused by positional differences between them. When the eyebrow area is matted out, a piecewise interpolation algorithm is introduced, making the cut-out eyebrow area more coherent and accurate. In addition, no deep learning is used and no large dataset needs to be collected in advance: a real-time picture of the user's makeup is captured and the detection result is returned to the user after computation on the server side. Compared with a deep learning model inference scheme, this consumes less computing cost in the algorithm processing stage and reduces the processing pressure on the server.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings.
In the drawings:
fig. 1 is a flowchart illustrating a makeup progress detection method for detecting a makeup of an eyebrow according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating the calculation of an image's rotation angle, provided by an embodiment of the present application;
FIG. 3 illustrates a schematic diagram of two coordinate system transformations provided by an embodiment of the present application;
FIG. 4 is a block diagram illustrating a method for detecting a makeup progress of an eyebrow makeup according to an embodiment of the present application;
fig. 5 is a schematic structural view illustrating a makeup progress detection device for detecting a makeup of eyebrows according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating an electronic device according to an embodiment of the present application;
fig. 7 is a schematic diagram of a storage medium according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which this application belongs.
A makeup progress detection method, a makeup progress detection device, a makeup progress detection apparatus, and a storage medium according to embodiments of the present application will be described below with reference to the accompanying drawings.
At present, the related art provides some virtual makeup try-on functions, which can be applied at sales counters or in mobile phone applications; they use face recognition technology to provide virtual try-on services and can fit and display various makeups on the face in real time. Facial skin detection services are also provided. However, these services can only help users select cosmetics or skin care plans that suit them. They can help the user choose a suitable eyebrow makeup product, but they cannot display the makeup progress and cannot meet the user's need for real-time makeup feedback. The related art also provides functions such as virtual makeup try-on, skin color detection and personalized product recommendation based on deep learning models, all of which require collecting a large number of face pictures in advance for training. However, face pictures are the user's private data, and it is difficult to collect them at scale; model training also consumes a large amount of computing resources, which is costly. Moreover, a model's accuracy trades off against its real-time performance: makeup progress detection must capture the user's facial information in real time to determine the current makeup progress, so the real-time requirement is high, and deep learning models that can meet it do not achieve high detection accuracy.
In view of the above, the embodiments of the application provide a makeup progress detection method that detects the eyebrow makeup progress through image processing alone, with high detection accuracy, and can detect the user's eyebrow makeup progress in real time during the eyebrow makeup process. The method requires no deep learning model, has a small computation load and low cost, reduces the processing pressure on the server, improves the efficiency of makeup progress detection, and can meet the real-time requirement of makeup progress detection.
Referring to fig. 1, the method is used for detecting the makeup progress corresponding to eyebrow makeup and specifically includes the following steps:
Step 601: obtaining an initial frame image and a current frame image from a real-time makeup video of a user currently applying a specific makeup.
The execution subject of the embodiments of the application is the server. A client matching the makeup progress detection service provided by the server is installed on the user's terminal, such as a mobile phone or computer. When the user needs the makeup progress detection service, the user opens the client on the terminal. The client's display interface provides a video upload entry; when the user taps it, the terminal's camera is invoked to shoot a makeup video while the user applies eyebrow makeup to his or her face. The user's terminal transmits the shot makeup video to the server as a video stream, and the server receives each frame of the makeup video transmitted by the user's terminal.
In the embodiments of the application, the server takes the first frame image received as the initial frame image, and determines the current makeup progress of the specific makeup corresponding to each subsequently received frame by comparison against this initial frame image. Since every subsequent frame is processed the same way, the embodiments of the application explain the makeup progress detection process using the current frame image received at the current moment as an example.
In other embodiments of the application, after obtaining the initial frame image and the current frame image of the user's makeup video, the server further detects whether both images contain only the face image of the same user. First it detects whether each of them contains exactly one face image; if either contains multiple face images or none, prompt information is sent to the user's terminal, which receives and displays it to remind the user to keep only the face of the same user in the makeup video. For example, the prompt may read "please keep only the face of the same person in the shot".
If both the initial frame image and the current frame image are detected to contain exactly one face image, it is further judged whether the two face images belong to the same user. Specifically, face feature information can be extracted from the face image in each frame by face recognition technology and the similarity between the two sets of features calculated. If the similarity is greater than or equal to a set value, the faces in the initial frame image and the current frame image are determined to belong to the same user; if it is smaller than the set value, they are determined to belong to different users, and prompt information is sent to the user's terminal, which receives and displays it to remind the user to keep only the face of the same user in the makeup video.
After the server obtains the initial frame image and the current frame image of the user's makeup through this step, the server determines the current makeup progress of the user through the following operations of steps 602 and 603.
Step 602: and acquiring a first target area image corresponding to the eyebrow from the initial frame image, and acquiring a second target area image corresponding to the eyebrow from the current frame image.
The acquisition process of the first target area image is the same as that of the second target area image; the embodiments of the application describe the acquisition of the first target area image in detail as an example. The server obtains the first target area image from the initial frame image through the operations of steps S5 to S7 below.
S5: detecting the first face key points corresponding to the initial frame image.
The server is configured with a pre-trained detection model for detecting face key points, which provides an interface service for face key point detection. After acquiring the initial frame image of the user's makeup video, the server calls this interface service and identifies all face key points of the user's face in the initial frame image through the detection model. To distinguish them from the face key points corresponding to the current frame image, the embodiments of the application call all face key points corresponding to the initial frame image the first face key points, and all face key points corresponding to the current frame image the second face key points.
The identified face key points include key points on the user's facial contour and key points of the mouth, nose, eyes, eyebrows and other parts. The number of identified face key points may be 106.
S6: and acquiring a face region image corresponding to the initial frame image according to the first face key point.
The server specifically obtains the face region image corresponding to the initial frame image through the following operations in steps S60 to S62:
S60: performing rotation correction on the initial frame image and the first face key points according to the first face key points.
Because the user cannot guarantee that the pose angle of the face is the same in every frame when shooting a makeup video with a terminal, the face in each frame needs to be rotationally corrected in order to improve the accuracy of the comparison between the current frame image and the initial frame image. After correction, the line connecting the two eyes lies on the same horizontal line in every frame, which ensures that the face pose angle is the same across frames and avoids large makeup progress detection errors caused by differing pose angles.
Specifically, the left-eye center coordinate and the right-eye center coordinate are determined from the left-eye key points and the right-eye key points included in the first face key points. All left-eye key points of the left-eye area and all right-eye key points of the right-eye area are determined from the first face key points. The abscissas of all left-eye key points are averaged, as are the ordinates, and the pair of averages forms the left-eye center coordinate. The right-eye center coordinate is determined in the same way.
The rotation angle and rotation center point coordinates corresponding to the initial frame image are then determined from the left-eye and right-eye center coordinates. As shown in fig. 2, the horizontal difference dx and the vertical difference dy between the left-eye and right-eye center coordinates are calculated, as is the length d of the line connecting them. From d, dx and dy, the angle θ between the line connecting the two eyes and the horizontal direction is calculated; this angle θ is the rotation angle corresponding to the initial frame image. The midpoint of the line connecting the two eyes is then calculated from the two center coordinates; this midpoint is the rotation center point corresponding to the initial frame image.
The initial frame image and the first face key points are rotationally corrected according to the calculated rotation angle and rotation center point coordinates. Specifically, the rotation angle and the rotation center point coordinates are input into a preset function for calculating the rotation matrix of a picture; the preset function may be the OpenCV function cv2.getRotationMatrix2D(). Calling this function yields the rotation matrix corresponding to the initial frame image, and the product of the initial frame image and the rotation matrix gives the corrected initial frame image. The correction of the initial frame image with the rotation matrix can also be completed by calling the OpenCV function cv2.warpAffine().
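The following is a minimal sketch of this correction step, assuming OpenCV-style BGR frames and key points stored as NumPy arrays of (x, y) rows; the function and variable names are illustrative, not taken from the patent.

```python
import cv2
import numpy as np

def rotate_face_upright(image, left_eye_pts, right_eye_pts):
    # Eye centers: means of each eye's key point coordinates.
    left_center = left_eye_pts.mean(axis=0)
    right_center = right_eye_pts.mean(axis=0)

    # Angle theta between the eye line and the horizontal (in degrees),
    # and the rotation center at the midpoint of the eye line.
    dx, dy = right_center - left_center
    angle = np.degrees(np.arctan2(dy, dx))
    center = (left_center + right_center) / 2.0

    # Rotation matrix and affine warp, as described above.
    rot_mat = cv2.getRotationMatrix2D(tuple(center), angle, 1.0)
    h, w = image.shape[:2]
    corrected = cv2.warpAffine(image, rot_mat, (w, h))
    return corrected, rot_mat
```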
For the first face key points, each key point must be corrected one by one so as to correspond to the corrected initial frame image. Correcting the key points requires two coordinate-system conversions: the first converts the coordinate system with the top-left corner of the initial frame image as origin into one with the bottom-left corner as origin, and the second further converts that into a coordinate system with the rotation center point as origin, as shown in fig. 3. After the two conversions, applying the following formula (1) to each first face key point completes the rotation correction of the first face key points:

$$\begin{pmatrix} x_0 \\ y_0 \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \tag{1}$$

In formula (1), x_0 and y_0 are the abscissa and ordinate of a first face key point after rotation correction, x and y are its abscissa and ordinate before rotation correction, and θ is the rotation angle.
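The key points can be corrected with the same transform. Instead of the two manual coordinate-system conversions behind formula (1), the sketch below reuses the affine matrix returned by cv2.getRotationMatrix2D, which is mathematically equivalent; this is an implementation assumption, not the patent's literal procedure.

```python
import numpy as np

def rotate_landmarks(landmarks, rot_mat):
    # rot_mat is the 2x3 affine matrix from cv2.getRotationMatrix2D.
    ones = np.ones((len(landmarks), 1))
    homogeneous = np.hstack([landmarks, ones])  # (N, 3): rows [x, y, 1]
    return homogeneous @ rot_mat.T              # (N, 2) corrected points
```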
The corrected initial frame image and first face key points are based on the entire image, which contains not only the user's face information but also other redundant image information, so the face region needs to be cropped from the corrected image in the following step S61.
S61: intercepting an image containing the face area from the corrected initial frame image according to the corrected first face key points.
First, the minimum abscissa value, minimum ordinate value, maximum abscissa value and maximum ordinate value are determined from the corrected first face key points, and the intercepting frame corresponding to the face area in the corrected initial frame image is determined from them. Specifically, the minimum abscissa value and minimum ordinate value form one coordinate point, which is taken as the top-left vertex of the intercepting frame; the maximum abscissa value and maximum ordinate value form another coordinate point, which is taken as its bottom-right vertex. The position of the intercepting frame in the corrected initial frame image is determined from the top-left and bottom-right vertices, and the image inside the frame is intercepted from the corrected initial frame image, i.e. the image containing the face region is intercepted.
In other embodiments of the application, to ensure that the user's entire face area is intercepted and to avoid large subsequent makeup progress detection errors caused by incomplete interception, the intercepting frame may be further enlarged by a preset multiple, such as 1.15 or 1.25. The embodiments of the application do not limit the specific value of the preset multiple, which can be set as required in practice. After the intercepting frame is enlarged outward by the preset multiple, the image inside the enlarged frame is intercepted from the corrected initial frame image, thereby capturing the user's complete face area.
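A sketch of this cropping step under the same assumptions, with the intercepting frame enlarged outward around its center by the preset multiple:

```python
import numpy as np

def crop_face_region(image, landmarks, multiple=1.25):
    # Bounding box of the corrected key points.
    x_min, y_min = landmarks.min(axis=0)
    x_max, y_max = landmarks.max(axis=0)

    # Enlarge the box around its center by the preset multiple.
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half_w = (x_max - x_min) * multiple / 2.0
    half_h = (y_max - y_min) * multiple / 2.0

    # Clamp to the image and crop.
    h, w = image.shape[:2]
    x0, y0 = max(int(cx - half_w), 0), max(int(cy - half_h), 0)
    x1, y1 = min(int(cx + half_w), w), min(int(cy + half_h), h)
    return image[y0:y1, x0:x1], (x0, y0)
```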
S62: scaling the image containing the face area to a preset size to obtain the face area image corresponding to the initial frame image.
After the image containing the user's face area has been intercepted from the initial frame image as above, it is scaled to the preset size, yielding the face area image corresponding to the initial frame image. The preset size may be 390 × 390, 400 × 400, or the like; the embodiments of the application do not limit its specific value, which can be set as required in practice.
To adapt the first face key points to the scaled face region image, after the intercepted image containing the face region is scaled to the preset size, the corrected first face key points are scaled and translated according to the pre-scaling size of the image containing the face region and the preset size. Specifically, the translation direction and translation distance of each first face key point are determined from the pre-scaling image size and the preset size to which the image is scaled; each first face key point is then translated accordingly, and the translated coordinates are recorded.
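A sketch of adapting the key points to the scaled crop, assuming the crop returned above is resized to a square preset size (390 is one of the example values in the text):

```python
import numpy as np

def adapt_landmarks(landmarks, crop_origin, crop_shape, preset=390):
    # Translate by the crop origin, then scale each axis to the preset size.
    x0, y0 = crop_origin
    crop_h, crop_w = crop_shape[:2]
    shifted = landmarks - np.array([x0, y0], dtype=float)
    return shifted * np.array([preset / crop_w, preset / crop_h])
```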
The face region image is obtained from the initial frame image in the above manner, and the first face key points are adapted to it through rotation correction, translation and scaling; the first target region image corresponding to the eyebrows is then extracted from the face region image through step S7 below.
In other embodiments of the application, before step S7 is performed, Gaussian filtering may be applied to the face region image to remove its noise. Specifically, the face region image corresponding to the initial frame image is Gaussian-filtered with a Gaussian kernel of a preset size.
The Gaussian kernel is the key parameter of the Gaussian filtering: if it is too small, a good filtering effect cannot be achieved; if it is too large, noise is filtered out of the image but useful information is smoothed away as well. The embodiments of the application select a Gaussian kernel of a preset size, which may be 9 × 9. In addition, the other parameters of the Gaussian filter function, sigmaX and sigmaY, are both set to 0. After Gaussian filtering, the image information is smoother, which improves the accuracy of the subsequently obtained makeup progress.
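The denoising step maps directly onto OpenCV's Gaussian blur; a one-line sketch with the 9 × 9 kernel and sigmaX = sigmaY = 0 described above, so the sigmas are derived from the kernel size:

```python
import cv2

# face_region: the face area image scaled to the preset size.
face_region = cv2.GaussianBlur(face_region, (9, 9), sigmaX=0, sigmaY=0)
```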
After the face region image is obtained in the above manner, or after Gaussian filtering has been applied to it, the first target region image corresponding to the eyebrows is extracted from the face region image corresponding to the initial frame image through step S7.
S7: and extracting a first target area image corresponding to eyebrows from the face area image corresponding to the initial frame image according to the eyebrow key points included in the first face key points.
When progress detection is performed on eyebrow makeup, the image of the area where the eyebrows are located needs to be cut out so that other areas do not affect the eyebrow makeup progress detection. After the eyebrow area is cut out, operations are performed on that area alone, which reduces the computation load while improving accuracy.
The obtained first face key points include a plurality of eyebrow key points, for example 18, distributed along the eyebrow contour at different positions from the eyebrow head to the eyebrow tail. To improve the accuracy of extracting the first target region image corresponding to the eyebrows, the embodiments of the application obtain more points on the eyebrow contour by linear interpolation, so that the image is cut out according to more points. Since the eyebrow tail is pointed, linear interpolation there is inconvenient; the extraction of the first target area image corresponding to the eyebrows is therefore divided into two parts. One part is the section from the eyebrow head to the eyebrow peak, where more points are obtained by linear interpolation before cutting out the image; the other is the section from the eyebrow peak to the eyebrow tail, where the currently available eyebrow key points of that section are used to cut out the image.
Specifically, linear interpolation is performed on the eyebrow key points between the eyebrow head and the eyebrow peak included in the first face key points, obtaining a plurality of interpolation points. In the face region image corresponding to the initial frame image, all eyebrow key points between the eyebrow head and the eyebrow peak and the obtained interpolation points are connected in sequence along the eyebrow contour line to obtain a closed region, which encloses the partial eyebrow area from the eyebrow head to the eyebrow peak. The image of this closed area is intercepted from the face region image corresponding to the initial frame image, yielding the partial eyebrow image between the eyebrow head and the eyebrow peak.
Similarly, all eyebrow key points between the eyebrow peak and the eyebrow tail in the face region image corresponding to the initial frame image are connected in sequence along the eyebrow contour line to obtain a closed region, which encloses the partial eyebrow area from the eyebrow peak to the eyebrow tail. The image of this closed region is intercepted from the face region image corresponding to the initial frame image, yielding the partial eyebrow image between the eyebrow peak and the eyebrow tail.
The partial eyebrow image between the eyebrow head and the eyebrow peak and the partial eyebrow image between the eyebrow peak and the eyebrow tail are then spliced into the first target area image corresponding to the eyebrows.
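A sketch of the two-segment eyebrow cut-out. How the 18 eyebrow key points split into a head-to-peak contour and a peak-to-tail contour is an assumption about the key point ordering; both closed regions are filled into one mask and the masked pixels extracted.

```python
import cv2
import numpy as np

def extract_eyebrow(face_region, head_to_peak, peak_to_tail, per_edge=5):
    mask = np.zeros(face_region.shape[:2], dtype=np.uint8)

    # Densify the head-to-peak contour by linear interpolation.
    pts = np.asarray(head_to_peak, dtype=float)
    dense = []
    for p, q in zip(pts, pts[1:]):
        for t in np.linspace(0.0, 1.0, per_edge, endpoint=False):
            dense.append((1.0 - t) * p + t * q)
    dense.append(pts[-1])

    # Fill both closed regions into the same mask (their union is the
    # spliced eyebrow region), then keep only the masked pixels.
    cv2.fillPoly(mask, [np.array(dense, dtype=np.int32)], 255)
    cv2.fillPoly(mask, [np.asarray(peak_to_tail, dtype=np.int32)], 255)
    return cv2.bitwise_and(face_region, face_region, mask=mask), mask
```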
For the current frame image, the second target area image corresponding to the eyebrow is obtained from the current frame image according to the operations of the above steps S5-S7.
In other embodiments of the application, in an actual makeup scene the edge of the eyebrow makeup may have no clear outline; the boundary is usually blurred rather than abrupt. Therefore, after the first target area image and the second target area image are obtained through the above embodiments, boundary erosion processing is further performed on the eyebrow areas in the first target area image and the second target area image respectively, so that the boundary of the target makeup area for eyebrow makeup is blurred, further improving the accuracy of makeup progress detection.
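A sketch of the boundary erosion, assuming the eyebrow region is represented by a binary mask; the elliptical 5 × 5 kernel is an illustrative choice, not specified in the text:

```python
import cv2

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
eyebrow_mask = cv2.erode(eyebrow_mask, kernel, iterations=1)
```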
Step 603: and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
The color spaces of the first target area image corresponding to the eyebrows in the initial frame image and the second target area image corresponding to the eyebrows in the current frame image, obtained as above, are both RGB. Through a large number of prior experiments, the embodiments of the application determined the influence of eyebrow makeup on each channel component of the color space and found that the differences in its influence on the RGB color channels are small. The HSV color space consists of three components, hue, saturation and value (brightness); when one component changes, the values of the other two do not change significantly. Experiments determine which of the brightness, hue and saturation channel components is most influenced by eyebrow makeup, and the most influenced channel component is configured in the server as the preset single-channel component corresponding to this preset type of makeup. For eyebrow makeup, the corresponding preset single-channel component may be the brightness component.
After the first target area image corresponding to the eyebrows in the initial frame image and the second target area image corresponding to the eyebrows in the current frame image are obtained as above, both are converted from the RGB color space into the HSV color space. The preset single-channel component is separated from the HSV color space of the converted first target area image, yielding a first target area image containing only the preset single-channel component; likewise, the preset single-channel component is separated from the HSV color space of the converted second target area image, yielding a second target area image containing only the preset single-channel component.
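A sketch of the conversion, keeping only the brightness (V) channel named above as the preset single-channel component for eyebrow makeup:

```python
import cv2

def preset_channel(region_bgr):
    # RGB/BGR -> HSV, then separate the V (brightness) channel.
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    return hsv[:, :, 2]
```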
The current makeup progress corresponding to the current frame image is then determined from the converted first target area image and second target area image.
Specifically, for pixel points at the same position in the converted first target area image and the converted second target area image, the absolute value of the difference between the corresponding channel components is calculated; for example, the absolute difference in the brightness component between pixel points with the same coordinates in the two images. The number of pixel points whose absolute difference meets the preset makeup completion condition is then counted. The preset makeup completion condition is that the absolute difference corresponding to a pixel point is greater than a first preset threshold, which may be 7 or 8.
The total number of pixel points in the eyebrow area of the first target area image or the second target area image is also counted. The ratio of the number of pixel points meeting the preset makeup completion condition to the total number of pixel points in the eyebrow area is then calculated and determined as the current makeup progress.
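A sketch of the progress computation, assuming the two single-channel images from the previous step and a binary mask of the eyebrow area:

```python
import cv2
import numpy as np

def makeup_progress(v_initial, v_current, eyebrow_mask, threshold=7):
    # Per-pixel absolute difference of the preset channel.
    diff = cv2.absdiff(v_initial, v_current)

    # Pixels past the first preset threshold, within the eyebrow area,
    # divided by the total pixel count of the eyebrow area.
    changed = np.count_nonzero((diff > threshold) & (eyebrow_mask > 0))
    total = np.count_nonzero(eyebrow_mask)
    return changed / total if total else 0.0
```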
In other embodiments of the application, to further improve the accuracy of makeup progress detection, the eyebrow areas in the first target area image and the second target area image are additionally aligned. Specifically, binarization is applied to the first and second target area images containing only the preset single-channel component: the component values of the pixel points inside the target makeup areas of both images are set to 1, and the component values of all other pixel points are set to 0. This yields a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image.
An AND operation is performed on the first binarization mask image and the second binarization mask image, i.e. pixel points at the same position in the two masks are ANDed, yielding a second mask image corresponding to the intersection of the first and second target area images. The region of nonzero preset single-channel components in the second mask image is the target makeup region shared by the first target area image and the second target area image.
The face area images corresponding to the initial frame image and the current frame image were obtained in the preceding steps. An AND operation is performed on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the eyebrows in the initial frame image; likewise, an AND operation on the second mask image and the face region image corresponding to the current frame image yields a new second target region image corresponding to the eyebrows in the current frame image.
In other embodiments of the application, the AND operation may instead be performed on the second mask image and the boundary-eroded first target area image to obtain the new first target area image, and on the second mask image and the boundary-eroded second target area image to obtain the new second target area image.
Because the second mask image contains the target makeup area shared by the initial frame image and the current frame image, extracting the new first and second target area images through it as above makes the positions of the target makeup areas in the two images completely consistent. When the makeup progress is subsequently determined by comparing the changes between the target makeup area of the current frame image and that of the initial frame image, the compared areas are guaranteed to coincide exactly, greatly improving the accuracy of makeup progress detection.
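A sketch of the alignment step under the same assumptions: AND the two binarized masks to obtain the second mask image, then re-extract both target regions through it so the compared pixels coincide exactly.

```python
import cv2

def align_target_regions(face_init, face_curr, mask_init, mask_curr):
    # Second mask image: intersection of the two binarization masks.
    common = cv2.bitwise_and(mask_init, mask_curr)

    # New target region images sharing exactly the same makeup area.
    new_init = cv2.bitwise_and(face_init, face_init, mask=common)
    new_curr = cv2.bitwise_and(face_curr, face_curr, mask=common)
    return new_init, new_curr, common
```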
After the target makeup areas in the initial frame image and the current frame image are aligned as above to obtain the new first and second target area images, the current makeup progress corresponding to the current frame image is determined through the operation of step 603 described above.
After the current makeup progress is determined in any of the above ways, the server sends it to the user's terminal, which receives and displays it. The current makeup progress may be a ratio or a percentage, and the terminal may display it in the form of a progress bar.
While the user is making up, the makeup progress detection method provided by the embodiments of the application detects, in real time, the makeup progress of each frame after the first relative to the first frame and displays the detected progress to the user, so that the user can intuitively see his or her own makeup progress, improving makeup efficiency.
To facilitate understanding of the methods provided by the embodiments of the application, reference is made to fig. 4. According to the initial frame image and its corresponding first face key points, and the current frame image and its corresponding second face key points, the faces in the two images are aligned and cropped respectively, and the two cropped face region images are then smoothed and denoised by the Laplacian algorithm. Next, the first and second target area images corresponding to the eyebrows are cut from the two face area images respectively, and boundary erosion processing is applied to both. The two images are then converted into images containing only the preset single-channel component of the HSV color space and aligned once more, after which the current makeup progress is calculated from the first and second target area images.
In the embodiments of the application, the face key points are used to correct and crop the user's face area in the video frame, improving the accuracy of face area recognition. The target area image corresponding to the eyebrows is cut from the face area image based on the face key points, and the target area images corresponding to the initial frame image and the current frame image are pixel-aligned, improving the accuracy of the eyebrow target area image. The target makeup areas in the initial frame image and the current frame image are aligned, reducing errors caused by positional differences between them. When the eyebrow area is matted out, a piecewise interpolation algorithm is introduced, making the cut-out eyebrow area more coherent and accurate. In addition, no deep learning is used and no large dataset needs to be collected in advance: a real-time picture of the user's makeup is captured and the detection result is returned to the user after computation on the server side. Compared with a deep learning model inference scheme, this consumes less computing cost in the algorithm processing stage and reduces the processing pressure on the server.
The embodiment of the application also provides a makeup progress detection device, and the device is used for executing the makeup progress detection method for detecting the makeup progress of the eyebrows. Referring to fig. 5, the apparatus specifically includes:
the video obtaining module 1601 is configured to obtain an initial frame image and a current frame image in a real-time makeup video of a user currently performing a specific makeup;
a target region obtaining module 1602, configured to obtain a first target region image corresponding to an eyebrow from an initial frame image, and obtain a second target region image corresponding to the eyebrow from a current frame image;
the progress determining module 1603 is configured to determine a current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
A target region obtaining module 1602, configured to detect a first face key point corresponding to an initial frame image; acquiring a face region image corresponding to the initial frame image according to the first face key point; and acquiring a first target area image corresponding to eyebrows from the face area image according to the eyebrow key points included in the first face key points.
A target region obtaining module 1602, configured to interpolate the eyebrow key points between the eyebrow head and the eyebrow peak included in the first face key points to obtain multiple interpolation points; intercept, from the face region image, the closed region formed by connecting all eyebrow key points between the eyebrow head and the eyebrow peak together with the interpolation points, obtaining a partial eyebrow image between the eyebrow head and the eyebrow peak; intercept, from the face region image, the closed region formed by connecting all eyebrow key points between the eyebrow peak and the eyebrow tail, obtaining a partial eyebrow image between the eyebrow peak and the eyebrow tail; and splice the partial eyebrow image between the eyebrow head and the eyebrow peak and the partial eyebrow image between the eyebrow peak and the eyebrow tail into the first target area image corresponding to the eyebrows.
A progress determining module 1603, configured to respectively convert the first target area image and the second target area image into images including preset single channel components in HSV color space; and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
A progress determining module 1603, configured to calculate difference absolute values of preset single-channel components corresponding to pixels with the same position in the converted first target area image and the converted second target area image, respectively; counting the number of pixel points of which the corresponding absolute values of the differences meet preset makeup completion conditions; and calculating the ratio of the counted number of the pixel points to the total number of the pixel points in all the target makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
The progress determining module 1603 is further configured to perform binarization processing on the first target area image and the second target area image respectively to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image; perform an AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image; acquire the face region image corresponding to the initial frame image and the face region image corresponding to the current frame image; perform an AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image; and perform an AND operation on the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the current frame image.
The device also includes: a boundary erosion module, configured to perform boundary erosion processing on the makeup areas in the first target area image and the second target area image respectively.
The target area obtaining module 1602 is further configured to perform rotation correction on the initial frame image and the first face key points according to the first face key points corresponding to the initial frame image; crop an image containing the face region from the corrected initial frame image according to the corrected first face key points; and scale the image containing the face region to a preset size to obtain the face region image corresponding to the initial frame image.
The target area obtaining module 1602 is further configured to determine left-eye center coordinates and right-eye center coordinates according to the left-eye key points and right-eye key points included in the first face key points; determine a rotation angle and rotation center point coordinates corresponding to the initial frame image according to the two eye center coordinates; and perform rotation correction on the initial frame image and the first face key points according to the rotation angle and the rotation center point coordinates.
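The eye-based rotation correction admits a compact sketch, assuming the key points are (x, y) pixel coordinates and the rotation center is the midpoint of the two eye centers; the midpoint choice is an assumption consistent with, but not dictated by, the description above.

import cv2
import numpy as np

def rotate_to_level(image, keypoints, left_eye_pts, right_eye_pts):
    # Rotate the image (and its key points) so the eye line becomes horizontal.
    left = np.asarray(left_eye_pts, dtype=np.float64).mean(axis=0)    # left eye center
    right = np.asarray(right_eye_pts, dtype=np.float64).mean(axis=0)  # right eye center
    angle = np.degrees(np.arctan2(right[1] - left[1], right[0] - left[0]))
    center = ((left[0] + right[0]) / 2.0, (left[1] + right[1]) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)  # 2x3 affine matrix
    h, w = image.shape[:2]
    rotated = cv2.warpAffine(image, M, (w, h))
    pts = np.hstack([np.asarray(keypoints, dtype=np.float64), np.ones((len(keypoints), 1))])
    return rotated, pts @ M.T  # key points follow the same transform as the image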
The target area obtaining module 1602 is further configured to perform image cropping on the face region contained in the corrected initial frame image according to the corrected first face key points.
The target area obtaining module 1602 is further configured to determine a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key points; determine, according to these four values, a cropping box corresponding to the face region in the corrected initial frame image; and crop an image containing the face region from the corrected initial frame image according to the cropping box.
The target area obtaining module 1602 is further configured to amplify the cropping box by a preset multiple, and crop an image containing the face region from the corrected initial frame image according to the amplified cropping box.
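The cropping box construction and its enlargement can be sketched as follows; scaling the box about its own center, clipping to the image bounds, and the example multiple of 1.15 are all assumptions of the sketch.

import numpy as np

def face_crop_box(keypoints, img_w, img_h, scale=1.15):
    # Bounding box of the key points, enlarged by `scale` and clipped to the image.
    kps = np.asarray(keypoints)
    x0, y0 = kps.min(axis=0)  # minimum abscissa / ordinate
    x1, y1 = kps.max(axis=0)  # maximum abscissa / ordinate
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    hw, hh = (x1 - x0) / 2.0 * scale, (y1 - y0) / 2.0 * scale
    left, top = max(int(cx - hw), 0), max(int(cy - hh), 0)
    right, bottom = min(int(cx + hw), img_w), min(int(cy + hh), img_h)
    return left, top, right, bottom  # crop with image[top:bottom, left:right]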
The target area obtaining module 1602 is further configured to perform scaling and translation processing on the corrected first face key points according to the size of the image containing the face region and the preset size.
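Keeping the key points consistent with the cropped-and-resized face image then reduces to a translation followed by a scaling; the preset size of 256x256 below is an assumed example, and (left, top) is the crop-box origin from the sketch above.

import numpy as np

def remap_keypoints(keypoints, left, top, crop_w, crop_h, preset_w=256, preset_h=256):
    # Translate key points to the crop origin, then scale them to the preset size.
    shifted = np.asarray(keypoints, dtype=np.float64) - np.array([left, top])
    return shifted * np.array([preset_w / crop_w, preset_h / crop_h])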
The device also includes: a face detection module, configured to detect whether the initial frame image and the current frame image contain only face images of the same user; if so, perform the operation of determining the user's current makeup progress for the specific makeup; and if not, send prompt information to the user's terminal, the prompt information being used to prompt the user to ensure that only the face of the same user appears in the real-time makeup video.
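The same-user check can be sketched as a comparison of face embeddings; the embedding lists below stand in for the output of some face detector and embedding model, which the sketch assumes is available, and the similarity threshold is likewise an assumption.

import numpy as np

def same_single_user(embeddings_initial, embeddings_current, thresh=0.6):
    # True only if each frame contains exactly one face and the two faces match.
    if len(embeddings_initial) != 1 or len(embeddings_current) != 1:
        return False
    a = np.asarray(embeddings_initial[0], dtype=np.float64)
    b = np.asarray(embeddings_current[0], dtype=np.float64)
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos >= thresh  # assumed similarity threshold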
The makeup progress detection device provided by the above embodiments is based on the same inventive concept as the makeup progress detection method provided by the embodiments of the application, and has the same beneficial effects as the method it adopts, runs, or implements.
The embodiments of the application also provide an electronic device for executing the above makeup progress detection method. Refer to fig. 6, which illustrates a schematic diagram of an electronic device according to some embodiments of the present application. As shown in fig. 6, the electronic device 11 includes a processor 1100, a memory 1101, a bus 1102 and a communication interface 1103; the processor 1100, the communication interface 1103 and the memory 1101 are connected by the bus 1102. The memory 1101 stores a computer program operable on the processor 1100, and the processor 1100 executes the program to perform the makeup progress detection method of any of the foregoing embodiments.
The memory 1101 may include a random access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between this device and at least one other network element is realized through at least one communication interface 1103 (wired or wireless), which may use the Internet, a wide area network, a local area network, a metropolitan area network, and the like.
The bus 1102 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. The memory 1101 is used for storing a program; upon receiving an execution instruction, the processor 1100 executes the program, and the makeup progress detection method disclosed in any of the foregoing embodiments of the present application may be applied to, or implemented by, the processor 1100.
The processor 1100 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware or by instructions in the form of software in the processor 1100. The processor 1100 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EEPROM, or a register. The storage medium is located in the memory 1101; the processor 1100 reads the information in the memory 1101 and completes the steps of the above method in combination with its hardware.
The electronic device provided by the embodiments of the application is based on the same inventive concept as the makeup progress detection method provided by the embodiments of the application, and has the same beneficial effects as the method it adopts, runs, or implements.
The embodiments of the application also provide a computer-readable storage medium corresponding to the above makeup progress detection method. Referring to fig. 7, the computer-readable storage medium is illustrated as an optical disc 30, on which a computer program (i.e., a program product) is stored; when executed by a processor, the computer program performs the makeup progress detection method provided by any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memories (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above embodiments of the application is based on the same inventive concept as the makeup progress detection method provided by the embodiments of the application, and has the same beneficial effects as the method adopted, run, or implemented by the application program it stores.
It should be noted that:
in the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the application and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A makeup progress detection method characterized by comprising:
acquiring an initial frame image and a current frame image in a real-time makeup video of a user for making up a specific makeup currently;
acquiring a first target area image corresponding to eyebrows from the initial frame image, and acquiring a second target area image corresponding to the eyebrows from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
2. The method according to claim 1, wherein the acquiring a first target area image corresponding to eyebrows from the initial frame image comprises:
detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key point;
and acquiring a first target area image corresponding to eyebrows from the face area image according to the eyebrow key points included in the first face key points.
3. The method according to claim 2, wherein the acquiring a first target area image corresponding to eyebrows from the face region image according to the eyebrow key points included in the first face key points comprises:
interpolating the eyebrow key points between the eyebrow head and the eyebrow peak included in the first face key points to obtain a plurality of interpolation points;
cropping, from the face region image, a closed region formed by connecting all the eyebrow key points between the eyebrow head and the eyebrow peak and the plurality of interpolation points, to obtain a partial eyebrow image between the eyebrow head and the eyebrow peak;
cropping, from the face region image, a closed region formed by connecting all the eyebrow key points between the eyebrow peak and the eyebrow tail, to obtain a partial eyebrow image between the eyebrow peak and the eyebrow tail;
and splicing the partial eyebrow image between the eyebrow head and the eyebrow peak and the partial eyebrow image between the eyebrow peak and the eyebrow tail into the first target area image corresponding to the eyebrows.
4. The method according to claim 1, wherein the determining a current makeup progress corresponding to the current frame image according to the first target area image and the second target area image comprises:
respectively converting the first target area image and the second target area image into images containing preset single-channel components in an HSV color space;
and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
5. The method according to claim 4, wherein the determining a current makeup progress corresponding to the current frame image according to the converted first target area image and the converted second target area image comprises:
calculating, for pixel points at the same position in the converted first target area image and the converted second target area image, absolute values of the differences of the preset single-channel component;
counting the number of pixel points of which the corresponding absolute values of the differences meet preset makeup completion conditions;
and calculating the ratio of the counted pixel point number to the total number of pixel points in all the target makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
6. The method according to claim 1, wherein before determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image, the method further comprises:
respectively carrying out binarization processing on the first target area image and the second target area image to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image;
performing an AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to an intersection region of the first target area image and the second target area image;
acquiring a face region image corresponding to the initial frame image and a face region image corresponding to the current frame image;
performing an AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target area image corresponding to the initial frame image;
and performing an AND operation on the second mask image and the face region image corresponding to the current frame image to obtain a new second target area image corresponding to the current frame image.
7. The method according to claim 1, wherein before determining the current makeup progress corresponding to the current frame image, the method further comprises:
and respectively performing boundary erosion processing on the makeup areas in the first target area image and the second target area image.
8. The method according to claim 2, wherein the obtaining, according to the first face key point, a face region image corresponding to the initial frame image comprises:
performing rotation correction on the initial frame image and the first face key point according to the first face key point corresponding to the initial frame image;
cropping an image containing a face region from the corrected initial frame image according to the corrected first face key point;
and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
9. The method according to claim 8, wherein the performing rotation correction on the initial frame image and the first face key point according to the first face key point comprises:
respectively determining a left eye center coordinate and a right eye center coordinate according to a left eye key point and a right eye key point which are included in the first face key point;
determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate;
and performing rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
10. The method according to claim 8, wherein the cropping an image containing a face region from the corrected initial frame image according to the corrected first face key point comprises:
performing image cropping on the face region contained in the corrected initial frame image according to the corrected first face key point.
11. The method according to claim 10, wherein the performing image cropping on the face region contained in the corrected initial frame image according to the corrected first face key point comprises:
determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key points;
determining a cropping box corresponding to the face region in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value;
and cropping an image containing the face region from the corrected initial frame image according to the cropping box.
12. The method of claim 11, further comprising:
amplifying the cropping box by a preset multiple;
and cropping an image containing the face region from the corrected initial frame image according to the amplified cropping box.
13. The method of claim 8, further comprising:
and performing scaling and translation processing on the corrected first face key points according to the size of the image containing the face region and the preset size.
14. The method according to any one of claims 1-13, further comprising:
detecting whether the initial frame image and the current frame image only contain face images of the same user;
if yes, executing the operation of determining the current makeup progress of the specific makeup for the user;
and if not, sending prompt information to the terminal of the user, wherein the prompt information is used for prompting the user to ensure that only the face of the same user appears in the real-time makeup video.
15. A makeup progress detection device characterized by comprising:
the video acquisition module is used for acquiring an initial frame image and a current frame image in a real-time makeup video for a user to make up a specific makeup at present;
the target area acquisition module is used for acquiring a first target area image corresponding to the eyebrow from the initial frame image and acquiring a second target area image corresponding to the eyebrow from the current frame image;
and the progress determining module is used for determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
16. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method of any one of claims 1-14.
17. A computer-readable storage medium, on which a computer program is stored, characterized in that the program is executed by a processor to implement the method according to any of claims 1-14.
CN202111308470.0A 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium Pending CN115761827A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111308470.0A (published as CN115761827A) | 2021-08-31 | 2021-08-31 | Cosmetic progress detection method, device, equipment and storage medium

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
CN202111308470.0A (published as CN115761827A) | 2021-08-31 | 2021-08-31 | Cosmetic progress detection method, device, equipment and storage medium
CN202111015242.4A (published as CN114155569B) | 2021-08-31 | 2021-08-31 | Cosmetic progress detection method, device, equipment and storage medium

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111015242.4A (Division; published as CN114155569B) | Cosmetic progress detection method, device, equipment and storage medium | 2021-08-31 | 2021-08-31

Publications (1)

Publication Number Publication Date
CN115761827A true CN115761827A (en) 2023-03-07

Family

ID=80461794

Family Applications (5)

Application Number | Title | Priority Date | Filing Date
CN202111306864.2A (Pending; published as CN115731591A) | Method, device and equipment for detecting makeup progress and storage medium | 2021-08-31 | 2021-08-31
CN202111308461.1A (Pending; published as CN115731142A) | Image processing method, device, equipment and storage medium | 2021-08-31 | 2021-08-31
CN202111308454.1A (Pending; published as CN115937919A) | Method, device and equipment for identifying makeup color and storage medium | 2021-08-31 | 2021-08-31
CN202111308470.0A (Pending; published as CN115761827A) | Cosmetic progress detection method, device, equipment and storage medium | 2021-08-31 | 2021-08-31
CN202111015242.4A (Active; published as CN114155569B) | Cosmetic progress detection method, device, equipment and storage medium | 2021-08-31 | 2021-08-31

Family Applications Before (3)

Application Number | Title | Priority Date | Filing Date
CN202111306864.2A (Pending; published as CN115731591A) | Method, device and equipment for detecting makeup progress and storage medium | 2021-08-31 | 2021-08-31
CN202111308461.1A (Pending; published as CN115731142A) | Image processing method, device, equipment and storage medium | 2021-08-31 | 2021-08-31
CN202111308454.1A (Pending; published as CN115937919A) | Method, device and equipment for identifying makeup color and storage medium | 2021-08-31 | 2021-08-31

Family Applications After (1)

Application Number | Title | Priority Date | Filing Date
CN202111015242.4A (Active; published as CN114155569B) | Cosmetic progress detection method, device, equipment and storage medium | 2021-08-31 | 2021-08-31

Country Status (1)

Country Link
CN (5) CN115731591A (en)


Also Published As

Publication number Publication date
CN115937919A (en) 2023-04-07
CN115731142A (en) 2023-03-03
CN114155569A (en) 2022-03-08
CN114155569B (en) 2022-11-04
CN115731591A (en) 2023-03-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination