CN113837019B - Cosmetic progress detection method, device, equipment and storage medium

Cosmetic progress detection method, device, equipment and storage medium

Info

Publication number
CN113837019B
CN113837019B (application CN202111017059.8A)
Authority
CN (China)
Legal status
Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202111017059.8A
Other languages
Chinese (zh)
Other versions
CN113837019A
Inventors
刘聪 (Liu Cong)
苗锋 (Miao Feng)
张梦洁 (Zhang Mengjie)
Current Assignee (the listed assignees may be inaccurate)
Soyoung Technology Beijing Co Ltd
Original Assignee
Soyoung Technology Beijing Co Ltd
Priority date (the priority date is an assumption and is not a legal conclusion)
Application CN202111017059.8A filed by Soyoung Technology Beijing Co Ltd
Publication of application CN113837019A; application granted; publication of grant CN113837019B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The application provides a cosmetic progress detection method, device, equipment and storage medium. The method comprises: acquiring an initial frame image and a current frame image of a user's makeup video; acquiring facial blemish information corresponding to each of the initial frame image and the current frame image; and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and their corresponding facial blemish information. The application determines the change in facial blemishes between the current frame image and the initial frame image from their respective blemish information, and when the change is larger than a threshold the current makeup progress can be determined quickly. When the change is smaller than or equal to the threshold, a filtering algorithm is introduced to calculate a smoothing factor for each image, and the makeup progress is determined from the change of the smoothing factors, so finer changes are computed more accurately and sudden short-time jumps of the makeup progress are avoided. The makeup progress is detected through image processing alone, with a small amount of computation and at low cost, reducing the processing pressure on the server.

Description

Cosmetic progress detection method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a method, a device, equipment and a storage medium for detecting makeup progress.
Background
Makeup has become an indispensable part of many people's daily lives. Concealing is the cosmetic technique of covering facial blemishes such as acne, spots, wrinkles and dark circles with concealer, which makes the face appear smooth and fine. Concealing is therefore an important step in the makeup process, and if the progress of concealer makeup can be fed back to the user in real time, the effort the user spends on makeup can be greatly reduced and makeup time saved.
At present, the related art provides functions such as virtual makeup try-on, skin color detection and personalized product recommendation using deep learning models, and all of these functions require collecting a large number of face pictures in advance to train the models.
However, face pictures are users' private data, and collecting them at scale is difficult. Model training also consumes large amounts of computing resources and is costly. Moreover, model accuracy trades off against real-time performance: makeup progress detection must capture the user's face information in real time to determine the current progress, so the real-time requirement is very high, and a deep learning model fast enough to meet it tends to have low detection accuracy.
Disclosure of Invention
The application provides a makeup progress detection method, device, equipment and storage medium that determine the current makeup progress from the facial blemish information of the current frame image and the initial frame image. The makeup progress is detected through image processing alone, with a small amount of computation and at low cost, which reduces the processing pressure on the server, improves the efficiency of concealer makeup progress detection, and meets its real-time requirements.
An embodiment of the first aspect of the present application provides a makeup progress detection method, including:
acquiring an initial frame image and a current frame image of a user's makeup video;
acquiring facial blemish information corresponding to each of the initial frame image and the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and their corresponding facial blemish information.
In some embodiments of the present application, determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and their corresponding facial blemish information includes:
calculating a facial blemish difference between the current frame image and the initial frame image according to the facial blemish information corresponding to the initial frame image and the facial blemish information corresponding to the current frame image;
if the facial blemish difference is larger than a preset threshold, calculating the current makeup progress corresponding to the current frame image according to the facial blemish difference and the facial blemish information corresponding to the initial frame image;
and if the facial blemish difference is smaller than or equal to the preset threshold, acquiring a result image of the user's finished concealer makeup, and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image.
In some embodiments of the present application, the facial blemish information includes blemish categories and the corresponding blemish counts, and calculating the facial blemish difference between the current frame image and the initial frame image according to their respective facial blemish information includes:
calculating, for each blemish category, the difference between the blemish count corresponding to the initial frame image and the blemish count corresponding to the current frame image;
and calculating the sum of the differences over all blemish categories, the resulting sum being the facial blemish difference between the current frame image and the initial frame image.
In some embodiments of the present application, calculating the current makeup progress corresponding to the current frame image according to the facial blemish difference and the facial blemish information corresponding to the initial frame image includes:
calculating the sum of the blemish counts over all blemish categories in the facial blemish information corresponding to the initial frame image to obtain the total blemish count;
and calculating the ratio between the facial blemish difference and the total blemish count, the ratio being the current makeup progress corresponding to the current frame image.
In some embodiments of the present application, acquiring the result image of the user's finished concealer makeup and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image includes:
generating, by simulation from the initial frame image, a result image of the user's finished concealer makeup;
acquiring the face area images corresponding to each of the initial frame image, the result image and the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the three face area images.
In some embodiments of the present application, determining the current makeup progress corresponding to the current frame image according to the face area images corresponding to the initial frame image, the result image and the current frame image includes:
converting the face area images corresponding to each of the initial frame image, the result image and the current frame image into images containing only the saturation channel of the HLS color space;
calculating, through a preset filtering algorithm, the smoothing factors corresponding to the converted face area images of the initial frame image, the result image and the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the smoothing factors corresponding to the initial frame image, the result image and the current frame image.
In some embodiments of the present application, determining the current makeup progress corresponding to the current frame image according to the smoothing factors corresponding to the initial frame image, the result image and the current frame image includes:
calculating a first difference between the smoothing factor corresponding to the current frame image and the smoothing factor corresponding to the initial frame image;
calculating a second difference between the smoothing factor corresponding to the result image and the smoothing factor corresponding to the initial frame image;
and calculating the ratio between the first difference and the second difference, the ratio being the current makeup progress corresponding to the current frame image.
In some embodiments of the present application, acquiring the facial blemish information corresponding to each of the initial frame image and the current frame image includes:
acquiring the face area images corresponding to each of the initial frame image and the current frame image;
and detecting, through a preset skin detection model, the blemish count corresponding to each blemish category in each of the two face area images, obtaining the facial blemish information corresponding to each of the initial frame image and the current frame image.
In some embodiments of the present application, acquiring the face area image corresponding to the initial frame image includes:
detecting first face key points corresponding to the initial frame image;
performing rotation correction on the initial frame image and the first face key points according to the first face key points;
cropping an image containing the face area from the corrected initial frame image according to the corrected first face key points;
and scaling the image containing the face area to a preset size to obtain the face area image corresponding to the initial frame image.
In some embodiments of the present application, performing rotation correction on the initial frame image and the first face key points according to the first face key points includes:
determining a left-eye center coordinate and a right-eye center coordinate from the left-eye key points and the right-eye key points included in the first face key points;
determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left-eye center coordinate and the right-eye center coordinate;
and performing rotation correction on the initial frame image and the first face key points according to the rotation angle and the rotation center point coordinate.
In some embodiments of the present application, cropping an image containing the face area from the corrected initial frame image according to the corrected first face key points includes:
determining the minimum abscissa, minimum ordinate, maximum abscissa and maximum ordinate among the corrected first face key points;
determining a crop box corresponding to the face region in the corrected initial frame image according to the minimum abscissa, minimum ordinate, maximum abscissa and maximum ordinate;
and cropping the image containing the face area from the corrected initial frame image according to the crop box.
In some embodiments of the application, the method further comprises:
enlarging the crop box by a preset multiple;
and cropping the image containing the face area from the corrected initial frame image according to the enlarged crop box.
In some embodiments of the present application, after acquiring the initial frame image and the current frame image of the user's makeup video, the method further includes:
detecting whether the initial frame image and the current frame image each contain only the face image of the same user;
if yes, performing the operation of acquiring the facial blemish information corresponding to each of the initial frame image and the current frame image;
if not, sending prompt information to the user's terminal, the prompt information prompting the user to keep only the face of the same user in the makeup video.
In some embodiments of the application, the method further comprises:
sending the current makeup progress to the user's terminal so that the terminal displays the current makeup progress.
An embodiment of the second aspect of the present application provides a makeup progress detection device, including:
a video frame acquisition module for acquiring an initial frame image and a current frame image of a user's makeup video;
a blemish detection module for acquiring the facial blemish information corresponding to each of the initial frame image and the current frame image;
and a makeup progress determining module for determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and their corresponding facial blemish information.
An embodiment of the third aspect of the present application provides an electronic device including a memory, a processor and a computer program stored on the memory and executable on the processor, the processor running the computer program to implement the method of the first aspect.
An embodiment of the fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which is executed by a processor to perform the method of the first aspect.
The technical scheme provided by the embodiment of the application has at least the following technical effects or advantages:
In the embodiment of the application, a current frame image and an initial frame image of the user's makeup process are acquired, the facial blemish information corresponding to each is detected, and the current makeup progress is determined according to the initial frame image, the current frame image and their corresponding facial blemish information. Based on the facial blemish information of the two images, the change in facial blemishes between the current frame image and the initial frame image can be determined, and the current makeup progress follows from it. The method can accurately detect concealer makeup progress through image processing alone, with a small amount of computation and at low cost; it reduces the processing pressure on the server, improves the efficiency of concealer makeup progress detection, and meets its real-time requirements.
Further, the facial blemish difference between the current frame image and the initial frame image is determined. When the difference is larger than a preset threshold, the ratio between the facial blemish difference and the total blemish count of the initial frame image gives the current progress of concealer makeup. When the difference is smaller than or equal to the preset threshold, a result image of finished concealer makeup is simulated from the initial frame image, a filtering algorithm is introduced to calculate the smoothing factors corresponding to the initial frame image, the current frame image and the result image, and the makeup progress is determined from the change of the smoothing factors, so finer changes are computed more accurately and sudden short-time jumps of the makeup progress are avoided.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flowchart of a makeup progress detection method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of solving the image rotation angle according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the two coordinate-system conversions provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart of a makeup progress detection method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a makeup progress detection device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a storage medium according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
It is noted that unless otherwise indicated, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs.
A cosmetic progress detection method, apparatus, device, and storage medium according to embodiments of the present application are described below with reference to the accompanying drawings.
At present, the related art includes some virtual makeup try-on functions, applied at sales counters or in mobile phone application software, which use face recognition technology to provide virtual try-on services for users and can simulate the effect of concealer makeup. Facial skin detection services are also provided, and they can help users select concealer products suitable for themselves, but they cannot show the progress of concealer makeup and cannot meet users' real-time makeup needs. The related art also provides functions such as virtual try-on, skin color detection and personalized product recommendation using deep learning models, all of which require collecting a large number of face pictures in advance for training. However, face pictures are users' private data and are difficult to collect at scale, model training consumes large amounts of computing resources at high cost, and model accuracy trades off against real-time performance: makeup progress detection must capture the user's face information in real time, so the real-time requirement is very high, and a deep learning model fast enough to meet it tends to have low detection accuracy.
Based on this, the embodiment of the application provides a makeup progress detection method for detecting the progress of concealer makeup. A current frame image and an initial frame image (that is, the first frame image) of the user's makeup process are acquired, the facial blemish information corresponding to each is obtained, and the current makeup progress is determined according to the initial frame image, the current frame image and their corresponding facial blemish information. As concealer makeup proceeds, blemishes on the user's face such as acne, spots and wrinkles are gradually covered by the concealer, so facial blemishes in the current frame image decrease relative to the initial frame image. The change in facial blemishes between the two images can therefore be determined from their facial blemish information, and the current makeup progress follows from it. The method can accurately detect concealer makeup progress through image processing alone, with a small amount of computation and at low cost; it reduces the processing pressure on the server, improves the efficiency of concealer makeup progress detection, and meets its real-time requirements.
Referring to fig. 1, the method specifically includes the steps of:
Step 101: acquire an initial frame image and a current frame image of the user's makeup video.
The execution subject of the embodiment of the application is a server. A client matching the makeup progress detection service provided by the server is installed on the user's mobile phone or computer. When the user needs the makeup progress detection service, the user opens the client on the terminal; the client's display interface provides a video upload control, and when a tap on it is detected, the terminal's camera is invoked to shoot the user's makeup video while the user performs concealer makeup on his or her face. The user's terminal transmits the captured makeup video to the server as a video stream, and the server receives each frame of the makeup video.
In the embodiment of the application, the server takes the first received frame as the initial frame image and uses it as the reference against which the current concealer makeup progress of each subsequently received frame is measured. Since every subsequent frame is processed in the same way, the embodiment of the application takes the current frame image received at the current moment as an example to illustrate the makeup progress detection process.
In other embodiments of the present application, after obtaining the initial frame image and the current frame image of the user's makeup video, the server further detects whether both images contain only the face image of the same user. First, it checks whether the initial frame image and the current frame image each contain exactly one face image; if either contains several face images or none, prompt information is sent to the user's terminal. The terminal receives and displays the prompt to remind the user to keep only the face of the same user in the makeup video; for example, the prompt may read "please keep only the face of the same person in the lens".
If both images are detected to contain exactly one face image, it is further judged whether the two faces belong to the same user. Specifically, face feature information can be extracted from the face image in each frame through face recognition technology and the similarity of the two feature vectors calculated. If the similarity is greater than or equal to a set value, the faces in the initial frame image and the current frame image are determined to belong to the same user; if it is smaller than the set value, the faces are determined to belong to different users, and prompt information is sent to the user's terminal, which receives and displays it to remind the user to keep only the face of the same user in the makeup video.
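The following minimal sketch illustrates the single-face and same-user checks just described. The face detector, the embedding extractor and the 0.8 similarity threshold are all assumptions standing in for whatever face recognition service and set value the server actually uses.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # assumed "set value"

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_same_single_user(initial_frame, current_frame,
                           detect_faces, extract_face_embedding):
    """Return (ok, prompt). ok is True only when both frames contain
    exactly one face and the two faces belong to the same user."""
    prompt = "please keep only the face of the same person in the lens"
    faces_init = detect_faces(initial_frame)
    faces_cur = detect_faces(current_frame)
    # Either frame with zero or several faces triggers the prompt.
    if len(faces_init) != 1 or len(faces_cur) != 1:
        return False, prompt
    # Compare the face feature vectors of the two frames.
    emb_init = extract_face_embedding(initial_frame, faces_init[0])
    emb_cur = extract_face_embedding(current_frame, faces_cur[0])
    if cosine_similarity(emb_init, emb_cur) < SIMILARITY_THRESHOLD:
        return False, prompt
    return True, ""
```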
After obtaining the initial frame image and the current frame image of the user's makeup in this way, the server determines the user's current makeup progress through the operations of steps 102 and 103 below.
Step 102: acquire the facial blemish information corresponding to each of the initial frame image and the current frame image.
First, the face area images corresponding to the initial frame image and to the current frame image are acquired. A preset skin detection model then detects the number of blemishes in each blemish category in the face area image of the initial frame image, and the categories with their counts form the facial blemish information corresponding to the initial frame image. The same model detects the blemish counts per category in the face area image of the current frame image to obtain its facial blemish information.
The preset skin detection model is obtained by training a neural network model in advance on a large number of face images, and it can identify and classify facial blemishes such as acne, spots and wrinkles in a face image. The blemish categories include one or more of acne, spots, wrinkles and the like. Through the preset skin detection model, the numbers of acnes, spots, wrinkles and so on are identified in the face area images of the initial frame image and of the current frame image.
Since the face area image of the initial frame image is obtained in the same way as that of the current frame image, the embodiment of the application takes the initial frame image as an example to describe the acquisition process in detail.
First, the first face key points corresponding to the initial frame image are detected. The server runs a pre-trained detection model for face key points and exposes it as an interface service. After acquiring the initial frame image of the user's makeup video, the server calls this interface service and identifies all of the user's face key points in the initial frame image through the detection model. To distinguish them from the key points of the current frame image, all face key points of the initial frame image are called first face key points in the embodiment of the application.
The identified face key points include key points on the contour of the user's face and key points of parts such as the mouth, nose, eyes and eyebrows. The number of identified face key points may be 106.
The face area image corresponding to the initial frame image is then obtained from the first face key points. The server obtains it through the following steps S1 to S3:
S1: perform rotation correction on the initial frame image and the first face key points according to the first face key points.
Since the user cannot guarantee the same face pose angle in every frame while shooting the makeup video with the terminal, the face in each frame must be rotation-corrected so that the pose angles are aligned: after correction the line between the two eyes lies on the same horizontal line in every frame, ensuring identical pose angles and avoiding large makeup progress detection errors caused by differing pose angles.
Specifically, a left-eye center coordinate and a right-eye center coordinate are determined from the left-eye and right-eye key points among the first face key points. All left-eye key points of the left eye region and all right-eye key points of the right eye region are selected from the first face key points. The abscissas of all left-eye key points are averaged, as are their ordinates, and the coordinate formed by these two averages is the left-eye center coordinate. The right-eye center coordinate is determined in the same way.
The rotation angle and rotation center point coordinate corresponding to the initial frame image are then determined from the two eye centers. As shown in fig. 2, the horizontal difference dx and the vertical difference dy of the left-eye and right-eye center coordinates are computed, along with the length d of the line connecting the two eyes. From d, dx and dy, the angle θ between the inter-ocular line and the horizontal is calculated; θ is the rotation angle of the initial frame image. The midpoint of the inter-ocular line, calculated from the two eye centers, is the rotation center point coordinate.
Rotation correction is then applied to the initial frame image and the first face key points using the calculated rotation angle and rotation center point coordinate. Specifically, the rotation angle and rotation center point coordinate are passed to a preset function that computes the picture's rotation matrix, which may be cv2.getRotationMatrix2D() in OpenCV. Calling it yields the rotation matrix of the initial frame image, and the product of the initial frame image and the rotation matrix gives the corrected initial frame image; this warp can also be done by calling cv2.warpAffine() in OpenCV.
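A sketch of this rotation correction, assuming OpenCV as the text's own cv2 calls suggest; how the left-eye and right-eye key points are selected from the 106-point layout is model-specific, so they are passed in as arrays here.

```python
import cv2
import numpy as np

def rotation_correct(image, left_eye_pts, right_eye_pts):
    """Rotate `image` so the line between the eye centers becomes horizontal.
    `left_eye_pts` / `right_eye_pts` are (N, 2) arrays of the eye-region
    key points selected from the first face key points."""
    # Eye centers: mean abscissa and ordinate of each eye's key points.
    left_center = left_eye_pts.mean(axis=0)
    right_center = right_eye_pts.mean(axis=0)

    # Angle theta between the inter-ocular line and the horizontal.
    dx, dy = right_center - left_center
    angle = np.degrees(np.arctan2(dy, dx))

    # Rotation center: midpoint of the inter-ocular line.
    center = ((left_center[0] + right_center[0]) / 2.0,
              (left_center[1] + right_center[1]) / 2.0)

    # Rotation matrix and image correction, as named in the text.
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image.shape[:2]
    corrected = cv2.warpAffine(image, M, (w, h))
    return corrected, M
```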
For the first face key points, each point must be corrected one by one so that it corresponds to the corrected initial frame image. This requires two coordinate-system conversions: first from the coordinate system with the top-left corner of the initial frame image as origin to one with the bottom-left corner as origin, and then from that system to one with the rotation center point as origin, as shown in fig. 3. After the two conversions, each first face key point is transformed by the standard in-plane rotation of formula (1), completing the rotation correction of the key points:

x = x0·cosθ + y0·sinθ
y = y0·cosθ − x0·sinθ    (1)

In formula (1), x0 and y0 are the abscissa and ordinate of a first face key point before rotation correction, x and y are its abscissa and ordinate after rotation correction, and θ is the rotation angle.
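In code, the two coordinate-system conversions plus formula (1) collapse into applying the same 2x3 affine matrix M returned by cv2.getRotationMatrix2D to every key point; the sketch below is an equivalent reading that uses this shortcut, not the patent's literal procedure.

```python
import numpy as np

def rotate_keypoints(keypoints: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Apply the 2x3 affine rotation matrix M (from rotation_correct above)
    to an (N, 2) array of key points, yielding the corrected points."""
    ones = np.ones((keypoints.shape[0], 1))
    homogeneous = np.hstack([keypoints, ones])  # (N, 3): rows of (x, y, 1)
    return homogeneous @ M.T                    # (N, 2) corrected points
```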
The corrected initial frame image and first face key points are still based on the whole image, which contains not only the user's face information but also other redundant image information, so the corrected image must be cropped to the face region through the following step S2.
S2: crop an image containing the face area from the corrected initial frame image according to the corrected first face key points.
First, the minimum abscissa, minimum ordinate, maximum abscissa and maximum ordinate are determined among the corrected first face key points, and from these four values the crop box of the face region in the corrected initial frame image is determined. Specifically, the point formed by the minimum abscissa and the minimum ordinate is the top-left vertex of the crop box, and the point formed by the maximum abscissa and the maximum ordinate is its bottom-right vertex. The position of the crop box in the corrected initial frame image follows from the two vertices, and the image inside the crop box, that is, the image containing the face area, is cut out from the corrected initial frame image.
In other embodiments of the present application, to ensure that the whole face area of the user is cropped out and avoid large detection errors in the subsequent makeup progress caused by incomplete cropping, the crop box may further be enlarged by a preset multiple, which may be 1.15 or 1.25. The embodiment of the application does not limit the specific value of the preset multiple; it can be set as required in practice. The crop box is enlarged outward by the preset multiple, and the image inside the enlarged crop box is cut out from the corrected initial frame image, thereby capturing an image that contains the user's complete face area. A sketch of this crop-box construction is given below.
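The sketch assumes the 1.15 multiple, one of the values the text offers, and enlarges the box about its center while clamping it to the image bounds:

```python
import numpy as np

def crop_face(image, keypoints, enlarge=1.15):
    """Cut out the face area delimited by the corrected key points,
    enlarged by a preset multiple. Returns the crop and its top-left
    offset, which is needed later to translate the key points."""
    x_min, y_min = keypoints.min(axis=0)
    x_max, y_max = keypoints.max(axis=0)

    # Enlarge the box about its center so the whole face area is kept.
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half_w = (x_max - x_min) / 2.0 * enlarge
    half_h = (y_max - y_min) / 2.0 * enlarge

    # Clamp to the image bounds.
    h, w = image.shape[:2]
    x0, y0 = max(int(cx - half_w), 0), max(int(cy - half_h), 0)
    x1, y1 = min(int(cx + half_w), w), min(int(cy + half_h), h)
    return image[y0:y1, x0:x1], (x0, y0)
```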
S3: scale the image containing the face area to a preset size to obtain the face area image corresponding to the initial frame image.
After the image containing the user's face area is cropped from the initial frame image as above, it is scaled to a preset size, giving the face area image of the initial frame image. The preset size may be 390x390, 400x400 or the like; the embodiment of the application does not limit its specific value, and it can be set as required in practice.
To keep the first face key points adapted to the scaled face region image, after the cropped image is scaled to the preset size, the corrected first face key points are scaled and translated according to the size of the image before scaling and the preset size. Specifically, the translation direction and distance of each first face key point are determined from the pre-scaling size and the target preset size, each key point is translated accordingly, and the translated coordinates are recorded, as sketched below.
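Continuing the sketch, the scaling step and the matching key-point translation might look as follows; 390x390 is one of the preset sizes the text mentions and is assumed here.

```python
import cv2
import numpy as np

PRESET_SIZE = (390, 390)  # (width, height), an assumed preset size

def scale_face(face_image, keypoints, crop_offset, size=PRESET_SIZE):
    """Scale the cropped face image to the preset size and move the key
    points with it. `crop_offset` is the top-left corner returned by
    crop_face() above."""
    h, w = face_image.shape[:2]
    scaled = cv2.resize(face_image, size)

    # Translate the key points into the crop's coordinate system, then
    # scale them by the same factors as the image.
    x0, y0 = crop_offset
    pts = keypoints - np.array([x0, y0], dtype=np.float64)
    pts *= np.array([size[0] / w, size[1] / h])
    return scaled, pts
```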
The face region image is obtained from the initial frame image as above, with the first face key points adapted to it through rotation correction, translation and scaling. For the current frame image, the second face key points are detected in the same way as for the initial frame image, rotation correction is applied to the current frame image and the second face key points, an image containing the face area is cropped from the corrected current frame image according to the corrected second face key points, and it is scaled to the preset size to obtain the face area image of the current frame image.
After the face area images of the initial frame image and of the current frame image are obtained in this way, the facial blemish information of each is detected from its face area image through the preset skin detection model.
Step 103: determine the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and their corresponding facial blemish information.
The server determines the current makeup progress by performing the following steps A1 to A3:
A1: calculate the facial blemish difference between the current frame image and the initial frame image according to the facial blemish information corresponding to each.
The facial blemish information obtained in step 102 includes blemish categories and the corresponding blemish counts; the categories include one or more of acne, spots, wrinkles and the like. For each blemish category, the difference between the blemish count of the initial frame image and that of the current frame image is calculated; the differences are then summed over all categories, and the resulting sum is the facial blemish difference between the current frame image and the initial frame image.
For example, assume the blemish categories are acne and spots, the facial blemish information of the initial frame image contains 5 acnes and 4 spots, and that of the current frame image contains 3 acnes and 1 spot. The acne difference between the two images is 2 and the spot difference is 3; their sum is 5, so the facial blemish difference between the current frame image and the initial frame image is 5.
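The per-category differencing and summation reduce to a few lines; the sketch below assumes the facial blemish information is held as a category-to-count mapping and reproduces the worked example above.

```python
def blemish_difference(initial_info: dict, current_info: dict) -> int:
    """Facial blemish difference: per-category count differences between
    the initial frame and the current frame, summed over all categories."""
    return sum(initial_info[cat] - current_info.get(cat, 0)
               for cat in initial_info)

# Worked example from the text: 5 acnes / 4 spots initially,
# 3 acnes / 1 spot now -> (5 - 3) + (4 - 1) = 5.
assert blemish_difference({"acne": 5, "spot": 4},
                          {"acne": 3, "spot": 1}) == 5
```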
The facial blemish difference can represent the difference in smoothness and fineness between the current frame image and the initial frame image, a difference caused by factors such as concealer makeup, lighting changes and shooting angle changes. Through repeated test measurements, the embodiment of the application determined that when the facial blemish difference exceeds a certain value, the difference is mainly caused by concealer makeup; this value is configured in the server as the preset threshold, which may be 4 or 5.
After the facial blemish difference is calculated as above, it is compared with the preset threshold. If it is larger than the threshold, the facial change caused by the current makeup is obvious, and the current makeup progress is determined through step A2; if it is smaller than or equal to the threshold, the facial change caused by concealer makeup is small, and the current makeup progress is determined through step A3.
A2: if the facial blemish difference is larger than the preset threshold, calculate the current makeup progress of the current frame image from the facial blemish difference and the facial blemish information of the initial frame image.
The blemish counts of all categories in the facial blemish information of the initial frame image are summed to obtain the total blemish count, and the ratio between the facial blemish difference and the total blemish count is taken as the current makeup progress of the current frame image.
Continuing the example above, with 5 acnes and 4 spots in the initial frame image and 3 acnes and 1 spot in the current frame image, the facial blemish difference is 5 and the total blemish count of the initial frame image is 9, so the current makeup progress of the current frame image is 5/9.
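The fast path of step A2, reusing blemish_difference from the sketch above, then reads:

```python
def fast_progress(initial_info: dict, current_info: dict) -> float:
    """Current makeup progress when the blemish difference exceeds the
    preset threshold: difference over the initial total blemish count."""
    diff = blemish_difference(initial_info, current_info)
    total = sum(initial_info.values())
    return diff / total if total else 0.0

# Worked example from the text: difference 5, total 9 -> progress 5/9.
assert abs(fast_progress({"acne": 5, "spot": 4},
                         {"acne": 3, "spot": 1}) - 5 / 9) < 1e-9
```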
When the facial blemish difference between the current frame image and the initial frame image is larger than the preset threshold, that is, when the facial difference is mainly caused by concealer makeup, the ratio between the facial blemish difference and the total blemish count of the initial frame image is used directly as the current makeup progress. This determination process is simple and involves little computation, so the current makeup progress can be determined quickly and efficiently, meeting the real-time requirement of concealer makeup progress detection.
After the current makeup progress is determined as above, the server sends it to the user's terminal, which displays it upon receipt. The current makeup progress may be expressed as a ratio or a percentage, and the terminal may display it in the form of a progress bar.
A3: if the facial blemish difference is smaller than or equal to the preset threshold, obtain a result image of the user's finished concealer makeup and determine the current makeup progress of the current frame image from the initial frame image, the result image and the current frame image.
When the facial blemish difference is smaller than or equal to the preset threshold, the difference between the current frame image and the initial frame image is considered small, and determining the current makeup progress directly from the facial blemish difference and the total blemish count of the initial frame image would give a large error. In this case, the embodiment of the application therefore first obtains a result image of the user's finished concealer makeup, and then determines the current makeup progress of the current frame image from the initial frame image, the result image and the current frame image.
First, the effect of finished concealer makeup is rendered onto the initial frame image with 3D rendering technology, giving the result image. Before generating the result image, the rotation correction of step 102 may be applied to the initial frame image so that the inter-ocular line in the image is parallel to the horizontal; rendering the finished-makeup result on the rotation-corrected initial frame image then makes the inter-ocular line of the face in the result image parallel to the horizontal as well, so the result image needs no further rotation correction, saving computation.
After the result image of finished makeup is obtained, the third face key points corresponding to it are detected through the face key point detection model, and the face area image is cropped from the result image according to the third face key points in the manner of step 102; the specific cropping process is not repeated here.
The face area images corresponding to the initial frame image, the current frame image and the result image are each obtained in the manner provided in step 102, and the current makeup progress is then determined from the three face area images.
The obtained face region images of the initial frame image, the current frame image and the result image are all in the RGB color space. Through extensive prior experiments on how concealer makeup affects the channel components of color spaces, the embodiment of the application found little difference across the color channels of the RGB color space, whereas the HLS color space, composed of the three components Hue, Lightness (Light) and Saturation, shows a significant change in the Saturation component under makeup. The face region images of the initial frame image, the result image and the current frame image are therefore converted from the RGB color space to the HLS color space, and the saturation channel is separated from each converted image, giving for each of the three images an image containing only the saturation channel of the HLS color space. A sketch of this conversion is given below.
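Extracting the saturation channel with OpenCV might look as follows; in OpenCV's HLS layout the channel order is H, L, S, so saturation is index 2, and BGR input is assumed as usual for cv2 images.

```python
import cv2

def saturation_channel(face_image_bgr):
    """Convert a face area image from BGR to HLS and keep only the
    saturation channel."""
    hls = cv2.cvtColor(face_image_bgr, cv2.COLOR_BGR2HLS)
    return hls[:, :, 2]  # S channel
```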
After the conversion, the smoothing factors of the converted face area images of the initial frame image, the result image and the current frame image are calculated through a preset filtering algorithm. The preset filtering algorithm may be the Laplacian algorithm or a Gaussian filtering algorithm. Taking Gaussian filtering as an example, the smoothing factor of each face region image can be computed with a Gaussian kernel of a preset size, which may be 7x7 or another value; the embodiment of the application does not limit the specific value of the preset size.
Once the smoothing factors of the initial frame image, the result image and the current frame image are obtained, the current makeup progress of the current frame image is determined from them. Specifically, a first difference between the smoothing factor of the current frame image and that of the initial frame image is calculated, along with a second difference between the smoothing factor of the result image and that of the initial frame image. The ratio between the first difference and the second difference is taken as the current makeup progress of the current frame image, as sketched below.
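The patent does not pin down how the smoothing factor is derived from the filter response. The sketch below takes one plausible reading, the variance of the Laplacian response (a common image smoothness measure), notes the Gaussian 7x7 alternative in a comment, and computes the progress ratio on the saturation-channel images from the previous sketch.

```python
import cv2

def smoothing_factor(saturation_img) -> float:
    # One plausible reading: Laplacian response variance
    # (smoother skin -> lower variance). A Gaussian variant could
    # instead compare the image with
    # cv2.GaussianBlur(saturation_img, (7, 7), 0).
    return float(cv2.Laplacian(saturation_img, cv2.CV_64F).var())

def slow_progress(init_sat, result_sat, current_sat) -> float:
    """Progress from smoothing-factor changes: ratio of the
    current-vs-initial difference to the result-vs-initial difference."""
    f_init = smoothing_factor(init_sat)
    first = smoothing_factor(current_sat) - f_init   # first difference
    second = smoothing_factor(result_sat) - f_init   # second difference
    return first / second if second else 0.0
```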
As in step A2, the server then sends the current makeup progress to the user's terminal, which displays it, for example as a ratio, a percentage or a progress bar.
With the makeup progress detection method provided by the embodiment of the application, during the user's makeup the progress of each frame after the initial frame is detected in real time relative to the initial frame and displayed to the user, so the user can see his or her concealer makeup progress intuitively, improving makeup efficiency.
To facilitate understanding of the method provided by the embodiments of the present application, the overall flow is described with reference to fig. 4. According to the initial frame image with its first face key points and the current frame image with its second face key points, the faces in the two images are corrected and cropped, and the skin detection model is called to detect the facial blemish information of each. The facial blemish difference between the current frame image and the initial frame image is calculated from the two sets of blemish information. If the difference is larger than the preset threshold, the ratio of the facial blemish difference to the total blemish count of the initial frame image is calculated, giving the current makeup progress. If the difference is smaller than or equal to the preset threshold, a finished-concealer result image is rendered from the initial frame image, its third face key points are detected, and the face in the result image is corrected and cropped accordingly. The face area images of the initial frame image, the current frame image and the result image are converted into images containing only the saturation channel of the HLS color space, the smoothing factors of the three converted images are calculated through the preset filtering algorithm, the first difference (current minus initial) and the second difference (result minus initial) of the smoothing factors are computed, and the ratio of the first difference to the second difference is taken as the current makeup progress of the current frame image.
In the embodiment of the application, a current frame image and an initial frame image of the user's makeup process are acquired and the facial blemish difference between them is determined. When the difference is larger than the preset threshold, the ratio between the facial blemish difference and the total blemish count of the initial frame image gives the current progress of concealer makeup. When the difference is smaller than or equal to the preset threshold, a finished-makeup result image is simulated from the initial frame image, the smoothing factors of the initial frame image, the result image and the current frame image are determined, the smoothing-factor difference between the current frame image and the initial frame image and that between the result image and the initial frame image are calculated, and the ratio of the two differences is the current makeup progress.
When the facial blemish difference is larger than the preset threshold, the determination process is simple and involves little computation, so the current makeup progress is determined quickly and efficiently, meeting the real-time requirement of concealer makeup progress detection. When the difference is smaller than or equal to the preset threshold, a filtering algorithm is introduced to calculate the smoothing factor of each image and the progress is determined from the change of the smoothing factors, so finer changes are computed more accurately and sudden short-time jumps of the makeup progress are avoided.
Further, the embodiment of the application corrects and crops the face region of the user in each video frame using the face key points, which improves the accuracy of face region recognition.
The embodiment of the application can accurately detect the progress of concealer makeup through image processing alone, with a small computational load and at low cost. This reduces the processing pressure on the server, improves the efficiency of concealer makeup progress detection, meets its real-time requirement, and reduces both the algorithm's dependence on hardware resources and the required labor cost.
The embodiment of the application also provides a makeup progress detecting device for executing the makeup progress detecting method provided by any of the foregoing embodiments. As shown in fig. 5, the apparatus includes:
A video frame acquisition module 201, configured to acquire an initial frame image and a current frame image of a user makeup video;
The flaw detection module 202 is configured to respectively obtain facial flaw information corresponding to the initial frame image and the current frame image;
The makeup progress determining module 203 is configured to determine the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and the facial flaw information corresponding to each of them.
The makeup progress determining module 203 is configured to calculate a facial flaw difference value between the current frame image and the initial frame image according to the facial flaw information corresponding to the initial frame image and the facial flaw information corresponding to the current frame image; if the facial flaw difference value is larger than the preset threshold, calculate the current makeup progress corresponding to the current frame image according to the facial flaw difference value and the facial flaw information corresponding to the initial frame image; and if the facial flaw difference value is smaller than or equal to the preset threshold, obtain a result image of the user after the concealer makeup is finished and determine the current makeup progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image.
The facial flaw information comprises flaw categories and the corresponding flaw counts. The makeup progress determining module 203 is configured to calculate, for each flaw category, the difference between the flaw count corresponding to the initial frame image and the flaw count corresponding to the current frame image, and to take the sum of the differences over all flaw categories as the facial flaw difference value between the current frame image and the initial frame image.
The makeup progress determining module 203 is configured to calculate the sum of the flaw counts of all flaw categories in the facial flaw information corresponding to the initial frame image to obtain the total flaw count, and to take the ratio between the facial flaw difference value and the total flaw count as the current makeup progress corresponding to the current frame image.
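As a worked illustration of these two computations, suppose the skin detection model reports the per-category flaw counts below; the categories and numbers are invented for the example.

    initial = {"acne": 8, "spot": 12, "mole": 3}   # initial frame image
    current = {"acne": 3, "spot": 5,  "mole": 3}   # current frame image

    # Per-category differences summed into the facial flaw difference
    # value: (8 - 3) + (12 - 5) + (3 - 3) = 12.
    face_diff = sum(initial[c] - current.get(c, 0) for c in initial)

    total = sum(initial.values())    # total flaw count of the initial frame: 23
    progress = face_diff / total     # 12 / 23 ≈ 0.52, i.e. about 52% concealed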
The makeup progress determining module 203 is configured to simulate, according to the initial frame image, the result image after the user finishes the concealer makeup; to acquire the face region images corresponding to the initial frame image, the result image and the current frame image respectively; and to determine the current makeup progress corresponding to the current frame image according to these three face region images.
The makeup progress determining module 203 is configured to convert the face region images corresponding to the initial frame image, the result image and the current frame image into images containing only the saturation channel in the HLS color space; to calculate, through a preset filtering algorithm, the smoothing factor corresponding to each of the three converted face region images; and to determine the current makeup progress corresponding to the current frame image according to the smoothing factors corresponding to the initial frame image, the result image and the current frame image.
The makeup progress determining module 203 is configured to calculate a first difference between the smoothing factor corresponding to the current frame image and the smoothing factor corresponding to the initial frame image; calculate a second difference between the smoothing factor corresponding to the result image and the smoothing factor corresponding to the initial frame image; and take the ratio between the first difference and the second difference as the current makeup progress corresponding to the current frame image.
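A minimal sketch of this fine-grained branch follows. The application does not name the preset filtering algorithm, so the bilateral filter, its parameter values and the mean absolute filter response used here as the smoothing factor are assumptions chosen purely for illustration.

    import cv2
    import numpy as np

    def smoothing_factor(face_bgr):
        # OpenCV's HLS channel order is (H, L, S); keep only saturation.
        s = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HLS)[:, :, 2].astype(np.float32)
        # Assumed filter: measure how much detail one smoothing pass removes.
        blurred = cv2.bilateralFilter(s, d=9, sigmaColor=75, sigmaSpace=75)
        return float(np.mean(np.abs(s - blurred)))

    def fine_progress(initial_face, current_face, result_face):
        s_init = smoothing_factor(initial_face)
        first = smoothing_factor(current_face) - s_init    # first difference
        second = smoothing_factor(result_face) - s_init    # second difference
        return first / second                              # current makeup progress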
The flaw detection module 202 is configured to acquire the face region images corresponding to the initial frame image and the current frame image respectively, and to detect, through a preset skin detection model, the flaw count of each flaw category in each of the two face region images, obtaining the facial flaw information corresponding to the initial frame image and the current frame image respectively.
The makeup progress determining module 203 is configured to detect the first face key points corresponding to the initial frame image; perform rotation correction on the initial frame image and the first face key points according to the first face key points; cut an image containing the face region out of the corrected initial frame image according to the corrected first face key points; and scale the image containing the face region to a preset size to obtain the face region image corresponding to the initial frame image.
The makeup progress determining module 203 is configured to determine the left eye center coordinate and the right eye center coordinate according to the left eye key points and the right eye key points included in the first face key points; determine the rotation angle and the rotation center point coordinate corresponding to the initial frame image according to the two eye center coordinates; and perform rotation correction on the initial frame image and the first face key points according to the rotation angle and the rotation center point coordinate.
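The rotation correction can be realized with the conventional eye-leveling recipe sketched below; taking the midpoint between the two eye centers as the rotation center and a 2-D keypoint array are assumptions of this sketch, not requirements of the application.

    import cv2
    import numpy as np

    def rotate_face(image, keypoints, left_eye_idx, right_eye_idx):
        # Eye centers: mean of the left-eye and right-eye keypoints.
        le = keypoints[left_eye_idx].mean(axis=0)
        re = keypoints[right_eye_idx].mean(axis=0)

        # Rotation angle that levels the inter-eye line; rotate about the
        # midpoint between the two eye centers.
        angle = np.degrees(np.arctan2(re[1] - le[1], re[0] - le[0]))
        center = (float(le[0] + re[0]) / 2, float(le[1] + re[1]) / 2)

        h, w = image.shape[:2]
        m = cv2.getRotationMatrix2D(center, angle, 1.0)
        corrected = cv2.warpAffine(image, m, (w, h))

        # Apply the same affine transform to every face keypoint.
        pts = np.hstack([keypoints, np.ones((len(keypoints), 1))])
        return corrected, pts @ m.T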
The makeup progress determining module 203 is configured to determine the minimum abscissa, minimum ordinate, maximum abscissa and maximum ordinate among the corrected first face key points; determine from these four values the cut-out frame corresponding to the face region in the corrected initial frame image; and cut the image containing the face region out of the corrected initial frame image according to the cut-out frame.
The makeup progress determining module 203 is further configured to enlarge the cut-out frame by a preset multiple and cut the image containing the face region out of the corrected initial frame image according to the enlarged cut-out frame.
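The cut-out and scaling steps might look as follows; the enlargement multiple of 1.2 and the 256x256 output size are illustrative values, since the application leaves both as presets.

    import cv2

    def crop_face_region(corrected_img, corrected_pts, scale=1.2, size=(256, 256)):
        # Cut-out frame from the extreme keypoint coordinates.
        x_min, y_min = corrected_pts.min(axis=0)
        x_max, y_max = corrected_pts.max(axis=0)

        # Enlarge the frame around its center by the preset multiple so that
        # the forehead and chin are retained.
        cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
        hw, hh = (x_max - x_min) * scale / 2, (y_max - y_min) * scale / 2

        h, w = corrected_img.shape[:2]
        x0, y0 = max(int(cx - hw), 0), max(int(cy - hh), 0)
        x1, y1 = min(int(cx + hw), w), min(int(cy + hh), h)

        # Cut out the face region and scale it to the preset size.
        return cv2.resize(corrected_img[y0:y1, x0:x1], size)

Without the enlargement, the cut-out frame hugs the outermost keypoints and can clip the forehead and chin, which is why the preset multiple is applied before cutting.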
The apparatus further comprises: a face detection module, configured to detect whether the initial frame image and the current frame image each contain only the face image of the same user; if so, execute the operation of respectively acquiring the facial flaw information corresponding to the initial frame image and the current frame image; if not, send prompt information to the user's terminal, the prompt information being used to remind the user to ensure that only the face of the same user appears in the makeup video.
The apparatus further comprises: a sending module, configured to send the current makeup progress to the user's terminal so that the terminal displays the current makeup progress.
Since the makeup progress detecting device provided by the embodiment of the present application is based on the same inventive concept as the makeup progress detecting method provided by the embodiment of the present application, it has the same beneficial effects as that method.
The embodiment of the application also provides an electronic device for executing the above makeup progress detection method. Referring to fig. 6, a schematic diagram of an electronic device according to some embodiments of the present application is shown. As shown in fig. 6, the electronic device 8 includes: a processor 800, a memory 801, a bus 802 and a communication interface 803, with the processor 800, the communication interface 803 and the memory 801 connected by the bus 802. The memory 801 stores a computer program executable on the processor 800, and when the processor 800 runs the computer program, the makeup progress detection method provided by any of the foregoing embodiments of the present application is executed.
The memory 801 may include a high-speed random access memory (RAM), and may further include a non-volatile memory, such as at least one magnetic disk memory. The communication connection between the device's network element and at least one other network element is realized through at least one communication interface 803 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network or the like may be used.
The bus 802 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be classified into an address bus, a data bus, a control bus, etc. The memory 801 is configured to store a program; the processor 800 executes the program after receiving an execution instruction, and the makeup progress detection method disclosed in any of the foregoing embodiments of the present application may be applied to, or implemented by, the processor 800.
The processor 800 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits in hardware or by software instructions in the processor 800. The processor 800 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like, or a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or execute the methods, steps and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 801; the processor 800 reads the information in the memory 801 and completes the steps of the above method in combination with its hardware.
Since the electronic device provided by the embodiment of the application is based on the same inventive concept as the makeup progress detection method provided by the embodiment of the application, it has the same beneficial effects as that method.
Corresponding to the makeup progress detection method provided by the foregoing embodiments, the present application further provides a computer-readable storage medium. Referring to fig. 7, the computer-readable storage medium is shown as an optical disc 30 storing a computer program (i.e. a program product) which, when executed by a processor, performs the makeup progress detection method provided by any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, a phase-change memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), other types of random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, or other optical or magnetic storage media, which will not be described in detail here.
Since the computer-readable storage medium provided by the above embodiments of the present application is based on the same inventive concept as the makeup progress detection method provided by the embodiments of the present application, it has the same beneficial effects as the method adopted, run or implemented by the application program stored on it.
It should be noted that:
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, this manner of disclosure is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include certain features that other embodiments include and not others, combinations of features of different embodiments are meant to be within the scope of the application and to form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
The present application is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present application are intended to be included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A cosmetic progress detection method, characterized by comprising:
Acquiring an initial frame image and a current frame image of a user makeup video;
respectively acquiring facial flaw information corresponding to the initial frame image and the current frame image;
determining a current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and the facial flaw information corresponding to the current frame image;
The determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and the facial flaw information corresponding to the current frame image respectively comprises the following steps: if the facial flaw difference value of the initial frame image and the current frame image is smaller than or equal to a preset threshold value, acquiring a result image of the user after finishing the concealer makeup, and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image.
2. The method according to claim 1, wherein the determining the current makeup progress corresponding to the current frame image according to the initial frame image and the current frame image and the facial flaw information corresponding to the current frame image, respectively, includes:
calculating a facial flaw difference value between the current frame image and the initial frame image according to the facial flaw information corresponding to the initial frame image and the facial flaw information corresponding to the current frame image;
If the facial blemish difference value is larger than a preset threshold value, calculating the current makeup progress corresponding to the current frame image according to the facial blemish difference value and facial blemish information corresponding to the initial frame image.
3. The method of claim 2, wherein the facial blemish information includes a blemish category and a corresponding blemish number; the calculating a facial flaw difference value between the current frame image and the initial frame image according to the facial flaw information corresponding to the initial frame image and the facial flaw information corresponding to the current frame image comprises the following steps:
Respectively calculating the difference value between the flaw number corresponding to the initial frame image and the flaw number corresponding to the current frame image under each flaw type;
and calculating the sum of the differences corresponding to each flaw category, and taking the obtained sum value as a facial flaw difference value between the current frame image and the initial frame image.
4. The method according to claim 2, wherein calculating the current makeup progress corresponding to the current frame image according to the facial blemish difference value and the facial blemish information corresponding to the initial frame image comprises:
calculating the sum of the flaw numbers corresponding to the flaw categories in the facial flaw information corresponding to the initial frame image to obtain the total flaw number;
And calculating the ratio between the facial flaw difference value and the total flaw number, and taking the ratio as the current makeup progress corresponding to the current frame image.
5. The method according to claim 2, wherein the obtaining the result image of the user after finishing makeup, and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the result image, and the current frame image, comprises:
simulating and generating a result image after the user finishes concealing the makeup according to the initial frame image;
respectively acquiring face area images corresponding to the initial frame image, the result image and the current frame image;
And determining the current makeup progress corresponding to the current frame image according to the face area images corresponding to the initial frame image, the result image and the current frame image.
6. The method according to claim 5, wherein the determining the current makeup progress corresponding to the current frame image according to the face region images corresponding to the initial frame image, the result image, and the current frame image, respectively, includes:
respectively converting the face area images corresponding to the initial frame image, the result image and the current frame image into images only containing saturation channels under an HLS color space;
respectively calculating smoothing factors corresponding to the face area images of the converted initial frame image, the converted result image and the converted current frame image through a preset filtering algorithm;
And determining the current makeup progress corresponding to the current frame image according to the smoothing factors corresponding to the initial frame image, the result image and the current frame image.
7. The method of claim 6, wherein determining the current makeup progress corresponding to the current frame image according to the smoothing factors corresponding to the initial frame image, the result image, and the current frame image, respectively, comprises:
calculating a first difference value between a smoothing factor corresponding to the current frame image and a smoothing factor corresponding to the initial frame image;
Calculating a second difference value between the smoothing factor corresponding to the result image and the smoothing factor corresponding to the initial frame image;
and calculating a ratio between the first difference value and the second difference value, and taking the ratio as the current makeup progress corresponding to the current frame image.
8. The method according to claim 1, wherein the acquiring facial flaw information corresponding to each of the initial frame image and the current frame image includes:
Respectively acquiring face area images corresponding to the initial frame image and the current frame image;
And respectively detecting the flaw number corresponding to each flaw category in the face area image corresponding to each of the initial frame image and the current frame image through a preset skin detection model to obtain the face flaw information corresponding to each of the initial frame image and the current frame image.
9. The method according to claim 5 or 8, wherein acquiring the face area image corresponding to the initial frame image includes:
detecting a first face key point corresponding to the initial frame image;
According to the first face key points, carrying out rotation correction on the initial frame image and the first face key points;
according to the corrected first face key points, an image containing a face area is intercepted from the corrected initial frame image;
And scaling the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
10. The method of claim 9, wherein the performing rotational correction on the initial frame image and the first face keypoints based on the first face keypoints comprises:
respectively determining a left eye center coordinate and a right eye center coordinate according to a left eye key point and a right eye key point which are included by the first face key point;
Determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate;
and carrying out rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
11. The method according to claim 9, wherein the capturing an image including a face region from the corrected initial frame image according to the corrected first face keypoints includes:
determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key point;
Determining a interception frame corresponding to a face region in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value;
and according to the interception frame, intercepting an image containing the face area from the corrected initial frame image.
12. The method of claim 11, wherein the method further comprises:
amplifying the intercepting frame by a preset multiple;
and according to the enlarged intercepting frame, intercepting an image containing the face area from the corrected initial frame image.
13. The method of any one of claims 1-8, wherein after the acquiring the initial frame image and the current frame image in the user cosmetic video, further comprising:
Detecting whether the initial frame image and the current frame image both contain only the face image of the same user;
If yes, executing the operation of respectively acquiring the facial flaw information corresponding to the initial frame image and the current frame image;
If not, sending prompt information to the terminal of the user, wherein the prompt information is used for prompting the user to keep that only the face of the same user appears in the cosmetic video.
14. The method according to any one of claims 1-8, further comprising:
and sending the current makeup progress to the terminal of the user so that the terminal of the user displays the current makeup progress.
15. A makeup progress detecting device, characterized by comprising:
the video frame acquisition module is used for acquiring an initial frame image and a current frame image of the makeup video of the user;
The flaw detection module is used for respectively acquiring the facial flaw information corresponding to the initial frame image and the current frame image;
The makeup progress determining module is used for determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and the facial flaw information corresponding to the current frame image;
The determining the current makeup progress corresponding to the current frame image according to the initial frame image, the current frame image and the facial flaw information corresponding to the current frame image respectively comprises the following steps: if the facial flaw difference value of the initial frame image and the current frame image is smaller than or equal to a preset threshold value, acquiring a result image of the user after finishing the concealer makeup, and determining the current makeup progress corresponding to the current frame image according to the initial frame image, the result image and the current frame image.
16. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor runs the computer program to implement the method of any one of claims 1-14.
17. A computer readable storage medium having stored thereon a computer program, wherein the program is executed by a processor to implement the method of any of claims 1-14.
CN202111017059.8A 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium Active CN113837019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111017059.8A CN113837019B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113837019A CN113837019A (en) 2021-12-24
CN113837019B (en) 2024-05-10

Family

ID=78961694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111017059.8A Active CN113837019B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113837019B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117078675B (en) * 2023-10-16 2024-02-06 太和康美(北京)中医研究院有限公司 Cosmetic efficacy evaluation method, device, equipment and medium based on image analysis

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160357578A1 (en) * 2015-06-03 2016-12-08 Samsung Electronics Co., Ltd. Method and device for providing makeup mirror

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015126361A1 (en) * 2014-02-18 2015-08-27 Lisa Laforgia Cosmetic base matching system
CN106294820A (en) * 2016-08-16 2017-01-04 深圳市金立通信设备有限公司 A kind of method instructing cosmetic and terminal
CN110235169A (en) * 2017-02-01 2019-09-13 株式会社Lg生活健康 Evaluation system of making up and its method of operating
CN109427078A (en) * 2017-08-24 2019-03-05 丽宝大数据股份有限公司 Biological information analytical equipment and its lip adornment analysis method
CN109901894A (en) * 2017-12-07 2019-06-18 腾讯科技(深圳)有限公司 A kind of progress bar image generating method, device and storage medium
CN110205366A (en) * 2018-02-28 2019-09-06 伽蓝(集团)股份有限公司 A kind of screening technique of skin intrinsic aging target and the active matter and its screening technique for improving skin intrinsic aging
WO2020119665A1 (en) * 2018-12-10 2020-06-18 深圳先进技术研究院 Facial muscle training method and apparatus, and electronic device
KR20200107480A (en) * 2019-03-08 2020-09-16 주식회사 에이아이네이션 Virtual makeup composition processing apparatus and method
CN112507766A (en) * 2019-09-16 2021-03-16 珠海格力电器股份有限公司 Face image extraction method, storage medium and terminal equipment
CN111145284A (en) * 2019-12-18 2020-05-12 广东美的厨房电器制造有限公司 Preheating progress display method and device, electronic equipment and storage medium
CN111291642A (en) * 2020-01-20 2020-06-16 深圳市商汤科技有限公司 Dressing method, dressing device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Virtual Makeup Augmented Reality System; Aline de Fátima Soares Borges et al.; 2019 21st Symposium on Virtual and Augmented Reality (SVR); 2019-10-31; 34-42 *
Personalized virtual makeup effect transfer; Du Hui et al.; Journal of Computer-Aided Design & Computer Graphics; 2014-05-31; Vol. 26, No. 5; 767-775 *

Also Published As

Publication number Publication date
CN113837019A (en) 2021-12-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant