CN113837020A - Cosmetic progress detection method, device, equipment and storage medium - Google Patents


Info

Publication number
CN113837020A
Authority
CN
China
Prior art keywords
image
makeup
face
frame image
mask
Prior art date
Legal status
Granted
Application number
CN202111017071.9A
Other languages
Chinese (zh)
Other versions
CN113837020B (en)
Inventor
刘聪
苗锋
张梦洁
Current Assignee
Soyoung Technology Beijing Co Ltd
Original Assignee
Soyoung Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Soyoung Technology Beijing Co Ltd
Priority to CN202111017071.9A
Publication of CN113837020A
Application granted
Publication of CN113837020B
Active legal status
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Abstract

The application provides a makeup progress detection method, device, equipment and storage medium. The method comprises: acquiring at least one makeup area, and acquiring an initial frame image and a current frame image of a makeup video of a user; generating a makeup mask image according to the makeup area; and determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the initial frame image and the current frame image. Guided by the makeup mask image, the current frame image captured during the user's makeup process is compared with the initial frame image to determine the current makeup progress. Makeup progress detection is achieved through image processing alone, without a deep learning model, so the amount of computation is small, the cost is low, and the processing pressure on the server is reduced; progress detection for makeup steps such as blush is realized, the efficiency of makeup progress detection is improved, and the real-time requirement of makeup progress detection can be met.

Description

Cosmetic progress detection method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a makeup progress detection method, device, equipment and storage medium.
Background
Makeup has become an essential part of many people's daily lives. Blush gives the face a healthy, rosy complexion and accentuates its three-dimensional contours, so applying blush is an important step in the makeup process. If the progress of blush application can be fed back to the user in real time, the effort the user spends on makeup can be greatly reduced and makeup time can be saved.
At present, the related art uses deep learning models to provide functions such as virtual makeup try-on, skin color detection and personalized product recommendation, all of which require a large number of face images to be collected in advance to train the deep learning model.
However, face images are private user data, and it is difficult to collect them at the required scale. Model training also consumes a large amount of computing resources, so the cost is high. In addition, model accuracy trades off against real-time performance: makeup progress detection needs to capture the user's facial information in real time to determine the current makeup progress, so the real-time requirement is high, and a deep learning model fast enough to meet this real-time requirement does not achieve high detection accuracy.
Disclosure of Invention
The application provides a makeup progress detection method, device, equipment and storage medium. Makeup progress detection is realized through image processing alone, without a deep learning model; the amount of computation is small, the cost is low, the processing pressure on the server is reduced, progress detection for makeup steps such as blush is achieved, the efficiency of makeup progress detection is improved, and the real-time requirement of makeup progress detection can be met.
The embodiment of the first aspect of the application provides a makeup progress detection method, which comprises the following steps:
acquiring at least one makeup area, and acquiring an initial frame image and a current frame image of a makeup video of a user;
generating a makeup mask image according to the makeup area;
and determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the initial frame image and the current frame image.
In some embodiments of the present application, the determining, according to the makeup mask map, the initial frame image, and the current frame image, a current makeup progress corresponding to the current frame image includes:
taking the makeup mask image as a reference, acquiring a first target area image for makeup from the initial frame image, and acquiring a second target area image for makeup from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
In some embodiments of the present application, the obtaining a first target area image of makeup from the initial frame image with reference to the makeup mask image includes:
detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key point;
and taking the makeup mask image as a reference, and acquiring a first target area image for makeup from the face area image.
In some embodiments of the present application, the obtaining a first target area image of makeup from the face area image with reference to the makeup mask image includes:
respectively converting the makeup mask image and the face region image into binary images;
performing AND operation on the binary image corresponding to the cosmetic mask image and the binary image corresponding to the face region image to obtain a first mask image corresponding to an intersection region of the cosmetic mask image and the face region image;
and performing an AND operation on the first mask image and the face area image corresponding to the initial frame image to obtain a first target area image corresponding to the initial frame image.
In some embodiments of the present application, before performing and operation on the binarized image corresponding to the cosmetic mask image and the binarized image corresponding to the face region image, the method further includes:
determining one or more first positioning points on the outline of each makeup area in the makeup mask image according to the standard human face key points corresponding to the makeup mask image;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the makeup mask image, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
In some embodiments of the present application, the obtaining a first target area image of makeup from the face area image with reference to the makeup mask image includes:
splitting the cosmetic mask map into a plurality of sub-mask maps, wherein each sub-mask map comprises at least one cosmetic area;
respectively converting each sub-mask image and the face region image into a binary image;
respectively performing an AND operation on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face region image to obtain, for each sub-mask map, a sub-mask image corresponding to the intersection region of that sub-mask map and the face region image;
respectively carrying out AND operation on each sub-mask image and the face region image to obtain a plurality of sub-target region images corresponding to the initial frame image;
and combining the plurality of sub-target area images into a first target area image corresponding to the initial frame image.
In some embodiments of the present application, before performing and operation on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face region image, the method further includes:
determining one or more first positioning points on the outline of a makeup area in a first sub-mask image according to standard face key points corresponding to the makeup mask image, wherein the first sub-mask image is any one of the plurality of sub-mask images;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the first sub-mask map, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
In some embodiments of the present application, the determining, according to the first target area image and the second target area image, a current makeup progress corresponding to the current frame image includes:
respectively converting the first target area image and the second target area image into images containing preset single-channel components in an HSV color space;
and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
In some embodiments of the present application, the determining a current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image includes:
calculating, for pixel points at the same position in the converted first target area image and the converted second target area image, the absolute value of the difference between their preset single-channel components;
counting the number of pixel points whose corresponding absolute difference meets a preset makeup completion condition;
and calculating the ratio of the counted number of pixel points to the total number of pixel points in all makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
In some embodiments of the present application, before determining, according to the first target area image and the second target area image, a current makeup progress corresponding to the current frame image, the method further includes:
respectively carrying out binarization processing on the first target area image and the second target area image to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image;
performing an AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image;
acquiring a face region image corresponding to the initial frame image and a face region image corresponding to the current frame image;
performing an AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image; and performing an AND operation on the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the current frame image.
In some embodiments of the present application, before determining, according to the first target area image and the second target area image, a current makeup progress corresponding to the current frame image, the method further includes:
and respectively performing boundary erosion processing on the makeup areas in the first target area image and the second target area image.
In some embodiments of the present application, the obtaining, according to the first face key point, a face region image corresponding to the initial frame image includes:
performing rotation correction on the initial frame image and the first face key point according to the first face key point;
according to the corrected first face key point, intercepting an image containing a face region from the corrected initial frame image;
and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
In some embodiments of the present application, the method further comprises: and carrying out scaling translation processing on the corrected key points of the first face according to the size of the image containing the face area and the preset size.
In some embodiments of the present application, the performing rotation correction on the initial frame image and the first face keypoints according to the first face keypoints includes:
respectively determining a left eye center coordinate and a right eye center coordinate according to a left eye key point and a right eye key point which are included in the first face key point;
determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate;
and performing rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
In some embodiments of the present application, the intercepting an image including a face region from the corrected initial frame image according to the corrected first face key point includes:
determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key points;
determining a capturing frame corresponding to a face area in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value;
and intercepting an image containing the face area from the corrected initial frame image according to the intercepting frame.
In some embodiments of the present application, the method further comprises:
amplifying the intercepting frame by a preset multiple;
and according to the amplified intercepting frame, intercepting an image containing the face region from the corrected initial frame image.
In some embodiments of the present application, the generating a cosmetic mask map according to the cosmetic area includes:
drawing the outline of each makeup area in a preset blank face image according to the position and the shape of each makeup area;
and filling pixels in each drawn outline to obtain a cosmetic mask image.
An embodiment of a second aspect of the present application provides a makeup progress detection device including:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring at least one makeup area and acquiring an initial frame image and a current frame image of a makeup video of a user;
the generating module is used for generating a makeup mask image according to the makeup area;
and the progress determining module is used for determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the initial frame image and the current frame image.
Embodiments of the third aspect of the present application provide an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method of the first aspect.
An embodiment of a fourth aspect of the present application provides a computer-readable storage medium having a computer program stored thereon, the program being executable by a processor to implement the method of the first aspect.
The technical scheme provided in the embodiment of the application at least has the following technical effects or advantages:
In the embodiment of the application, the face key points are used to correct and crop the user's face area in each video frame, which improves the accuracy of face area identification. The makeup areas are determined from the face area image based on the face key points, and the pixels of the makeup areas in the initial frame image and in the current frame image are aligned, which improves the accuracy of makeup area identification. Aligning the makeup areas in the initial frame image and the current frame image reduces errors caused by positional differences between the makeup areas. Discontinuous makeup areas can be extracted and computed separately, which increases the accuracy of obtaining the makeup areas. The makeup areas in the makeup mask image are also aligned with the makeup areas in the face region image, which ensures that the extracted makeup areas all lie within the face region image and do not exceed the facial boundary. In addition, no deep learning is used, so a large amount of data does not need to be collected in advance; a real-time picture of the user's makeup is captured, the detection result is computed on the server side and returned to the user. Compared with a deep learning model inference scheme, the algorithm processing consumes less computation and reduces the processing pressure on the server.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings.
In the drawings:
fig. 1 is a flowchart illustrating a makeup progress detection method according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a display interface displayed by a client for a user to select a makeup area according to an embodiment of the application;
FIG. 3 is a schematic diagram illustrating how the rotation angle of an image is determined according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating two coordinate system transformations provided by an embodiment of the present application;
fig. 5 is another schematic flow chart of a makeup progress detection method according to an embodiment of the present application;
fig. 6 is a schematic structural view illustrating a makeup progress detection device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic diagram of a storage medium provided in an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which this application belongs.
A makeup progress detection method, a makeup progress detection device, a makeup progress detection apparatus, and a storage medium according to embodiments of the present application will be described below with reference to the accompanying drawings.
At present, the related art provides some virtual makeup try-on functions. These can be deployed at sales counters or in mobile phone application software; they use face recognition technology to provide virtual try-on services and can fit various makeup looks onto the face and display them in real time. Face skin detection services are also provided, but such services only help a user select cosmetics or a skin care plan suited to them. Based on these services, a user can be helped to choose a suitable makeup product, but the makeup progress cannot be displayed, and the user's need for real-time makeup feedback cannot be met. In the related art, deep learning models are used to provide functions such as virtual makeup try-on, skin color detection and personalized product recommendation, all of which require a large number of face images to be collected in advance to train the deep learning model. However, face images are private user data, and it is difficult to collect them at the required scale. Model training also consumes a large amount of computing resources, so the cost is high. Model accuracy trades off against real-time performance: makeup progress detection needs to capture the user's facial information in real time to determine the current makeup progress, so the real-time requirement is high, and a deep learning model fast enough to meet this real-time requirement does not achieve high detection accuracy.
Based on this, the embodiment of the present application provides a makeup progress detection method for detecting the progress of a preset type of makeup. The preset type is a type of makeup applied to specific areas of the face; it may be blush, or the making up of specific facial areas in special makeup such as Beijing opera makeup.
The method compares the current frame image of the user's makeup process with the initial frame image (i.e., the first frame image) to determine the makeup progress. Face key points are identified in the initial frame image and the current frame image, and a face region image is cut from each of them based on those key points. A makeup mask image is generated according to the makeup areas that need to be made up. Using the makeup mask image, a first target area image and a second target area image to be made up are cut from the face region images corresponding to the initial frame image and the current frame image respectively, and the current makeup progress is determined by comparing the difference of a preset single-channel component, such as the brightness or the hue, of the pixel points in the first target area image and the second target area image. The accuracy of makeup progress detection is high, no deep learning model is needed, the amount of computation is small, the cost is low, the processing pressure on the server is reduced, progress detection for makeup steps such as blush is realized, the efficiency of makeup progress detection is improved, and the real-time requirement of makeup progress detection can be met.
Referring to fig. 1, the method specifically includes the following steps:
step 101: the method comprises the steps of obtaining at least one makeup area, and obtaining an initial frame image and a current frame image of a makeup video of a user.
The execution subject of the embodiment of the application is the server. And a client matched with the makeup progress detection service provided by the server is installed on a terminal of a user, such as a mobile phone or a computer. When a user needs to use the makeup progress detection service, the user opens the client on the terminal, and the client displays a plurality of makeup areas corresponding to preset types of makeup, such as a plurality of makeup areas corresponding to blush. The displayed makeup area may be classified according to the face area, such as a nose area, two cheek areas, a chin area, and the like. Each region category may include a plurality of outlines of makeup regions of different shapes and/or sizes. The user selects one or more makeup areas that the user needs to make up from the displayed plurality of makeup areas. And the client sends the makeup area selected by the user to the server.
As an example, as shown in fig. 2, in the display interface, the outlines of the makeup areas corresponding to the nose area, the cheek areas on both sides, and the chin area are included, the user may select a face area to be made up and an outline to be made up in the selected face area from among a plurality of outlines corresponding to the areas, and after the selection, the user may click the confirmation key to submit the outline of the makeup area selected by the user. The client detects one or more makeup areas submitted by the user and sends the makeup areas to the server.
As another example, in the embodiment of the application, a plurality of makeup style maps may be further generated based on the preset standard face image, where the makeup style maps include outlines of one or more makeup areas. The preset standard face image is a face image with no face shielding, clear five sense organs and parallel two eye connecting lines and a horizontal line. The interface displayed by the client side can simultaneously display a plurality of makeup style maps, and the user selects one makeup style map from the displayed plurality of makeup style maps. And the client sends the makeup style drawing selected by the user to the server. The server receives the makeup style drawing sent by the client, and acquires one or more makeup areas selected by the user from the makeup style drawing.
By any mode, the user can select the makeup area needing makeup by self, and the personalized makeup requirements of different users on preset types of makeup such as blush can be met.
In other embodiments of the present application, instead of selecting a makeup area by the user, a fixed makeup area may be preset in the server, and the position and shape of the makeup area may be set. After the user opens the client, the client prompts the user to make up at the parts corresponding to the make-up areas set by the server. When receiving a request of a user for detecting the makeup progress, the server directly acquires one or more preset makeup areas from the local configuration file.
The makeup area is configured in the server in advance, when a user needs to detect the makeup progress, the makeup area does not need to be acquired from the terminal of the user, the bandwidth is saved, the user operation is simplified, and the processing time is shortened.
A video upload interface is also provided on the display interface of the client. When a click on the video upload interface is detected, the camera of the terminal is called to shoot the user's makeup video, and during shooting the user performs the preset type of makeup operation, such as blush, on the makeup areas of his or her face. The user's terminal transmits the captured makeup video to the server as a video stream, and the server receives each frame image of the makeup video transmitted by the user's terminal.
In other embodiments of the present application, after obtaining an initial frame image and a current frame image of a makeup video of a user, a server further detects whether both the initial frame image and the current frame image only contain a face image of the same user. Firstly, whether an initial frame image and a current frame image both contain only one face image is detected, and if the initial frame image and/or the current frame image contain a plurality of face images or the initial frame image and/or the current frame image do not contain the face images, prompt information is sent to a terminal of a user. And the terminal of the user receives and displays the prompt information to prompt the user to keep that only the face of the same user appears in the makeup video. For example, the hint information may be "please keep only the face of the same person appearing within the shot".
If it is detected that both the initial frame image and the current frame image only contain one face image, whether the face image in the initial frame image and the face image in the current frame image belong to the same user is further judged. Specifically, the face feature information corresponding to the face image in the initial frame image may be extracted through a face recognition technique, the face feature information corresponding to the face image in the current frame image may be extracted, the similarity of the face feature information extracted from the two frame images may be calculated, if the calculated similarity is greater than or equal to a set value, it is determined that the faces in the initial frame image and the current frame image belong to the same user, and then the current makeup progress corresponding to the current frame image may be determined through the following operations of steps 102 and 103. If the calculated similarity is smaller than the set value, determining that the faces in the initial frame image and the current frame image belong to different users, and sending prompt information to the terminal of the user. And the terminal of the user receives and displays the prompt information to prompt the user to keep that only the face of the same user appears in the makeup video.
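The same-user check described above can be sketched as follows. This is a minimal sketch only: the face-feature extractor, the feature dimension, the use of cosine similarity as the similarity measure and the set value 0.8 are assumptions for illustration, not fixed by the embodiment.

```python
import numpy as np

def same_user(initial_features: np.ndarray, current_features: np.ndarray,
              set_value: float = 0.8) -> bool:
    """Compare face feature vectors extracted from the initial and current frame images.

    Cosine similarity is used here as the similarity measure (an assumption; the
    embodiment only requires some similarity of the extracted feature information).
    """
    a = initial_features / np.linalg.norm(initial_features)
    b = current_features / np.linalg.norm(current_features)
    similarity = float(np.dot(a, b))
    # similarity >= set value -> same user; otherwise prompt the user
    return similarity >= set_value
```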
In the embodiment of the application, the server takes the received first frame image as an initial frame image, and compares the current makeup progress of the specific makeup corresponding to each frame image received subsequently with the initial frame image as a reference. Since the processing manner of each subsequent frame of image is the same, the embodiment of the present application explains the process of cosmetic progress detection by taking the current frame of image received at the current time as an example.
After the server obtains at least one makeup area and the initial frame image and the current frame image of the user makeup video through the step, the server determines the current makeup progress of the user through the following operations of steps 102 and 103.
Step 102: and generating a makeup mask image according to the obtained makeup area.
Specifically, the outline of each makeup area is drawn in a preset blank face image according to the position and the shape of each makeup area. The preset blank face image may be formed by removing pixels from the preset standard face image. After the outline of each makeup area is drawn in a preset blank face image, pixel filling is carried out in each drawn outline, pixel points with the same pixel value are filled in the outline of the same makeup area, and the pixel values of the pixel points filled in different makeup areas are different from each other. And taking the image after the filling operation as a cosmetic mask image.
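By way of illustration, the following Python/OpenCV sketch shows one way this step could be realized; the image size, the contour point lists and the per-area fill values are assumptions chosen only for the example and are not fixed by the embodiment.

```python
import cv2
import numpy as np

def build_makeup_mask(blank_face_shape, makeup_area_contours):
    """Draw the outline of each makeup area on a blank face image and fill it.

    blank_face_shape: (height, width) of the preset blank face image (assumed).
    makeup_area_contours: list of Nx2 point arrays, one outline per makeup area.
    """
    mask = np.zeros(blank_face_shape, dtype=np.uint8)
    for idx, contour in enumerate(makeup_area_contours):
        fill_value = idx + 1  # each makeup area gets its own pixel value
        cv2.fillPoly(mask, [contour.astype(np.int32)], color=fill_value)
    return mask

# example: two assumed blush areas on a 390 x 390 blank face image
left_cheek = np.array([[90, 200], [140, 190], [150, 240], [100, 250]])
right_cheek = np.array([[250, 190], [300, 200], [290, 250], [240, 240]])
makeup_mask = build_makeup_mask((390, 390), [left_cheek, right_cheek])
```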
Step 103: and determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the initial frame image and the current frame image.
Firstly, according to a makeup mask image, a first target area image for makeup is obtained from an initial frame image, and a second target area image for makeup is obtained from a current frame image. Namely, the makeup mask image is taken as a mask, and the images of the makeup areas which need to be made up by the user are respectively intercepted from the initial frame image and the current frame image. And then determining the current makeup progress corresponding to the current frame image according to the intercepted first target area image and second target area image.
The process of acquiring the first target area image from the initial frame image is the same as the process of acquiring the second target area image from the current frame image. The embodiment of the present application specifically describes the process by taking an initial frame image as an example. The server specifically obtains a first target area image corresponding to the initial frame image through the following operations of steps S1-S3, including:
s1: and detecting a first face key point corresponding to the initial frame image.
The server is configured with a pre-trained detection model for detecting the face key points, and the detection model provides interface services for detecting the face key points. After the server acquires the initial frame image of the user makeup video, the server calls an interface service for detecting the face key points, and all face key points of the user face in the initial frame image are identified through a detection model. In order to distinguish from the face key points corresponding to the current frame image, all the face key points corresponding to the initial frame image are referred to as first face key points in the embodiment of the application. And all the face key points corresponding to the current frame image are called second face key points.
The identified key points of the human face comprise key points on the face contour of the user and key points of the mouth, the nose, the eyes, the eyebrows and other parts. The number of the identified face key points can be 106.
S2: and acquiring a face region image corresponding to the initial frame image according to the first face key point.
The server specifically obtains the face region image corresponding to the initial frame image through the following operations in steps S20-S22, including:
s20: and performing rotation correction on the initial frame image and the first face key point according to the first face key point.
Because a user cannot guarantee that the pose angle of the face is the same in every frame when shooting a makeup video with a terminal, the face in each frame image needs to be rotationally corrected in order to improve the accuracy of the comparison between the current frame image and the initial frame image. After correction, the line connecting the two eyes lies on the same horizontal line in every frame image, which ensures that the face pose angle is the same in each frame and avoids large makeup progress detection errors caused by different pose angles.
Specifically, the left-eye central coordinate and the right-eye central coordinate are respectively determined according to the left-eye key point and the right-eye key point included in the first face key point. And determining all the left eye key points of the left eye region and all the right eye key points of the right eye region from the first face key points. And averaging the determined abscissa of all the left-eye key points, averaging the ordinate of all the left-eye key points, forming a coordinate by the average of the abscissa and the average of the ordinate corresponding to the left eye, and determining the coordinate as the center coordinate of the left eye. The right eye center coordinates are determined in the same manner.
Then, the rotation angle and the rotation center point coordinate corresponding to the initial frame image are determined from the left eye center coordinate and the right eye center coordinate. As shown in fig. 3, the horizontal difference dx and the vertical difference dy between the left eye center coordinate and the right eye center coordinate are calculated, as well as the length d of the line connecting the two eye centers. The angle θ between the line connecting the two eyes and the horizontal direction is calculated from the line length d, the horizontal difference dx and the vertical difference dy; this angle θ is the rotation angle corresponding to the initial frame image. The coordinate of the midpoint of the line connecting the two eyes is then calculated from the left eye center coordinate and the right eye center coordinate; this midpoint coordinate is the rotation center point coordinate corresponding to the initial frame image.
The initial frame image and the first face key points are then rotation-corrected according to the calculated rotation angle and rotation center point coordinate. Specifically, the rotation angle and the rotation center point coordinate are input into a preset function for calculating the rotation matrix of a picture, where the preset function may be cv2.getRotationMatrix2D() in OpenCV. Calling this preset function yields the rotation matrix corresponding to the initial frame image. The product of the initial frame image and the rotation matrix is then calculated to obtain the corrected initial frame image. The correction of the initial frame image with the rotation matrix can also be completed by calling the function cv2.warpAffine() in OpenCV.
For the first face key points, each key point needs to be corrected one by one so that it corresponds to the corrected initial frame image. Correcting the first face key points one by one requires two coordinate system conversions: first, the coordinate system with the upper left corner of the initial frame image as the origin is converted into a coordinate system with the lower left corner as the origin; second, the coordinate system with the lower left corner as the origin is further converted into a coordinate system with the rotation center point coordinate as the origin, as shown in fig. 4. After the two coordinate system conversions, each first face key point is transformed by the following formula (1), which completes the rotation correction of the first face key points.

x = x0·cos θ + y0·sin θ,  y = -x0·sin θ + y0·cos θ    (1)

In formula (1), x0 and y0 are the abscissa and ordinate of a first face key point before the rotation (expressed in the coordinate system with the rotation center point coordinate as the origin), x and y are the abscissa and ordinate of that key point after the rotation correction, and θ is the rotation angle.
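The rotation correction of the image and of the key points can be illustrated with the OpenCV functions named above. In this sketch the key-point layout and the eye index lists are assumptions; only the use of cv2.getRotationMatrix2D() and cv2.warpAffine() comes from the text.

```python
import cv2
import numpy as np

def rotation_correct(frame, keypoints, left_eye_idx, right_eye_idx):
    """Rotate the frame so the line between the two eye centers is horizontal,
    and apply the same rotation to the face key points."""
    left_center = keypoints[left_eye_idx].mean(axis=0)    # average of left-eye key points
    right_center = keypoints[right_eye_idx].mean(axis=0)  # average of right-eye key points
    dx, dy = right_center - left_center
    angle = float(np.degrees(np.arctan2(dy, dx)))          # rotation angle theta
    cx, cy = (left_center + right_center) / 2              # rotation center point
    rot_mat = cv2.getRotationMatrix2D((float(cx), float(cy)), angle, 1.0)

    h, w = frame.shape[:2]
    corrected = cv2.warpAffine(frame, rot_mat, (w, h))

    # rotate every key point with the same 2x3 matrix (homogeneous coordinates)
    ones = np.ones((keypoints.shape[0], 1))
    corrected_points = np.hstack([keypoints, ones]) @ rot_mat.T
    return corrected, corrected_points
```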
The corrected initial frame image and first face key points still correspond to the entire image, which contains not only the user's face but also other redundant image information; therefore, the face region is cropped from the corrected image in the following step S21.
S21: and according to the corrected first face key point, intercepting an image containing a face area from the corrected initial frame image.
Firstly, the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value are determined from the corrected first face key points. The intercepting frame corresponding to the face area in the corrected initial frame image is then determined from these values. Specifically, the minimum abscissa value and the minimum ordinate value are combined into a coordinate point, which is taken as the top left corner vertex of the intercepting frame corresponding to the face area; the maximum abscissa value and the maximum ordinate value are combined into another coordinate point, which is taken as the lower right corner vertex of the intercepting frame. The position of the intercepting frame in the corrected initial frame image is determined from the top left corner vertex and the lower right corner vertex, and the image inside the intercepting frame, namely the image containing the face region, is intercepted from the corrected initial frame image.
In other embodiments of the present application, in order to ensure that the entire face area of the user is intercepted and to avoid large makeup progress detection errors caused by incomplete interception, the intercepting frame may be further enlarged by a preset multiple, where the preset multiple may be 1.15 or 1.25, and the like. The embodiment of the application does not limit the specific value of the preset multiple, which can be set according to requirements in practical applications. After the intercepting frame is enlarged outward by the preset multiple, the image inside the enlarged intercepting frame is intercepted from the corrected initial frame image, so that an image containing the user's complete face region is obtained.
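A minimal sketch of this interception step follows. How the intercepting frame is enlarged outward (here, symmetrically about its center) is an assumption; the multiple 1.25 is one of the example values given above.

```python
import numpy as np

def crop_face_region(corrected_frame, corrected_points, enlarge=1.25):
    """Cut out the face region using the min/max key-point coordinates, with the
    intercepting frame enlarged outward by a preset multiple."""
    x_min, y_min = corrected_points.min(axis=0)
    x_max, y_max = corrected_points.max(axis=0)

    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    half_w = (x_max - x_min) / 2 * enlarge
    half_h = (y_max - y_min) / 2 * enlarge

    h, w = corrected_frame.shape[:2]
    x0, y0 = max(int(cx - half_w), 0), max(int(cy - half_h), 0)
    x1, y1 = min(int(cx + half_w), w), min(int(cy + half_h), h)
    return corrected_frame[y0:y1, x0:x1], (x0, y0)
```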
S22: and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
After the image containing the face area of the user is intercepted from the initial frame image in the mode, the image containing the face area is zoomed to the preset size, and the face area image corresponding to the initial frame image is obtained. The predetermined size may be 390 × 390, 400 × 400, or the like. The embodiment of the application does not limit the specific value of the preset dimension, and the specific value can be set according to requirements in practical application.
In order to adapt the first face key point to the zoomed face region image, after the captured image containing the face region is zoomed to a preset size, the corrected first face key point is zoomed and translated according to the size of the image containing the face region before zooming and the preset size. Specifically, the translation direction and the translation distance of each first face key point are determined according to the size of the image containing the face area before the zooming and the preset size to which the image needs to be zoomed, then, the translation operation is respectively carried out on each first face key point according to the translation direction and the translation distance corresponding to each first face key point, and the coordinates of each first face key point after the translation are recorded.
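The scaling and key-point adaptation of step S22 could look as follows; the sketch assumes the described translation and scaling reduce to shifting by the crop origin and multiplying by the resize factors, and the preset size 390 is one of the example values given below.

```python
import cv2
import numpy as np

def resize_face_region(face_img, corrected_points, crop_origin, preset_size=390):
    """Scale the cropped face image to the preset size and adapt the key points."""
    h, w = face_img.shape[:2]
    resized = cv2.resize(face_img, (preset_size, preset_size))

    x0, y0 = crop_origin
    scale = np.array([preset_size / w, preset_size / h])
    # shift the key points into the cropped image, then scale them to the preset size
    adapted_points = (corrected_points - np.array([x0, y0])) * scale
    return resized, adapted_points
```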
The face region image is obtained from the initial frame image in the above manner, the first face key point is adapted to the obtained face region image through operations such as rotation correction and translation scaling, and then the image region corresponding to the makeup region is extracted from the face region image in the following manner in step S3.
In other embodiments of the present application, before performing step S3, a filtering process may be performed on the face region image corresponding to the initial frame image, so as to remove noise in the face region image. Specifically, a gaussian filtering algorithm or a laplacian algorithm may be used to filter and smooth the face region image corresponding to the initial frame image.
Taking a Gaussian filtering algorithm as an example, Gaussian filtering may be applied to the face region image corresponding to the initial frame image with a Gaussian kernel of a preset size. The Gaussian kernel is the key parameter of the Gaussian filtering: if it is too small, a good filtering effect cannot be achieved; if it is too large, noise in the image is filtered out but useful information is also smoothed away. In the embodiment of the present application, a Gaussian kernel of a preset size is selected, and the preset size may be 9 × 9. In addition, the parameters sigmaX and sigmaY of the Gaussian filtering function are both set to 0. After Gaussian filtering, the image information is smoother, which improves the accuracy of the subsequently obtained makeup progress.
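For illustration, the filtering could be written as below; only the 9 × 9 kernel and sigmaX = sigmaY = 0 come from the text, the rest is an assumed wrapper.

```python
import cv2

def smooth_face_region(face_region_img):
    # 9 x 9 Gaussian kernel; sigmaX and sigmaY are set to 0 so that OpenCV derives
    # them from the kernel size, as described above
    return cv2.GaussianBlur(face_region_img, (9, 9), sigmaX=0, sigmaY=0)
```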
After the face region image corresponding to the initial frame image is obtained in the above manner, the target region image corresponding to the specific makeup is extracted from the face region image in step S3.
S3: and taking the makeup mask image as a reference, and acquiring a first target area image for makeup from the face area image corresponding to the initial frame image.
A preset type of makeup such as blush is a makeup method for making up a fixed area of the face, such as a specific area of the nose, cheek areas, chin area, and the like. Therefore, the specific areas needing to be made up can be directly extracted from the face area image, the interference of the invalid area on the making up progress detection is avoided, and the accuracy of the making up progress detection is improved.
The server obtains the first target area image specifically by the operations of the following steps S30-S32, including:
s30: and respectively converting the makeup mask image and the face region image into binary images.
S31: and performing AND operation on the binary image corresponding to the cosmetic mask image and the binary image corresponding to the face region image to obtain a first mask image corresponding to the intersection region of the cosmetic mask image and the face region image.
An AND operation is performed on the pixel values of pixel points with the same coordinates in the binarized image corresponding to the makeup mask image and the binarized image corresponding to the face region image. Because the pixel values inside the makeup areas of the makeup mask image are non-zero while the pixel values in all other areas are zero, the first mask image obtained by this operation is equivalent to cutting each makeup area out of the face region image corresponding to the initial frame image.
In other embodiments of the present application, since the makeup mask image is generated based on the preset standard face image, a makeup area in the makeup mask image may not completely coincide with an area actually made up by the user in the initial frame image, thereby affecting accuracy of makeup progress detection. Therefore, before the and operation is performed on the binary image corresponding to the makeup mask image and the binary image corresponding to the face region image, the alignment operation can be performed on the makeup region in the makeup mask image and the corresponding region in the initial frame image.
Specifically, one or more first positioning points on the outline of each makeup area in the makeup mask map are determined according to the standard face key points corresponding to the makeup mask map. The standard face key points corresponding to the makeup mask image are those corresponding to the preset standard face image. For any makeup area in the makeup mask image, it is first determined whether the outline of the makeup area contains a standard face key point; if so, that standard face key point on the outline is determined to be a first positioning point corresponding to the makeup area. If not, a first positioning point is generated on the outline of the makeup area from the surrounding standard face key points by linear transformation. Specifically, the first positioning point can be obtained by translating a surrounding standard face key point upward, downward, left or right.
For example, for the nose region, the key point located on the nose can be moved left by a certain pixel distance to obtain a point on the left nasal ala, and moved right by a certain pixel distance to obtain a point on the right nasal ala. The nose key point, the point on the left nasal ala and the point on the right nasal ala are taken as the three first positioning points for the nose area.
In this embodiment of the application, the number of the first positioning points corresponding to each makeup area may be a preset number, and the preset number may be 3 or 4, and the like.
After the first positioning point corresponding to each makeup area in the makeup mask image is obtained in the above manner, the second positioning point corresponding to each first positioning point is determined from the face area image corresponding to the initial frame image according to the first face key point corresponding to the initial frame image. Because the standard face key point corresponding to the makeup mask image and the first face key point corresponding to the initial frame image are obtained through the same detection model, the key points at different positions have respective numbers. Therefore, for the first positioning point belonging to the standard human face key points, the first human face key points with the same number as the standard human face key points corresponding to the first positioning point are determined from the first human face key points corresponding to the initial frame image, and the determined first human face key points are used as the second positioning points corresponding to the first positioning point. And for a first positioning point obtained by linear transformation of the standard human face key points, determining a first human face key point corresponding to the first positioning point from first human face key points corresponding to the initial frame image, and determining a point obtained by the same linear transformation of the first human face key point as a second positioning point corresponding to the first positioning point.
After the second positioning point corresponding to each first positioning point is determined in the above manner, the makeup mask map is stretched, and each first positioning point is stretched to the position corresponding to each corresponding second positioning point, that is, the position of each first positioning point in the makeup mask map after stretching is the same as the position of the corresponding second positioning point.
By means of the method, the makeup area in the makeup mask image can be aligned with the area actually made up by the user in the initial frame image, so that the first target image made up can be accurately extracted from the initial frame image through the makeup mask image, and accuracy of makeup progress detection is improved.
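The text does not fix a specific transform for this stretching; the sketch below assumes an affine warp estimated from three positioning-point pairs, which is only one possible realization.

```python
import cv2
import numpy as np

def align_mask_to_face(makeup_mask, first_points, second_points):
    """Stretch the makeup mask so that the first positioning points land on the
    corresponding second positioning points (affine warp from 3 pairs assumed)."""
    src = np.float32(first_points[:3])
    dst = np.float32(second_points[:3])
    warp = cv2.getAffineTransform(src, dst)
    h, w = makeup_mask.shape[:2]
    # nearest-neighbour interpolation keeps the per-area pixel values intact
    return cv2.warpAffine(makeup_mask, warp, (w, h), flags=cv2.INTER_NEAREST)
```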
After aligning the cosmetic mask image with the initial frame image, a first mask image corresponding to an intersection region between the cosmetic mask image and the face region image of the initial frame image is obtained through the operation of step S31, and then a first target region image corresponding to the initial frame image is deducted through the method of step S32.
S32: and calculating the first mask image and the face area image corresponding to the initial frame image to obtain a first target area image corresponding to the initial frame image.
And performing AND operation on the first mask image and the face region image corresponding to the initial frame image, and cutting out colored images of each makeup region from the face region image corresponding to the initial frame image to obtain a first target region image corresponding to the initial frame image.
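Steps S30-S32 could be sketched as follows; the use of cv2.threshold and cv2.bitwise_and and the single-channel mask format are assumptions about the concrete implementation.

```python
import cv2

def extract_target_region(makeup_mask, face_region_img):
    """S30-S32: binarize the makeup mask image and the face region image, AND the
    two binarized images to get the first mask image, then AND the first mask
    image with the face region image to cut out the colored makeup areas."""
    gray = cv2.cvtColor(face_region_img, cv2.COLOR_BGR2GRAY)
    _, face_bin = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)        # non-zero -> 255
    _, mask_bin = cv2.threshold(makeup_mask, 0, 255, cv2.THRESH_BINARY)

    first_mask = cv2.bitwise_and(mask_bin, face_bin)                    # intersection region
    first_target = cv2.bitwise_and(face_region_img, face_region_img, mask=first_mask)
    return first_mask, first_target
```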
In other embodiments of the present application, since the makeup areas in the makeup mask map are not contiguous with one another, the makeup mask map may be further split into a plurality of sub-mask maps, each containing different makeup areas. The first target area image is then obtained from the face region image corresponding to the initial frame image using the split sub-mask maps. This can be realized through the following operations of steps S33-S37, including:
s33: the makeup mask image is divided into a plurality of sub-mask images, and each sub-mask image comprises at least one makeup area.
The makeup mask map comprises a plurality of mutually disjoint makeup areas, which are split to obtain a plurality of sub-mask maps, each containing one makeup area or more than one makeup area. The makeup areas included in the different sub-mask maps differ from one another, and apart from the non-zero pixel values of the pixel points inside the makeup areas of each sub-mask map, the pixel values of the pixel points in all other areas are zero.
S34: and respectively converting each sub-mask image and the face region image into a binary image.
S35: and respectively carrying out AND operation on the binary image corresponding to each sub-mask image and the binary image corresponding to the face region image to obtain the sub-mask image corresponding to each sub-mask image.
For any sub-mask map, an AND operation is performed on the pixel values of pixel points with the same coordinates in the binarized image of that sub-mask map and the binarized image corresponding to the face region image. Because the pixel values inside the makeup area of the sub-mask map are non-zero while the pixel values in all other areas are zero, the sub-mask image obtained by this operation is equivalent to cutting the makeup area corresponding to that sub-mask map out of the face region image corresponding to the initial frame image.
In other embodiments of the present application, since the makeup mask image is generated based on a preset standard face image, and the sub-mask image is separated from the makeup mask image, a makeup area in the sub-mask image may not completely coincide with an area actually made up by a user in the initial frame image, thereby affecting accuracy of makeup progress detection. Therefore, before the and operation is performed on the binarized image corresponding to the sub-mask image and the binarized image corresponding to the face region image, the alignment operation can be performed on the makeup region in the sub-mask image and the corresponding region in the initial frame image.
Specifically, one or more first positioning points on the outline of the makeup area in the sub-mask map are determined according to the standard human face key points corresponding to the makeup mask map. And determining a second positioning point corresponding to each first positioning point from the initial frame image according to the first face key point corresponding to the initial frame image. And stretching the sub-mask graph, and stretching each first positioning point to a position corresponding to each corresponding second positioning point, namely, the position of each first positioning point in the stretched sub-mask graph is the same as the position of the corresponding second positioning point.
By means of the method, the makeup area in the sub-mask image can be aligned with the area actually made up by the user in the initial frame image, so that the first target image made up can be accurately extracted from the initial frame image through each sub-mask image, and accuracy of makeup progress detection is improved. By splitting the makeup mask image into a plurality of sub-mask images and respectively aligning each sub-mask image with the initial frame image in the above manner, the accuracy of alignment after splitting is higher compared with the manner of directly aligning the makeup mask image with the initial frame image.
S36: And respectively performing an AND operation on each sub first mask image and the initial frame image to obtain a plurality of sub-target area images corresponding to the initial frame image.
S37: and combining the plurality of sub-target area images into a first target area image corresponding to the initial frame image.
For the current frame image, the second target area image corresponding to the current frame image can be obtained in the same manner. That is, the face region image corresponding to the current frame image is converted into a binarized image, and an AND operation is performed on the binarized image corresponding to the makeup mask image and the binarized image corresponding to the face region image of the current frame image, to obtain a mask image corresponding to the intersection region between the makeup mask image and the face region image of the current frame image. An AND operation is then performed on this mask image and the face region image corresponding to the current frame image, to obtain the second target area image corresponding to the current frame image. Alternatively, an AND operation is performed on the binarized image corresponding to each sub-mask image and the binarized image corresponding to the face region image of the current frame image, to obtain the sub first mask image corresponding to the intersection region between each sub-mask image and the face region image of the current frame image. An AND operation is then performed on each sub first mask image and the face region image corresponding to the current frame image, and the obtained sub-target area images are combined into the second target area image corresponding to the current frame image.
In other embodiments of the present application, it is considered that the edge of a makeup area in an actual makeup scene may not have a clear outline; for example, in a blush scene the color becomes lighter closer to the edge, so that the blush looks more natural and does not appear abrupt. Therefore, after the first target area image and the second target area image are obtained through the above embodiments, boundary erosion processing is further performed on the makeup areas in the first target area image and the second target area image respectively, so that the boundaries of the makeup areas are blurred. This makes the makeup areas in the first target area image and the second target area image closer to the real makeup range, and further improves the accuracy of the makeup progress detection.
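A minimal sketch of such boundary erosion processing, assuming OpenCV morphological erosion with an elliptical kernel, is given below; the kernel size and iteration count are illustrative values, not values fixed by the embodiment.

```python
# Illustrative boundary erosion of a target area image.
import cv2
import numpy as np

def erode_makeup_boundary(target_area: np.ndarray,
                          ksize: int = 5,
                          iterations: int = 1) -> np.ndarray:
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    return cv2.erode(target_area, kernel, iterations=iterations)
```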
The color spaces of the first target area image corresponding to the initial frame image and the second target area image corresponding to the current frame image obtained in the above manner are both the RGB color space. In the embodiments of the present application, the influence of a preset type of makeup, such as blush, on each channel component of the color space is determined in advance through a large number of experiments, and it is found that the differences in its influence on the individual channels of the RGB color space are small. The HSV color space is composed of three components, namely Hue, Saturation and Value (brightness); when one of these components changes, the values of the other two components do not change obviously, so, compared with the RGB color space, a single channel component can be separated out from the HSV color space. The channel component among brightness, hue and saturation that is most affected by the preset type of makeup is determined through experiments, and this most affected channel component is configured in the server as the preset single-channel component corresponding to the preset type of makeup. For a preset type of makeup such as blush, the corresponding preset single-channel component may be the brightness component.
After the first target area image corresponding to the initial frame image and the second target area image corresponding to the current frame image are obtained in any one of the above manners, both the first target area image and the second target area image are converted into HSV color space from RGB color space. And separating a preset single-channel component from the HSV color space of the converted first target area image to obtain a first target area image only containing the preset single-channel component. And separating a preset single-channel component from the HSV color space of the converted second target area image to obtain a second target area image only containing the preset single-channel component.
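As an illustration of the conversion and channel separation described above, a sketch is given below; the choice of OpenCV and the default of keeping the V (brightness) channel are assumptions of this sketch, and the channel actually kept is the preset single-channel component configured for the preset type of makeup.

```python
# Convert a target area image to HSV and keep a single channel component.
import cv2
import numpy as np

def to_single_channel(target_area_bgr: np.ndarray, channel: str = "V") -> np.ndarray:
    hsv = cv2.cvtColor(target_area_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    return {"H": h, "S": s, "V": v}[channel]
```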
And then determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
Specifically, the absolute values of the differences of the channel components corresponding to the pixel points with the same position in the first target area image and the second target area image are calculated respectively. For example, if the preset type of makeup is blush, the absolute value of the difference in luminance components between pixel points having the same coordinates in the converted first target area image and second target area image is calculated.
The area of the region where the specific makeup has been finished is then determined according to the absolute value of the difference corresponding to each pixel point. Specifically, the number of pixel points whose corresponding absolute value of the difference satisfies a preset makeup completion condition is counted. The preset makeup completion condition is that the absolute value of the difference corresponding to the pixel point is greater than a first preset threshold, and the first preset threshold may be, for example, 7 or 8.
The counted number of pixel points meeting the preset makeup completion condition is determined as the area of the region where the specific makeup has been finished. The total number of pixel points in all the makeup areas of the first target area image (or the second target area image) is counted and determined as the total area corresponding to all the makeup areas. The ratio between the area of the finished region and this total area is then calculated, and the ratio is determined as the current makeup progress of the specific makeup, namely the current makeup progress corresponding to the current frame image.
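A minimal sketch of this calculation is given below, assuming the two single-channel target area images have identical shape, that a non-zero pixel of makeup_mask marks a pixel belonging to a makeup area, and that the first preset threshold is 7; these assumptions belong to the sketch, not to the claims.

```python
# Current makeup progress as the ratio of "finished" pixels to all makeup-area pixels.
import numpy as np

def makeup_progress(first_channel: np.ndarray,
                    second_channel: np.ndarray,
                    makeup_mask: np.ndarray,
                    threshold: int = 7) -> float:
    diff = np.abs(first_channel.astype(np.int16) - second_channel.astype(np.int16))
    finished = np.count_nonzero((diff > threshold) & (makeup_mask > 0))  # finished area
    total = np.count_nonzero(makeup_mask)                                # total makeup area
    return finished / total if total else 0.0
```

The returned ratio can then be reported to the terminal as a percentage or rendered as a progress bar, as described below.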
In other embodiments of the present application, in order to further improve the accuracy of the makeup progress detection, the makeup areas in the first target area image and the second target area image are further aligned. Specifically, binarization processing is performed on the first target area image and the second target area image that only contain the preset single-channel component; that is, the values of the preset single-channel component of the pixel points in the makeup areas of the first target area image and the second target area image are all modified to 1, and the values of the preset single-channel component of the pixel points at the remaining positions are all modified to 0. Through this binarization processing, a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image are obtained.
An AND operation is performed on the first binarized mask image and the second binarized mask image, that is, the AND operation is applied to the pixel points at the same positions in the two binarized mask images, to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image. The region in which the preset single-channel component of the pixel points in the second mask image is non-zero is the makeup region that overlaps in the first target area image and the second target area image.
The face region image corresponding to the initial frame image and the face region image corresponding to the current frame image are obtained through the operation of step 102. Performing AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image; and the second mask image and the face area image corresponding to the current frame image are subjected to AND operation to obtain a new second target area image corresponding to the current frame image.
Because the second mask image contains only the makeup area that overlaps in the initial frame image and the current frame image, extracting the new first target area image and the new second target area image from the initial frame image and the current frame image through the second mask image in the above manner makes the positions of the makeup areas in the two new images completely consistent. When the makeup progress is subsequently determined by comparing the change of the makeup area in the current frame image relative to that in the initial frame image, the compared areas are therefore guaranteed to coincide exactly, which greatly improves the accuracy of the makeup progress detection.
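For illustration, this re-extraction can be sketched as follows, assuming the binarized mask images are single-channel uint8 arrays in which non-zero pixels mark the makeup areas and the face region images are the cropped images described above; the function name is hypothetical.

```python
# Intersect the two binarized masks and re-extract aligned target area images.
import cv2
import numpy as np

def realign_target_areas(first_bin_mask: np.ndarray,
                         second_bin_mask: np.ndarray,
                         init_face_region: np.ndarray,
                         cur_face_region: np.ndarray):
    second_mask = cv2.bitwise_and(first_bin_mask, second_bin_mask)
    new_first = cv2.bitwise_and(init_face_region, init_face_region, mask=second_mask)
    new_second = cv2.bitwise_and(cur_face_region, cur_face_region, mask=second_mask)
    return new_first, new_second
```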
After the makeup areas in the initial frame image and the current frame image are aligned in the above manner to obtain a new first target area image and a new second target area image, the current makeup progress corresponding to the current frame image is determined again through the operation of the above step 103.
After the current makeup progress is determined by any one of the above modes, the server sends the current makeup progress to the terminal of the user. And after receiving the current makeup progress, the terminal of the user displays the current makeup progress. The current makeup progress may be a ratio or a percentage. The terminal may display the current makeup progress in the form of a progress bar.
With the makeup progress detection method provided by the embodiments of the present application, during the user's makeup process the makeup progress of each frame image after the first frame image, relative to the first frame image, is detected in real time, and the detected makeup progress is displayed to the user, so that the user can intuitively see his or her own makeup progress, which improves makeup efficiency.
In order to facilitate understanding of the methods provided by the embodiments of the present application, a description is given below with reference to the accompanying drawings. As shown in fig. 5, the faces in the initial frame image and the current frame image are aligned and cropped respectively according to the initial frame image and its corresponding first face key points, and the current frame image and its corresponding second face key points, and the two cropped face region images are then smoothed and denoised by the Laplacian algorithm. The makeup mask image is then aligned with the two face region images respectively, and a first target area image and a second target area image are extracted from the two face region images according to the makeup mask image. Boundary erosion processing is performed on the first target area image and the second target area image. The two images are then converted into images containing the preset single-channel component in the HSV color space. Finally, the first target area image and the second target area image are aligned once more, and the current makeup progress is calculated from them.
In the embodiments of the present application, the face key points are used to correct and crop the face area of the user in the video frames, which improves the accuracy of face area recognition. The makeup areas are determined from the face region image based on the face key points, and the pixels of the makeup areas in the initial frame image and the current frame image are aligned, which improves the accuracy of makeup area recognition. Aligning the makeup areas in the initial frame image and the current frame image reduces errors caused by positional differences of the makeup areas. When the makeup areas are extracted, disconnected makeup areas are processed separately, which further increases the accuracy of the extracted makeup areas. The makeup areas in the makeup mask image are also aligned with the makeup areas in the face region image, which ensures that the extracted makeup areas all lie within the face region image and do not exceed the facial boundary. In addition, no deep learning is used and no large amount of data needs to be collected in advance; the real-time makeup picture of the user is captured, the calculation is performed on the server side, and the detection result is returned to the user. Compared with a deep-learning model inference scheme, the present application consumes less computation in the algorithm processing link and reduces the processing pressure of the server.
The embodiment of the application also provides a makeup progress detection device which is used for executing the makeup progress detection method provided by any one of the embodiments. As shown in fig. 6, the apparatus includes:
an obtaining module 201, configured to obtain at least one makeup area, and obtain an initial frame image and a current frame image of a makeup video of a user;
a generating module 202, configured to generate a makeup mask map according to the makeup area;
the progress determining module 203 is configured to determine a current makeup progress corresponding to the current frame image according to the makeup mask map, the initial frame image, and the current frame image.
The generating module 202 is configured to draw a contour of each makeup area in a preset blank face image according to the position and the shape of each makeup area; and filling pixels in each drawn outline to obtain a cosmetic mask image.
The progress determination module 203 is configured to obtain a first target area image for applying makeup from the initial frame image and obtain a second target area image for applying makeup from the current frame image with the makeup mask image as a reference; and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
The progress determination module 203 is configured to detect a first face key point corresponding to the initial frame image; acquiring a face region image corresponding to the initial frame image according to the first face key point; and taking the makeup mask image as a reference, and acquiring a first target area image for makeup from the face area image.
The progress determination module 203 is used for respectively converting the makeup mask image and the face region image into binarized images; performing an AND operation on the binarized image corresponding to the makeup mask image and the binarized image corresponding to the face region image to obtain a first mask image corresponding to the intersection region of the makeup mask image and the face region image; and performing an AND operation on the first mask image and the face region image corresponding to the initial frame image to obtain a first target area image corresponding to the initial frame image.
The progress determining module 203 is configured to determine one or more first positioning points located on the outline of each makeup area in the makeup mask map according to the standard face key points corresponding to the makeup mask map; determining a second positioning point corresponding to each first positioning point from the initial frame image according to the first face key point; and stretching the makeup mask image, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
The progress determination module 203 is configured to split the cosmetic mask map into a plurality of sub-mask maps, where each sub-mask map includes at least one makeup area; respectively convert each sub-mask map and the face region image into binarized images; respectively perform an AND operation on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face region image to obtain the sub first mask image corresponding to each sub-mask map; respectively perform an AND operation on each sub first mask image and the initial frame image to obtain a plurality of sub-target area images corresponding to the initial frame image; and combine the plurality of sub-target area images into a first target area image corresponding to the initial frame image.
The progress determining module 203 is configured to determine one or more first positioning points located on the outline of the makeup area in a first sub-mask map according to the standard face key points corresponding to the makeup mask map, where the first sub-mask map is any one of the plurality of sub-mask maps; determine a second positioning point corresponding to each first positioning point from the initial frame image according to the first face key points; and stretch the first sub-mask map so that each first positioning point is moved to the position corresponding to its corresponding second positioning point.
The progress determining module 203 is configured to convert the first target area image and the second target area image into images containing preset single-channel components in HSV color space, respectively; and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
The progress determining module 203 is configured to calculate absolute difference values of preset single-channel components corresponding to pixels with the same position in the converted first target area image and the converted second target area image, respectively; counting the number of pixel points of which the corresponding absolute values of the differences meet preset makeup completion conditions; and calculating the ratio of the counted pixel point number to the total number of the pixel points in all the makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
The progress determining module 203 is configured to perform binarization processing on the first target area image and the second target area image respectively to obtain a first binarized mask image corresponding to the first target area image and a second binarized mask image corresponding to the second target area image; performing AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image; acquiring a face region image corresponding to an initial frame image and a face region image corresponding to a current frame image; performing AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image; and the second mask image and the face area image corresponding to the current frame image are subjected to AND operation to obtain a new second target area image corresponding to the current frame image.
The apparatus further includes an image erosion module, configured to perform boundary erosion processing on the makeup areas in the first target area image and the second target area image respectively.
The progress determining module 203 is configured to perform rotation correction on the initial frame image and the first face key point according to the first face key point; according to the corrected first face key point, an image containing a face area is intercepted from the corrected initial frame image; and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
The progress determining module 203 is configured to determine a left-eye central coordinate and a right-eye central coordinate according to a left-eye key point and a right-eye key point included in the first face key point; determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate; and performing rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
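For illustration only, the rotation correction described above can be sketched as follows; the use of OpenCV, the averaging of eye key points into eye centers, and the function name are assumptions of this sketch rather than requirements of the embodiment.

```python
# Rotate the frame so that the line joining the two eye centers becomes horizontal.
import cv2
import numpy as np

def rotate_to_level_eyes(image: np.ndarray,
                         left_eye_pts: np.ndarray,   # K x 2 left-eye key points
                         right_eye_pts: np.ndarray   # K x 2 right-eye key points
                         ) -> np.ndarray:
    left_center = left_eye_pts.mean(axis=0)
    right_center = right_eye_pts.mean(axis=0)
    dx, dy = right_center - left_center
    angle = float(np.degrees(np.arctan2(dy, dx)))        # rotation angle
    cx, cy = (left_center + right_center) / 2.0          # rotation center point
    matrix = cv2.getRotationMatrix2D((float(cx), float(cy)), angle, 1.0)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, matrix, (w, h))
```

In practice the same rotation matrix would also be applied to the first face key points, since the module performs rotation correction on both the image and the key points.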
The progress determining module 203 is used for determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key points; determining an intercepting frame corresponding to a face area in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value; and according to the intercepting frame, intercepting an image containing a human face region from the corrected initial frame image.
The progress determination module 203 is used for amplifying the interception frame by a preset multiple; and according to the amplified intercepting frame, intercepting an image containing a human face region from the corrected initial frame image.
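The capture-box construction and its enlargement by a preset multiple, described in the two module functions above, might look like the following sketch; the enlargement factor of 1.2 and the clamping to the image bounds are illustrative assumptions of this sketch.

```python
# Crop the face region using the bounding box of the corrected key points,
# enlarged by a preset multiple around its center.
import numpy as np

def crop_face_region(image: np.ndarray, keypoints: np.ndarray, scale: float = 1.2) -> np.ndarray:
    x_min, y_min = keypoints.min(axis=0)
    x_max, y_max = keypoints.max(axis=0)
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half_w = (x_max - x_min) * scale / 2.0
    half_h = (y_max - y_min) * scale / 2.0
    h, w = image.shape[:2]
    x0, x1 = max(int(cx - half_w), 0), min(int(cx + half_w), w)
    y0, y1 = max(int(cy - half_h), 0), min(int(cy + half_h), h)
    return image[y0:y1, x0:x1]
```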
And the progress determining module 203 is configured to perform scaling and translation processing on the corrected first face key points according to the size of the image including the face region and a preset size.
The makeup progress detection apparatus provided by the above embodiments of the present application is based on the same inventive concept as the makeup progress detection method provided by the embodiments of the present application, and has the same beneficial effects as the method it adopts, runs or implements.
The embodiment of the application also provides electronic equipment for executing the makeup progress detection method. Please refer to fig. 7, which illustrates a schematic diagram of an electronic device according to some embodiments of the present application. As shown in fig. 7, the electronic device 8 includes: a processor 800, a memory 801, a bus 802 and a communication interface 803, the processor 800, the communication interface 803 and the memory 801 being connected by the bus 802; the memory 801 stores a computer program operable on the processor 800, and the processor 800 executes the method for detecting progress of makeup provided in any one of the foregoing embodiments when executing the computer program.
The memory 801 may include a high-speed Random Access Memory (RAM), and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of the apparatus and at least one other network element is realized through at least one communication interface 803 (which may be wired or wireless), and the Internet, a wide area network, a local area network, a metropolitan area network, etc. may be used.
Bus 802 can be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 801 is used for storing a program, and the processor 800 executes the program after receiving an execution instruction, and the method for detecting a progress of makeup disclosed in any of the embodiments of the present application may be applied to the processor 800, or implemented by the processor 800.
The processor 800 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 800 or by instructions in the form of software. The processor 800 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps and logic blocks disclosed in the embodiments of the present application may be implemented or executed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or an EPROM, or a register. The storage medium is located in the memory 801, and the processor 800 reads the information in the memory 801 and completes the steps of the above method in combination with its hardware.
The electronic equipment provided by the embodiment of the application and the cosmetic progress detection method provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as the method adopted, operated or realized by the electronic equipment.
Referring to fig. 8, an embodiment of the present application further provides a computer-readable storage medium, shown as an optical disc 30 on which a computer program (i.e., a program product) is stored; when executed by a processor, the computer program performs the makeup progress detection method provided by any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above-mentioned embodiment of the present application and the cosmetic progress detection method provided by the embodiment of the present application have the same beneficial effects as the method adopted, operated or implemented by the application program stored in the computer-readable storage medium.
It should be noted that:
in the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include some features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the application and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A makeup progress detection method characterized by comprising:
acquiring at least one makeup area, and acquiring an initial frame image and a current frame image of a makeup video of a user;
generating a makeup mask image according to the makeup area;
and determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the initial frame image and the current frame image.
2. The method of claim 1, wherein determining a current makeup progress corresponding to the current frame image according to the makeup mask map, the initial frame image and the current frame image comprises:
taking the makeup mask image as a reference, acquiring a first target area image for makeup from the initial frame image, and acquiring a second target area image for makeup from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
3. The method of claim 2, wherein the obtaining a first target area image of makeup from the initial frame image with reference to the makeup mask map comprises:
detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key point;
and taking the makeup mask image as a reference, and acquiring a first target area image for makeup from the face area image.
4. The method according to claim 3, wherein the obtaining a first target area image of makeup from the face area image with reference to the makeup mask image comprises:
respectively converting the makeup mask image and the face region image into binary images;
performing AND operation on the binary image corresponding to the cosmetic mask image and the binary image corresponding to the face region image to obtain a first mask image corresponding to an intersection region of the cosmetic mask image and the face region image;
and performing an AND operation on the first mask image and the face region image corresponding to the initial frame image to obtain a first target area image corresponding to the initial frame image.
5. The method according to claim 4, wherein before performing an AND operation on the binarized image corresponding to the cosmetic mask image and the binarized image corresponding to the face region image, the method further comprises:
determining one or more first positioning points on the outline of each makeup area in the makeup mask image according to the standard human face key points corresponding to the makeup mask image;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the makeup mask image, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
6. The method according to claim 3, wherein the obtaining a first target area image of makeup from the face area image with reference to the makeup mask image comprises:
splitting the cosmetic mask map into a plurality of sub-mask maps, wherein each sub-mask map comprises at least one cosmetic area;
respectively converting each sub-mask image and the face region image into a binary image;
respectively performing AND operation on the binary image corresponding to each sub-mask image and the binary image corresponding to the face region image to obtain the sub-mask image corresponding to each sub-mask image;
respectively carrying out AND operation on each sub-mask image and the face region image to obtain a plurality of sub-target region images corresponding to the initial frame image;
and combining the plurality of sub-target area images into a first target area image corresponding to the initial frame image.
7. The method according to claim 6, wherein before performing an AND operation on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face region image, the method further comprises:
determining one or more first positioning points on the outline of a makeup area in a first sub-mask image according to standard face key points corresponding to the makeup mask image, wherein the first sub-mask image is any one of the plurality of sub-mask images;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the first sub-mask map, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
8. The method according to claim 2, wherein the determining a current makeup progress corresponding to the current frame image according to the first target area image and the second target area image comprises:
respectively converting the first target area image and the second target area image into images containing preset single-channel components in an HSV color space;
and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
9. The method according to claim 8, wherein the determining a current makeup progress corresponding to the current frame image according to the converted first target area image and the converted second target area image comprises:
calculating difference absolute values of the preset single-channel components corresponding to pixel points with the same position in the converted first target area image and the converted second target area image respectively;
counting the number of pixel points of which the corresponding absolute values of the differences meet preset makeup completion conditions;
and calculating the ratio of the counted pixel point number to the total number of pixel points in all makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
10. The method according to any one of claims 2 to 9, wherein before determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image, the method further comprises:
respectively carrying out binarization processing on the first target area image and the second target area image to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image;
performing an AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image;
acquiring a face region image corresponding to the initial frame image and a face region image corresponding to the current frame image;
performing an AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image;
and performing an AND operation on the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the current frame image.
11. The method according to any one of claims 2 to 9, wherein before determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image, the method further comprises:
and respectively performing boundary erosion processing on the makeup areas in the first target area image and the second target area image.
12. The method according to claim 3, wherein the obtaining a face region image corresponding to the initial frame image according to the first face key point comprises:
performing rotation correction on the initial frame image and the first face key point according to the first face key point;
according to the corrected first face key point, intercepting an image containing a face region from the corrected initial frame image;
and zooming the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
13. The method of claim 12, further comprising:
and carrying out scaling translation processing on the corrected key points of the first face according to the size of the image containing the face area and the preset size.
14. The method of claim 12, wherein said rotationally rectifying the initial frame image and the first face keypoints according to the first face keypoints comprises:
respectively determining a left eye center coordinate and a right eye center coordinate according to a left eye key point and a right eye key point which are included in the first face key point;
determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate;
and performing rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
15. The method according to claim 12, wherein said extracting an image containing a face region from the corrected initial frame image according to the corrected first face key point comprises:
determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key points;
determining a capturing frame corresponding to a face area in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value;
and intercepting an image containing the face area from the corrected initial frame image according to the intercepting frame.
16. The method of claim 15, further comprising:
amplifying the intercepting frame by a preset multiple;
and according to the amplified intercepting frame, intercepting an image containing the face region from the corrected initial frame image.
17. The method of claim 1, wherein generating a cosmetic mask map from the cosmetic area comprises:
drawing the outline of each makeup area in a preset blank face image according to the position and the shape of each makeup area;
and filling pixels in each drawn outline to obtain a cosmetic mask image.
18. A makeup progress detection device characterized by comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring at least one makeup area and acquiring an initial frame image and a current frame image of a makeup video of a user;
the generating module is used for generating a makeup mask image according to the makeup area;
and the progress determining module is used for determining the current makeup progress corresponding to the current frame image according to the makeup mask image, the initial frame image and the current frame image.
19. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method of any one of claims 1-17.
20. A computer-readable storage medium, on which a computer program is stored, characterized in that the program is executed by a processor to implement the method according to any of claims 1-17.
CN202111017071.9A 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium Active CN113837020B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111017071.9A CN113837020B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111017071.9A CN113837020B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113837020A true CN113837020A (en) 2021-12-24
CN113837020B CN113837020B (en) 2024-02-02

Family

ID=78961696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111017071.9A Active CN113837020B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113837020B (en)

Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004272849A (en) * 2003-03-12 2004-09-30 Pola Chem Ind Inc Judgment method of cosmetic effect
CN101556699A (en) * 2008-11-07 2009-10-14 浙江大学 Face-based facial aging image synthesis method
JP2011008397A (en) * 2009-06-24 2011-01-13 Sony Ericsson Mobilecommunications Japan Inc Makeup support apparatus, makeup support method, makeup support program and portable terminal device
CN104834800A (en) * 2015-06-03 2015-08-12 上海斐讯数据通信技术有限公司 Beauty making-up method, system and device
US20160357578A1 (en) * 2015-06-03 2016-12-08 Samsung Electronics Co., Ltd. Method and device for providing makeup mirror
TWI573100B (en) * 2016-06-02 2017-03-01 Zong Jing Investment Inc Method for automatically putting on face-makeup
CN107358207A (en) * 2017-07-14 2017-11-17 重庆大学 A kind of method for correcting facial image
CN107545220A (en) * 2016-06-29 2018-01-05 中兴通讯股份有限公司 A kind of face identification method and device
CN108108677A (en) * 2017-12-12 2018-06-01 重庆邮电大学 One kind is based on improved CNN facial expression recognizing methods
CN108154121A (en) * 2017-12-25 2018-06-12 深圳市美丽控电子商务有限公司 Cosmetic auxiliary method, smart mirror and storage medium based on smart mirror
CN108256432A (en) * 2017-12-20 2018-07-06 歌尔股份有限公司 A kind of method and device for instructing makeup
CN108292423A (en) * 2015-12-25 2018-07-17 松下知识产权经营株式会社 Local dressing producing device, local dressing utilize program using device, local dressing production method, local dressing using method, local dressing production process and local dressing
CN108765268A (en) * 2018-05-28 2018-11-06 京东方科技集团股份有限公司 A kind of auxiliary cosmetic method, device and smart mirror
CN109063671A (en) * 2018-08-20 2018-12-21 三星电子(中国)研发中心 Method and device for intelligent cosmetic
US20190087641A1 (en) * 2017-09-15 2019-03-21 Cal-Comp Big Data, Inc. Body information analysis apparatus and blush analysis method thereof
CN109584180A (en) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN110390632A (en) * 2019-07-22 2019-10-29 北京七鑫易维信息技术有限公司 Image processing method, device, storage medium and terminal based on dressing template
CN110543875A (en) * 2019-09-25 2019-12-06 西安理工大学 Auxiliary make-up device
CN110663063A (en) * 2017-05-25 2020-01-07 华为技术有限公司 Method and device for evaluating facial makeup
CN111066060A (en) * 2017-07-13 2020-04-24 资生堂美洲公司 Virtual face makeup removal and simulation, fast face detection, and landmark tracking
CN111291642A (en) * 2020-01-20 2020-06-16 深圳市商汤科技有限公司 Dressing method, dressing device, electronic equipment and storage medium
CN111369644A (en) * 2020-02-28 2020-07-03 北京旷视科技有限公司 Face image makeup trial processing method and device, computer equipment and storage medium
CN111783511A (en) * 2019-10-31 2020-10-16 北京沃东天骏信息技术有限公司 Beauty treatment method, device, terminal and storage medium
CN112232175A (en) * 2020-10-13 2021-01-15 南京领行科技股份有限公司 Method and device for identifying state of operation object
CN112507766A (en) * 2019-09-16 2021-03-16 珠海格力电器股份有限公司 Face image extraction method, storage medium and terminal equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHENGHAO SHI et al.: "Facial Keypoints Detection", arXiv, pages 1-28 *
刘家远 et al.: "Research on Video-Based Virtual Makeup Try-On Application" (基于视频的虚拟试妆应用研究), Journal of System Simulation (系统仿真学报), vol. 30, no. 11, pages 4195-4202 *

Also Published As

Publication number Publication date
CN113837020B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
US10783354B2 (en) Facial image processing method and apparatus, and storage medium
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN109359575B (en) Face detection method, service processing method, device, terminal and medium
CN109952594B (en) Image processing method, device, terminal and storage medium
CN109829930B (en) Face image processing method and device, computer equipment and readable storage medium
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
CN106981078B (en) Sight line correction method and device, intelligent conference terminal and storage medium
CN111445410A (en) Texture enhancement method, device and equipment based on texture image and storage medium
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN106326823B (en) Method and system for obtaining head portrait in picture
CN109711268B (en) Face image screening method and device
CN112396050B (en) Image processing method, device and storage medium
WO2022261828A1 (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111814564A (en) Multispectral image-based living body detection method, device, equipment and storage medium
CN114120163A (en) Video frame processing method and device, and related equipment and storage medium thereof
WO2017173578A1 (en) Image enhancement method and device
WO2022087846A1 (en) Image processing method and apparatus, device, and storage medium
CN115731591A (en) Method, device and equipment for detecting makeup progress and storage medium
CN116798041A (en) Image recognition method and device and electronic equipment
CN113837019A (en) Cosmetic progress detection method, device, equipment and storage medium
CN113012030A (en) Image splicing method, device and equipment
CN116342519A (en) Image processing method based on machine learning
CN113837020B (en) Cosmetic progress detection method, device, equipment and storage medium
CN113837017B (en) Cosmetic progress detection method, device, equipment and storage medium
CN115222621A (en) Image correction method, electronic device, storage medium, and computer program product

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant