CN113837020B - Cosmetic progress detection method, device, equipment and storage medium - Google Patents

Cosmetic progress detection method, device, equipment and storage medium

Info

Publication number
CN113837020B
CN113837020B (application CN202111017071.9A)
Authority
CN
China
Prior art keywords
image
makeup
frame image
face
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111017071.9A
Other languages
Chinese (zh)
Other versions
CN113837020A (en)
Inventor
刘聪
苗锋
张梦洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Soyoung Technology Beijing Co Ltd
Original Assignee
Soyoung Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Soyoung Technology Beijing Co Ltd filed Critical Soyoung Technology Beijing Co Ltd
Priority to CN202111017071.9A priority Critical patent/CN113837020B/en
Publication of CN113837020A publication Critical patent/CN113837020A/en
Application granted granted Critical
Publication of CN113837020B publication Critical patent/CN113837020B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Abstract

The application provides a makeup progress detection method, apparatus, device and storage medium. The method comprises the following steps: acquiring at least one makeup area, and acquiring an initial frame image and a current frame image of a user's makeup video; generating a makeup mask map according to the makeup area; and determining the current makeup progress corresponding to the current frame image according to the makeup mask map, the initial frame image and the current frame image. With the makeup mask map as a reference, the current frame image of the user's makeup process is compared with the initial frame image to determine the current makeup progress. Makeup progress detection is achieved through image processing alone, without a deep learning model, so the computation load is small and the cost is low, the processing pressure on the server is reduced, progress detection for makeup steps such as blush is realized, the efficiency of makeup progress detection is improved, and the real-time requirement of makeup progress detection can be met.

Description

Cosmetic progress detection method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of image processing, and in particular relates to a makeup progress detection method, apparatus, device and storage medium.
Background
Makeup has become an indispensable part of many people's daily lives. Blush gives the face a healthy, rosy complexion and accentuates its three-dimensional contours, so applying blush is an important step in the makeup process. If the progress of blush application could be fed back to the user in real time, the effort the user spends on makeup could be greatly reduced and makeup time could be saved.
At present, the related art provides functions such as virtual makeup try-on, skin tone detection and personalized product recommendation by means of deep learning models, and all of these functions require a large number of face images to be collected in advance to train the models.
However, face images are private user data, and it is difficult to collect them in large quantities. Model training also consumes a large amount of computing resources and is costly. Moreover, model accuracy trades off against real-time performance: makeup progress detection must capture the user's face information in real time to determine the current makeup progress, so the real-time requirement is very high, and a deep learning model that meets this real-time requirement offers only limited detection accuracy.
Disclosure of Invention
The application provides a makeup progress detection method, apparatus, device and storage medium, in which, with a makeup mask map as a reference, the current frame image of the user's makeup process is compared with the initial frame image to determine the current makeup progress. Makeup progress detection is achieved through image processing alone, without a deep learning model, so the computation load is small and the cost is low, the processing pressure on the server is reduced, progress detection for makeup steps such as blush is realized, the efficiency of makeup progress detection is improved, and the real-time requirement of makeup progress detection can be met.
An embodiment of a first aspect of the present application provides a makeup progress detection method, including:
acquiring at least one makeup area, and acquiring an initial frame image and a current frame image of a user makeup video;
generating a makeup mask map according to the makeup area;
and determining the current makeup progress corresponding to the current frame image according to the makeup mask map, the initial frame image and the current frame image.
In some embodiments of the present application, the determining, according to the makeup mask map, the initial frame image, and the current frame image, a current makeup progress corresponding to the current frame image includes:
taking the makeup mask map as a reference, acquiring a first target area image for makeup from the initial frame image and acquiring a second target area image for makeup from the current frame image;
and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
In some embodiments of the present application, the acquiring the first target area image for makeup from the initial frame image with the makeup mask map as a reference includes:
detecting first face key points corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key points;
and acquiring, with the makeup mask map as a reference, a first target area image for makeup from the face area image.
In some embodiments of the present application, the acquiring a first target area image for makeup from the face area image with the makeup mask map as a reference includes:
respectively converting the makeup mask map and the face region image into binary images;
performing an AND operation on the binary image corresponding to the makeup mask image and the binary image corresponding to the face region image to obtain a first mask image corresponding to an intersection region of the makeup mask image and the face region image;
and performing an AND operation on the first mask image and the face region image corresponding to the initial frame image to obtain a first target area image corresponding to the initial frame image.
In some embodiments of the present application, before performing the AND operation on the binary image corresponding to the makeup mask map and the binary image corresponding to the face area image, the method further includes:
determining one or more first positioning points located on the outline of each makeup area in the makeup mask map according to the standard face key points corresponding to the makeup mask map;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the makeup mask map, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
In some embodiments of the present application, the acquiring a first target area image for makeup from the face area image with the makeup mask map as a reference includes:
splitting the makeup mask map into a plurality of sub-mask maps, wherein each sub-mask map comprises at least one makeup area;
converting each sub-mask image and the face region image into a binarized image respectively;
performing an AND operation on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face region image, respectively, to obtain a sub-mask image corresponding to each sub-mask map;
performing AND operation on each sub-mask image and the face area image respectively to obtain a plurality of sub-target area images corresponding to the initial frame image;
and merging the plurality of sub-target area images into a first target area image corresponding to the initial frame image.
In some embodiments of the present application, before performing the AND operation on the binarized image corresponding to each of the sub-mask maps and the binarized image corresponding to the face area image, the method further includes:
determining one or more first positioning points on the outline of the makeup area in a first sub-mask map according to standard face key points corresponding to the makeup mask map, wherein the first sub-mask map is any one of the sub-mask maps;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the first sub-mask map, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
In some embodiments of the present application, the determining, according to the first target area image and the second target area image, a current makeup progress corresponding to the current frame image includes:
respectively converting the first target area image and the second target area image into images containing preset single-channel components in an HSV color space;
And determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
In some embodiments of the present application, the determining, according to the converted first target area image and the second target area image, the current makeup progress corresponding to the current frame image includes:
respectively calculating the absolute value of the difference value of the preset single-channel component corresponding to the pixel points with the same positions in the converted first target area image and the converted second target area image;
counting the number of pixels of which the corresponding absolute value of the difference value meets the preset makeup completion condition;
and calculating the ratio of the counted number of the pixels to the total number of the pixels in all the makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
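For illustration only (not part of the claimed subject matter), the progress calculation above can be sketched in Python with OpenCV and NumPy, assuming the Value channel of HSV is the preset single-channel component and a fixed difference threshold serves as the preset makeup completion condition:

```python
import cv2
import numpy as np

def makeup_progress(first_target, second_target, diff_threshold=15):
    """Sketch: ratio of pixels whose single-channel difference meets the
    (assumed) completion condition to all pixels inside the makeup areas."""
    # Preset single-channel component: the V (brightness) channel of HSV (assumption).
    v_first = cv2.cvtColor(first_target, cv2.COLOR_BGR2HSV)[:, :, 2].astype(np.int16)
    v_second = cv2.cvtColor(second_target, cv2.COLOR_BGR2HSV)[:, :, 2].astype(np.int16)

    # Absolute difference of the component at identically positioned pixel points.
    diff = np.abs(v_first - v_second)

    # Pixels belonging to the makeup areas; outside the areas both images are black.
    in_area = np.any(first_target > 0, axis=2)

    # Assumed completion condition: the difference reaches a fixed threshold.
    finished = (diff >= diff_threshold) & in_area

    total = int(np.count_nonzero(in_area))
    return float(np.count_nonzero(finished)) / total if total else 0.0
```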
In some embodiments of the present application, before determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image, the method further includes:
respectively carrying out binarization processing on the first target area image and the second target area image to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image;
Performing an AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to an intersection area of the first target area image and the second target area image;
acquiring a face area image corresponding to the initial frame image and a face area image corresponding to the current frame image;
performing an AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image; and performing AND operation on the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the current frame image.
In some embodiments of the present application, before determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image, the method further includes:
and respectively carrying out boundary corrosion treatment on the makeup areas in the first target area image and the second target area image.
In some embodiments of the present application, the acquiring, according to the first face keypoints, a face area image corresponding to the initial frame image includes:
According to the first face key points, carrying out rotation correction on the initial frame image and the first face key points;
cropping an image containing the face area from the corrected initial frame image according to the corrected first face key points;
and scaling the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
In some embodiments of the present application, the method further comprises: and scaling and translating the corrected first face key points according to the size of the image containing the face area and the preset size.
In some embodiments of the present application, the performing rotation correction on the initial frame image and the first face key point according to the first face key point includes:
respectively determining a left eye center coordinate and a right eye center coordinate according to a left eye key point and a right eye key point which are included by the first face key point;
determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate;
and carrying out rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
In some embodiments of the present application, the capturing an image including a face area from the corrected initial frame image according to the corrected first face keypoints includes:
determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key point;
determining a crop box corresponding to the face region in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value;
and cropping an image containing the face area from the corrected initial frame image according to the crop box.
In some embodiments of the present application, the method further comprises:
enlarging the crop box by a preset multiple;
and cropping an image containing the face area from the corrected initial frame image according to the enlarged crop box.
In some embodiments of the present application, the generating a makeup mask map according to the makeup application region includes:
drawing the outline of each makeup area in a preset blank face image according to the position and the shape of each makeup area;
and filling pixels within each drawn outline to obtain the makeup mask map.
An embodiment of a second aspect of the present application provides a makeup progress detecting device, including:
the acquisition module is used for acquiring at least one makeup area, and acquiring an initial frame image and a current frame image of a user makeup video;
the generating module is used for generating a makeup mask map according to the makeup area;
and the progress determining module is used for determining the current makeup progress corresponding to the current frame image according to the makeup mask map, the initial frame image and the current frame image.
An embodiment of a third aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the computer program to implement the method of the first aspect.
An embodiment of the fourth aspect of the present application provides a computer readable storage medium having stored thereon a computer program for execution by a processor to implement the method of the first aspect.
The technical scheme provided in the embodiment of the application has at least the following technical effects or advantages:
In the embodiments of the application, the face key points are used to correct and crop the face area of the user in each video frame, which improves the accuracy of identifying the face area. The makeup areas are determined from the face area image based on the face key points, and the makeup areas in the initial frame image and the current frame image are pixel-aligned, which improves the accuracy of identifying the makeup areas and reduces errors caused by positional differences between them. When the makeup areas are extracted, non-contiguous makeup areas can be processed separately, which further increases the accuracy of acquiring the makeup areas. Aligning the makeup areas in the makeup mask map with those in the face area image ensures that the makeup areas lie entirely within the face area image and do not extend beyond the face boundary. The method does not rely on deep learning and does not require a large amount of data to be collected in advance: a real-time picture of the user's makeup is captured, the server performs the computation, and the detection result is returned to the user. Compared with a deep-learning inference scheme, the algorithm consumes less computing cost and reduces the processing pressure on the server.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the figures.
In the drawings:
FIG. 1 is a flow chart of a method for detecting progress of makeup according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a display interface for a user to select a make-up area displayed by a client according to an embodiment of the present application;
FIG. 3 illustrates a schematic diagram of solving the rotation angle of an image according to an embodiment of the present application;
FIG. 4 is a schematic diagram of two coordinate system transformations provided in an embodiment of the present application;
FIG. 5 is a schematic flow chart of a method for detecting progress of makeup according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a makeup progress detecting device according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a storage medium according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is noted that unless otherwise indicated, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs.
A cosmetic progress detection method, apparatus, device, and storage medium according to embodiments of the present application are described below with reference to the accompanying drawings.
At present, the related art offers some virtual makeup try-on functions, which can be applied at sales counters or in mobile phone applications; using face recognition technology, they provide virtual try-on services that match various makeup looks and fit and display them on the face in real time. Face skin detection services are also provided, but such services only help users choose cosmetics or skin-care plans suited to them. Based on these services, users can be helped to select suitable cosmetic products, but the progress of makeup cannot be displayed, and the user's need for real-time makeup feedback cannot be met. The related art also provides functions such as virtual makeup try-on, skin tone detection and personalized product recommendation by means of deep learning models, and all of these functions require a large number of face images to be collected in advance to train the models. However, face images are private user data and are difficult to collect in large quantities, and model training consumes a large amount of computing resources and is costly. Moreover, model accuracy trades off against real-time performance: makeup progress detection must capture the user's face information in real time to determine the current makeup progress, so the real-time requirement is very high, and a deep learning model that meets this real-time requirement offers only limited detection accuracy.
Based on this, the embodiments of the application provide a makeup progress detection method for detecting the progress of a preset type of makeup, where the preset type is a type of makeup applied to specific areas of the face; it may be blush, or makeup applied to specific facial areas in special makeup styles such as Beijing opera makeup.
The method compares the current frame image of the user's makeup process with the initial frame image (i.e. the first frame image) to determine the makeup progress. Face key points are identified in the initial frame image and the current frame image, and based on these key points, face area images are cropped from the initial frame image and the current frame image. A makeup mask map is generated according to the makeup areas that need to be made up. According to the makeup mask map, a first target area image and a second target area image to be made up are cut out from the face area images corresponding to the initial frame image and the current frame image respectively, and the current makeup progress is determined by comparing the difference of a preset single-channel component, such as brightness or hue, of the pixel points in the first target area image and the second target area image. The accuracy of makeup progress detection is high and no deep learning model is required, so the computation load is small, the cost is low, the processing pressure on the server is reduced, progress detection for makeup steps such as blush is realized, the efficiency of makeup progress detection is improved, and the real-time requirement of makeup progress detection can be met.
Referring to fig. 1, the method specifically includes the steps of:
step 101: at least one make-up area is acquired, and an initial frame image and a current frame image of a user make-up video are acquired.
The method of the embodiments of the application is executed by a server. A client matched with the makeup progress detection service provided by the server is installed on the user's mobile phone or computer. When the user needs to use the makeup progress detection service, the user opens the client on the terminal, and the client displays a plurality of makeup areas corresponding to the preset type of makeup, for example a plurality of makeup areas corresponding to blush. The displayed makeup areas may be classified by facial region, such as the nose area, the cheek areas on both sides and the chin area. Each region category may include contours of a plurality of makeup areas of different shapes and/or sizes. The user selects, from the displayed makeup areas, one or more makeup areas in which makeup is to be applied, and the client sends the makeup areas selected by the user to the server.
As an example, as shown in fig. 2, the display interface includes the contours of the makeup areas corresponding to the nose area, the cheek areas on both sides and the chin area. The user may select the facial regions in which makeup is to be applied, choose a makeup area contour for each selected region from the plurality of contours corresponding to that region, and click the confirm button to submit the selection. The client detects the one or more makeup areas submitted by the user and sends them to the server.
As another example, the embodiments of the application may also prepare a plurality of makeup style charts based on a preset standard face image, each makeup style chart including the contours of one or more makeup areas. The preset standard face image is a face image without occlusion, with clearly visible facial features, and with the line connecting the two eyes parallel to the horizontal. The user selects one makeup style chart from the displayed charts, and the client sends the selected chart to the server. The server receives the makeup style chart sent by the client and obtains from it the one or more makeup areas selected by the user.
In either manner, the user can freely customise the makeup areas in which makeup is to be applied, which meets the personalised requirements of different users for preset types of makeup such as blush.
In other embodiments of the application, instead of having the user select the makeup areas, one or more fixed makeup areas, whose positions and shapes are predetermined, may be preset in the server. After the user opens the client, the client prompts the user to apply makeup at the positions corresponding to the makeup areas set by the server. When the server receives a makeup progress detection request from the user, it directly obtains the one or more preset makeup areas from its local configuration file.
Because the makeup areas are configured in the server in advance, they do not need to be obtained from the user's terminal when the user requests makeup progress detection, which saves bandwidth, simplifies user operation and shortens processing time.
The display interface of the client is also provided with a video upload entry. When the user is detected to click the video upload entry, the camera of the terminal is invoked to shoot the user's makeup video, and during shooting the user performs makeup operations of the preset type, such as blush, in the makeup areas of his or her face. The user's terminal transmits the captured makeup video to the server as a video stream, and the server receives each frame of the makeup video.
In other embodiments of the application, after obtaining the initial frame image and the current frame image of the user's makeup video, the server further checks whether both images contain only the face of the same user. It first checks whether the initial frame image and the current frame image each contain exactly one face image; if the initial frame image and/or the current frame image contains several face images, or contains no face image, prompt information is sent to the user's terminal. The terminal receives and displays the prompt information to remind the user to keep only his or her own face in the makeup video; for example, the prompt may read "please keep only the face of the same person in the shot".
If the initial frame image and the current frame image are detected to only contain one face image, whether the face image in the initial frame image and the face image in the current frame image belong to the same user is further judged. Specifically, face feature information corresponding to a face image in an initial frame image and face feature information corresponding to a face image in a current frame image may be extracted by a face recognition technology, similarity of the face feature information extracted in the two frame images is calculated, if the calculated similarity is greater than or equal to a set value, it is determined that the faces in the initial frame image and the current frame image belong to the same user, and then a current makeup progress corresponding to the current frame image is determined by the following operations of steps 102 and 103. If the calculated similarity is smaller than the set value, determining that the faces in the initial frame image and the current frame image belong to different users, and sending prompt information to the terminal of the user. The user terminal receives and displays the prompt information to prompt the user to keep that only the face of the same user appears in the cosmetic video.
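As an illustrative sketch only, the same-user check can be written as a cosine-similarity comparison of face feature vectors; the feature extractor producing these vectors and the threshold value are assumptions, not specified by the application:

```python
import numpy as np

def is_same_user(features_initial, features_current, set_value=0.8):
    """Sketch: compare face feature vectors of the initial and current frames;
    the extractor and the set value 0.8 are assumed for illustration."""
    a = np.asarray(features_initial, dtype=np.float32)
    b = np.asarray(features_current, dtype=np.float32)
    similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return similarity >= set_value  # same user if similarity reaches the set value
```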
In the embodiment of the application, the server uses the received first frame image as an initial frame image, and uses the initial frame image as a reference to compare the current makeup progress of the specific makeup corresponding to each frame image received subsequently. Since the processing manner of each subsequent frame of image is the same, the embodiment of the application uses the current frame of image received at the current moment as an example to illustrate the process of detecting the makeup progress.
After the server obtains at least one makeup area and the initial frame image and the current frame image of the user makeup video through this step, the current makeup progress of the user is determined through the operations of the following steps 102 and 103.
Step 102: and generating a makeup mask pattern according to the obtained makeup area.
Specifically, according to the position and the shape of each makeup area, the outline of each makeup area is drawn in a preset blank face image. The preset blank face image may be formed by removing pixels in the preset standard face image. After the outline of each makeup area is drawn in the preset blank face image, pixel filling is carried out in each drawn outline, pixel points with the same pixel value are filled in the outline of the same makeup area, and the pixel values of the pixel points filled in different makeup areas are different from each other. And taking the image after the filling operation as a makeup mask map.
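A minimal sketch of this mask-generation step, assuming each makeup area is supplied as a polygonal contour in the coordinate system of the preset standard face image (the distinct fill values are illustrative):

```python
import cv2
import numpy as np

def build_makeup_mask(blank_face_shape, makeup_area_contours):
    """Sketch of step 102: draw and fill each makeup area on a blank face image,
    using a different pixel value for every makeup area."""
    mask = np.zeros(blank_face_shape[:2], dtype=np.uint8)  # preset blank face image
    for idx, contour in enumerate(makeup_area_contours, start=1):
        pts = np.asarray(contour, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(mask, [pts], isClosed=True, color=int(idx))  # outline of the area
        cv2.fillPoly(mask, [pts], color=int(idx))                  # pixel filling inside
    return mask
```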
Step 103: and determining the current makeup progress corresponding to the current frame image according to the makeup mask map, the initial frame image and the current frame image.
Firstly, according to the makeup mask map, a first target area image for makeup is obtained from an initial frame image, and a second target area image for makeup is obtained from a current frame image. Namely, taking the makeup mask map as a mask, and respectively cutting out images of makeup areas which are needed to be made up by a user from an initial frame image and a current frame image. And then determining the current makeup progress corresponding to the current frame image according to the cut first target area image and the second target area image.
Since the procedure for acquiring the first target area image from the initial frame image is the same as the procedure for acquiring the second target area image from the current frame image, this embodiment describes the procedure using the initial frame image as an example. The server obtains the first target area image corresponding to the initial frame image through the following steps S1-S3:
s1: and detecting a first face key point corresponding to the initial frame image.
The server is provided with a pre-trained detection model for detecting face key points, and an interface service for face key point detection is provided through this model. After acquiring the initial frame image of the user's makeup video, the server invokes this interface service and identifies all face key points of the user in the initial frame image through the detection model. To distinguish them from the face key points corresponding to the current frame image, all face key points corresponding to the initial frame image are referred to in the embodiments of the application as first face key points, and all face key points corresponding to the current frame image are referred to as second face key points.
The identified key points of the human face comprise key points on the outline of the face of the user, and key points of the parts such as mouth, nose, eyes, eyebrows and the like. The number of face keypoints identified may be 106.
S2: and acquiring a face region image corresponding to the initial frame image according to the first face key points.
The server specifically acquires a face area image corresponding to the initial frame image through the following operations of steps S20 to S22, including:
s20: and carrying out rotation correction on the initial frame image and the first face key points according to the first face key points.
Because the user cannot guarantee that the pose angle of the face is the same in every frame when shooting the makeup video with the terminal, the face in each frame needs to be rotationally corrected in order to improve the accuracy of comparing the current frame image with the initial frame image. After correction, the line connecting the two eyes lies on the same horizontal line in every frame, so the pose angles of the faces in all frames are identical, and the larger makeup progress detection errors caused by different pose angles are avoided.
Specifically, the left-eye centre coordinate and the right-eye centre coordinate are respectively determined according to the left-eye key points and right-eye key points included in the first face key points. All left-eye key points of the left-eye region and all right-eye key points of the right-eye region are determined from the first face key points. The abscissas of all the left-eye key points are averaged, the ordinates of all the left-eye key points are averaged, and the coordinate formed by the averaged abscissa and the averaged ordinate is taken as the centre coordinate of the left eye. The right-eye centre coordinate is determined in the same manner.
And then determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate. As shown in fig. 3, the horizontal difference dx and the vertical difference dy of the left-eye center coordinate and the right-eye center coordinate are calculated from the two coordinates, and the two-eye connecting line length d of the left-eye center coordinate and the right-eye center coordinate is calculated. And calculating an included angle theta between the two-eye connecting line and the horizontal direction according to the length d of the two-eye connecting line, the horizontal difference dx and the vertical difference dy, wherein the included angle theta is the rotation angle corresponding to the initial frame image. And then calculating the center point coordinate of the connecting line of the two eyes according to the center coordinates of the left eye and the center coordinates of the right eye, wherein the center point coordinate is the rotation center point coordinate corresponding to the initial frame image.
The initial frame image and the first face key points are then rotationally corrected according to the calculated rotation angle and rotation centre point coordinate. Specifically, the rotation angle and the rotation centre point coordinate are input into a preset function for calculating the rotation matrix of the picture, which may be the OpenCV function cv2.getRotationMatrix2D(). By calling this preset function, the rotation matrix corresponding to the initial frame image is obtained, and the product of the initial frame image and the rotation matrix gives the corrected initial frame image. The correction of the initial frame image with the rotation matrix can also be completed by calling the OpenCV function cv2.warpAffine().
For the first face key points, correction is required to be carried out on each first face key point one by one so as to correspond to the corrected initial frame image. When the first face key points are corrected one by one, two coordinate system conversions are required, the first time of converting the coordinate system with the upper left corner of the initial frame image as the origin into the coordinate system with the lower left corner as the origin, and the second time of further converting the coordinate system with the lower left corner as the origin into the coordinate system with the rotation center point coordinates as the origin, as shown in fig. 4. After two times of coordinate system conversion, each first face key point is converted according to the following formula (1), and rotation correction of the first face key points can be completed.
x = x0·cosθ + y0·sinθ, y = y0·cosθ − x0·sinθ (1)
In formula (1), x0 and y0 are respectively the abscissa and ordinate of a first face key point before rotation correction, x and y are respectively the abscissa and ordinate of the first face key point after rotation correction, and θ is the rotation angle.
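As an illustrative sketch of step S20 and formula (1), assuming the first face key points are an N×2 NumPy array and that the indices of the left-eye and right-eye key points are known (both assumptions):

```python
import cv2
import numpy as np

def rotation_correct(frame, keypoints, left_eye_idx, right_eye_idx):
    """Sketch of step S20: level the eye line, rotate the frame with OpenCV and
    apply the same rotation to every first face key point."""
    left_eye = keypoints[left_eye_idx].mean(axis=0)    # left-eye centre coordinate
    right_eye = keypoints[right_eye_idx].mean(axis=0)  # right-eye centre coordinate
    dx, dy = right_eye - left_eye
    angle = float(np.degrees(np.arctan2(dy, dx)))      # angle between eye line and horizontal
    cx, cy = (left_eye + right_eye) / 2.0              # rotation centre: midpoint of eye line

    rot = cv2.getRotationMatrix2D((float(cx), float(cy)), angle, 1.0)
    h, w = frame.shape[:2]
    corrected = cv2.warpAffine(frame, rot, (w, h))     # corrected frame image

    # Rotate the key points with the same 2x3 matrix (the matrix form of formula (1)).
    ones = np.ones((keypoints.shape[0], 1), dtype=np.float64)
    corrected_kps = np.hstack([keypoints.astype(np.float64), ones]) @ rot.T
    return corrected, corrected_kps
```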
The corrected initial frame image and first face key points are still based on the whole image, which contains not only the user's face information but also other redundant image information, so the face area needs to be cropped from the corrected image through the following step S21.
S21: Cropping an image containing the face area from the corrected initial frame image according to the corrected first face key points.
First, the minimum abscissa value, minimum ordinate value, maximum abscissa value and maximum ordinate value are determined from the corrected first face key points. A crop box corresponding to the face region in the corrected initial frame image is then determined from these four values. Specifically, the minimum abscissa value and the minimum ordinate value form one coordinate point, which is taken as the top-left vertex of the crop box corresponding to the face region, and the maximum abscissa value and the maximum ordinate value form another coordinate point, which is taken as the bottom-right vertex of the crop box. The position of the crop box in the corrected initial frame image is determined by the top-left and bottom-right vertices, and the image inside the crop box is cut out of the corrected initial frame image, i.e. an image containing the face area is cropped out.
In other embodiments of the application, to ensure that the entire face area of the user is captured and to avoid large makeup progress detection errors caused by incomplete cropping, the crop box may be further enlarged by a preset multiple, which may be 1.15 or 1.25. The embodiments of the application do not limit the specific value of the preset multiple, which can be set as required in practical applications. The crop box is enlarged outward by the preset multiple, and the image inside the enlarged crop box is cropped from the corrected initial frame image, so that an image containing the user's complete face area is obtained.
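A sketch of the crop-box construction in step S21, assuming the corrected key points are an N×2 array; 1.15 is one of the preset multiples mentioned above:

```python
import numpy as np

def face_crop_box(keypoints, img_w, img_h, multiple=1.15):
    """Sketch of step S21: bounding box of the corrected key points, enlarged
    by a preset multiple and clamped to the image border."""
    x_min, y_min = keypoints.min(axis=0)
    x_max, y_max = keypoints.max(axis=0)
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half_w = (x_max - x_min) * multiple / 2.0
    half_h = (y_max - y_min) * multiple / 2.0
    left, top = max(int(cx - half_w), 0), max(int(cy - half_h), 0)
    right, bottom = min(int(cx + half_w), img_w), min(int(cy + half_h), img_h)
    return left, top, right, bottom

# Usage: left, top, right, bottom = face_crop_box(kps, w, h)
#        face_crop = corrected_frame[top:bottom, left:right]
```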
S22: and scaling the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
After the image containing the user's face area is cropped from the initial frame image in the above manner, it is scaled to a preset size to obtain the face area image corresponding to the initial frame image. The preset size may be 390×390 or 400×400, etc. The embodiments of the application do not limit the specific value of the preset size, which can be set as required in practical applications.
To adapt the first face key points to the scaled face region image, after the image containing the face region is scaled to the preset size, the corrected first face key points are scaled and translated according to the size of the image containing the face region before scaling and the preset size. Specifically, the translation direction and translation distance of each first face key point are determined from the pre-scaling size and the preset size to which the image is scaled, each first face key point is then translated according to its corresponding direction and distance, and the coordinates of each translated first face key point are recorded.
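A sketch of step S22 and the accompanying key-point adjustment, assuming the crop box from the previous step and a 390×390 preset size (one of the sizes mentioned):

```python
import cv2
import numpy as np

def normalize_face(frame, keypoints, crop_box, preset_size=390):
    """Sketch of step S22: crop, scale to the preset size and scale/translate
    the first face key points accordingly."""
    left, top, right, bottom = crop_box
    face = frame[top:bottom, left:right]
    crop_h, crop_w = face.shape[:2]
    face = cv2.resize(face, (preset_size, preset_size))

    # Translate the key points into the crop's coordinate system, then scale them
    # by the same factors applied to the image.
    kps = keypoints.astype(np.float32) - np.array([left, top], dtype=np.float32)
    kps *= np.array([preset_size / crop_w, preset_size / crop_h], dtype=np.float32)
    return face, kps
```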
The face region image is obtained from the initial frame image in the above manner, the first face key points are adapted to the obtained face region image through operations such as rotation correction and translation scaling, and then the image region corresponding to the makeup region is extracted from the face region image in the following manner in step S3.
In other embodiments of the application, before executing step S3, filtering may be performed on the face area image corresponding to the initial frame image to remove noise from it. Specifically, a Gaussian filtering algorithm or a Laplacian algorithm can be used to filter and smooth the face region image corresponding to the initial frame image.
Taking the Gaussian filtering algorithm as an example, Gaussian filtering may be performed on the face area image corresponding to the initial frame image with a Gaussian kernel of a preset size. The Gaussian kernel is the key parameter of Gaussian filtering: if it is chosen too small, a good filtering effect cannot be achieved; if it is chosen too large, noise in the image is filtered out but useful information is smoothed away at the same time. In the embodiments of the application a Gaussian kernel of a preset size is selected, and the preset size may be 9×9. In addition, the other parameters of the Gaussian filter function, sigmaX and sigmaY, are both set to 0. After Gaussian filtering, the image information is smoother, which improves the accuracy of the subsequent makeup progress acquisition.
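For illustration, the Gaussian smoothing described above, using the 9×9 kernel and zero sigma values mentioned in the text (face_area_image is assumed to hold the face area image):

```python
import cv2

# 9x9 Gaussian kernel; sigmaX = sigmaY = 0 lets OpenCV derive the sigmas from the kernel size.
face_area_image = cv2.GaussianBlur(face_area_image, (9, 9), sigmaX=0, sigmaY=0)
```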
After the face area image corresponding to the initial frame image is obtained in the above manner, the target area image corresponding to the specific makeup is extracted from the face area image in step S3.
S3: Acquiring, with the makeup mask map as a reference, a first target area image for makeup from the face region image corresponding to the initial frame image.
A preset type of makeup such as blush is applied to fixed areas of the face, for example blush is applied in specific areas such as the nose area, the cheek areas on both sides and the chin area. Therefore, the specific areas that need makeup can be extracted directly from the face area image, which avoids interference from irrelevant areas and improves the accuracy of makeup progress detection.
The server obtains the first target area image specifically through the operations of steps S30 to S32, including:
s30: and respectively converting the makeup mask map and the facial area image into a binary image.
S31: and performing AND operation on the binary image corresponding to the makeup mask image and the binary image corresponding to the face area image to obtain a first mask image corresponding to the intersection area of the makeup mask image and the face area image.
An AND operation is performed on the pixel values of pixel points with the same coordinates in the binary image corresponding to the makeup mask map and the binary image corresponding to the face region image. Since only the pixel values inside the makeup areas of the makeup mask map are non-zero and the pixels in all other areas are zero, the first mask image obtained by this operation corresponds to cutting each makeup area out of the face area image corresponding to the initial frame image.
In other embodiments of the present application, since the makeup mask map is generated based on the preset standard face image, the makeup area in the makeup mask map may not completely coincide with the area actually made up by the user in the initial frame image, thereby affecting the accuracy of the makeup progress detection. Therefore, before the binary image corresponding to the makeup mask image and the binary image corresponding to the face area image are subjected to the AND operation, the alignment operation can be performed on the makeup area in the makeup mask image and the corresponding area in the initial frame image.
Specifically, one or more first positioning points located on the outline of each makeup area in the makeup mask map are determined according to the standard face key points corresponding to the makeup mask map, i.e. the standard face key points corresponding to the preset standard face image. For any makeup area in the makeup mask map, it is first determined whether its outline contains standard face key points; if so, the standard face key points on the outline are taken as the first positioning points of that makeup area. If not, first positioning points located on the outline of the makeup area are generated from the standard face key points around the makeup area by linear transformation; specifically, the surrounding standard face key points may be translated up, down, left or right to obtain the first positioning points.
For example, for the nose area, the key point located at the nose may be shifted left by a certain number of pixels to obtain a point on the left nasal wing, and shifted right by a certain number of pixels to obtain a point on the right nasal wing. The nose key point, the point on the left nasal wing and the point on the right nasal wing are taken as the three first positioning points corresponding to the nose area.
In this embodiment of the present application, the number of the first positioning points corresponding to each makeup area may be a preset number, and the preset number may be 3 or 4.
After the first locating points corresponding to each makeup area in the makeup mask map are obtained in the mode, the second locating points corresponding to each first locating point are determined from the face area image corresponding to the initial frame image according to the first face key points corresponding to the initial frame image. Because the standard face key points corresponding to the makeup mask map and the first face key points corresponding to the initial frame image are obtained through the same detection model, the key points at different positions are provided with respective numbers. Therefore, for the first locating point belonging to the standard face key point, the first face key point with the same number as the standard face key point corresponding to the first locating point is determined from the first face key points corresponding to the initial frame image, and the determined first face key point is used as the second locating point corresponding to the first locating point. And for a first locating point obtained by performing linear transformation on the standard face key point, determining a first face key point corresponding to the first locating point from the first face key point corresponding to the initial frame image, and determining a point obtained by performing the same linear transformation on the first face key point as a second locating point corresponding to the first locating point.
After the second positioning points corresponding to the first positioning points are determined in the above manner, stretching the makeup mask map, and stretching each first positioning point to the corresponding position of each second positioning point, namely, the position of each first positioning point in the makeup mask map after stretching is the same as the position of the corresponding second positioning point.
In this way, the makeup areas in the makeup mask map can be aligned with the areas where the user actually applies makeup in the initial frame image, so that the first target area image for makeup can be accurately extracted from the initial frame image through the makeup mask map, which improves the accuracy of makeup progress detection.
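One way to realise the stretching described above is to estimate a transform from the paired positioning points and warp the makeup mask map with it; the use of an affine estimate here is an assumption for illustration, not the application's prescribed operation:

```python
import cv2
import numpy as np

def align_mask_to_face(mask_map, first_points, second_points, face_shape):
    """Sketch: stretch the makeup mask map so each first positioning point lands
    on its corresponding second positioning point (affine estimate assumed)."""
    src = np.asarray(first_points, dtype=np.float32)   # points on the mask map outlines
    dst = np.asarray(second_points, dtype=np.float32)  # matching points in the face area image
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)
    h, w = face_shape[:2]
    # Nearest-neighbour keeps the per-area fill values of the mask intact.
    return cv2.warpAffine(mask_map, matrix, (w, h), flags=cv2.INTER_NEAREST)
```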
After the makeup mask map is aligned with the initial frame image, the first mask image corresponding to the intersection area between the makeup mask map and the face area image of the initial frame image is obtained through the operation of step S31, and the first target area image corresponding to the initial frame image is then extracted in the manner of step S32.
S32: Performing an AND operation on the first mask image and the face region image corresponding to the initial frame image to obtain the first target area image corresponding to the initial frame image.
Because the first mask image is a binarized image, performing an AND operation on it and the face region image corresponding to the initial frame image cuts the images of the individual makeup areas out of that face region image, yielding the first target area image corresponding to the initial frame image.
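A sketch of steps S30-S32, assuming the (aligned) makeup mask map is a single-channel image and the face area image is in BGR; thresholding at zero is an assumed way of binarizing:

```python
import cv2

def extract_first_target_area(mask_map, face_area_image):
    """Sketch of S30-S32: binarize both images, AND them into the first mask image,
    then cut the makeup areas out of the face area image."""
    _, mask_bin = cv2.threshold(mask_map, 0, 255, cv2.THRESH_BINARY)
    face_gray = cv2.cvtColor(face_area_image, cv2.COLOR_BGR2GRAY)
    _, face_bin = cv2.threshold(face_gray, 0, 255, cv2.THRESH_BINARY)

    first_mask = cv2.bitwise_and(mask_bin, face_bin)  # intersection region (first mask image)
    # AND the first mask image with the face area image to keep only the makeup areas.
    return cv2.bitwise_and(face_area_image, face_area_image, mask=first_mask)
```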
In other embodiments of the application, because the makeup areas in the makeup mask map are not contiguous with one another, the makeup mask map may also be split into a plurality of sub-mask maps, each containing different makeup areas. The first target area image is then obtained from the face area image corresponding to the initial frame image using the split sub-mask maps. Specifically, this can be realised through the following steps S33-S37:
s33: the cosmetic mask map is split into a plurality of sub-mask maps, each of which includes at least one make-up area.
The makeup mask map comprises a plurality of mutually disconnected makeup areas, and these areas are split apart to obtain a plurality of sub-mask maps, where each sub-mask map may contain only one makeup area or more than one makeup area. The makeup areas contained in the different sub-mask maps differ from one another, and in each sub-mask map the pixel values of all pixel points outside the makeup areas are zero.
S34: each sub-mask map and the face region image are respectively converted into a binarized image.
S35: Performing an AND operation on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face area image, respectively, to obtain a sub-mask image corresponding to each sub-mask map.
For any sub-mask map, an AND operation is performed on the pixel values of pixel points with the same coordinates in the binarized image of the sub-mask map and the binarized image corresponding to the face region image. Since only the pixel values inside the makeup area of the sub-mask map are non-zero and all other areas are zero, the resulting sub-mask image corresponds to cutting the makeup area of that sub-mask map out of the face area image corresponding to the initial frame image.
In other embodiments of the application, since the makeup mask map is generated based on the preset standard face image and the sub-mask maps are split from the makeup mask map, the makeup area in a sub-mask map is likely not to coincide exactly with the area where the user actually applies makeup in the initial frame image, which affects the accuracy of makeup progress detection. Therefore, before the binarized image corresponding to a sub-mask map and the binarized image corresponding to the face area image are subjected to the AND operation, the makeup area in the sub-mask map can be aligned with the corresponding area in the initial frame image.
Specifically, one or more first positioning points located on the outline of the makeup area in the sub-mask map are determined according to the standard face key points corresponding to the makeup mask map. A second positioning point corresponding to each first positioning point is determined from the initial frame image according to the first face key points corresponding to the initial frame image. The sub-mask map is then stretched so that each first positioning point is stretched to the position of its corresponding second positioning point, i.e. after stretching, the position of each first positioning point in the sub-mask map is the same as that of the corresponding second positioning point.
In this way, the makeup areas in the sub-mask maps can be aligned with the areas where the user actually applies makeup in the initial frame image, so that the first target area image for makeup can be accurately extracted from the initial frame image through the sub-mask maps, which improves the accuracy of makeup progress detection. By splitting the makeup mask map into a plurality of sub-mask maps and aligning each with the initial frame image in the above manner, the alignment is more accurate than aligning the whole makeup mask map with the initial frame image directly.
S36: and performing AND operation on each sub-mask image and the initial frame image respectively to obtain a plurality of sub-target area images corresponding to the initial frame image.
S37: and merging the plurality of sub-target area images into a first target area image corresponding to the initial frame image.
For the current frame image, a second target area image corresponding to the current frame image may be obtained in the same manner. The face area image corresponding to the current frame image is converted into a binary image, and an AND operation is performed on the binary image corresponding to the makeup mask map and the binary image corresponding to the face area image of the current frame image, so as to obtain a mask image corresponding to the intersection area between the makeup mask map and the face area image of the current frame image. An AND operation is then performed on this mask image and the face area image corresponding to the current frame image to obtain the second target area image corresponding to the current frame image. Alternatively, an AND operation is performed on the binarized image corresponding to each sub-mask map and the binarized image corresponding to the face area image of the current frame image to obtain the sub-mask images corresponding to the intersection areas between the sub-mask maps and the face area image of the current frame image. An AND operation is then performed on each sub-mask image and the face area image corresponding to the current frame image, and the obtained sub-target area images are merged into the second target area image corresponding to the current frame image.
In other embodiments of the present application, it is considered that the edges of the makeup area in an actual makeup scene may not have a very sharp outline; for example, in a blush scene the color near the edges is lighter, so that the blush looks more natural and less abrupt. Therefore, after the first target area image and the second target area image are obtained through the above embodiments, boundary erosion processing is further performed on the makeup areas in the first target area image and the second target area image respectively, so that the boundaries of the makeup areas are blurred, the makeup areas in the two images are closer to the actual makeup range, and the accuracy of the makeup progress detection is further improved.
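A minimal sketch of the boundary erosion step follows; the elliptical kernel, its size and the iteration count are illustrative assumptions, not values given in the patent.

```python
import cv2

def erode_makeup_boundary(target_area_img, kernel_size=5, iterations=1):
    """Erode the makeup area so its hard boundary is softened before comparison."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.erode(target_area_img, kernel, iterations=iterations)
```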
The color spaces of the first target area image corresponding to the initial frame image and the second target area image corresponding to the current frame image obtained in the above manner are both the RGB color space. In the embodiments of the present application, the influence of preset types of makeup such as blush on the channel components of the color space was determined in advance through a large number of experiments, and it was found that the influence on the individual color channels of the RGB color space differs very little. The HSV color space is composed of three components, hue, saturation and value (brightness); when one component changes, the other two components do not change significantly, and compared with the RGB color space, a single channel component can be separated out of the HSV color space. Experiments are used to determine which of the brightness, hue and saturation components is most influenced by a preset type of makeup, and the most influenced channel component is configured in the server as the preset single channel component corresponding to that preset type of makeup. For a preset type of makeup such as blush, the corresponding preset single channel component may be the brightness component.
After the first target area image corresponding to the initial frame image and the second target area image corresponding to the current frame image are obtained in any mode, both the first target area image and the second target area image are converted into HSV color space from RGB color space. And separating a preset single-channel component from the HSV color space of the converted first target area image to obtain a first target area image only containing the preset single-channel component. And separating a preset single-channel component from the HSV color space of the converted second target area image to obtain a second target area image only comprising the preset single-channel component.
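The color-space step could be sketched as follows, assuming OpenCV images in BGR channel order and the brightness (Value) channel as the preset single channel for blush; both choices are assumptions drawn from the description above.

```python
import cv2

def extract_value_channel(target_area_img_bgr):
    """Convert a target area image to HSV and keep only the Value (brightness) channel."""
    hsv = cv2.cvtColor(target_area_img_bgr, cv2.COLOR_BGR2HSV)
    _, _, value = cv2.split(hsv)  # channels are H, S, V
    return value
```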
And then determining the current makeup progress corresponding to the current frame image according to the converted first target area image and second target area image.
Specifically, the absolute difference values of the preset single-channel component corresponding to pixel points at the same positions in the converted first target area image and second target area image are calculated. For example, if the preset type of makeup is blush, the absolute value of the difference in the brightness component between pixel points having the same coordinates in the converted first target area image and second target area image is calculated.
The area of the region where the specific makeup has been completed is then determined according to the absolute difference value corresponding to each pixel point. Specifically, the number of pixel points whose corresponding absolute difference value satisfies a preset makeup completion condition is counted. The preset makeup completion condition is that the absolute difference value corresponding to the pixel point is larger than a first preset threshold, which may be, for example, 7 or 8.
The counted number of pixel points satisfying the preset makeup completion condition is determined as the area of the region where the specific makeup has been completed. The total number of pixel points in all the makeup areas of the first target area image or the second target area image is counted and taken as the total area corresponding to all the makeup areas. The ratio between the area of the completed region and this total area is then calculated and determined as the user's current makeup progress for the specific makeup; that is, the ratio of the counted number of pixel points to the total number of pixel points in all the makeup areas of the first target area image gives the current makeup progress corresponding to the current frame image.
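A sketch of the progress calculation described above, assuming single-channel (Value) target area images, a region mask marking the makeup-area pixels, and the first preset threshold of 7 mentioned above; the names are illustrative.

```python
import numpy as np

def makeup_progress(first_channel, second_channel, region_mask, threshold=7):
    """Ratio of makeup-area pixels whose channel difference exceeds the threshold."""
    # Cast to a signed type so the subtraction does not wrap around on uint8 inputs.
    diff = np.abs(first_channel.astype(np.int16) - second_channel.astype(np.int16))
    in_region = region_mask > 0
    done = np.count_nonzero((diff > threshold) & in_region)   # completed-area pixels
    total = np.count_nonzero(in_region)                       # total makeup-area pixels
    return done / total if total > 0 else 0.0
```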
In other embodiments of the present application, in order to further improve accuracy of the makeup progress detection, the makeup areas in the first target area image and the second target area image are further aligned. Specifically, binarization processing is performed on the first target area image and the second target area image which only contain the preset single-channel component, that is, the values of the preset single-channel components corresponding to the pixels in the makeup areas of the first target area image and the second target area image are all modified to be 1, and the values of the preset single-channel components of the pixels at the rest positions are all modified to be 0. And obtaining a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image through binarization processing.
An AND operation is performed on the first binarization mask image and the second binarization mask image, i.e. an AND operation is performed on pixel points at the same position in the two mask images, to obtain a second mask image corresponding to the intersection region of the first target area image and the second target area image. The region in which the preset single-channel component of the pixel points in the second mask image is non-zero is the region where the first target area image and the second target area image coincide.
The face area image corresponding to the initial frame image and the face area image corresponding to the current frame image are obtained through the operation of step 102. An AND operation is performed on the second mask image and the face area image corresponding to the initial frame image to obtain a new first target area image corresponding to the initial frame image; an AND operation is performed on the second mask image and the face area image corresponding to the current frame image to obtain a new second target area image corresponding to the current frame image.
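This re-alignment step could be sketched as follows: binarize the two target area images, AND them to obtain the overlapping makeup region, and use that overlap as a mask to cut new target area images out of the two face area images. The variable names are illustrative and the OpenCV calls are an assumed implementation, not the patent's prescribed one.

```python
import cv2

def realign_target_areas(first_target, second_target, face_initial, face_current):
    """Keep only the makeup region present in both target area images."""
    first_bin = cv2.threshold(first_target, 0, 255, cv2.THRESH_BINARY)[1]
    second_bin = cv2.threshold(second_target, 0, 255, cv2.THRESH_BINARY)[1]
    # Second mask image: pixels where both target areas are non-zero.
    overlap_mask = cv2.bitwise_and(first_bin, second_bin)
    # Cut the new target area images out of the two face area images.
    new_first = cv2.bitwise_and(face_initial, face_initial, mask=overlap_mask)
    new_second = cv2.bitwise_and(face_current, face_current, mask=overlap_mask)
    return new_first, new_second
```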
Since the second mask image contains the makeup areas that coincide in the initial frame image and the current frame image, using the second mask image to cut a new first target area image and a new second target area image out of the initial frame image and the current frame image in the above manner makes the positions of the makeup areas in the two new images completely consistent. When the change of the makeup area in the current frame image relative to that in the initial frame image is compared to determine the makeup progress, it is thus ensured that the compared areas correspond exactly, which greatly improves the accuracy of the makeup progress detection.
The makeup areas in the initial frame image and the current frame image are aligned in the above manner, and after a new first target area image and a new second target area image are obtained, the current makeup progress corresponding to the current frame image is determined again through the operation of step 103.
After the current makeup progress is determined in any of the above manners, the server sends the current makeup progress to the user's terminal. After receiving the current makeup progress, the user's terminal displays it. The current makeup progress may be expressed as a ratio or a percentage, and the terminal may display it in the form of a progress bar.
During the user's makeup process, the makeup progress detection method provided by the embodiments of the present application detects, in real time, the makeup progress of each frame image after the first frame image relative to the first frame image, and displays the detected makeup progress to the user, so that the user can intuitively see his or her own makeup progress and makeup efficiency is improved.
In order to facilitate understanding of the methods provided by the embodiments of the present application, a description is given below with reference to the accompanying drawings. As shown in fig. 5, the faces in the initial frame image and the current frame image are aligned and cropped according to the initial frame image and its corresponding first face key points and the current frame image and its corresponding second face key points, respectively, and the two cropped face area images are then smoothed and denoised through the Laplacian algorithm. The makeup mask map is then aligned with the two face area images respectively, and the first target area image and the second target area image are cut out of the two face area images according to the makeup mask map. Boundary erosion processing is performed on the first target area image and the second target area image. The two images are then converted into images containing the preset single-channel component in the HSV color space. The first target area image and the second target area image are aligned once more, and the current makeup progress is finally calculated from them.
In the embodiments of the present application, the face key points are used to correct and crop the face area of the user in the video frames, which improves the accuracy of identifying the face area. The makeup area is determined from the face area image based on the face key points, and the makeup areas in the initial frame image and the current frame image are pixel-aligned, which improves the accuracy of makeup area identification. Aligning the makeup areas in the initial frame image and the current frame image reduces errors caused by positional differences of the makeup areas. When the makeup areas are extracted, disconnected makeup areas can be processed separately, which increases the accuracy of acquiring the makeup areas. Aligning the makeup areas in the makeup mask map with the makeup areas in the face area image ensures that the makeup areas all lie within the face area image and do not exceed the face boundary. Moreover, the method does not rely on deep learning and does not need to collect a large amount of data in advance; a real-time picture of the user's makeup is captured, the calculation is performed on the server side, and the detection result is returned to the user. Compared with a deep learning model inference scheme, the method consumes less computing cost in the algorithm processing stage and reduces the processing pressure of the server.
The embodiment of the application also provides a makeup progress detecting device, which is used for executing the makeup progress detecting method provided by any embodiment. As shown in fig. 6, the apparatus includes:
an acquisition module 201, configured to acquire at least one makeup area, and acquire an initial frame image and a current frame image of a makeup video of a user;
a generating module 202, configured to generate a makeup mask map according to the makeup area;
the progress determining module 203 is configured to determine a current makeup progress corresponding to the current frame image according to the makeup mask map, the initial frame image, and the current frame image.
A generating module 202, configured to draw a contour of each makeup area in a preset blank face image according to the position and shape of each makeup area, and fill pixels within each drawn contour to obtain the makeup mask map.
The progress determining module 203 is configured to obtain a first target area image for makeup from an initial frame image and obtain a second target area image for makeup from a current frame image with reference to the makeup mask image; and determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
The progress determining module 203 is configured to detect a first face key point corresponding to the initial frame image; acquiring a face region image corresponding to the initial frame image according to the first face key points; and taking the makeup mask map as a reference, and acquiring a first target area image for makeup from the face area image.
The progress determining module 203 is configured to convert the makeup mask map and the face area image into binary images respectively; perform an AND operation on the binary image corresponding to the makeup mask map and the binary image corresponding to the face area image to obtain a first mask image corresponding to the intersection area of the makeup mask map and the face area image; and perform an AND operation on the first mask image and the face area image corresponding to the initial frame image to obtain a first target area image corresponding to the initial frame image.
The progress determining module 203 is configured to determine one or more first positioning points located on the contour of each makeup area in the makeup mask map according to the standard face key points corresponding to the makeup mask map; determine a second positioning point corresponding to each first positioning point from the initial frame image according to the first face key points; and stretch the makeup mask map so that each first positioning point is moved to the position corresponding to its second positioning point.
The progress determining module 203 is configured to split the makeup mask map into a plurality of sub-mask maps, where each sub-mask map includes at least one makeup area; converting each sub-mask image and the face area image into a binarized image respectively; performing AND operation on the binarized image corresponding to each sub-mask image and the binarized image corresponding to the face area image respectively to obtain sub-mask images corresponding to each sub-mask image respectively; performing AND operation on each sub-mask image and the initial frame image respectively to obtain a plurality of sub-target area images corresponding to the initial frame image; and merging the plurality of sub-target area images into a first target area image corresponding to the initial frame image.
The progress determining module 203 is configured to determine, according to a standard face key point corresponding to a makeup mask map, one or more first positioning points located on a contour of a makeup area in a first sub-mask map, where the first sub-mask map is any one of a plurality of sub-mask maps; determining a second positioning point corresponding to each first positioning point from the initial frame image according to the first face key points; and stretching the first sub-mask map to stretch each first locating point to a position corresponding to each corresponding second locating point.
The progress determining module 203 is configured to convert the first target area image and the second target area image into images containing a preset single channel component in the HSV color space; and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and second target area image.
The progress determining module 203 is configured to calculate absolute difference values of preset single-channel components corresponding to pixel points with the same positions in the converted first target area image and second target area image respectively; counting the number of pixels of which the corresponding absolute value of the difference value meets the preset makeup completion condition; and calculating the ratio of the counted number of the pixels to the total number of the pixels in all the makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
The progress determining module 203 is configured to perform binarization processing on the first target area image and the second target area image respectively, so as to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image; perform an AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to the intersection area of the first target area image and the second target area image; acquire a face area image corresponding to the initial frame image and a face area image corresponding to the current frame image; perform an AND operation on the second mask image and the face area image corresponding to the initial frame image to obtain a new first target area image corresponding to the initial frame image; and perform an AND operation on the second mask image and the face area image corresponding to the current frame image to obtain a new second target area image corresponding to the current frame image.
An image erosion module is used for performing boundary erosion processing on the makeup areas in the first target area image and the second target area image respectively.
The progress determining module 203 is configured to rotationally correct the initial frame image and the first face key point according to the first face key point; according to the corrected first face key points, an image containing a face area is intercepted from the corrected initial frame image; and scaling the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
The progress determining module 203 is configured to determine a left-eye center coordinate and a right-eye center coordinate according to a left-eye key point and a right-eye key point included in the first face key point; according to the left eye center coordinate and the right eye center coordinate, determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image; and carrying out rotation correction on the initial frame image and the first face key points according to the rotation angle and the rotation center point coordinates.
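A sketch of the rotation correction performed by the progress determining module 203, assuming OpenCV and that the rotation center is the midpoint between the two eye centers (the patent only states that the rotation angle and center are derived from the eye-center coordinates); left_eye_center here denotes the eye on the image-left side.

```python
import cv2
import numpy as np

def rotate_to_level_eyes(image, keypoints, left_eye_center, right_eye_center):
    """Rotate the image (and its key points) so the line between the eyes is horizontal."""
    dx = right_eye_center[0] - left_eye_center[0]
    dy = right_eye_center[1] - left_eye_center[1]
    angle = np.degrees(np.arctan2(dy, dx))  # rotation angle in degrees
    center = ((left_eye_center[0] + right_eye_center[0]) / 2.0,
              (left_eye_center[1] + right_eye_center[1]) / 2.0)
    matrix = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image.shape[:2]
    rotated = cv2.warpAffine(image, matrix, (w, h))
    # Apply the same transform to the key points (homogeneous coordinates).
    pts = np.hstack([np.asarray(keypoints, dtype=np.float64),
                     np.ones((len(keypoints), 1))])
    rotated_pts = pts @ matrix.T
    return rotated, rotated_pts
```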
The progress determining module 203 is configured to determine a minimum abscissa value, a minimum ordinate value, a maximum abscissa value, and a maximum ordinate value from the corrected first face key point; determining a cut-out frame corresponding to the face region in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value; and according to the cut-out frame, cutting out an image containing the face area from the corrected initial frame image.
The progress determining module 203 is configured to enlarge the cut-out frame by a preset multiple; and, according to the enlarged cut-out frame, cut out an image containing the face area from the corrected initial frame image.
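A sketch of this crop step: a bounding box taken from the corrected key points, enlarged by a preset multiple about its center, and used to cut out the face region. The enlargement factor of 1.2 and the clamping to the image border are illustrative assumptions.

```python
import numpy as np

def crop_face_region(image, keypoints, enlarge=1.2):
    """Crop the face region from the bounding box of the key points, enlarged by a factor."""
    keypoints = np.asarray(keypoints, dtype=np.float64)
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    x_min, x_max = xs.min(), xs.max()
    y_min, y_max = ys.min(), ys.max()
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half_w = (x_max - x_min) / 2.0 * enlarge
    half_h = (y_max - y_min) / 2.0 * enlarge
    h, w = image.shape[:2]
    # Clamp the enlarged box to the image border before slicing.
    x0, x1 = int(max(cx - half_w, 0)), int(min(cx + half_w, w))
    y0, y1 = int(max(cy - half_h, 0)), int(min(cy + half_h, h))
    return image[y0:y1, x0:x1]
```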
The progress determining module 203 is configured to perform scaling and translation processing on the corrected first face key point according to the size of the image including the face area and the preset size.
The makeup progress detecting device provided by the above embodiment of the present application is based on the same inventive concept as the makeup progress detecting method provided by the embodiments of the present application, and has the same beneficial effects as the method it adopts, runs or implements.
The embodiment of the application also provides an electronic device for executing the makeup progress detection method. Referring to fig. 7, a schematic diagram of an electronic device according to some embodiments of the present application is shown. As shown in fig. 7, the electronic device 8 includes: a processor 800, a memory 801, a bus 802 and a communication interface 803, the processor 800, the communication interface 803 and the memory 801 being connected by the bus 802; the memory 801 stores a computer program executable on the processor 800, and the processor 800 executes the makeup progress detection method according to any one of the foregoing embodiments of the present application when the computer program is executed.
The memory 801 may include a high-speed random access memory (RAM: Random Access Memory), and may further include a non-volatile memory, such as at least one magnetic disk memory. The communication connection between the device network element and at least one other network element is achieved through at least one communication interface 803 (which may be wired or wireless); the Internet, a wide area network, a local area network, a metropolitan area network, etc. may be used.
Bus 802 may be an ISA bus, a PCI bus, or an EISA bus, among others. The buses may be classified as address buses, data buses, control buses, etc. The memory 801 is configured to store a program, and the processor 800 executes the program after receiving an execution instruction, and the method for detecting a makeup progress disclosed in any of the foregoing embodiments of the present application may be applied to the processor 800 or implemented by the processor 800.
The processor 800 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the methods described above may be performed by integrated logic circuitry in hardware or by instructions in the form of software in the processor 800. The processor 800 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read only memory, a programmable read only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 801, and the processor 800 reads the information in the memory 801 and completes the steps of the above method in combination with its hardware.
The electronic device provided by the embodiments of the present application is based on the same inventive concept as the makeup progress detection method provided by the embodiments of the present application, and has the same beneficial effects as the method it adopts, runs or implements.
The present embodiment also provides a computer readable storage medium corresponding to the method for detecting a makeup progress provided in the foregoing embodiment, referring to fig. 8, the computer readable storage medium is shown as an optical disc 30, on which a computer program (i.e. a program product) is stored, where the computer program, when executed by a processor, performs the method for detecting a makeup progress provided in any of the foregoing embodiments.
It should be noted that examples of the computer readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical or magnetic storage medium, which will not be described in detail herein.
The computer-readable storage medium provided by the above-described embodiments of the present application is based on the same inventive concept as the makeup progress detection method provided by the embodiments of the present application, and has the same advantageous effects as the method adopted, run or implemented by the application program stored therein.
It should be noted that:
in the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present application may be practiced without these specific details. In some instances, well-known structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding the understanding of one or more of the various inventive aspects. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the present application and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
The foregoing is merely a preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A cosmetic progress detection method, characterized by comprising:
acquiring at least one makeup area, and acquiring an initial frame image and a current frame image of a user makeup video;
generating a makeup mask map according to the makeup area;
determining a current makeup progress corresponding to the current frame image according to the makeup mask map, the initial frame image and the current frame image;
Wherein, according to the makeup mask map, the initial frame image and the current frame image, determining a current makeup progress corresponding to the current frame image includes: taking the makeup mask map as a reference, and acquiring a first target area image for makeup from the initial frame image; the first target area image is used for determining the current makeup progress;
the step of obtaining a first target area image for makeup from the initial frame image by taking the makeup mask image as a reference includes: obtaining a first mask image corresponding to an intersection area between the makeup mask image and a face area image in the initial frame image; and performing AND operation on the first mask image and the face region image in the initial frame image to obtain a first target region image corresponding to the initial frame image.
2. The method according to claim 1, wherein the determining the current makeup progress corresponding to the current frame image according to the makeup mask map, the initial frame image, and the current frame image includes:
taking the makeup mask map as a reference, and acquiring a second target area image for makeup from the current frame image;
And determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image.
3. The method according to claim 2, wherein the acquiring the first target area image for makeup from the initial frame image with reference to the makeup mask map includes:
detecting a first face key point corresponding to the initial frame image;
acquiring a face region image corresponding to the initial frame image according to the first face key points;
and taking the makeup mask map as a reference, and acquiring a first target area image for makeup from the face area image.
4. A method as claimed in claim 3, wherein said obtaining a first target area image of makeup from said face area image with reference to said makeup mask map comprises:
respectively converting the makeup mask map and the face region image into binary images;
and performing an AND operation on the binary image corresponding to the makeup mask image and the binary image corresponding to the face region image to obtain a first mask image corresponding to an intersection region of the makeup mask image and the face region image.
5. The method according to claim 4, wherein before performing the AND operation on the binary image corresponding to the makeup mask map and the binary image corresponding to the face region image, further comprising:
determining one or more first positioning points positioned on the outline of each makeup area in the makeup mask map according to the standard face key points corresponding to the makeup mask map;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the makeup mask map, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
6. A method as claimed in claim 3, wherein said obtaining a first target area image of makeup from said face area image with reference to said makeup mask map comprises:
splitting the makeup mask map into a plurality of sub-mask maps, wherein each sub-mask map comprises at least one makeup area;
converting each sub-mask image and the face region image into a binarized image respectively;
performing AND operation on the binarized image corresponding to each sub-mask image and the binarized image corresponding to the face region image respectively to obtain sub-mask images corresponding to each sub-mask image;
Performing AND operation on each sub-mask image and the face area image respectively to obtain a plurality of sub-target area images corresponding to the initial frame image;
and merging the plurality of sub-target area images into a first target area image corresponding to the initial frame image.
7. The method according to claim 6, wherein before performing an AND operation on the binarized image corresponding to each of the sub-mask images and the binarized image corresponding to the face region image, the method further comprises:
determining one or more first positioning points on the outline of the makeup area in a first sub-mask map according to standard face key points corresponding to the makeup mask map, wherein the first sub-mask map is any one of the sub-mask maps;
determining a second positioning point corresponding to each first positioning point from the face region image according to the first face key points;
and stretching the first sub-mask map, and stretching each first positioning point to a position corresponding to each corresponding second positioning point.
8. The method according to claim 2, wherein determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image includes:
Respectively converting the first target area image and the second target area image into images containing preset single-channel components in an HSV color space;
and determining the current makeup progress corresponding to the current frame image according to the converted first target area image and the second target area image.
9. The method of claim 8, wherein determining the current makeup progress corresponding to the current frame image from the converted first target area image and the second target area image comprises:
respectively calculating the absolute value of the difference value of the preset single-channel component corresponding to the pixel points with the same positions in the converted first target area image and the converted second target area image;
counting the number of pixels of which the corresponding absolute value of the difference value meets the preset makeup completion condition;
and calculating the ratio of the counted number of the pixels to the total number of the pixels in all the makeup areas in the first target area image to obtain the current makeup progress corresponding to the current frame image.
10. The method according to any one of claims 2 to 9, wherein before determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image, further comprises:
Respectively carrying out binarization processing on the first target area image and the second target area image to obtain a first binarization mask image corresponding to the first target area image and a second binarization mask image corresponding to the second target area image;
performing an AND operation on the first binarization mask image and the second binarization mask image to obtain a second mask image corresponding to an intersection area of the first target area image and the second target area image;
acquiring a face area image corresponding to the initial frame image and a face area image corresponding to the current frame image;
performing an AND operation on the second mask image and the face region image corresponding to the initial frame image to obtain a new first target region image corresponding to the initial frame image;
and performing AND operation on the second mask image and the face region image corresponding to the current frame image to obtain a new second target region image corresponding to the current frame image.
11. The method according to any one of claims 2 to 9, wherein before determining the current makeup progress corresponding to the current frame image according to the first target area image and the second target area image, further comprises:
And respectively carrying out boundary erosion treatment on the makeup areas in the first target area image and the second target area image.
12. The method according to claim 3, wherein the obtaining a face area image corresponding to the initial frame image according to the first face keypoints includes:
according to the first face key points, carrying out rotation correction on the initial frame image and the first face key points;
according to the corrected first face key points, an image containing a face area is intercepted from the corrected initial frame image;
and scaling the image containing the face area to a preset size to obtain a face area image corresponding to the initial frame image.
13. The method according to claim 12, wherein the method further comprises:
and scaling and translating the corrected first face key points according to the size of the image containing the face area and the preset size.
14. The method of claim 12, wherein the performing rotational correction on the initial frame image and the first face keypoints based on the first face keypoints comprises:
Respectively determining a left eye center coordinate and a right eye center coordinate according to a left eye key point and a right eye key point which are included by the first face key point;
determining a rotation angle and a rotation center point coordinate corresponding to the initial frame image according to the left eye center coordinate and the right eye center coordinate;
and carrying out rotation correction on the initial frame image and the first face key point according to the rotation angle and the rotation center point coordinate.
15. The method according to claim 12, wherein the capturing an image including a face region from the corrected initial frame image according to the corrected first face keypoints includes:
determining a minimum abscissa value, a minimum ordinate value, a maximum abscissa value and a maximum ordinate value from the corrected first face key point;
determining a interception frame corresponding to a face region in the corrected initial frame image according to the minimum abscissa value, the minimum ordinate value, the maximum abscissa value and the maximum ordinate value;
and according to the interception frame, intercepting an image containing the face area from the corrected initial frame image.
16. The method of claim 15, wherein the method further comprises:
amplifying the intercepting frame by a preset multiple;
and according to the enlarged intercepting frame, intercepting an image containing the face area from the corrected initial frame image.
17. The method of claim 1, wherein the generating a makeup mask map from the makeup area comprises:
drawing the outline of each makeup area in a preset blank face image according to the position and the shape of each makeup area;
and filling pixels within each drawn outline to obtain the makeup mask map.
18. A makeup progress detecting device, characterized by comprising:
the acquisition module is used for acquiring at least one makeup area, and acquiring an initial frame image and a current frame image of a user makeup video;
the generating module is used for generating a makeup mask map according to the makeup area;
the progress determining module is used for determining the current makeup progress corresponding to the current frame image according to the makeup mask map, the initial frame image and the current frame image;
wherein, according to the makeup mask map, the initial frame image and the current frame image, determining a current makeup progress corresponding to the current frame image includes: taking the makeup mask map as a reference, and acquiring a first target area image for makeup from the initial frame image; the first target area image is used for determining the current makeup progress;
The step of obtaining a first target area image for makeup from the initial frame image by taking the makeup mask image as a reference includes: obtaining a first mask image corresponding to an intersection area between the makeup mask image and a face area image in the initial frame image; and performing AND operation on the first mask image and the face region image in the initial frame image to obtain a first target region image corresponding to the initial frame image.
19. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor runs the computer program to implement the method of any one of claims 1-17.
20. A computer readable storage medium having stored thereon a computer program, wherein the program is executed by a processor to implement the method of any of claims 1-17.
CN202111017071.9A 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium Active CN113837020B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111017071.9A CN113837020B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111017071.9A CN113837020B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113837020A CN113837020A (en) 2021-12-24
CN113837020B true CN113837020B (en) 2024-02-02

Family

ID=78961696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111017071.9A Active CN113837020B (en) 2021-08-31 2021-08-31 Cosmetic progress detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113837020B (en)

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004272849A (en) * 2003-03-12 2004-09-30 Pola Chem Ind Inc Judgment method of cosmetic effect
CN101556699A (en) * 2008-11-07 2009-10-14 浙江大学 Face-based facial aging image synthesis method
JP2011008397A (en) * 2009-06-24 2011-01-13 Sony Ericsson Mobilecommunications Japan Inc Makeup support apparatus, makeup support method, makeup support program and portable terminal device
CN104834800A (en) * 2015-06-03 2015-08-12 上海斐讯数据通信技术有限公司 Beauty making-up method, system and device
TWI573100B (en) * 2016-06-02 2017-03-01 Zong Jing Investment Inc Method for automatically putting on face-makeup
CN107358207A (en) * 2017-07-14 2017-11-17 重庆大学 A kind of method for correcting facial image
CN107545220A (en) * 2016-06-29 2018-01-05 中兴通讯股份有限公司 A kind of face identification method and device
CN108108677A (en) * 2017-12-12 2018-06-01 重庆邮电大学 One kind is based on improved CNN facial expression recognizing methods
CN108154121A (en) * 2017-12-25 2018-06-12 深圳市美丽控电子商务有限公司 Cosmetic auxiliary method, smart mirror and storage medium based on smart mirror
CN108256432A (en) * 2017-12-20 2018-07-06 歌尔股份有限公司 A kind of method and device for instructing makeup
CN108292423A (en) * 2015-12-25 2018-07-17 松下知识产权经营株式会社 Local dressing producing device, local dressing utilize program using device, local dressing production method, local dressing using method, local dressing production process and local dressing
CN108765268A (en) * 2018-05-28 2018-11-06 京东方科技集团股份有限公司 A kind of auxiliary cosmetic method, device and smart mirror
CN109063671A (en) * 2018-08-20 2018-12-21 三星电子(中国)研发中心 Method and device for intelligent cosmetic
CN109584180A (en) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN110390632A (en) * 2019-07-22 2019-10-29 北京七鑫易维信息技术有限公司 Image processing method, device, storage medium and terminal based on dressing template
CN110543875A (en) * 2019-09-25 2019-12-06 西安理工大学 Auxiliary make-up device
CN110663063A (en) * 2017-05-25 2020-01-07 华为技术有限公司 Method and device for evaluating facial makeup
CN111066060A (en) * 2017-07-13 2020-04-24 资生堂美洲公司 Virtual face makeup removal and simulation, fast face detection, and landmark tracking
CN111291642A (en) * 2020-01-20 2020-06-16 深圳市商汤科技有限公司 Dressing method, dressing device, electronic equipment and storage medium
CN111369644A (en) * 2020-02-28 2020-07-03 北京旷视科技有限公司 Face image makeup trial processing method and device, computer equipment and storage medium
CN111783511A (en) * 2019-10-31 2020-10-16 北京沃东天骏信息技术有限公司 Beauty treatment method, device, terminal and storage medium
CN112232175A (en) * 2020-10-13 2021-01-15 南京领行科技股份有限公司 Method and device for identifying state of operation object
CN112507766A (en) * 2019-09-16 2021-03-16 珠海格力电器股份有限公司 Face image extraction method, storage medium and terminal equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160357578A1 (en) * 2015-06-03 2016-12-08 Samsung Electronics Co., Ltd. Method and device for providing makeup mirror
CN109508581A (en) * 2017-09-15 2019-03-22 丽宝大数据股份有限公司 Biological information analytical equipment and its blush analysis method

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004272849A (en) * 2003-03-12 2004-09-30 Pola Chem Ind Inc Judgment method of cosmetic effect
CN101556699A (en) * 2008-11-07 2009-10-14 浙江大学 Face-based facial aging image synthesis method
JP2011008397A (en) * 2009-06-24 2011-01-13 Sony Ericsson Mobilecommunications Japan Inc Makeup support apparatus, makeup support method, makeup support program and portable terminal device
CN104834800A (en) * 2015-06-03 2015-08-12 上海斐讯数据通信技术有限公司 Beauty making-up method, system and device
CN108292423A (en) * 2015-12-25 2018-07-17 松下知识产权经营株式会社 Local dressing producing device, local dressing utilize program using device, local dressing production method, local dressing using method, local dressing production process and local dressing
TWI573100B (en) * 2016-06-02 2017-03-01 Zong Jing Investment Inc Method for automatically putting on face-makeup
CN107545220A (en) * 2016-06-29 2018-01-05 中兴通讯股份有限公司 A kind of face identification method and device
CN110663063A (en) * 2017-05-25 2020-01-07 华为技术有限公司 Method and device for evaluating facial makeup
CN111066060A (en) * 2017-07-13 2020-04-24 资生堂美洲公司 Virtual face makeup removal and simulation, fast face detection, and landmark tracking
CN107358207A (en) * 2017-07-14 2017-11-17 重庆大学 A kind of method for correcting facial image
CN108108677A (en) * 2017-12-12 2018-06-01 重庆邮电大学 One kind is based on improved CNN facial expression recognizing methods
CN108256432A (en) * 2017-12-20 2018-07-06 歌尔股份有限公司 A kind of method and device for instructing makeup
CN108154121A (en) * 2017-12-25 2018-06-12 深圳市美丽控电子商务有限公司 Cosmetic auxiliary method, smart mirror and storage medium based on smart mirror
CN108765268A (en) * 2018-05-28 2018-11-06 京东方科技集团股份有限公司 A kind of auxiliary cosmetic method, device and smart mirror
CN109063671A (en) * 2018-08-20 2018-12-21 三星电子(中国)研发中心 Method and device for intelligent cosmetic
CN109584180A (en) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN110390632A (en) * 2019-07-22 2019-10-29 北京七鑫易维信息技术有限公司 Image processing method, device, storage medium and terminal based on dressing template
CN112507766A (en) * 2019-09-16 2021-03-16 珠海格力电器股份有限公司 Face image extraction method, storage medium and terminal equipment
CN110543875A (en) * 2019-09-25 2019-12-06 西安理工大学 Auxiliary make-up device
CN111783511A (en) * 2019-10-31 2020-10-16 北京沃东天骏信息技术有限公司 Beauty treatment method, device, terminal and storage medium
CN111291642A (en) * 2020-01-20 2020-06-16 深圳市商汤科技有限公司 Dressing method, dressing device, electronic equipment and storage medium
CN111369644A (en) * 2020-02-28 2020-07-03 北京旷视科技有限公司 Face image makeup trial processing method and device, computer equipment and storage medium
CN112232175A (en) * 2020-10-13 2021-01-15 南京领行科技股份有限公司 Method and device for identifying state of operation object

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Facial Keypoints Detection;Shenghao Shi 等;《arXiv》;1-28 *
刘家远 等.基于视频的虚拟试妆应用研究.《系统仿真学报》.2018,第30卷(第11期),4195-4202. *
基于视频的虚拟试妆应用研究;刘家远 等;《系统仿真学报》;第30卷(第11期);4195-4202 *

Also Published As

Publication number Publication date
CN113837020A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
US10783354B2 (en) Facial image processing method and apparatus, and storage medium
CN109359575B (en) Face detection method, service processing method, device, terminal and medium
CN109952594B (en) Image processing method, device, terminal and storage medium
KR20200043448A (en) Method and apparatus for image processing, and computer readable storage medium
CN110852310B (en) Three-dimensional face recognition method and device, terminal equipment and computer readable medium
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
CN111445410A (en) Texture enhancement method, device and equipment based on texture image and storage medium
CN106326823B (en) Method and system for obtaining head portrait in picture
CN108416291B (en) Face detection and recognition method, device and system
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN109190617B (en) Image rectangle detection method and device and storage medium
CN111814564A (en) Multispectral image-based living body detection method, device, equipment and storage medium
US11315360B2 (en) Live facial recognition system and method
CN112396050B (en) Image processing method, device and storage medium
WO2022261828A1 (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN112487922A (en) Multi-mode face in-vivo detection method and system
WO2017173578A1 (en) Image enhancement method and device
CN109919128B (en) Control instruction acquisition method and device and electronic equipment
CN115690130B (en) Image processing method and device
CN115731591A (en) Method, device and equipment for detecting makeup progress and storage medium
CN113837020B (en) Cosmetic progress detection method, device, equipment and storage medium
CN113837019A (en) Cosmetic progress detection method, device, equipment and storage medium
CN115222621A (en) Image correction method, electronic device, storage medium, and computer program product
CN113837017B (en) Cosmetic progress detection method, device, equipment and storage medium
JP3578321B2 (en) Image normalizer

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant