CN113344837B - Face image processing method and device, computer readable storage medium and terminal


Info

Publication number
CN113344837B
CN113344837B (application CN202110722304.9A)
Authority
CN
China
Prior art keywords
eye shadow
face
face image
processed
makeup
Prior art date
Legal status
Active
Application number
CN202110722304.9A
Other languages
Chinese (zh)
Other versions
CN113344837A (en)
Inventor
谢富名
Current Assignee
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Spreadtrum Communications Shanghai Co Ltd
Priority to CN202110722304.9A
Publication of CN113344837A
Priority to PCT/CN2021/141467 (WO2023273247A1)
Application granted
Publication of CN113344837B
Legal status: Active

Classifications

    • G — PHYSICS
        • G06 — COMPUTING; CALCULATING OR COUNTING
            • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 5/00 — Image enhancement or restoration
                    • G06T 5/50 — by the use of more than one image, e.g. averaging, subtraction
                • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
                    • G06T 2207/20 — Special algorithmic details
                        • G06T 2207/20212 — Image combination
                            • G06T 2207/20221 — Image fusion; Image merging
                    • G06T 2207/30 — Subject of image; Context of image processing
                        • G06T 2207/30196 — Human being; Person
                            • G06T 2207/30201 — Face

Abstract

A face image processing method and device, a computer-readable storage medium, and a terminal are provided. The face image processing method includes: performing face recognition on a face image to be processed to obtain an eye shadow region of interest; performing pose transformation on an acquired eye shadow makeup template to align it to the eye shadow region of interest; determining an eye shadow makeup trial effect intensity coefficient according to face attribute information of the face image to be processed; and fusing the pose-transformed eye shadow makeup template with the face image to be processed using the intensity coefficient to obtain an eye shadow makeup trial image. This scheme improves the fusion of the eye shadow makeup template with the face image to be processed, and thus the naturalness of the resulting eye shadow makeup trial image.

Description

Face image processing method and device, computer readable storage medium and terminal
Technical Field
The embodiments of the invention relate to the field of image processing, and in particular to a face image processing method and device, a computer-readable storage medium, and a terminal.
Background
With the steady growth of the makeup industry and the popularization of Artificial Intelligence (AI) technology, virtual face makeup is leading a transformation of the industry, and makeup consumption has huge potential. In face makeup, eye shadow is the most important part of eye makeup, making the eyes more expressive and lively. Virtual makeup trial has been adopted to reduce the marketing cost of cosmetics; however, the effect of current virtual eye shadow trials is unnatural, which degrades the user experience.
Disclosure of Invention
The technical problem solved by the embodiments of the invention is that the virtual eye shadow makeup trial effect is unnatural, which degrades the user experience.
To solve this problem, an embodiment of the present invention provides a face image processing method, including: performing face recognition on a face image to be processed to obtain an eye shadow region of interest; performing pose transformation on an acquired eye shadow makeup template to align it to the eye shadow region of interest; determining an eye shadow makeup trial effect intensity coefficient according to face attribute information of the face image to be processed; and fusing the pose-transformed eye shadow makeup template with the face image to be processed using the intensity coefficient to obtain an eye shadow makeup trial image.
Optionally, performing face recognition on the face image to be processed to obtain an eye shadow region of interest includes: performing face recognition on the face image to be processed to obtain position information of face key points; and determining the eye shadow region of interest according to the position information of the face key points.
Optionally, determining the eye shadow region of interest according to the position information of the face key points includes: determining the eye shadow region of interest according to the position information of the eye key points and of the eyebrow key points among the face key points, wherein the eye shadow region of interest lies within a preset area around the eye key points and does not cross the eyebrow key points.
Optionally, performing pose transformation on the acquired eye shadow makeup template to align it to the eye shadow region of interest includes: calculating pose transformation information according to the standard face pose of the acquired eye shadow makeup template and the face pose of the face image to be processed; and performing pose transformation on the eye shadow makeup template according to the pose transformation information, so that the pose-transformed template is aligned to the eye shadow region of interest.
Optionally, calculating the pose transformation information according to the face pose of the acquired eye shadow makeup template and the face pose of the face image to be processed includes: generating eye shadow auxiliary key points at positions a set distance away from the eye key points among the face key points according to the position information of the face key points, the auxiliary key points being used to delimit the boundary of the eye shadow makeup; and calculating, using a preset triangulated array, pose transformation information that aligns each face key point of the standard face corresponding to the eye shadow makeup template with the corresponding face key point in the face image to be processed, the pose transformation information representing the offset between each face key point of the standard face and its counterpart in the face image to be processed.
Optionally, calculating the pose transformation information that aligns each face key point of the standard face corresponding to the eye shadow makeup template with the corresponding face key point in the face image to be processed includes: when the size of the standard face corresponding to the eye shadow makeup template differs from the size of the face image to be processed, transforming the size of either the standard face or the face image to be processed so that the two sizes match; and calculating the pose transformation information based on the transformed standard face image or face image to be processed.
Optionally, performing the pose transformation on the eye shadow makeup template according to the pose transformation information includes: computing the pose-transformed eye shadow makeup template from the pose transformation information, in combination with the eye shadow makeup template, using a deformation interpolation algorithm.
Optionally, fusing the pose-transformed eye shadow makeup template with the face image to be processed using the eye shadow makeup trial effect intensity coefficient includes: fusing the two images according to their channel information in the RGB color space, weighted by the eye shadow makeup trial effect intensity coefficient, whose value range is [0, 1].
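The claim above leaves the exact fusion formula open. A minimal sketch of one common approach, per-pixel alpha blending in the RGB domain with the template's alpha scaled by the trial-effect intensity coefficient, might look like this (the array shapes and the blending rule are assumptions, not taken from the patent):

```python
import numpy as np

def fuse_eyeshadow(face, template_rgb, template_alpha, k):
    """Blend a pose-aligned eye shadow template into a face image.

    face:           HxWx3 uint8 RGB face image
    template_rgb:   HxWx3 uint8 RGB eye shadow color
    template_alpha: HxW float in [0, 1], 0 outside the eye shadow region
    k:              trial-effect intensity coefficient, clipped to [0, 1]
    """
    k = float(np.clip(k, 0.0, 1.0))
    w = (template_alpha * k)[..., None]  # per-pixel blend weight
    fused = (1.0 - w) * face.astype(np.float32) + w * template_rgb.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

With k = 0 the face is returned unchanged; with k = 1 the template's alpha fully determines the blend, matching the claim that a larger coefficient gives a more obvious eye shadow effect.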
Optionally, the face image processing method further includes: after the eye shadow makeup trial image is obtained, judging whether the channel information of the eye shadow makeup trial image in the RGB color space falls within a set range, and if not, discarding the eye shadow makeup trial image.
Optionally, the eye shadow makeup template is determined based on a user's selection, or according to a face recognition result.
Optionally, determining the eye shadow makeup template according to the face recognition result includes: obtaining, from the face recognition result, skin color information of the face in the face image to be processed, and selecting an eye shadow makeup template matching that skin color, the skin color information including skin tone brightness and/or skin tone color.
Optionally, before performing face recognition on the face image to be processed to obtain the eye shadow region of interest, the method further includes: scaling the face image to be processed, and calculating whether the distance between the scaled face and the terminal device that captured the image satisfies a set distance; if so, performing face recognition on the face image to be processed to obtain the eye shadow region of interest; if not, discarding the face image to be processed and acquiring the next image as the face image to be processed.
Optionally, the eye shadow makeup template is obtained as follows: acquiring a drawn eye shadow image sample and a standard face image; and extracting the channel information of the eye shadow makeup template in the RGB color space from the channel information of the eye shadow image sample and of the standard face image in the RGB color space.
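The channel-extraction step can be pictured as taking, per RGB channel, the difference the drawn eye shadow adds on top of the standard face. The patent does not state the exact formula, so the following is a hedged sketch under that assumption (the threshold is invented for illustration):

```python
import numpy as np

def extract_template(sample, standard_face, threshold=8):
    """Derive an eye shadow template from a drawn sample over a standard face.

    Stores, per RGB channel, the signed difference the eye shadow adds to the
    standard face, plus a mask of where the sample visibly differs. This is
    one plausible reading of the claim, not the patent's exact procedure.
    """
    diff = sample.astype(np.int16) - standard_face.astype(np.int16)  # HxWx3
    mask = np.abs(diff).max(axis=2) > threshold                       # HxW bool
    diff[~mask] = 0  # keep channel information only where eye shadow was drawn
    return diff, mask
```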
An embodiment of the present invention further provides a face image processing apparatus, including: an eye shadow region-of-interest determining unit, configured to perform face recognition on the face image to be processed to obtain an eye shadow region of interest; a pose transformation calculation unit, configured to calculate pose transformation information according to the standard face pose of the acquired eye shadow makeup template and the face pose of the face image to be processed; an alignment unit, configured to perform pose transformation on the eye shadow makeup template according to the pose transformation information, so that the pose-transformed template is aligned to the eye shadow region of interest; and a fusion unit, configured to fuse the pose-transformed eye shadow makeup template with the face image to be processed using an eye shadow makeup trial effect intensity coefficient to obtain an eye shadow makeup trial image, the coefficient being determined according to face attribute information of the face image to be processed.
An embodiment of the present invention further provides a computer-readable storage medium, which is a non-volatile or non-transitory storage medium storing a computer program that, when executed by a processor, performs the steps of any of the above face image processing methods.
An embodiment of the present invention further provides a terminal, including a memory and a processor, the memory storing a computer program runnable on the processor, and the processor, when running the computer program, performing the steps of any of the above face image processing methods.
Compared with the prior art, the technical solutions of the embodiments of the invention have the following beneficial effects:
Face recognition is performed on a face image to be processed to obtain an eye shadow region of interest; the eye shadow makeup template is aligned to that region; an eye shadow makeup trial effect intensity coefficient is determined according to face attribute information of the face image to be processed; and the pose-transformed template is fused with the face image using that coefficient, yielding an eye shadow makeup trial image. Because the intensity coefficient used during fusion is determined adaptively from each user's face attribute information, the fusion of the eye shadow makeup template with the face image to be processed is improved, the resulting trial image looks more natural, and the user experience is improved.
Further, the pose transformation information is calculated from the standard face pose of the eye shadow makeup template and the face pose of the face image to be processed, so that the pose-transformed template can be aligned to the eye shadow region of interest, laying the foundation for good fusion even under large face poses.
Drawings
Fig. 1 is a flowchart of a face image processing method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the positions of face key points in an embodiment of the present invention;
Fig. 3 is a flowchart of another face image processing method in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a face image processing apparatus in an embodiment of the present invention.
Detailed Description
As described above, virtual face makeup is needed in some scenarios; however, the existing virtual face makeup effect is unnatural, resulting in a less than ideal result.
To solve this problem, in embodiments of the invention, face recognition is performed on a face image to be processed to obtain an eye shadow region of interest; an eye shadow makeup template is aligned to that region; an eye shadow makeup trial effect intensity coefficient is determined according to face attribute information of the face image to be processed; and the pose-transformed template is fused with the face image using that coefficient to obtain an eye shadow makeup trial image. Because the intensity coefficient is determined adaptively from each user's face attribute information, the fusion of the template with the face image is improved, the trial image looks more natural, and the user experience is improved.
In order to make the aforementioned objects, features and advantages of the embodiments of the present invention more comprehensible, specific embodiments accompanied with figures are described in detail below.
The embodiment of the invention provides a face image processing method that can be used for virtual face beautification in various scenarios, such as makeup trial, image beautification, and video beautification applications. The method may be executed by a chip in a terminal, by a control or processing chip usable in a terminal, or by other appropriate components.
Referring to fig. 1, a flowchart of a face image processing method in the embodiment of the present invention is shown, which specifically includes the following steps:
S11: Perform face recognition on the face image to be processed to obtain an eye shadow region of interest.
In specific implementations, the eye shadow region of interest (ROI) may be obtained in a variety of ways. The eye shadow region of interest indicates the area where eye shadow makeup may be applied.
In one non-limiting embodiment, Artificial Intelligence (AI) face recognition is performed on the face image to be processed, and the eye shadow region of interest is determined based on the AI face recognition result.
In another non-limiting embodiment, face recognition may be performed on the face image to be processed to obtain position information of face key points, and the eye shadow region of interest is then determined from that position information.
Further, to position the eye shadow region of interest more accurately, the accuracy of the obtained face key point positions can be improved by using a high-accuracy face alignment technique; more accurate key point positions in turn make the determination of the eye shadow region of interest more accurate.
Referring to Fig. 2, which shows the positions of face key points in an embodiment of the present invention, the number of face key points illustrated is 104 (the gray points numbered 1 to 104 in the figure). In practice, depending on the feature information required for each face region, additional key points may be added in other regions, for example in the forehead or hairline region; the number of face key points is therefore not limited to 104 and may be any other number.
In implementations, the eye key points delimit the contour of the eye. As shown in Fig. 2, the eye key points may include the face key points numbered 67 to 74 and 76 to 83, and the eyebrow key points may include those numbered 34 to 42 and 43 to 51.
Specifically, the eye shadow region of interest may be determined from the position information of the eye key points and the eyebrow key points among the face key points. The region lies within a preset area around the eye key points and does not cross the eyebrow key points; that is, it occupies the area between the eyebrow and the eye plus a preset area below the eye. For example, the eye shadow region of interest is usually the partial region above the upper eyelid and below the eyebrow, together with a predetermined region below the lower eyelid.
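Putting the geometry above together, a minimal sketch of deriving one eye's region of interest from eye and eyebrow key points could look as follows (the margin below the eye is an assumed fraction of the eye height; the patent only says "preset area"):

```python
import numpy as np

def eyeshadow_roi(eye_pts, brow_pts, below_frac=0.3):
    """Bounding box for one eye's eye shadow region of interest.

    Spans from the lowest eyebrow key point (which it must not cross) down to
    a preset margin below the eye. Points are (x, y) with y growing downward;
    `below_frac` is an assumed margin factor, not a value from the patent.
    """
    eye_pts, brow_pts = np.asarray(eye_pts), np.asarray(brow_pts)
    x0 = min(eye_pts[:, 0].min(), brow_pts[:, 0].min())
    x1 = max(eye_pts[:, 0].max(), brow_pts[:, 0].max())
    top = brow_pts[:, 1].max()                # do not cross the eyebrow
    eye_h = eye_pts[:, 1].max() - eye_pts[:, 1].min()
    bottom = eye_pts[:, 1].max() + below_frac * eye_h  # preset area below the eye
    return x0, top, x1, bottom
```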
Further, before step S11 is executed, the obtained face image to be processed may be checked to determine whether it meets certain requirements. For example, the face image is scaled and then subjected to face recognition, and the distance between the enlarged largest face and the terminal device is calculated: if the set distance is satisfied, the image is judged to meet the requirement; otherwise it does not. If the face occupies only a small fraction of the whole image, the eye region is correspondingly small, and even after eye shadow processing the effect would be barely visible; scaling the face image and finding that the distance between the enlarged largest face and the terminal device does not satisfy the set distance indicates exactly this case, so the eye shadow processing and subsequent steps can be skipped.
Further, before step S11, the face pose of the face image to be processed may also be detected; if the face pose angle exceeds a set angle, the pose is judged too large and the image is discarded without processing. For example, when the face pose is 90 degrees, i.e., a full profile, the resulting eye makeup effect would not be visible enough, so the eye shadow processing and subsequent steps may be skipped. Similarly, when the image shows the back of the head and no face can be recognized, eye shadow processing may be skipped.
By checking the face image to be processed before step S11, the subsequent steps S12 to S14 are performed only for images that satisfy the requirements and skipped for those that do not, which improves the image processing effect and saves computing resources.
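The two pre-checks described above, face scale and pose angle, can be sketched as a single gate function; the threshold values here are illustrative assumptions, since the patent does not fix concrete numbers:

```python
def should_process(face_box, img_size, yaw_deg,
                   min_face_ratio=0.1, max_yaw=60.0):
    """Gate before S11: skip frames where the face is too small or the pose
    angle is too large. Threshold values are illustrative assumptions.

    face_box: (x0, y0, x1, y1) of the largest detected face
    img_size: (width, height) of the frame
    yaw_deg:  estimated face yaw angle in degrees
    """
    x0, y0, x1, y1 = face_box
    w, h = img_size
    face_ratio = ((x1 - x0) * (y1 - y0)) / float(w * h)
    if face_ratio < min_face_ratio:  # face too small: eye region too tiny to matter
        return False
    if abs(yaw_deg) > max_yaw:       # profile or extreme pose: effect would look wrong
        return False
    return True
```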
S12: Perform pose transformation on the acquired eye shadow makeup template to align it to the eye shadow region of interest.
In a specific implementation, step S12 may proceed as follows: pose transformation information is calculated according to the standard face pose of the acquired eye shadow makeup template and the face pose of the face image to be processed; the eye shadow makeup template is then pose-transformed according to that information, so that the transformed template is aligned to the eye shadow region of interest.
Further, the pose transformation information may be calculated as follows.
Eye shadow auxiliary key points are generated at positions a set distance away from the eye key points, according to the position information of the face key points; that is, the auxiliary key points can be generated adaptively around the eye key points at the set distance. The auxiliary key points delimit the boundary of the eye shadow makeup. In video in particular, the face may appear at different angles, and a large face pose may cause self-occlusion; the auxiliary key points avoid abnormal eye shadow effects caused by an excessive face angle and effectively help determine the transfer region of the eye shadow makeup template, improving the eye shadow makeup effect. A large face pose is one whose angle relative to a frontal face exceeds a set angle, for example 30 degrees.
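One plausible way to generate such auxiliary key points is to push each eye key point outward from the eye center by the set distance; the outward direction and the distance used below are assumptions for illustration:

```python
import numpy as np

def gen_aux_keypoints(eye_pts, eye_center=None, dist=6.0):
    """Generate eye shadow auxiliary key points a set distance outward from
    the eye key points, to bound the eye shadow region. The offset distance
    and the outward-from-center direction are illustrative assumptions.
    """
    eye_pts = np.asarray(eye_pts, dtype=float)
    c = eye_pts.mean(axis=0) if eye_center is None else np.asarray(eye_center, float)
    v = eye_pts - c
    n = np.linalg.norm(v, axis=1, keepdims=True)
    n[n == 0] = 1.0
    return eye_pts + dist * v / n  # each auxiliary point lies `dist` px outward
```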
In some non-limiting embodiments, part of the eye shadow auxiliary key points lie below the eyebrows and above the upper eyelid, and part lie below the lower eyelid, in particular below the portion of the lower eyelid roughly between the outer corner of the eye and the center of the lower eyelid. The black dots in Fig. 2 are eye shadow auxiliary key points.
In some non-limiting embodiments, a preset triangulated array is used to calculate the pose transformation information that aligns each face key point of the standard face corresponding to the eye shadow makeup template with the corresponding face key point in the face image to be processed; the pose transformation information represents the offset between each face key point of the standard face and its counterpart in the face image to be processed.
In a specific implementation, the triangulated array records the relative position information of the face key points after triangulation; it is obtained by triangulating the face key points of the standard face corresponding to the eye shadow makeup template.
Furthermore, eye shadow auxiliary key points can be determined on the standard face from the position information of its face key points, and the face key points together with the auxiliary key points in the standard face image are triangulated to obtain the triangulated array. The array is stored and can subsequently be shared and reused when processing later face images, so the face image to be processed never needs to be re-triangulated; this reduces the computing requirement and improves image processing efficiency.
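A toy sketch of the cached triangulated array and the per-triangle offsets it enables might look like this. In practice the triangle indices would come from a one-off Delaunay triangulation of the standard face's key points (e.g. with scipy.spatial.Delaunay) and be stored; the two-triangle array below is purely illustrative:

```python
import numpy as np

# Triangle vertex indices over the standard face's key points (plus auxiliary
# points). Computed once for the standard face and cached, so the face image
# to be processed never has to be re-triangulated.
TRI_INDICES = np.array([[0, 1, 2],
                        [1, 2, 3]])  # toy 4-point, 2-triangle example

def triangle_offsets(std_pts, tgt_pts, tri=TRI_INDICES):
    """Per-triangle mean offset between standard-face key points and the
    corresponding key points in the face image to be processed, as a simple
    stand-in for the patent's pose transformation information."""
    std_pts, tgt_pts = np.asarray(std_pts, float), np.asarray(tgt_pts, float)
    return (tgt_pts[tri] - std_pts[tri]).mean(axis=1)  # one (dx, dy) per triangle
```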
Further, to reduce the data size of the triangulated array and improve algorithm performance, the number of triangles covering regions outside the eye shadow region of interest may be reduced while the triangles covering the eye shadow region of interest are preserved.
In some non-limiting embodiments, when the pose transformation information is calculated, the size of the standard face corresponding to the eye shadow makeup template may differ from that of the face image to be processed. In that case, either the standard face is resized to match the face image to be processed, or the face image to be processed is resized to match the standard face image; the pose transformation information is then calculated from the resized images.
In some non-limiting embodiments, the image size may be transformed by upsampling or downsampling.
For example, when the standard face corresponding to the eye shadow makeup template is smaller than the face image to be processed, the standard face may be upsampled so that it matches the size of the target face image. Other ways of changing the size exist and are not enumerated here.
In some embodiments, the pose transformation information may be a pose transformation matrix or a positional-relationship map. Other representations are possible and are not enumerated here.
In a specific implementation, the pose-transformed eye shadow makeup template can be computed from the pose transformation information, in combination with the eye shadow makeup template, using a deformation interpolation algorithm.
In some embodiments, bilinear interpolation or other interpolation algorithms may be used to compute the pose-transformed eye shadow makeup template.
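As a concrete instance of the interpolation mentioned above, a self-contained bilinear sampling routine (the basic lookup used when warping a template to a new pose) could be written as:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a single-channel image at fractional coordinates (x, y) with
    bilinear interpolation, the kind of lookup used when warping the eye
    shadow makeup template to the target pose."""
    h, w = img.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]   # interpolate along x, top row
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]   # interpolate along x, bottom row
    return (1 - fy) * top + fy * bot                   # then along y
```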
Specifically, when the eye shadow makeup template is pose-transformed according to the pose transformation information, the interpolation algorithm and the interpolation conditions can be determined from the relationship between the size of the standard face image corresponding to the template and the size of the face image to be processed, so that the interpolated, pose-transformed template has the same face pose as the face image to be processed and a size matching it.
S13: Determine the eye shadow makeup trial effect intensity coefficient according to the face attribute information of the face image to be processed.
In a specific implementation, face attribute detection is performed on the face image to be processed to obtain its face attribute information, which may include gender attribute information.
Because people of different genders have different requirements for eye shadow makeup, face attribute detection allows an adaptive trial-effect intensity coefficient to be determined for each user. The coefficient controls the intensity of the eye shadow trial effect and ranges over [0, 1]: the larger it is, the stronger the trial effect and the more obvious the eye shadow after image fusion; conversely, the smaller it is, the less conspicuous the eye shadow.
For example, when the face attribute detection result indicates that the face in the face image to be processed is male, a small eye shadow makeup trial effect intensity coefficient may be selected, or the coefficient may be set to zero, so as to improve the effect after image fusion.
For another example, when the face attribute detection result indicates that the face in the face image to be processed is female, a larger eye shadow makeup trial effect intensity coefficient may be selected to improve the effect after image fusion.
Furthermore, the face attribute information may also include skin color information, and an adaptive eye shadow makeup trial effect intensity coefficient can be determined for different people according to the skin color information. The skin color information may include one or both of skin color brightness and skin color. For example, the larger the skin color brightness, the fairer the skin; in this case a relatively small eye shadow makeup trial effect intensity coefficient can be configured while still presenting a relatively obvious eye shadow makeup effect. Conversely, the smaller the skin color brightness, the darker the skin, and a larger eye shadow makeup trial effect intensity coefficient can be configured so that an obvious eye shadow makeup effect can still be presented. Because the eye shadow makeup effect is influenced by skin color information such as skin color brightness and skin color, determining eye shadow makeup trial effect intensity coefficients adapted to different people while taking the skin color information into account can make the eye shadow makeup effect more natural for different people and yields a good image fusion effect.
Furthermore, factors such as gender attribute information and skin color information in the face attribute information can be comprehensively considered to determine the eye shadow makeup trial effect intensity coefficient.
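As a sketch of how the adaptive selection described above could work, the snippet below maps gender and skin-brightness cues to a coefficient in [0,1]. The thresholds, weights, and the linear brightness ramp are illustrative assumptions; the patent only states that the coefficient adapts to these attributes.

```python
import numpy as np

def trial_strength(gender, skin_luma, w_gender=0.5, w_skin=0.5):
    """Combine gender and skin-brightness cues into a strength coefficient in [0, 1].

    `gender` is "male" or "female"; `skin_luma` is mean skin brightness in [0, 255].
    All thresholds and weights here are illustrative assumptions, not values
    taken from the patent.
    """
    # Gender cue: weaker (or zero) eye shadow for male faces, stronger for female.
    k_gender = 0.0 if gender == "male" else 0.8
    # Skin cue: brighter skin shows eye shadow more readily, so a smaller
    # coefficient suffices; darker skin gets a larger one (linear ramp as an example).
    k_skin = 1.0 - skin_luma / 255.0
    k = w_gender * k_gender + w_skin * k_skin
    return float(np.clip(k, 0.0, 1.0))
```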
In some embodiments, the user may select the eye shadow makeup trial effect intensity coefficient; when it is detected that the user has selected a coefficient, the coefficient configured by the user is used for subsequent image fusion. An adjustment key or an intensity bar for the eye shadow makeup trial effect intensity coefficient can be configured on the display interface, and the user can select the coefficient through the corresponding key or by dragging the intensity bar, so as to meet the individual requirements of different users.
In other embodiments, when it is detected that the user has selected an eye shadow makeup trial effect intensity coefficient, the coefficient selected by the user and the coefficient determined from the face attribute detection result may be combined to determine the eye shadow makeup trial effect intensity coefficient finally used in image fusion.
For example, corresponding weights may be assigned to the eye shadow makeup trial effect intensity coefficient selected by the user and to the one determined based on the face attribute detection result. A weighted calculation is then performed over the two coefficients and their corresponding weights, and the weighted result is taken as the eye shadow makeup trial effect intensity coefficient finally adopted in image fusion.
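The weighted combination described above can be sketched as follows; the specific weight values are assumptions, since the text only specifies that corresponding weights are assigned and a weighted calculation is performed.

```python
def final_strength(k_user, k_auto, w_user=0.7, w_auto=0.3):
    """Blend the user-selected coefficient with the attribute-derived one.

    The default weights favoring the user's choice are illustrative; the
    patent does not prescribe specific weight values. The result is clamped
    to the coefficient's valid range [0, 1].
    """
    k = w_user * k_user + w_auto * k_auto
    return min(max(k, 0.0), 1.0)
```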
And S14, carrying out image fusion on the eye shadow makeup template after the posture is changed and the face image to be processed by adopting the eye shadow makeup trial effect intensity coefficient to obtain an eye shadow makeup trial image.
Further, step S14 may be implemented as follows: using the eye shadow makeup trial effect intensity coefficient, perform image fusion on the pose-transformed eye shadow makeup template and the face image to be processed according to their channel information in the RGB color space domain. Performing the image fusion in the RGB color space preserves the hue and saturation effects of the eye shadow makeup template while retaining the texture information of the brightness channel, and also reduces algorithm complexity. Here R is red, G is green, and B is blue. It is understood that the pose-transformed eye shadow makeup template and the face image to be processed may also be fused in other types of color spaces, which are not enumerated here.
Further, taking image fusion in the RGB color space domain as an example, the pose-transformed eye shadow makeup template and the face image to be processed may be fused using the following formulas (1), (2) and (3):
R_dst = R_o·(k·(R_e-255)+255)/255; (1)
G_dst = G_o·(k·(G_e-255)+255)/255; (2)
B_dst = B_o·(k·(B_e-255)+255)/255; (3)
where k is the eye shadow makeup trial effect intensity coefficient, with k ∈ [0,1]; R_o, G_o and B_o represent the channel information of the face image to be processed in the RGB color space domain, R_o being its red channel information, G_o its green channel information, and B_o its blue channel information; R_e, G_e and B_e represent the channel information of the pose-transformed eye shadow makeup template in the RGB color space domain, R_e being its red channel information, G_e its green channel information, and B_e its blue channel information; R_dst, G_dst and B_dst represent the channel information of the eye shadow makeup trial image in the RGB color space domain, R_dst being its red channel information, G_dst its green channel information, and B_dst its blue channel information.
Further, after the eye shadow makeup trying image is obtained, the eye shadow makeup trying image can be verified to ensure that the obtained eye shadow makeup trying image is correct.
Specifically, the eye shadow makeup trial image can be verified by judging whether its channel information in the RGB color space domain falls within a set range; if not, the eye shadow makeup trial image is discarded. The set range may be [0,255].
In some embodiments, on the basis of the above equations (1) to (3), the following equations (4) to (6) may be used to determine whether the RGB color space domain channel information of the eye shadow makeup image meets the set range.
R_dst = CLIP(R_o·(k·(R_e-255)+255)/255, 0, 255); (4)
G_dst = CLIP(G_o·(k·(G_e-255)+255)/255, 0, 255); (5)
B_dst = CLIP(B_o·(k·(B_e-255)+255)/255, 0, 255); (6)
where CLIP() limits the value range: CLIP(x, 0, 255) limits the value of x to [0,255], i.e. the values of R_dst, G_dst and B_dst are limited to [0,255].
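Equations (4) to (6) can be sketched per channel with NumPy as below; the function name and the uint8/float32 handling are assumptions not specified in the text.

```python
import numpy as np

def fuse_eye_shadow(face_rgb, template_rgb, k):
    """Fuse a pose-transformed eye shadow template with the face image
    per equations (4)-(6): dst = CLIP(o * (k*(e - 255) + 255) / 255, 0, 255).

    Both inputs are uint8 RGB arrays of the same shape; k is the trial
    effect intensity coefficient in [0, 1]. A white template pixel (255)
    leaves the face pixel unchanged, matching the template extraction below.
    """
    o = face_rgb.astype(np.float32)
    e = template_rgb.astype(np.float32)
    dst = o * (k * (e - 255.0) + 255.0) / 255.0
    # CLIP(x, 0, 255), then back to 8-bit color depth.
    return np.clip(dst, 0, 255).astype(np.uint8)
```

With k = 0 the formula reduces to dst = o, so the face image passes through untouched, which is consistent with the zero coefficient chosen for faces where no eye shadow should be applied.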
In the embodiment of the present invention, face recognition is performed on the face image to be processed to obtain an eye shadow region of interest, the eye shadow makeup template is aligned to the eye shadow region of interest, an eye shadow makeup trial effect intensity coefficient is determined according to the face attribute information of the face image to be processed, and the pose-transformed eye shadow makeup template and the face image to be processed are fused using that coefficient to obtain an eye shadow makeup trial image. Because the coefficient used when fusing the pose-transformed eye shadow makeup template with the face image to be processed is determined according to the face attribute information of the face image to be processed, a coefficient adapted to each user is determined adaptively from that user's own face attribute information; this improves the image fusion effect of the eye shadow makeup template and the face image to be processed, and improves the naturalness of the eye shadow makeup trial image.
In specific implementation, the selection mode of the eye shadow makeup template can be configured according to the application scenario.
The eye makeup template in the above embodiment may be determined based on the selection of the user. Specifically, the user operation interface may be configured with an option of an eye makeup template, and the user may select a desired eye makeup template according to actual needs.
The eye shadow makeup template in the above embodiment may also be determined according to the face attribute information. Specifically, face skin color information in the face image to be processed is obtained according to the face attribute information, and an eye shadow makeup template matching the skin color is selected according to the skin color information, where the skin color information includes skin color brightness and/or skin color. The mapping relationship between the skin color information and the eye shadow makeup templates can be preset.
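A minimal sketch of such a preset skin-tone-to-template mapping is shown below; the tone categories, brightness thresholds, and template names are all hypothetical, since the patent only states that the mapping can be preset.

```python
# Hypothetical mapping from coarse skin-tone categories to template names;
# the patent only states that such a mapping is preset in advance.
TONE_TO_TEMPLATE = {"fair": "cool_pink", "medium": "warm_brown", "deep": "bronze"}

def pick_template(skin_luma):
    """Bucket mean skin brightness ([0, 255]) into a tone category and
    look up the matching eye shadow template name.

    The two thresholds (180 and 100) are illustrative assumptions.
    """
    if skin_luma >= 180:
        tone = "fair"
    elif skin_luma >= 100:
        tone = "medium"
    else:
        tone = "deep"
    return TONE_TO_TEMPLATE[tone]
```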
According to different scenario requirements, the eye shadow makeup template can be selected in different ways. For example, in an eye shadow makeup trial application scenario, the eye shadow makeup template may be selected by the user. For a user who does not know how to select a suitable eye shadow, the eye shadow makeup template can be selected automatically according to the face recognition result. Determining the eye shadow makeup template through face recognition makes it possible to take the user's skin color and other facial conditions into comprehensive consideration, so that the selected template is better adapted to the user. The matching relationship between eye shadow makeup templates and face recognition results can be determined based on big data research.
For another example, when the camera is used to apply makeup to an image or a video, the eye shadow makeup template may be selected by the user or determined according to the user's face recognition result.
In a specific implementation, the eye shadow makeup template in the above embodiment may be stored in an eye shadow database. Several different types of eye shadow cosmetic templates may be stored in the eye shadow database.
In some non-limiting embodiments, the eye shadow makeup template may be obtained as follows: acquire an eye shadow image sample with eye shadow drawn on it and a standard face image; then extract the channel information of the eye shadow makeup template in the RGB color space domain from the channel information of the eye shadow image sample and of the standard face image in the RGB color space domain.
Specifically, the R channel information of the eye shadow image sample in the RGB color space domain and the R channel information of the standard face image in the RGB color space domain are adopted to extract and obtain the R channel information of the eye shadow makeup template in the RGB color space domain. Correspondingly, the G channel information of the eye shadow image sample in the RGB color space domain and the G channel information of the standard face image in the RGB color space domain are adopted, and the G channel information of the eye shadow dressing template in the RGB color space domain is extracted and obtained. And B channel information of the eye shadow image sample in the RGB color space domain and B channel information of the standard face image in the RGB color space domain are adopted, and the B channel information of the eye shadow makeup template in the RGB color space domain is extracted and obtained.
Specifically, drawing software can be used to draw the eye shadow image sample on the basis of the standard face image. Subsequently, whenever a new eye shadow makeup look or a new eye shadow appears, a new eye shadow image sample is drawn on the basis of the standard face image and the corresponding eye shadow makeup template is extracted.
In addition, when the eye shadow makeup template is stored, only the portion within the eye shadow region of interest may be stored, which reduces the memory occupied by the data while still preserving the template.
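Storing only the region-of-interest crop could be sketched as follows; the (top, bottom, left, right) ROI representation and the returned offset are assumptions for illustration, since the patent only says the ROI crop is stored to save memory.

```python
import numpy as np

def crop_to_roi(template_rgb, roi):
    """Keep only the eye shadow region of interest of a full-face template.

    `roi` is (top, bottom, left, right) in pixel coordinates -- an assumed
    representation. The offset of the crop is returned alongside it so that
    the template can later be pasted back at the correct position.
    """
    top, bottom, left, right = roi
    crop = template_rgb[top:bottom, left:right].copy()
    return crop, (top, left)
```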
In some non-limiting embodiments, the channel information of the eye shadow makeup template in the RGB color space domain can be extracted and obtained by using the following formulas (7), (8) and (9).
R_d = 255·R_s/R_a; (7)
G_d = 255·G_s/G_a; (8)
B_d = 255·B_s/B_a; (9)
Wherein R is a 、G a And B a Representing channel information of a standard face image in an RGB color space domain, wherein R a Red channel information in RGB color space domain for standard face image, G a Green channel information in the RGB color space domain for standard face images, B a Blue channel information of a standard face image in an RGB color space domain; r s 、G s And B s Representing channel information of an image sample after eye shadow drawing in an RGB color space domain; r s Red channel information in the RGB color space domain for a particular eye-filmed image sample, G s Green channel information in the RGB color space domain for a particular eye-shaded image sample, B s Blue channel information of an image sample subjected to eye shadow drawing in an RGB color space domain; r d 、G d And B d Eye shadow dressing model showing current productionChannel information of the panel in the RGB color space domain, where R d Red channel information in RGB color space domain for a certain eyeshadow rendered image sample, G d Green channel information in the RGB color space domain for a particular eye-shaded image sample, B d And blue channel information of a certain eye shadow drawn image sample in an RGB color space domain.
Further, in order to determine whether the obtained eye shadow makeup template is correct, in the embodiment of the present invention, the eye shadow makeup template may be verified by determining whether RGB color space domain channel information of the eye shadow makeup template conforms to a set range.
In some non-limiting embodiments, the following formulas (10) to (12) may be used to determine whether the RGB color space domain channel information of the eye shadow makeup template conforms to the set range on the basis of the above formulas (7) to (9). Wherein the set range may be [0,255].
R_d = CLIP(255·R_s/R_a, 0, 255); (10)
G_d = CLIP(255·G_s/G_a, 0, 255); (11)
B_d = CLIP(255·B_s/B_a, 0, 255); (12)
where CLIP() limits the value range: CLIP(x, 0, 255) limits the value of x to [0,255], i.e. the values of R_d, G_d and B_d are limited to [0,255].
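Equations (10) to (12) can be sketched as below; the divide-by-zero guard and the dtype handling are assumptions added for robustness, not part of the patent's formulas. Note that where the painted sample equals the bare standard face, the template value is 255, which is exactly the "no makeup" value that leaves a pixel unchanged in fusion equations (1)-(3).

```python
import numpy as np

def extract_template(painted_rgb, standard_rgb):
    """Extract an eye shadow template per equations (10)-(12):
    d = CLIP(255 * s / a, 0, 255), applied to each RGB channel.

    `painted_rgb` is the standard face image with eye shadow drawn on it,
    `standard_rgb` the bare standard face image; both uint8 of the same shape.
    """
    s = painted_rgb.astype(np.float32)
    # Guard against divide-by-zero on pure-black standard pixels (an added
    # assumption; the patent's formula divides directly).
    a = np.maximum(standard_rgb.astype(np.float32), 1.0)
    d = 255.0 * s / a
    return np.clip(d, 0, 255).astype(np.uint8)
```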
It should be noted that the value 255 in the above formulas and embodiments corresponds to an image color depth of 8 bits, for which the color depth range is [0,255] and 255 is the maximum color depth value. It can be understood that when other numbers of bits are used, the color depth range differs and so does the maximum color depth value; in that case, 255 only needs to be replaced with the corresponding maximum color depth value.
In order to facilitate better understanding and implementation of the embodiment of the present invention for those skilled in the art, referring to fig. 3, a flowchart of another face image processing method provided by the embodiment of the present invention is given, which specifically includes the following steps:
step S301, face recognition is carried out on the face image to be processed.
Step S302, judging whether the distance between the largest face and the camera meets the requirements.
If the judgment result is no, step S303 is executed; if yes, step S304 is executed.
In step S303, the eye shadow is not applied, and the process is ended.
And step S304, detecting key points of the human face image to be processed.
In step S305, an eye shadow region of interest is acquired.
Specifically, the eye shadow region of interest may be determined according to the face keypoint detection result.
And step S306, generating the auxiliary key points of the eye shadow.
Step S307, the eye shadow makeup template is transformed to be consistent with the face pose of the face image to be processed.
In a specific implementation, an eye shadow makeup template may be produced through step S310, and an eye shadow database is generated in step S311. The eye shadow makeup templates stored in the eye shadow database are available to the user, and the eye shadow makeup template in step S307 comes from the eye shadow database.
In step S308, the intensity of the eye shadow effect is controlled during image fusion.
In a specific implementation, face attribute analysis is performed on the face image to be processed in step S312. In step S313, the eye shadow makeup trial effect intensity coefficient is determined according to the face attribute analysis result. In step S308, the eye shadow effect intensity is controlled according to this coefficient when the pose-transformed eye shadow makeup template is fused with the face image to be processed.
And step S309, outputting the eye shadow makeup trial image.
The eye shadow makeup trial image obtained after image fusion can be displayed on a display terminal, so that the user can intuitively see the eye shadow makeup trial effect.
In a specific implementation, the specific implementation process of steps S301 to S313 may refer to the related description in the face image processing method provided in the foregoing embodiment, and details are not repeated here.
By adopting the scheme of the present invention, the eye shadow makeup trial effect intensity coefficient is determined according to the current face attribute information, realizing an adaptive eye shadow fusion algorithm that achieves a natural eye shadow effect with adjustable effect intensity.
Furthermore, because the eye shadow database is drawn on the basis of the standard face image, rich eye shadow makeup looks can be migrated to any face image; the eye shadow makeup templates are highly extensible, and any makeup look in the database can be migrated to any face image simply by enriching the eye shadow database. In addition, the eye shadow makeup template may store only the eye shadow ROI region, which effectively reduces data memory usage.
Furthermore, by using automatic face recognition and a high-precision face key point alignment technology, the positions of the eyes of the face can be accurately positioned, so that the accuracy of determining the eye shadow region of interest is improved, and support is provided for subsequently improving the image fusion accuracy.
Furthermore, after the face key points of the standard face image are aligned with those of the target face image, the eye shadow makeup template is transferred using an interpolation mapping method to realize its pose transformation, which can improve the eye shadow effect for large-pose faces.
Furthermore, fusing the pose-transformed eye shadow makeup template with the target face image in the RGB color space greatly reduces algorithm complexity, making the method usable on a terminal.
The embodiment of the invention also provides a face image processing device, and a schematic structural diagram of the face image processing device in the embodiment of the invention is given with reference to fig. 4. The face image processing apparatus 40 may include:
an eye shadow region-of-interest determining unit 41, configured to perform face recognition on a face image to be processed to obtain an eye shadow region-of-interest;
a pose transformation information calculation unit 42 for calculating pose transformation information according to the standard face pose of the obtained eye shadow makeup template and the face pose of the face image to be processed;
an alignment unit 43, configured to perform pose transformation on the eye shadow makeup template according to the pose transformation information, so that the eye shadow makeup template after the pose transformation is aligned to the eye shadow region of interest;
and the fusion unit 44 is configured to perform image fusion on the pose-transformed eye shadow makeup template and the to-be-processed face image by using an eye shadow makeup effect intensity coefficient to obtain an eye shadow makeup image, where the eye shadow makeup effect intensity coefficient is determined according to the face attribute information of the to-be-processed face image.
In a specific implementation, the specific working principle and the working flow of the face image processing apparatus 40 may refer to descriptions in the face image processing method provided in any of the above embodiments of the present invention, and are not described herein again.
In a specific implementation, the face image processing device 40 may correspond to a chip having a face image processing function in the terminal; or to a chip having a data processing function; or a chip module corresponding to the terminal and comprising a chip with a face image processing function; or to a chip module having a chip with a data processing function, or to a terminal.
In a specific implementation, each module/unit included in each apparatus and product described in the foregoing embodiments may be a software module/unit, may also be a hardware module/unit, or may also be a part of a software module/unit and a part of a hardware module/unit.
For example, for each apparatus and product applied to or integrated into a chip, each module/unit included in the apparatus and product may all be implemented by hardware such as a circuit, or at least a part of the modules/units may be implemented by a software program running on a processor integrated within the chip, and the remaining (if any) part of the modules/units may be implemented by hardware such as a circuit; for each device and product applied to or integrated with the chip module, each module/unit included in the device and product may be implemented by hardware such as a circuit, and different modules/units may be located in the same component (e.g., a chip, a circuit module, etc.) or different components of the chip module, or at least part of the modules/units may be implemented by a software program running on a processor integrated inside the chip module, and the rest (if any) part of the modules/units may be implemented by hardware such as a circuit; for each device and product applied to or integrated in the terminal, each module/unit included in the device and product may be implemented by using hardware such as a circuit, and different modules/units may be located in the same component (e.g., a chip, a circuit module, etc.) or different components in the terminal, or at least part of the modules/units may be implemented by using a software program running on a processor integrated in the terminal, and the rest (if any) part of the modules/units may be implemented by using hardware such as a circuit.
An embodiment of the present invention further provides a computer-readable storage medium, which is a non-volatile storage medium or a non-transitory storage medium, and a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps in the face image processing method provided in any of the above embodiments.
The embodiment of the present invention further provides a terminal, which includes a memory and a processor, where the memory stores a computer program capable of running on the processor, and the processor executes the steps in the face image processing method provided in any of the above embodiments when running the computer program.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a program, and the program may be stored in any computer-readable storage medium, and the storage medium may include: ROM, RAM, magnetic or optical disks, and the like.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. A face image processing method is characterized by comprising the following steps:
carrying out face recognition on a face image to be processed to obtain an eye shadow region of interest;
carrying out posture transformation on the obtained eye shadow dressing template so as to align the eye shadow dressing template to the eye shadow interested area;
determining an eye shadow makeup trial effect intensity coefficient according to the face attribute information of the face image to be processed; performing image fusion on the eye shadow makeup template after the posture transformation and the face image to be processed by adopting the eye shadow makeup effect intensity coefficient to obtain an eye shadow makeup image;
wherein performing pose transformation on the obtained eye shadow makeup template so as to align the eye shadow makeup template to the eye shadow region of interest comprises:
calculating posture transformation information according to the standard human face posture of the obtained eye shadow makeup template and the human face posture of the human face image to be processed;
performing posture transformation on the eye shadow makeup template according to the posture transformation information, so that the eye shadow makeup template after posture transformation is aligned to the eye shadow interested area;
wherein calculating pose transformation information according to the standard face pose of the obtained eye shadow makeup template and the face pose of the face image to be processed comprises:
generating eye shadow auxiliary key points at positions which are away from eye key points in the human face key points by a set distance according to position information of the human face key points, wherein the eye shadow auxiliary key points are used for limiting the boundaries of eye shadow makeup, part of the eye shadow auxiliary key points are positioned below eyebrows and above upper eyelids, and part of the eye shadow auxiliary key points are positioned below part of lower eyelids between outer canthus and center of the lower eyelids; calculating posture transformation information for aligning each face key point of a standard face corresponding to the eye shadow makeup template with a corresponding face key point in the face image to be processed by adopting a preset triangulated array, wherein the posture transformation information is used for representing the offset of each face key point of the standard face and the corresponding face key point in the face image to be processed;
the image fusion of the eye shadow makeup template after the posture transformation and the face image to be processed by adopting the eye shadow makeup trial effect intensity coefficient comprises the following steps:
adopting the eye shadow makeup effect intensity coefficient, and carrying out image fusion on the eye shadow makeup template after the posture transformation and the face image to be processed according to the eye shadow makeup template after the posture transformation and the channel information of the face image to be processed in an RGB color space domain, wherein the value range of the eye shadow makeup effect intensity coefficient is [0,1];
after the eye shadow makeup trial image is obtained, judging whether channel information of an RGB color space domain of the eye shadow makeup trial image conforms to a set range;
and if not, discarding the eye shadow makeup trying image.
2. The method for processing a human face image according to claim 1, wherein the performing facial recognition on the human face image to be processed to obtain an eye shadow region of interest comprises:
carrying out face recognition on the face image to be processed to obtain position information of key points of the face;
and determining the eye shadow region of interest according to the position information of the key points of the human face.
3. The method for processing human face image according to claim 2, wherein the determining the eye shadow region of interest according to the position information of the human face key points comprises:
and determining the eye shadow interested area according to the position information of the eye key points in the face key points and the position information of the eyebrow key points, wherein the eye shadow interested area is positioned in a preset area around the eye key points and does not cross the eyebrow key points.
4. The method of processing a face image according to claim 1, wherein the calculating pose transformation information for aligning the face key points of each standard face corresponding to the eye makeup template with the face key points corresponding to the face image to be processed includes:
when the size of a standard face corresponding to the eye shadow makeup template is different from the size of the face image to be processed, performing size conversion on any one of the standard face or the face image to be processed to enable the converted size to be the same;
and calculating the posture transformation information based on the transformed standard face image or the face image to be processed.
5. The face image processing method according to any one of claims 1 to 4, wherein performing pose transformation on the eye makeup template according to the pose transformation information comprises:
and according to the posture transformation information, combining the eye shadow dressing template, and calculating by adopting a deformation interpolation algorithm to obtain the eye shadow dressing template after the posture transformation.
6. The face image processing method of claim 1, wherein the eye makeup template is determined based on a user's selection, or the eye makeup template is determined based on the face attribute information.
7. The face image processing method of claim 6, wherein the eye makeup template is determined based on the face attribute information, comprising:
obtaining face skin color information in the face image to be processed according to the face attribute information, and selecting an eye shadow makeup template matched with the skin color according to the skin color information, wherein the skin color information comprises: skin tone brightness and/or skin tone color.
8. The face image processing method of claim 1, wherein before performing face recognition on the face image to be processed to obtain the eye shadow region of interest, the method further comprises:
scaling the face image to be processed, and calculating whether the distance between the face in the scaled face image to be processed and the terminal device that captured the face image to be processed satisfies a set distance;
if so, performing face recognition on the face image to be processed to obtain the eye shadow region of interest;
if not, discarding the face image to be processed, and acquiring the next image as the face image to be processed.
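The claim does not say how the subject-to-device distance is computed; a common sketch (pure assumption here, including all constants) estimates it from the inter-pupil pixel distance with the pinhole camera model, then applies the accept/discard decision of the claim.

```python
def distance_ok(eye_px, focal_px, min_mm=200, max_mm=800, ipd_mm=63):
    """Estimate subject distance from the inter-pupil pixel distance
    via the pinhole model (distance = focal * real_size / pixel_size),
    then check it against the accepted range; False means the frame
    would be discarded and the next one acquired."""
    dist_mm = focal_px * ipd_mm / eye_px
    return min_mm <= dist_mm <= max_mm
```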
9. The face image processing method of claim 1, wherein the eye shadow makeup template is obtained by:
acquiring a drawn eye shadow image sample and a standard face image;
and extracting the channel information of the eye shadow makeup template in the RGB color space according to the channel information of the eye shadow image sample in the RGB color space and the channel information of the standard face image in the RGB color space.
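The claim only states that the template channels are extracted "according to" both images. One simple realization (an illustrative assumption, not necessarily the patented extraction) is the per-pixel, per-channel RGB difference between the drawn sample and the bare standard face, leaving the makeup layer as the residual.

```python
def extract_template(sample, standard):
    """Per-pixel, per-channel RGB difference between the drawn eye
    shadow sample and the bare standard face image; the residual is
    taken as the eye shadow makeup layer. Images are nested lists of
    (r, g, b) tuples of identical shape."""
    return [[tuple(s - t for s, t in zip(sp, st))
             for sp, st in zip(row_s, row_t)]
            for row_s, row_t in zip(sample, standard)]
```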
10. A face image processing apparatus, comprising:
an eye shadow region-of-interest determining unit, configured to perform face recognition on the face image to be processed to obtain an eye shadow region of interest;
an alignment unit, configured to calculate pose transformation information according to the standard face pose of the obtained eye shadow makeup template and the face pose of the face image to be processed, and to perform pose transformation on the eye shadow makeup template according to the pose transformation information, so that the pose-transformed eye shadow makeup template is aligned to the eye shadow region of interest;
a fusion unit, configured to perform image fusion on the pose-transformed eye shadow makeup template and the face image to be processed using an eye shadow makeup effect intensity coefficient to obtain an eye shadow makeup trial image, the eye shadow makeup effect intensity coefficient being determined according to the face attribute information of the face image to be processed;
wherein calculating the pose transformation information according to the face pose of the obtained eye shadow makeup template and the face pose of the face image to be processed comprises: generating eye shadow auxiliary key points at positions a set distance away from the eye key points among the face key points according to the position information of the face key points, the eye shadow auxiliary key points being used to delimit the boundary of the eye shadow makeup, some of the eye shadow auxiliary key points being located below the eyebrows and above the upper eyelids, and some being located below the portion of the lower eyelids between the outer canthus and the center of the lower eyelid; and calculating, using a preset triangulation array, the pose transformation information for aligning each face key point of the standard face corresponding to the eye shadow makeup template with the corresponding face key point of the face image to be processed, the pose transformation information representing the offset between each face key point of the standard face and the corresponding face key point of the face image to be processed;
the fusion unit is configured to perform image fusion on the pose-transformed eye shadow makeup template and the face image to be processed according to the eye shadow makeup effect intensity coefficient and the channel information, in the RGB color space, of the pose-transformed eye shadow makeup template and of the face image to be processed, the value range of the eye shadow makeup effect intensity coefficient being [0,1];
the apparatus further comprises a unit configured to, after the eye shadow makeup trial image is obtained, judge whether the channel information of the eye shadow makeup trial image in the RGB color space conforms to a set range, and to discard the eye shadow makeup trial image if it does not.
11. A computer-readable storage medium, being a non-volatile storage medium or a non-transitory storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, performs the steps of the face image processing method according to any one of claims 1 to 9.
12. A terminal, comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, performs the steps of the face image processing method according to any one of claims 1 to 9.
CN202110722304.9A 2021-06-28 2021-06-28 Face image processing method and device, computer readable storage medium and terminal Active CN113344837B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110722304.9A CN113344837B (en) 2021-06-28 2021-06-28 Face image processing method and device, computer readable storage medium and terminal
PCT/CN2021/141467 WO2023273247A1 (en) 2021-06-28 2021-12-27 Face image processing method and device, computer readable storage medium, terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110722304.9A CN113344837B (en) 2021-06-28 2021-06-28 Face image processing method and device, computer readable storage medium and terminal

Publications (2)

Publication Number Publication Date
CN113344837A CN113344837A (en) 2021-09-03
CN113344837B true CN113344837B (en) 2023-04-18

Family

ID=77481145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110722304.9A Active CN113344837B (en) 2021-06-28 2021-06-28 Face image processing method and device, computer readable storage medium and terminal

Country Status (2)

Country Link
CN (1) CN113344837B (en)
WO (1) WO2023273247A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344837B (en) * 2021-06-28 2023-04-18 展讯通信(上海)有限公司 Face image processing method and device, computer readable storage medium and terminal
CN117036157B (en) * 2023-10-09 2024-02-20 易方信息科技股份有限公司 Editable simulation digital human figure design method, system, equipment and medium

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN109063560B (en) * 2018-06-28 2022-04-05 北京微播视界科技有限公司 Image processing method, image processing device, computer-readable storage medium and terminal
CN109584180A (en) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN111783511A (en) * 2019-10-31 2020-10-16 北京沃东天骏信息技术有限公司 Beauty treatment method, device, terminal and storage medium
CN111369644A (en) * 2020-02-28 2020-07-03 北京旷视科技有限公司 Face image makeup trial processing method and device, computer equipment and storage medium
CN111583102B (en) * 2020-05-14 2023-05-16 抖音视界有限公司 Face image processing method and device, electronic equipment and computer storage medium
CN112669233A (en) * 2020-12-25 2021-04-16 北京达佳互联信息技术有限公司 Image processing method, image processing apparatus, electronic device, storage medium, and program product
CN112784773B (en) * 2021-01-27 2022-09-27 展讯通信(上海)有限公司 Image processing method and device, storage medium and terminal
CN112819718A (en) * 2021-02-01 2021-05-18 深圳市商汤科技有限公司 Image processing method and device, electronic device and storage medium
CN113344837B (en) * 2021-06-28 2023-04-18 展讯通信(上海)有限公司 Face image processing method and device, computer readable storage medium and terminal

Also Published As

Publication number Publication date
WO2023273247A1 (en) 2023-01-05
CN113344837A (en) 2021-09-03

Similar Documents

Publication Publication Date Title
CN108229278B (en) Face image processing method and device and electronic equipment
US9142054B2 (en) System and method for changing hair color in digital images
CN109952594B (en) Image processing method, device, terminal and storage medium
JP3779570B2 (en) Makeup simulation apparatus, makeup simulation control method, and computer-readable recording medium recording makeup simulation program
CN106056064B (en) A kind of face identification method and face identification device
KR101259662B1 (en) Face classifying method face classifying device classification map face classifying program recording medium where this program is recorded
CN108012081B (en) Intelligent beautifying method, device, terminal and computer readable storage medium
CN112784773B (en) Image processing method and device, storage medium and terminal
US20100189357A1 (en) Method and device for the virtual simulation of a sequence of video images
CN113344837B (en) Face image processing method and device, computer readable storage medium and terminal
WO2022095721A1 (en) Parameter estimation model training method and apparatus, and device and storage medium
JP2001109913A (en) Picture processor, picture processing method, and recording medium recording picture processing program
WO2020177434A1 (en) Image processing method and apparatus, image device, and storage medium
CN107798318A (en) The method and its device of a kind of happy micro- expression of robot identification face
CN110866139A (en) Cosmetic treatment method, device and equipment
CN114187166A (en) Image processing method, intelligent terminal and storage medium
CN114155569B (en) Cosmetic progress detection method, device, equipment and storage medium
JP4893968B2 (en) How to compose face images
CN115631516A (en) Face image processing method, device and equipment and computer readable storage medium
CN113379623A (en) Image processing method, image processing device, electronic equipment and storage medium
WO2011152842A1 (en) Face morphing based on learning
JP2003030684A (en) Face three-dimensional computer graphic generation method and device, face three-dimensional computer graphic generation program and storage medium storing face three-dimensional computer graphic generation program
CN113421197B (en) Processing method and processing system of beautifying image
CN116580445B (en) Large language model face feature analysis method, system and electronic equipment
CN117389676B (en) Intelligent hairstyle adaptive display method based on display interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant