WO2023273247A1 - Face image processing method and apparatus, computer-readable storage medium, and terminal - Google Patents

Face image processing method and apparatus, computer-readable storage medium, and terminal

Info

Publication number
WO2023273247A1
Authority: WIPO (PCT)
Prior art keywords: eye shadow, face image, makeup, processed, face
Application number: PCT/CN2021/141467
Other languages: English (en), French (fr)
Inventor: 谢富名
Original Assignee: 展讯通信(上海)有限公司
Priority date: (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 展讯通信(上海)有限公司
Publication of WO2023273247A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Definitions

  • Embodiments of the present invention relate to the field of image processing, and in particular, to a face image processing method and device, a computer-readable storage medium, and a terminal.
  • The technical problem addressed by the embodiments of the present invention is that the effect of virtual eye shadow try-on is unnatural, which degrades the user experience.
  • An embodiment of the present invention provides a face image processing method, including: performing face recognition on a face image to be processed to obtain an eye shadow region of interest; performing pose transformation on an acquired eye shadow makeup template to align it with the eye shadow region of interest; determining an eye shadow try-on effect intensity coefficient according to face attribute information of the face image to be processed; and performing image fusion on the pose-transformed eye shadow makeup template and the face image to be processed using the eye shadow try-on effect intensity coefficient, to obtain an eye shadow try-on image.
  • Performing face recognition on the face image to be processed to obtain the eye shadow region of interest includes: performing face recognition on the face image to be processed to obtain position information of face key points; and determining the eye shadow region of interest according to the position information of the face key points.
  • Determining the eye shadow region of interest according to the position information of the face key points includes: determining the eye shadow region of interest according to the position information of the eye key points and the eyebrow key points among the face key points, where the eye shadow region of interest is located in a preset area around the eye key points and does not cross the eyebrow key points.
  • Performing pose transformation on the acquired eye shadow makeup template to align it with the eye shadow region of interest includes: calculating pose transformation information according to the standard face pose of the acquired eye shadow makeup template and the face pose of the face image to be processed; and performing pose transformation on the eye shadow makeup template according to the pose transformation information, so that the pose-transformed eye shadow makeup template is aligned with the eye shadow region of interest.
  • Calculating the pose transformation information according to the face pose of the acquired eye shadow makeup template and the face pose of the face image to be processed includes: generating eye shadow auxiliary key points at positions a set distance from the eye key points among the face key points, according to the position information of the face key points, where the eye shadow auxiliary key points are used to limit the boundary of the applied eye shadow; and calculating the pose transformation information used to align each face key point of the standard face corresponding to the eye shadow makeup template with the corresponding face key point in the face image to be processed, where the pose transformation information characterizes the offset of each face key point of the standard face from the corresponding face key point in the face image to be processed.
  • Calculating the pose transformation information used to align each face key point of the standard face corresponding to the eye shadow makeup template with the corresponding face key point in the face image to be processed includes: when the size of the standard face corresponding to the eye shadow makeup template differs from the size of the face image to be processed, performing size transformation on either the standard face image or the face image to be processed so that the two sizes match after transformation; and calculating the pose transformation information based on the transformed standard face image or face image to be processed.
  • Performing pose transformation on the eye shadow makeup template according to the pose transformation information includes: computing the pose-transformed eye shadow makeup template using a deformation interpolation algorithm, based on the pose transformation information and the eye shadow makeup template.
  • Performing image fusion on the pose-transformed eye shadow makeup template and the face image to be processed using the eye shadow try-on effect intensity coefficient includes: fusing the pose-transformed eye shadow makeup template and the face image to be processed according to their channel information in the RGB color space, weighted by the eye shadow try-on effect intensity coefficient, whose value range is [0, 1].
  • The face image processing method further includes: after obtaining the eye shadow try-on image, judging whether the channel information of the eye shadow try-on image in the RGB color space conforms to a set range; if not, discarding the eye shadow try-on image.
  • The eye shadow makeup template is determined based on a user's selection, or according to a facial recognition result.
  • Determining the eye shadow makeup template according to the facial recognition result includes: obtaining skin color information of the face in the face image to be processed according to the facial recognition result, and selecting an eye shadow makeup template matching the skin color according to that information, where the skin color information includes skin color brightness and/or skin tone.
  • Before performing face recognition on the face image to be processed to obtain the eye shadow region of interest, the method further includes: scaling the face image to be processed, and determining whether the distance between the face in the scaled image and the terminal device that captured the image satisfies a set distance; if it does, performing face recognition on the face image to be processed to obtain the eye shadow region of interest; if not, discarding the face image to be processed and continuing to acquire the next image as the face image to be processed.
  • The eye shadow makeup template is obtained as follows: obtain a drawn eye shadow image sample and a standard face image; then extract the channel information of the eye shadow makeup template in the RGB color space according to the channel information of the eye shadow image sample and of the standard face image in the RGB color space.
  • An embodiment of the present invention also provides a face image processing device, including: an eye shadow region of interest determination unit, configured to perform face recognition on the face image to be processed to obtain an eye shadow region of interest; a pose transformation calculation unit, configured to calculate pose transformation information according to the standard face pose of the acquired eye shadow makeup template and the face pose of the face image to be processed; an alignment unit, configured to perform pose transformation on the eye shadow makeup template according to the pose transformation information, so that the pose-transformed template is aligned with the eye shadow region of interest; and a fusion unit, configured to perform image fusion on the pose-transformed eye shadow makeup template and the face image to be processed using the eye shadow try-on effect intensity coefficient to obtain an eye shadow try-on image, where the eye shadow try-on effect intensity coefficient is determined according to the face attribute information of the face image to be processed.
  • An embodiment of the present invention also provides a computer-readable storage medium, which is a non-volatile or non-transitory storage medium storing a computer program that, when run by a processor, performs the steps of any one of the face image processing methods described above.
  • An embodiment of the present invention also provides a terminal, including a memory and a processor, where the memory stores a computer program that can run on the processor, and the processor, when running the computer program, performs the steps of any one of the face image processing methods described above.
  • Face recognition is performed on the face image to be processed to obtain the eye shadow region of interest, and the eye shadow makeup template is aligned to that region. According to the face attribute information of the face image to be processed, the eye shadow try-on effect intensity coefficient is determined, and that coefficient is used to fuse the pose-transformed eye shadow makeup template with the face image to be processed, yielding the eye shadow try-on image.
  • Because the intensity coefficient used in the fusion is determined from the face attribute information of each user's face image to be processed, a matching coefficient is adaptively determined for each user. This improves the fusion of the eye shadow makeup template with the face image to be processed, which in turn improves the naturalness of the eye shadow try-on image and the user experience.
  • In addition, the pose transformation information allows the eye shadow makeup template to be pose-transformed and aligned to the eye shadow region of interest, which provides a basis for improving the fusion of the template with the face image to be processed under large poses.
  • Fig. 1 is a flowchart of a face image processing method in an embodiment of the present invention;
  • Fig. 2 is a schematic diagram of the positions of face key points in an embodiment of the present invention;
  • Fig. 3 is a flowchart of another face image processing method in an embodiment of the present invention;
  • Fig. 4 is a schematic structural diagram of a face image processing device in an embodiment of the present invention.
  • In the embodiments of the present invention, face recognition is performed on the face image to be processed to obtain the eye shadow region of interest; the eye shadow makeup template is aligned to that region; the eye shadow try-on effect intensity coefficient is determined according to the face attribute information of the face image to be processed; and the pose-transformed eye shadow makeup template is fused with the face image to be processed using that coefficient, yielding the eye shadow try-on image.
  • Determining the intensity coefficient from each user's face attribute information adaptively matches the coefficient to the user, improving the fusion of the eye shadow makeup template with the face image to be processed, and thus the naturalness of the eye shadow try-on image and the user experience.
  • An embodiment of the present invention provides a face image processing method.
  • The face image processing method can be used for virtual face makeup in various scenarios, such as makeup try-on, image beautification, and video beautification application scenarios.
  • The method may be executed by a chip in the terminal, by a control chip, processing chip, or other chip usable in the terminal, or by other suitable components.
  • FIG. 1 provides a flowchart of a face image processing method in an embodiment of the present invention, which may include the following steps:
  • Step S11: perform face recognition on the face image to be processed to obtain the eye shadow region of interest (ROI).
  • The eye shadow region of interest (ROI) can be obtained in various ways. The eye shadow ROI indicates the area where eye shadow can be applied.
  • AI facial recognition may be performed on the face image to be processed, and the eye shadow region of interest determined from the recognition result.
  • Specifically, face recognition can be performed on the face image to be processed to obtain the position information of the face key points, and the eye shadow region of interest is then determined from that position information.
  • Fig. 2 provides a schematic diagram of the positions of face key points in an embodiment of the present invention; the number of face key points shown in Fig. 2 is 104 (the gray points labeled 1 to 104 in the figure).
  • Face key points can also be added in other areas, such as the forehead or hairline area, so the number of face key points is not limited to 104 and can be other values, which will not be repeated here.
  • eye keypoints can be used to constrain the outline of the eye.
  • the eye key points may include face key points numbered 67 to 74 and 76 to 83 in the figure.
  • the eyebrow key points may include face key points marked 34 to 42 and 43 to 51 in the figure.
  • the eye shadow region of interest may be determined according to the position information of the eye key points and the position information of the eyebrow key points in the human face key points.
  • the eye shadow ROI is located in a preset area around the eye key point and does not cross the eyebrow key point, that is, the eye shadow ROI is located in the area between the eyebrow and the eye and the preset area below the eye.
  • the eye shadow ROI is usually a part of the area above the upper eyelid and below the eyebrow and a preset area below the lower eyelid.
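As a rough sketch of how such an ROI could be derived from keypoints, the helper below computes a bounding box spanning the area between eyebrow and eye plus a margin below the eye. The coordinate convention (y grows downward) and the margin value are illustrative assumptions, not values from the patent.

```python
def eyeshadow_roi(eye_pts, brow_pts, below_eye_margin=0.3):
    """Bounding box between the eyebrow and the eye plus a preset
    margin below the eye, without crossing the brow keypoints.
    Points are (x, y) tuples with y growing downward."""
    xs = [p[0] for p in eye_pts + brow_pts]
    eye_ys = [p[1] for p in eye_pts]
    # Top edge sits at the lowest brow point, so the ROI never crosses the brow.
    top = max(p[1] for p in brow_pts)
    eye_h = max(eye_ys) - min(eye_ys)
    bottom = max(eye_ys) + below_eye_margin * max(eye_h, 1)
    return (min(xs), top, max(xs), bottom)
```

In a real pipeline the box would be refined into a polygon or mask following the eyelid contour, but the brow/eye constraints are the same.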
  • The acquired face image to be processed may also be checked to determine whether it meets certain requirements. For example, face recognition is performed after scaling the face image to be processed, and the distance between the largest detected face and the terminal device is calculated: if it satisfies the set distance, the requirement is met; otherwise it is not. When the face occupies only a small proportion of the overall image, the eye region is correspondingly small, so even if eye shadow makeup were applied, the effect would not be noticeable.
  • The face pose of the face image to be processed can also be detected. If the face pose angle exceeds a set angle, the pose is judged too large and the image is discarded without processing.
  • For example, when the face pose is at 90 degrees, the image shows a full side face.
  • In that case, the eye shadow makeup processing may be skipped, that is, the subsequent steps are not performed.
  • By checking the face images to be processed before step S11 is executed, the following steps S12 to S14 are performed only for face images that meet the requirements; for those that do not, steps S12 to S14 are skipped. This improves the image processing effect while also saving computing resources.
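The pre-checks described above (face proportion in the frame and pose angle) can be sketched as a simple gate; the threshold values here are illustrative assumptions, not values from the patent.

```python
def passes_precheck(face_box, image_wh, yaw_deg,
                    min_area_ratio=0.05, max_yaw_deg=30):
    """Return True when the face is large enough in the frame and the
    pose angle is within the set limit; otherwise the image is skipped."""
    face_area = (face_box[2] - face_box[0]) * (face_box[3] - face_box[1])
    ratio = face_area / (image_wh[0] * image_wh[1])
    return ratio >= min_area_ratio and abs(yaw_deg) <= max_yaw_deg
```

Images failing the gate are discarded and the next frame is taken as the face image to be processed, saving the cost of steps S12 to S14.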
  • Step S12: perform pose transformation on the acquired eye shadow makeup template to align it with the eye shadow ROI.
  • Step S12 can be realized as follows: calculate pose transformation information according to the standard face pose of the acquired eye shadow makeup template and the face pose of the face image to be processed; then perform pose transformation on the eye shadow makeup template according to that information, so that the pose-transformed template is aligned with the eye shadow region of interest.
  • pose transformation information can be calculated as follows:
  • According to the position information of the face key points, eye shadow auxiliary key points are generated at positions a set distance from the eye key points; that is, the auxiliary key points can be generated adaptively from the eye key point locations.
  • The eye shadow auxiliary key points are used to limit the boundary of the applied eye shadow.
  • The face may appear at different angles, and self-occlusion may occur under large poses; the eye shadow auxiliary key points help avoid abnormal eye shadow effects caused by excessive face angles, and effectively assist in determining the migration area of the eye shadow makeup template, improving the eye shadow makeup effect.
  • A large face pose refers to a pose whose deflection angle from the frontal face exceeds a set angle, such as 30 degrees.
  • Some of the eye shadow auxiliary key points are located below the brow and above the upper eyelid, and some are located below the lower eyelid, typically below the portion of the lower eyelid between the outer corner of the eye and the center of the lower lid.
  • The black points in Fig. 2 are the eye shadow auxiliary key points.
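One minimal way to generate such auxiliary points, assuming the "set distance" is expressed as a radial scale away from the eye centre (an assumption made here for illustration):

```python
def auxiliary_points(eye_pts, scale=1.4):
    """Push each eye keypoint away from the eye centre by `scale`,
    producing outer boundary points that cap the eye-shadow region."""
    cx = sum(x for x, _ in eye_pts) / len(eye_pts)
    cy = sum(y for _, y in eye_pts) / len(eye_pts)
    return [(cx + scale * (x - cx), cy + scale * (y - cy)) for x, y in eye_pts]
```

A production version would use different scales above and below the eye (to stay under the brow), but the adaptive generation from eye keypoints is the same idea.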
  • A preset triangulation array is used to calculate the pose transformation information for aligning each face key point of the standard face corresponding to the eye shadow makeup template with the corresponding face key point in the face image to be processed, where the pose transformation information characterizes the offset between each face key point of the standard face and the corresponding face key point in the face image to be processed.
  • The triangulation array records the relative position information of the face key points after triangulation; specifically, it is the relative position information after triangulating the face key points of the standard face corresponding to the eye shadow makeup template.
  • The eye shadow auxiliary key points can be determined on the standard face, and the face key points and auxiliary key points in the standard face image are then triangulated to obtain the triangulation array.
  • The obtained triangulation array is stored and can be shared across subsequent face images to be processed, avoiding re-triangulation for each image, which reduces computing requirements and improves processing efficiency.
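Since the pose transformation information characterizes per-keypoint offsets between the standard face and the face to be processed, it can be sketched as simply as:

```python
def pose_offsets(std_pts, target_pts):
    """Offset of each standard-face keypoint from its counterpart in the
    face image to be processed (the pose transformation information)."""
    return [(tx - sx, ty - sy) for (sx, sy), (tx, ty) in zip(std_pts, target_pts)]
```

In the full method these offsets, together with the shared triangulation array, drive a piecewise warp of the template; the triangulation itself is computed once on the standard face and reused.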
  • When the size of the standard face corresponding to the eye shadow makeup template differs from the size of the face image to be processed, either the standard face is resized so that it matches the face image to be processed, or the face image to be processed is resized so that it matches the standard face image. The pose transformation information is then calculated based on the transformed standard face image or face image to be processed.
  • upsampling or downsampling may be used to transform the size of the image.
  • For example, when the size of the standard face corresponding to the eye shadow makeup template is smaller than that of the face image to be processed, the standard face can be upsampled so that its size matches the target face image. It is understandable that other size transformations apply in other situations, and no further examples are given here.
  • the pose transformation information may be a pose transformation matrix, or a position relationship mapping (Map) graph. It can be understood that the attitude transformation information may also have other representation forms, and no examples will be given here.
  • the pose-transformed eye shadow makeup template can be calculated by using the deformation interpolation algorithm according to the pose transformation information, combined with the eye shadow makeup template.
  • bilinear interpolation or other interpolation algorithms may be used to calculate the pose-transformed eye shadow makeup template.
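A bilinear interpolation kernel of the kind the deformation step could use, sketched on a plain 2-D grid (a real implementation would operate on full image arrays and apply the pose transformation to choose the sample coordinates):

```python
def bilinear_sample(grid, x, y):
    """Bilinearly sample a 2-D grid (list of rows) at fractional (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    # Blend the four surrounding samples by their fractional distances.
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

The pose-transformed template is then built by sampling the original template at the warped coordinate of each output pixel.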
  • When performing pose transformation on the eye shadow makeup template according to the pose transformation information, the interpolation algorithm to use can be determined according to the relationship between the size of the standard face image corresponding to the template and the size of the face image to be processed. The template is then interpolated so that, after interpolation and pose transformation, its face pose matches that of the face image to be processed and its size is adapted to the size of the face image to be processed.
  • Step S13: determine the eye shadow try-on effect intensity coefficient according to the face attribute information of the face image to be processed.
  • face attribute detection may be performed on the face image to be processed to obtain face attribute information of the face image to be processed.
  • the face attribute information may include gender attribute information.
  • the eye shadow test makeup effect intensity coefficient is used to control the strength of the eye shadow test makeup effect.
  • The value range of the eye shadow try-on effect intensity coefficient is [0, 1]. The larger the coefficient, the stronger the try-on effect and the more obvious the eye shadow makeup after image fusion; correspondingly, the smaller the coefficient, the less obvious the eye shadow makeup.
  • Depending on the detected face attributes, a smaller or even zero eye shadow try-on effect intensity coefficient may be selected, or a larger coefficient may be selected, to improve the image fusion effect.
  • The face attribute information may also include skin color information.
  • the matching eye shadow trial makeup effect intensity coefficients can be adaptively determined for different groups of people according to the skin color information.
  • The skin color information may include one or both of skin color brightness and skin tone. For example, the brighter the skin color, the fairer the skin; in that case, a relatively small intensity coefficient is enough to present a clearly visible eye shadow effect. The lower the skin brightness, the darker the skin; in that case, a relatively large intensity coefficient is needed to show an obvious eye shadow effect.
  • Skin color information such as brightness and tone affects the eye shadow makeup effect; therefore, adapting the intensity coefficient to different groups of people based on skin color information yields a more natural eye shadow effect and a better image fusion result.
  • factors such as gender attribute information and skin color information in the face attribute information can also be comprehensively considered to determine the eye shadow test makeup effect intensity coefficient.
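As a sketch of the skin-brightness adaptation described above (the mapping shape, output range, and 0–255 brightness scale are illustrative assumptions, not values from the patent):

```python
def intensity_from_brightness(brightness, lo=0.3, hi=0.8):
    """Map skin brightness in [0, 255] to a try-on intensity coefficient:
    darker skin gets a larger coefficient so the eye shadow stays visible."""
    t = max(0.0, min(1.0, brightness / 255.0))
    return hi - (hi - lo) * t
```

Gender or other attribute signals could shift `lo` and `hi` per group before this mapping is applied.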
  • the user can select an eye shadow test effect intensity coefficient, and when it is detected that the user selects an eye shadow test effect intensity coefficient, the eye shadow test effect intensity coefficient configured by the user is used for subsequent image fusion.
  • the eye shadow test makeup effect intensity coefficient adjustment button or intensity bar can be configured on the display interface, and the user can select the eye shadow test makeup effect intensity coefficient through the corresponding button or drag the intensity bar to meet the personalized needs of different users.
  • The eye shadow try-on intensity coefficient selected by the user and the coefficient determined from the face attribute detection result may also be combined to determine the final coefficient used in image fusion: corresponding weights are assigned to the two coefficients, a weighted sum is computed, and the result is used as the final eye shadow try-on effect intensity coefficient in image fusion.
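The weighted combination just described can be sketched as follows; the 0.6 default weight is an illustrative assumption:

```python
def final_intensity(user_k, auto_k, user_weight=0.6):
    """Weighted sum of the user-selected coefficient and the coefficient
    detected from face attributes, clamped back to [0, 1]."""
    k = user_weight * user_k + (1 - user_weight) * auto_k
    return max(0.0, min(1.0, k))
```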
  • Step S14: perform image fusion on the pose-transformed eye shadow makeup template and the face image to be processed using the eye shadow try-on effect intensity coefficient, to obtain an eye shadow try-on image.
  • Step S14 can be implemented as follows: using the eye shadow try-on effect intensity coefficient, fuse the pose-transformed eye shadow makeup template and the face image to be processed according to their channel information in the RGB color space.
  • Image fusion can be performed in the RGB color space, which can not only ensure the eye shadow tone and saturation effect of the eye shadow makeup template, but also retain the texture information of the brightness channel, and reduce the complexity of the algorithm. Among them, R is red, G is green, and B is blue. It can be understood that the pose-transformed eyeshadow makeup template and the human face image to be processed can also be fused in other types of color spaces, and examples are not given here.
  • Image fusion can be performed per channel using the following formulas (1) to (3):
  • R_dst = R_o × (k × (R_e − 255) + 255) / 255; (1)
  • G_dst = G_o × (k × (G_e − 255) + 255) / 255; (2)
  • B_dst = B_o × (k × (B_e − 255) + 255) / 255; (3)
  • where: k is the eye shadow try-on effect intensity coefficient, k ∈ [0, 1];
  • R_o, G_o and B_o represent the channel information of the face image to be processed in the RGB color space: R_o is the red channel, G_o the green channel, and B_o the blue channel of the face image to be processed;
  • R_e, G_e and B_e represent the channel information of the pose-transformed eye shadow makeup template in the RGB color space: R_e is the red channel, G_e the green channel, and B_e the blue channel of the pose-transformed template;
  • R_dst, G_dst and B_dst represent the channel information of the eye shadow try-on image in the RGB color space.
  • the eye shadow test image can be verified to ensure that the obtained eye shadow test image is correct.
  • the eye shadow test image can be verified by judging whether the RGB color space domain channel information of the eye shadow test image conforms to the set range; if not, the eye shadow test image is discarded.
  • the set range can be [0,255].
  • the following formulas (4) to (6) can be used to judge whether the RGB color space domain channel information of the eye shadow test image conforms to the set range.
  • R_dst = CLIP(R_o × (k × (R_e − 255) + 255) / 255, 0, 255); (4)
  • G_dst = CLIP(G_o × (k × (G_e − 255) + 255) / 255, 0, 255); (5)
  • B_dst = CLIP(B_o × (k × (B_e − 255) + 255) / 255, 0, 255); (6)
  • where CLIP() is used to limit the value range: CLIP(x, 0, 255) limits the value of x to [0, 255], that is, it limits R_dst, G_dst and B_dst to [0, 255].
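The clipped fusion formula translates directly into a small per-channel helper (the same expression applies to R, G and B):

```python
def fuse_channel(o, e, k):
    """Fuse one RGB channel value: o from the face image, e from the
    pose-aligned eye shadow template, k the intensity in [0, 1],
    with the result clipped to [0, 255] as CLIP() does."""
    v = o * (k * (e - 255) + 255) / 255
    return max(0, min(255, v))
```

Note that k = 0 returns the original pixel unchanged, and template values darker than 255 pull the channel down, so a larger k gives a more visible eye shadow, consistent with the description of the intensity coefficient above.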
  • The eye shadow try-on effect intensity coefficient is used to fuse the pose-transformed eye shadow makeup template with the face image to be processed, yielding the eye shadow try-on image.
  • Because the intensity coefficient is determined from the face attribute information of each user's face image to be processed, a matching coefficient is adaptively determined for each user, which improves the fusion of the eye shadow makeup template with the face image to be processed and, in turn, the naturalness of the eye shadow try-on image.
  • the selection mode of the eyeshadow makeup template corresponding to the application scenario can be configured.
  • the eyeshadow makeup template in the above embodiments can be determined based on the user's selection. Specifically, an option of an eyeshadow makeup template can be configured on the user operation interface, and the user can select a desired eyeshadow makeup template according to actual needs.
  • The eye shadow makeup template in the above embodiments can also be determined according to the face attribute information. Specifically, face skin color information in the face image to be processed is obtained according to the face attribute information, and an eye shadow makeup template suited to that skin color is selected. The skin color information includes skin lightness and/or skin color.
  • the mapping relationship between the skin color information and the eye shadow makeup template can be preset.
  • According to the requirements of different scenarios, eye shadow makeup templates can be selected in different ways.
  • the user may select an eye shadow makeup template.
  • For users who do not know how to choose an eye shadow that suits them, an eye shadow makeup template can be selected for the user based on the facial recognition result. Determining the eye shadow makeup template by means of facial recognition can comprehensively consider the other makeup on the user's face as well as the influence of the user's skin color, so that the selected eye shadow makeup template fits the user better.
  • the matching relationship between eyeshadow makeup templates and facial recognition results can be determined based on big data research.
  • the user may choose to determine the eyeshadow makeup template, or the eyeshadow makeup template may be determined based on the user's facial recognition results.
  • the eyeshadow makeup templates in the above embodiments may be stored in an eyeshadow database.
  • Several eye shadow makeup templates of different types may be stored in the eye shadow database.
  • The eye shadow makeup template can be obtained in the following manner: obtain a drawn eye shadow image sample and a standard face image; using the channel information of the eye shadow image sample in the RGB color space domain and the channel information of the standard face image in the RGB color space domain, extract the channel information of the eye shadow makeup template in the RGB color space domain.
  • the R channel information of the eye shadow image sample in the RGB color space domain and the R channel information of the standard face image in the RGB color space domain are used to extract the R channel information of the eye shadow makeup template in the RGB color space domain .
  • The G channel information of the eye shadow image sample in the RGB color space domain and the G channel information of the standard face image in the RGB color space domain are used to extract the G channel information of the eye shadow makeup template in the RGB color space domain.
  • The B channel information of the eye shadow image sample in the RGB color space domain and the B channel information of the standard face image in the RGB color space domain are used to extract the B channel information of the eye shadow makeup template in the RGB color space domain.
  • The drawn eye shadow image sample can be created on the basis of the standard face image using drawing software. If there is a new eye shadow makeup or a new eye shadow, a new eye shadow image sample need only be drawn on the basis of the standard face image and the corresponding eye shadow makeup template extracted. Enriching the eye shadow makeup templates in the eye shadow database in this way reduces the acquisition cost of eye shadow image samples.
  • the eye shadow makeup template in the eye shadow region of interest can be stored, so that while storing the eye shadow makeup template, the memory occupied by data can be reduced.
  • The following formulas (7), (8) and (9) can be used to extract the channel information of the eye shadow makeup template in the RGB color space domain:
  • R d = 255·R s /R a ;      (7)
  • G d = 255·G s /G a ;      (8)
  • B d = 255·B s /B a ;      (9)
  • R a , G a and B a represent the channel information of the standard face image in the RGB color space domain, where R a is the red channel information, G a is the green channel information, and B a is the blue channel information of the standard face image in the RGB color space domain;
  • R s , G s and B s represent the channel information of an image sample with eye shadow drawn on it in the RGB color space domain, where R s is the red channel information, G s is the green channel information, and B s is the blue channel information of that image sample in the RGB color space domain;
  • R d , G d and B d represent the channel information of the eye shadow makeup template currently being produced in the RGB color space domain, where R d is the red channel information, G d is the green channel information, and B d is the blue channel information of the eye shadow makeup template in the RGB color space domain.
  • the eye shadow makeup template can also be verified by judging whether the RGB color space domain channel information of the eye shadow makeup template conforms to the set range.
  • the following formulas (10) to (12) can be used to judge whether the RGB color space domain channel information of the eye shadow makeup template conforms to the set scope.
  • the set range can be [0,255].
  • R d = CLIP(255·R s /R a , 0, 255);      (10)
  • G d = CLIP(255·G s /G a , 0, 255);      (11)
  • B d = CLIP(255·B s /B a , 0, 255);      (12)
  • CLIP() is used to limit the value range
  • CLIP(x,0,255) means to limit the value range of x to [0,255], that is, to limit the value range of R d , G d and B d to [0,255] .
  • The value 255 appearing in the above formulas and embodiments corresponds to an image color depth of 8 bits, for which the color depth range is [0, 255] and 255 is the maximum color depth. It is understandable that when other bit depths are used, the color depth range differs and the maximum color depth differs accordingly; it suffices to replace 255 with the maximum color depth under the corresponding bit depth.
  • Referring to FIG. 3, a flow chart of another face image processing method provided by an embodiment of the present invention is given, which may specifically include the following steps:
  • Step S301 performing face recognition on the face image to be processed.
  • Step S302 judging whether the distance between the largest face and the camera meets the requirements.
  • Step S303 no eye shadow makeup trial, and end the process.
  • Step S304 performing face key point detection on the face image to be processed.
  • Step S305 acquiring the eye shadow ROI.
  • the eye shadow ROI can be determined according to the face key point detection result.
  • Step S306 generating eye shadow auxiliary key points.
  • Step S307 transforming the eyeshadow makeup template to be consistent with the face pose of the face image to be processed.
  • an eye shadow makeup template can be created through step S310. And through step S311, an eye shadow database is generated.
  • the eye shadow makeup templates stored in the eye shadow database can be used by users.
  • the eye shadow makeup template in step S307 comes from the eye shadow database.
  • Step S308 controlling the intensity of the eye shadow effect during image fusion.
  • step S312 face attribute analysis is performed on the face image to be processed.
  • step S313 the eye shadow effect intensity coefficient is determined according to the face attribute analysis result.
  • step S308 according to the eye shadow effect strength coefficient, the eye shadow effect strength is controlled when the pose-converted eye shadow makeup template and the face image to be processed are fused.
  • Step S309 outputting an image of an eye shadow makeup test.
  • the image of the eye shadow test makeup obtained after the image fusion can be displayed on the display terminal, so that the user can intuitively know the effect of the eye shadow test makeup.
  • the eye shadow test makeup effect intensity coefficient is determined, and an adaptive eye shadow fusion algorithm is realized, which can realize a natural eye shadow effect, and the effect intensity is adjustable.
  • By drawing an eye shadow database on the basis of standard face images, rich eye shadow makeup can be transferred to any given face image; the eye shadow makeup templates are highly extensible, so later it suffices to enrich the eye shadow database to transfer its eye shadow makeup to any face image.
  • the eye shadow makeup template can only save the eye shadow ROI area, which can effectively reduce the data memory usage.
  • the position of the face and eyes can be accurately positioned to improve the accuracy of determining the area of interest in the eye shadow, and provide support for the subsequent improvement of image fusion accuracy.
  • the interpolation mapping method is used to transfer the eye shadow makeup template to realize the pose transformation of the eye shadow makeup template, which can improve the effect of large pose face eye shadow.
  • the fusion of the eyeshadow makeup template converted in the RGB color space and the target face image can greatly reduce the complexity of the algorithm, so that it can be used on the terminal.
  • Face image processing device 40 may include:
  • An eye shadow region of interest determination unit 41 is used to perform facial recognition on the face image to be processed to obtain an eye shadow region of interest;
  • A pose transformation calculation unit 42, used to calculate pose transformation information according to the acquired standard face pose of the eye shadow makeup template and the face pose of the face image to be processed;
  • An alignment unit 43 configured to perform pose transformation on the eye shadow makeup template according to the pose transformation information, so that the pose transformed eye shadow makeup template is aligned to the eye shadow region of interest;
  • the fusion unit 44 is used to perform image fusion on the eye shadow makeup template after the posture transformation and the face image to be processed by using the eye shadow test makeup effect intensity coefficient to obtain the eye shadow test makeup image, and the eye shadow try makeup effect intensity coefficient is based on The face attribute information of the face image to be processed is determined.
  • the specific working principle and workflow of the face image processing device 40 can refer to the description in the face image processing method provided in any of the above-mentioned embodiments of the present invention, and will not be repeated here.
  • The face image processing device 40 may correspond to a chip with a face image processing function in a terminal; or to a chip with a data processing function; or to a chip module of a terminal that includes a chip with a face image processing function; or to a chip module that has a chip with a data processing function; or to a terminal.
  • each module/unit contained in the product may be a software module/unit, or a hardware module/unit, or may be partly a software module/unit, partly is a hardware module/unit.
  • For each device or product applied to or integrated in a chip, the modules/units it contains may all be implemented in hardware such as circuits; or at least some modules/units may be implemented as a software program running on a processor integrated inside the chip, with the remaining modules/units (if any) implemented in hardware such as circuits. For each device or product applied to or integrated in a chip module, the modules/units it contains may all be implemented in hardware such as circuits, and different modules/units may be located in the same component (such as a chip or circuit module) or in different components of the chip module; or at least some modules/units may be implemented as a software program running on a processor integrated inside the chip module, with the remaining modules/units (if any) implemented in hardware such as circuits. For each device or product applied to or integrated in a terminal, the modules/units it contains may all be implemented in hardware such as circuits, and different modules/units may be located in the same component (such as a chip or circuit module) or in different components within the terminal; or at least some modules/units may be implemented as a software program running on a processor integrated inside the terminal, with the remaining modules/units (if any) implemented in hardware such as circuits.
  • An embodiment of the present invention also provides a computer-readable storage medium, which is a non-volatile storage medium or a non-transitory storage medium and stores a computer program; when the computer program is run by a processor, the steps in the face image processing method provided by any of the above embodiments are executed.
  • An embodiment of the present invention also provides a terminal, including a memory and a processor, the memory stores a computer program that can run on the processor, and the processor executes any of the above-mentioned embodiments when running the computer program Steps in the provided face image processing method.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A face image processing method and device, a computer-readable storage medium, and a terminal. The face image processing method includes: performing facial recognition on a face image to be processed to obtain an eye shadow region of interest (S11); performing pose transformation on an acquired eye shadow makeup template to align it to the eye shadow region of interest (S12); determining an eye shadow test makeup effect intensity coefficient according to face attribute information of the face image to be processed (S13); and using the eye shadow test makeup effect intensity coefficient to perform image fusion on the pose-transformed eye shadow makeup template and the face image to be processed, obtaining an eye shadow test makeup image (S14). The above method improves the image fusion effect between the eye shadow makeup template and the face image to be processed, and in turn improves the naturalness of the eye shadow test makeup image.

Description

人脸图像处理方法及装置、计算机可读存储介质、终端
本申请要求2021年6月28日提交中国专利局、申请号为202110722304.9、发明名称为“人脸图像处理方法及装置、计算机可读存储介质、终端”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明实施例涉及图像处理领域,尤其涉及一种人脸图像处理方法及装置、计算机可读存储介质、终端。
背景技术
随着美妆行业规模的稳步增长与人工智能(Artificial Intelligence,AI)技术的普及，虚拟人脸美妆正引领美妆行业的变革，美妆消费潜力巨大。在人脸美妆过程中，眼影是眼妆妆容中最重要的一个环节，可以使眼睛更具神采且显得美丽动人。为了降低化妆品的营销成本，虚拟试妆应运而生。然而，目前的虚拟眼影试妆的效果不自然，从而影响用户体验。
发明内容
本发明实施例解决的技术问题是虚拟眼影试妆效果不自然,从而影响用户体验。
为解决上述技术问题,本发明实施例提供一种人脸图像处理方法,包括:对待处理人脸图像进行面部识别,得到眼影感兴趣区域;将获取的眼影妆容模板进行姿态变换,以对齐至所述眼影感兴趣区域;根据所述待处理人脸图像的人脸属性信息,确定眼影试妆效果强度系数;采用所述眼影试妆效果强度系数对所述姿态变换后的眼影妆容模板及所述待处理人脸图像进行图像融合,得到眼影试妆图像。
可选的,所述对待处理人脸图像进行面部识别,得到眼影感兴趣区域,包括:对所述待处理人脸图像进行面部识别,得到人脸关键点的位置信息;根据所述人脸关键点的位置信息确定所述眼影感兴趣区域。
可选的,所述根据人脸关键点的位置信息确定所述眼影感兴趣区域,包括:根据所述人脸关键点中的眼睛关键点的位置信息以及眉毛关键点的位置信息,确定所述眼影感兴趣区域,所述眼影感兴趣区域位于所述眼睛关键点周围的预设区域,且不越过所述眉毛关键点。
可选的,所述将获取的眼影妆容模板进行姿态变换,以对齐至所述眼影感兴趣区域,包括:根据获取的眼影妆容模板的标准人脸姿态以及所述待处理人脸图像的人脸姿态,计算姿态变换信息;根据所述姿态变换信息对所述眼影妆容模板进行姿态变换,使得姿态变换后的眼影妆容模板对齐至所述眼影感兴趣区域。
可选的,所述根据获取的眼影妆容模板的人脸姿态以及所述待处理人脸图像的人脸姿态,计算姿态变换信息,包括:根据人脸关键点的位置信息,在距离所述人脸关键点中的眼部关键点设定距离的位置处,生成眼影辅助关键点,所述眼影辅助关键点用于限制眼影上妆的边界;采用预设的三角化数组,计算用于将所述眼影妆容模板对应的标准人脸的各人脸关键点与所述待处理人脸图像中对应的人脸关键点对齐的姿态变换信息,所述姿态变换信息用于表征所述标准人脸的各人脸关键点与所述待处理人脸图像中对应的人脸关键点的偏移量。
可选的,所述计算用于将所述眼影妆容模板对应的各标准人脸的人脸关键点与所述待处理人脸图像中对应的人脸关键点之间对齐的姿态变换信息,包括:当所述眼影妆容模板对应的标准人脸的尺寸与所述待处理人脸图像的尺寸不同时,对所述标准人脸或者所述待处理人脸图像中的任一个进行尺寸变换,使得变换后的尺寸相同;基于变换后的标准人脸图像或者待处理人脸图像,计算所述姿态变换信息。
可选的,所述根据所述姿态变换信息对所述眼影妆容模板进行姿 态变换,包括:根据所述姿态变换信息,结合所述眼影妆容模板,采用变形插值算法计算得到所述姿态变换后的眼影妆容模板。
可选的,所述采用所述眼影试妆效果强度系数对所述姿态变换后的眼影妆容模板及所述待处理人脸图像进行图像融合,包括:采用所述眼影试妆效果强度系数,根据所述姿态变换后的眼影妆容模板以及所述待处理人脸图像在RGB颜色空间域的通道信息,对所述姿态变换后的眼影妆容模板及所述待处理人脸图像进行图像融合,其中,所述眼影试妆效果强度系数的取值范围为[0,1]。
可选的,所述人脸图像处理方法还包括:得到所述眼影试妆图像之后,判断所述眼影试妆图像的RGB颜色空间域的通道信息是否符合设定的范围;如不符合,则丢弃所述眼影试妆图像。
可选的,所述眼影妆容模板基于用户的选择确定,或者所述眼影妆容模板根据面部识别结果确定。
可选的,所述眼影妆容模板根据所述面部识别结果确定,包括:根据所述面部识别结果,得到所述待处理人脸图像中的人脸肤色信息,根据所述肤色信息,选择与所述肤色相适配的眼影妆容模板,所述肤色信息包括:肤色亮度和/或肤色颜色。
可选的,在对待处理人脸图像进行面部识别,得到眼影感兴趣区域之前,还包括:对所述待处理人脸图像进行尺寸缩放,计算缩放后的待处理人脸图像距离采集所述待处理人脸图像的终端设备的距离是否满足设定距离;若满足,则对所述待处理人脸图像进行人脸识别,得到所述眼影感兴趣区域;若不满足,则丢弃所述待处理人脸图像,继续获取下一图像作为所述待处理人脸图像。
可选的,所述眼影妆容模板采用如下方式得到:获取绘制的眼影图像样本以及标准人脸图像;根据所述眼影图像样本在RGB颜色空间域的通道信息与所述标准人脸图像在所述RGB颜色空间域的通道信息,提取得到所述眼影妆容模板在RGB颜色空间域的通道信息。
本发明实施例还提供一种人脸图像处理装置,包括:眼影感兴趣区域确定单元,用于对待处理人脸图像进行面部识别,得到眼影感兴趣区域;姿态变换单元计算单元,用于根据获取的眼影妆容模板的标准人脸姿态以及所述待处理人脸图像的人脸姿态,计算姿态变换信息;对齐单元,用于根据所述姿态变换信息对所述眼影妆容模板进行姿态变换,使得姿态变换后的眼影妆容模板对齐至所述眼影感兴趣区域;融合单元,用于采用眼影试妆效果强度系数对所述姿态变换后的眼影妆容模板及所述待处理人脸图像进行图像融合,得到眼影试妆图像,所述眼影试妆效果强度系数根据所述待处理人脸图像的人脸属性信息确定。
本发明实施例还提供一种计算机可读存储介质,所述计算机可读存储介质为非易失性存储介质或非瞬态存储介质,其上存储有计算机程序,所述计算机程序被处理器运行时执上述任一种人脸图像处理方法的步骤。
本发明实施例还提供一种终端,包括存储器和处理器,所述存储器上存储有能够在所述处理器上运行的计算机程序,所述处理器运行所述计算机程序时执行上述任一种人脸图像处理方法的步骤。
与现有技术相比,本发明实施例的技术方案具有以下有益效果:
对待处理人脸图像进行面部识别,得到眼影感兴趣区域,将眼影妆容模板对齐至眼影感兴趣区域,根据待处理人脸图像的人脸属性信息,确定眼影试妆效果强度,采用眼影试妆效果强度系数对姿态变换后的眼影妆容模板及待处理人脸图像进行图像融合,得到眼影试妆图像。由于对姿态变换后的眼影妆容模板及待处理人脸图像进行图像融合时采用了眼影试妆效果强度系数,而眼影试妆效果强度是根据待处理人脸图像的人脸属性信息确定的,通过根据每个用户的待处理人脸图像确定的人脸属性信息为每个用户自适应地确定与之相适配的眼影试妆效果强度系数,从而可以提高眼影妆容模板与待处理人脸图像的图像融合效果,进而可以提高眼影试妆图像的自然性,提高用户体 验。
进一步,通过对眼影妆容模板的标准人脸姿态与待处理人脸图像的人脸姿态,计算姿态变换信息,从而通过姿态变换信息对眼影妆容模板进行姿态变换,可以将姿态变换后的眼影妆容模板对齐至眼影感兴趣区域,这为提高大姿态下的眼影妆容模板与待处理人脸图像的图像融合效果提供基础。
附图说明
图1是本发明实施例中的一种人脸图像处理方法的流程图;
图2是本发明实施例中的一种人脸关键点的位置示意图;
图3是本发明实施例中的另一种人脸图像处理方法的流程图;
图4是本发明实施例中的一种人脸图像处理装置的结构示意图。
具体实施方式
如上所述,在一些场景中需要使用虚拟人脸美妆,然而,现有的虚拟人脸美妆效果不自然,导致人脸美妆效果不够理想。
为了解决上述问题,在本发明实施例中,对待处理人脸图像进行面部识别,得到眼影感兴趣区域,将眼影妆容模板对齐至眼影感兴趣区域,根据待处理人脸图像的人脸属性信息,确定眼影试妆效果强度,采用眼影试妆效果强度系数对姿态变换后的眼影妆容模板及待处理人脸图像进行图像融合,得到眼影试妆图像。由于对姿态变换后的眼影妆容模板及待处理人脸图像进行图像融合时采用了眼影试妆效果强度系数,而眼影试妆效果强度是根据待处理人脸图像的人脸属性信息确定的,通过根据每个用户的待处理人脸图像确定的人脸属性信息为每个用户自适应地确定与之相适配的眼影试妆效果强度系数,从而可以提高眼影妆容模板与待处理人脸图像的图像融合效果,进而可以提高眼影试妆图像的自然性,并可以提高用户体验。
为使本发明实施例的上述目的、特征和有益效果能够更为明显易 懂,下面结合附图对本发明的具体实施例做详细的说明。
本发明实施例提供一种人脸图像处理方法,人脸图像处理方法可以用于多种场景下的虚拟人脸美妆,如试妆应用场景,如图像美颜应用场景,如视频美颜应用场景等。人脸图像处理方法的执行主体可以为终端中的芯片,也可以为能够用于终端的控制芯片、处理芯片等芯片或者其他各种恰当的元器件等。
参照图1,给出了本发明实施例中的一种人脸图像处理方法的流程图,具体可以包括如下步骤:
步骤S11,对待处理人脸图像进行面部识别,得到眼影感兴趣区域。
在具体实施中,可以通过多种方式得到眼影感兴趣区域(region of interest,ROI)。眼影感兴趣区域用于指示可能进行眼影上妆的区域。
在本发明一非限制性实施例中,对待处理人脸图像进行人工智能(Artificial Intelligence,AI)面部识别,根据AI面部识别结果,可以确定眼影感兴趣区域。
在本发明另一非限制性实施例中,可以对所述待处理人脸图像进行面部识别,得到人脸关键点的位置信息;根据所述人脸关键点的位置信息确定所述眼影感兴趣区域。
进一步地,为了提高眼部感兴趣区域定位的精度,可以通过提高人脸对齐的精度,采用高精度人脸对齐技术,提高所得到的人脸关键点的位置的精度。通过提高人脸关键点的位置的精度,来提高眼影感兴趣区域的确定的精度。
参照图2,给出了本发明实施例中的一种人脸关键点的位置示意图,图2中示意的人脸关键点的数目104个(也即图中标号为1至104的灰色点)。在实际应用中,根据对人脸区域所需的特征信息不同,还可以在其他区域增加其他人脸关键点,如在额头区域或者发际线区域等增设人脸关键点,从而人脸关键点的数目不限于此,还可以 为其他数目,此处不再赘述。
在具体实施中,眼睛关键点可以用于限制眼睛的轮廓。如图2所示,眼部关键点可以包括图中标号为67至74,以及76至83的人脸关键点。其中眉毛关键点可以包括图中标号为34至42,以及43至51的人脸关键点。
具体而言，可以根据所述人脸关键点中的眼睛关键点的位置信息以及眉毛关键点的位置信息，确定所述眼影感兴趣区域。所述眼影感兴趣区域位于所述眼睛关键点周围的预设区域，且不越过所述眉毛关键点，也即眼影感兴趣区域位于眉毛与眼睛之间的区域以及眼睛下方的预设区域。例如，眼影感兴趣区域通常是位于上眼皮以上且位于眉毛以下的部分区域以及部分下眼皮以下的预设区域。
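上述根据眼睛关键点与眉毛关键点确定眼影感兴趣区域的规则，可用如下Python草图示意（其中下边缘余量 lower_margin 与矩形区域形状均为示例性假设，并非取自专利原文）：

```python
def eye_shadow_roi(eye_points, brow_points, lower_margin=10):
    """Sketch of determining the eye shadow region of interest (ROI) from
    face key points: the ROI surrounds the eye key points but does not
    cross the eyebrow key points. Points are (x, y) tuples with y
    increasing downward; `lower_margin` is an illustrative parameter."""
    xs = [p[0] for p in eye_points + brow_points]
    brow_bottom = max(p[1] for p in brow_points)  # lowest eyebrow point
    top = brow_bottom                             # do not cross the brows
    bottom = max(p[1] for p in eye_points) + lower_margin
    left, right = min(xs), max(xs)
    return left, top, right, bottom
```

该草图返回一个轴对齐矩形；实际实现中眼影感兴趣区域可为任意多边形。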
进一步,在步骤S11执行之前,还可以对获取到的待处理人脸图像进行检验,判断待处理人脸图像是否满足一定的要求。例如,对待处理人脸图像进行尺度缩放后进行面部识别,计算放大后的最大人脸距离终端设备的距离。若是满足设定的距离,则判定满足要求;若是不满足设定的距离,则判定不满足要求。若待处理人脸图像中的人脸区域在整体人脸图像的占比较小时,即使对待处理人脸图像进行眼影美妆处理,也会出现因眼部区域占比较小,眼影美妆效果不够明显,而通过对待处理人脸图像进行缩放,当放大后的最大人脸距离终端设备的距离不满足设定距离时,则出现待处理人脸图像中的人脸占比较小,可以不做眼影美妆处理,也即不执行后续的步骤。
进一步,在步骤S11之前,还可以对待处理人脸图像的人脸姿态进行检测,若待处理人脸图像的人脸姿态角度超出设定角度,则判定人脸姿态角度过大,舍弃不做处理。例如,对于人脸姿态为90度时,也即呈侧脸,此时,即使处理进行眼影美妆,所得到的眼妆美颜效果不够明显,可以不做眼影美妆处理,也即不执行后续的步骤。又如,对于人脸姿态为背影,无法识别到人脸的情景下,也可以不做眼影美妆处理,也即不执行后续的步骤。
通过在执行步骤S11之前,对待处理人脸图像进行检验,对于满足要求的待处理人脸图像,可以执行后续的步骤S12至步骤S14。对于不满足条件的待处理人脸图像则不执行后续的步骤S12至步骤S14。可以在提高图像处理效果的同时,还可以节约算力资源。
步骤S12,将获取的眼影妆容模板进行姿态变换,以对齐至所述眼影感兴趣区域。
在具体实施中,步骤S12可以通过如下方式实现:根据获取的眼影妆容模板的标准人脸姿态以及所述待处理人脸图像的人脸姿态,计算姿态变换信息;根据所述姿态变换信息对所述眼影妆容模板进行姿态变换,使得姿态变换后的眼影妆容模板对齐至所述眼影感兴趣区域。
进一步,可以通过如下方式计算姿态变换信息:
根据人脸关键点的位置信息,在距离所述人脸关键点中的眼部关键点设定距离的位置处,生成眼影辅助关键点。也即,可以实现根据眼部关键点的位置信息,自适应地的在各眼部关键点的相距设定距离的位置处,生成眼影辅助关键点。眼影辅助关键点用于限制眼影上妆的边界。尤其是在视频中,人脸可能出现不同角度姿态,在人脸大姿态时可能存在人脸自遮挡,而眼影辅助关键点可以避免眼影因人脸角度过大而效果异常的问题,通过眼影辅助关键点可以有效地协助确定眼影妆容模板的迁移区域,以提高眼影上妆效果。人脸大姿态指人脸角度相比正脸时偏转角度超出设定角度时的姿态,如超出30度。
在一些非限制性实施例中,部分眼影辅助关键点位于眉毛下方且位于上眼皮上方,以及部分眼影辅助关键点位于下眼皮下方,具体通常位于外眼角与下眼皮中心之间的部分下眼皮的下方。如图2中的黑色点为眼影辅助关键点。
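作为示意，按设定距离在眼部关键点附近生成眼影辅助关键点的一种做法可草绘如下（沿眼部中心向外偏移的规则仅为示例性假设，专利未限定具体生成方式）：

```python
import math

def auxiliary_points(eye_points, distance):
    """Sketch of generating eye shadow auxiliary key points at a set
    distance from each eye key point: each point is pushed outward from
    the centroid of the eye key points. The outward-offset rule is an
    illustrative assumption."""
    cx = sum(p[0] for p in eye_points) / len(eye_points)
    cy = sum(p[1] for p in eye_points) / len(eye_points)
    aux = []
    for x, y in eye_points:
        norm = math.hypot(x - cx, y - cy) or 1.0  # avoid division by zero
        aux.append((x + distance * (x - cx) / norm,
                    y + distance * (y - cy) / norm))
    return aux
```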
在一些非限制性实施例中,采用预设的三角化数组,计算将所述眼影妆容模板对应的标准人脸的各人脸关键点与所述待处理人脸图 像中对应的人脸关键点对齐时的姿态变换信息,所述姿态变换信息用于表征所述标准人脸的各人脸关键点与所述待处理人脸图像中对应的人脸关键点的偏移量。
在具体实施中,三角化数组中记录有各人脸关键点三角化之后的相对位置信息。三角化数组为根据眼影妆容模板对应的标准人脸的人脸关键点三角化之后的相对位置信息。
进一步地,还可以根据标准人脸的人脸关键点的位置信息,在标准人脸确定眼影辅助关键点,从而对标准人脸图像中的人脸关键点以及眼影辅助关键点进行三角化,得到三角化数组。将得到的三角化数组存储,后续可以共享给后续的待处理人脸图像处理时使用,而无需再次对待处理人脸图像进行三角化,可以减小算力需求,还可以提高图像处理效率。
进一步,为进一步减小三角化数组的数据大小,以及提高算法性能,可以在保留眼影感兴趣区域的三角化数组时并减小其他非眼影感兴趣区域的三角化数组的数量。
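上文的三角化数组只需针对标准人脸计算一次并复用。专利未指明具体三角化算法；下面以凸多边形点环的扇形三角化为例，示意可复用的三角形索引数组：

```python
def fan_triangulation(n_points):
    """Build a simple fan triangulation index array for a convex ring of
    n_points vertices: one triangle (0, i, i+1) per interior edge.
    This stands in for the precomputed triangulation of the standard
    face's key points; computed once, the index array can be shared for
    every face image to be processed instead of re-triangulating."""
    return [(0, i, i + 1) for i in range(1, n_points - 1)]
```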
在一些非限制性实施例中,在计算姿态变换信息时,可能存在所述眼影妆容模板对应的标准人脸的尺寸与所述待处理人脸图像的尺寸不同的情况,此时,可以对眼影妆容模板对应的标准人脸的尺寸进行变换,使得变换后的标准人脸与待处理人脸图像的尺寸相同。或者对待处理人脸图像的尺寸进行变换,使得变换后的待处理人脸图像的尺寸与标准人脸图像的尺寸相同。基于变换后的标准人脸图像或者待处理人脸图像,计算所述姿态变换信息。
在一些非限制性实施例中,可以采用上采样或者下采样的方式变换图像的尺寸的大小。
例如,眼影妆容模板对应的标准人脸的尺寸小于所述待处理人脸图像的尺寸时,可以对眼影妆容模板的标准人脸进行上采样,以使得上采样后的眼影妆容模板的标准人脸的尺寸和目标人脸图像的尺寸 相同。可以理解的是,还有其他情形下的一些尺寸变换方式,此处不再一一举例。
在一些实施例中,姿态变换信息可以为姿态变换矩阵,也可以为位置关系映射(Map)图。可以理解的是,姿态变换信息还可以有其他的表现形式,此处不再一一举例。
在具体实施中,可以根据姿态变换信息,结合眼影妆容模板,采用变形插值算法计算得到姿态变换后的眼影妆容模板。
在一些实施例中,可以采用双线性插值或者其他插值算法计算姿态变换后的眼影妆容模板。
具体而言，在根据姿态变换信息对眼影妆容模板进行姿态变换时，可以根据眼影妆容模板对应的标准人脸图像的尺寸与待处理人脸图像的尺寸之间的关系，确定对眼影妆容模板进行姿态变换时所采用的插值算法，以及对眼影妆容模板的插值情况，以使得经过插值以及姿态变换后的眼影妆容模板的人脸姿态与待处理人脸图像的人脸姿态相同，且经过插值以及姿态变换后的眼影妆容模板的尺寸与待处理人脸图像的尺寸适配。
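姿态变换时的双线性插值可草绘如下（单通道、通用实现，并非专利中变形插值的完整算法）：

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Bilinear interpolation of a single-channel image at a fractional
    (x, y) location, as used when resampling the eye shadow template
    while warping it to the target face pose (a generic sketch)."""
    h, w = img.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```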
步骤S13,根据所述待处理人脸图像的人脸属性信息,确定眼影试妆效果强度系数。
在具体实施中,可以对待处理图人脸图像进行人脸属性检测,得到待处理人脸图像的人脸属性信息。人脸属性信息可以包括性别属性信息。
由于不同性别的人群对眼影美妆的要求不同,通过对待处理人脸图像进行人脸属性检测,可以根据人脸属性检测结果,自适应为不同人群确定相适配的眼影试妆效果强度系数。眼影试妆效果强度系数用于控制眼影试妆效果的强度,眼影试妆效果强度系数的取值范围为[0,1],眼影试妆效果强度系数越大,眼影试妆效果的强度越大,进行图像融合后,所呈现的眼影的上妆效果越明显;相应地,眼影试妆 效果强度系数越小,所呈现的眼影的上妆效果的明显度越小。
例如,人脸属性检测结果指示待处理人脸图像中的人脸为男性时,可以选择较小眼影试妆效果强度系数或者取眼影试妆效果强度系数为零,以提高图像融合后的效果。
又如,人脸属性检测结果指示待处理人脸图像中的人脸为女性时,可以选择较大眼影试妆强度系数,以提高图像融合后的效果。
进一步,人脸属性信息还可以包括肤色信息,可以根据肤色信息自适应为不同人群确定相适配的眼影试妆效果强度系数。其中肤色信息可以包括肤色亮度、肤色颜色中的一种或多种。例如,肤色亮度较大的则表征肤色越偏向白皙,此时配置相对较小的眼影试妆效果强度系数,即可呈现出较为明显的眼影上妆效果。而肤色亮度较小的则表征肤色越偏暗沉,此时配置相对较大的眼影试妆效果强度系数,才可呈现出明显的眼影上妆效果。由于肤色亮度以及肤色颜色等肤色信息会影响眼影的上妆效果,因此,通过考虑肤色信息来自适应为不同人群确定相适配的眼影试妆效果强度系数,可以提高不同人群均具有较自然的眼影上妆效果,所得到的图像融合效果较好。
进一步,还可以综合考虑人脸属性信息中的性别属性信息以及肤色信息等因素确定眼影试妆效果强度系数。
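上述根据性别与肤色自适应确定眼影试妆效果强度系数的定性规则，可用如下示例代码示意（其中属性标签与具体数值均为示例性假设）：

```python
def intensity_from_attributes(gender, skin_lightness):
    """Toy sketch of adaptively choosing the eye shadow test makeup
    effect intensity coefficient k in [0, 1] from face attributes,
    following the qualitative rules above: smaller (here zero) for male
    faces, larger for darker skin so the effect remains visible.
    skin_lightness is in [0, 1], 1 being the lightest."""
    if gender == "male":
        return 0.0
    base = 0.5                                   # illustrative baseline
    k = base + 0.4 * (1.0 - skin_lightness)      # darker skin -> larger k
    return min(max(k, 0.0), 1.0)                 # keep k within [0, 1]
```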
在一些实施例中,用户可以选择眼影试妆效果强度系数,当检测到用户选择眼影试妆效果强度系数时,则采用用户配置的眼影试妆效果强度系数进行后续的图像融合。其中,可以在显示界面上配置有眼影试妆效果强度系数调整按键或者强度条,用户可以通过相应的按键或者拖拉强度条来选择眼影试妆效果强度系数,以满足不同用户的个性化需求。
在另一些实施例中,当检测到用户选择眼影妆容效果强度系数时,可以综合用户选择的眼影妆容效果强度系数以及人脸属性信息检测结果确定的眼影妆容效果强度系数,确定最终在图像融合时采用的 眼影妆容效果强度系数。
例如,可以分别为用户选择的眼影妆容效果强度系数以及基于人脸属性信息检测结果确定的眼影妆容强度系数分配对应的权重。根据用户选择的眼影妆容效果强度系数与其对应的权重,以及基于人脸属性信息检测结果确定的眼影妆容强度系数与其对应的权重进行加权计算,将加权计算结果作为最终在图像融合时采用的眼影妆容效果强度系数。
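上述加权计算可草绘如下（权重取值仅为示例）：

```python
def combine_intensity(k_user, k_auto, w_user=0.5):
    """Weighted combination of the user-selected intensity coefficient
    and the one derived from face attribute analysis; the 0.5/0.5
    weights are illustrative. Both inputs lie in [0, 1], so the result
    does as well."""
    return w_user * k_user + (1.0 - w_user) * k_auto
```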
步骤S14,采用所述眼影试妆效果强度系数对所述姿态变换后的眼影妆容模板及所述待处理人脸图像进行图像融合,得到眼影试妆图像。
进一步地,步骤S14可以通过如下方式实现:采用所述眼影试妆效果强度系数,根据所述姿态变换后的眼影妆容模板以及所述待处理人脸图像在RGB颜色空间域的通道信息,对所述姿态变换后的眼影妆容模板及所述待处理人脸图像进行图像融合。可以在RGB颜色空间进行图像融合,可以保证眼影妆容模板的眼影色调和饱和度效果的同时,还可以保留亮度通道的纹理信息,并能够降低算法复杂度。其中,R为红色,G为绿色,B为蓝色。可以理解的是,也可以在其他类型的颜色空间上对姿态变换后的眼影妆容模板以及所述待处理人脸图像进行融合,此处不再一一举例。
进一步,以在RGB颜色空间域进行图像融合为例,可以采用如下公式(1)、(2)及(3)对姿态变换后的眼影妆容模板及所述待处理人脸图像进行图像融合:
R dst=(R o·(k·(R e-255)+255))/255;                (1)
G dst=(G o·(k·(G e-255)+255))/255;                (2)
B dst=(B o·(k·(B e-255)+255))/255;                (3)
其中,k为眼影试妆效果强度系数,k∈[0,1];R o、G o及B o表示 待处理人脸图像在RGB颜色空间域的通道信息,其中,R o为待处理人脸图像在RGB颜色空间域的红色通道信息,G o为待处理人脸图像在RGB颜色空间域的绿色的通道信息,B o为待处理人脸图像在RGB颜色空间域的蓝色通道信息;R e、G e及B e表示姿态变换后的眼影妆容模板在RGB颜色空间域的通道信息,其中,R e为姿态变换后的眼影妆容模板在RGB颜色空间域的红色通道信息,G e为姿态变换后的眼影妆容模板在RGB颜色空间域的绿色通道信息,B e为姿态变换后的眼影妆容模板在RGB颜色空间域的蓝色通道信息;R dst、G dst及B dst表示眼影试妆图像在RGB颜色空间域的通道信息,其中,R dst为眼影试妆图像在RGB颜色空间域的红色通道信息,G dst为眼影试妆图像在RGB颜色空间域的绿色通道信息,B dst为眼影试妆图像在RGB颜色空间域的蓝色通道信息。
进一步,得到所述眼影试妆图像之后,可以对眼影试妆图像进行验证,以确保得到的眼影试妆图像是正确的。
具体而言,可以通过判断所述眼影试妆图像的RGB颜色空间域通道信息是否符合设定的范围来对眼影试妆图像进行验证;如不符合,则丢弃所述眼影试妆图像。其中设定的范围可以为[0,255]。
在一些实施例中,在上述公式(1)至(3)的基础上,可以采用如下公式(4)至(6)判断眼影试妆图像的RGB颜色空间域通道信息是否符合设定的范围。
R dst=CLIP((R o·(k·(R e-255)+255))/255,0,255);      (4)
G dst=CLIP((G o·(k·(G e-255)+255))/255,0,255);      (5)
B dst=CLIP((B o·(k·(B e-255)+255))/255,0,255);      (6)
其中,CLIP()用于限制取值范围,CLIP(x,0,255)表示将x的取值范围限制在[0,255],也即将R dst、G dst以及B dst的取值范围限制在[0,255]。
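作为示意，上述公式(4)至(6)的融合过程可用如下NumPy代码草图表示（假定8位RGB图像；函数名与变量名仅为示例，并非专利原文的实现）：

```python
import numpy as np

def clip(x, lo=0, hi=255):
    """CLIP(): limit values to the set range [lo, hi]."""
    return np.clip(x, lo, hi)

def fuse_eye_shadow(face, template, k):
    """Fuse a pose-transformed eye shadow template with the face image
    to be processed per formulas (4)-(6):
    dst = CLIP(orig * (k*(tpl - 255) + 255) / 255, 0, 255).

    face, template: uint8 RGB arrays of the same shape;
    k: eye shadow test makeup effect intensity coefficient, k in [0, 1].
    """
    f = face.astype(np.float64)
    t = template.astype(np.float64)
    dst = clip(f * (k * (t - 255.0) + 255.0) / 255.0)
    return dst.astype(np.uint8)
```

当 k=0 时输出即原图；当 k=1 时等价于与模板做正片叠底混合。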
由上可知,对待处理人脸图像进行面部识别,得到眼影感兴趣区域,将眼影妆容模板对齐至眼影感兴趣区域,根据待处理人脸图像的人脸属性信息,确定眼影试妆效果强度,采用眼影试妆效果强度系数对姿态变换后的眼影妆容模板及待处理人脸图像进行图像融合,得到眼影试妆图像。由于对姿态变换后的眼影妆容模板及待处理人脸图像进行图像融合时采用了眼影试妆效果强度系数,而眼影试妆效果强度是根据待处理人脸图像的人脸属性信息确定的,通过根据每个用户的待处理人脸图像确定的人脸属性信息为每个用户自适应地确定与之相适配的眼影试妆效果强度系数,从而可以提高眼影妆容模板与待处理人脸图像的图像融合效果,进而可以提高眼影试妆图像的自然性。
在具体实施中,可以根据不同的应用场景,配置与应用场景相对应的眼影妆容模板的选择方式。
上述实施例中的眼影妆容模板可以基于用户的选择确定。具体而言,可以在用户操作界面上配置有眼影妆容模板的选项,用户可以根据实际需求选择所需的眼影妆容模板。
上述实施例中的眼影妆容模板也可以根据人脸属性信息确定。具体而言,根据所述人脸属性信息,得到所述待处理人脸图像中的人脸肤色信息,根据所述肤色信息,选择与所述肤色相适配的眼影妆容模板,所述肤色信息包括:肤色亮度和/或肤色颜色。其中肤色信息与眼影妆容模板之间的映射关系可以预先设定。
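肤色信息与眼影妆容模板之间预设的映射关系，可用如下查表草图示意（其中肤色类别与模板名称均为示例性假设）：

```python
# Hypothetical preset mapping from skin tone information (lightness,
# color) to an eye shadow makeup template ID; categories and template
# names are illustrative, not from the patent.
TEMPLATE_BY_SKIN_TONE = {
    ("light", "warm"): "peach_shimmer",
    ("light", "cool"): "rose_matte",
    ("deep", "warm"): "bronze_satin",
    ("deep", "cool"): "plum_matte",
}

def select_template(lightness, color, default="neutral_brown"):
    """Pick a template from skin lightness and skin color, falling back
    to a default template when no preset entry matches."""
    return TEMPLATE_BY_SKIN_TONE.get((lightness, color), default)
```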
根据不同场景需求,可以采用不同方式选择眼影妆容模板。例如,在眼影试妆应用场景中,可以由用户选择眼影妆容模板。对于不知道如何选择适合自己的眼影的用户,可以通过面部识别的方式,根据面部识别结果为用户选择眼影妆容模板。通过面部识别的方式确定眼影妆容模板可以在综合考虑用户的面部其他妆容的情况以及考虑用户肤色等影响,使得选择的眼影妆容模板与用户更加适配。而眼影妆容模板与面部识别结果之间的匹配关系,可以基于大数据研究而确定。
又如,在采用相机对图像进行美妆或者对视频进行美妆时,可以 由用户选择确定眼影妆容模板,也可以根据对用户的面部识别结果确定眼影妆容模板。
在具体实施中,上述实施例中的眼影妆容模板可以存储于眼影数据库中。在眼影数据库中可以存储有若干个不同类型的眼影妆容模板。
在一些非限制性实施例中,眼影妆容模板可以采用如下方式得到:获取绘制的眼影图像样本以及标准人脸图像;采用所述眼影图像样本在RGB颜色空间域的通道信息与所述标准人脸图像在所述RGB颜色空间域的通道信息,提取得到所述眼影妆容模板在RGB颜色空间域的通道信息。
具体而言,采用所述眼影图像样本在RGB颜色空间域的R通道信息与标准人脸图像在RGB颜色空间域的R通道信息,提取得到所述眼影妆容模板在RGB颜色空间域的R通道信息。相应地,采用所述眼影图像样本在RGB颜色空间域的G通道信息与标准人脸图像在RGB颜色空间域的G通道信息,提取得到所述眼影妆容模板在RGB颜色空间域的G通道信息。采用所述眼影图像样本在RGB颜色空间域的B通道信息与标准人脸图像在RGB颜色空间域的B通道信息,提取得到所述眼影妆容模板在RGB颜色空间域的B通道信息。
其中,绘制的眼影图像样本可以在标准人脸图像的基础上,采用一些制图软件绘制得到,后续若有新的眼影妆容或者有新的眼影,只需在标准人脸图像的基础上绘制新的眼影图像样本,并提取出相应的眼影妆容模板即可,以此方式丰富眼影数据库中的眼影妆容模板,可以降低眼影图像样本的获取成本。
此外,在存储眼影妆容模板时,可以存储眼影感兴趣区域内的眼影妆容模板,这样在存储眼影妆容模板的同时,可以降低数据所占用的内存。
在一些非限制性实施例中,可以采用如下公式(7)、(8)及(9), 提取得到眼影妆容模板在RGB颜色空间域的通道信息。
R d=255·R s/R a;                   (7)
G d=255·G s/G a;                   (8)
B d=255·B s/B a;                   (9)
其中，R a、G a及B a表示标准人脸图像在RGB颜色空间域的通道信息，其中，R a为标准人脸图像在RGB颜色空间域的红色通道信息，G a为标准人脸图像在RGB颜色空间域的绿色通道信息，B a为标准人脸图像在RGB颜色空间域的蓝色通道信息；R s、G s及B s表示某个绘制眼影后的图像样本在RGB颜色空间域的通道信息；R s为某个绘制眼影后的图像样本在RGB颜色空间域的红色通道信息，G s为某个绘制眼影后的图像样本在RGB颜色空间域的绿色通道信息，B s为某个绘制眼影后的图像样本在RGB颜色空间域的蓝色通道信息；R d、G d及B d表示当前制作的眼影妆容模板在RGB颜色空间域的通道信息，其中，R d为当前制作的眼影妆容模板在RGB颜色空间域的红色通道信息，G d为当前制作的眼影妆容模板在RGB颜色空间域的绿色通道信息，B d为当前制作的眼影妆容模板在RGB颜色空间域的蓝色通道信息。
进一步,为确定所得到的眼影妆容模板是否正确,在本发明实施例中,还可以对通过判断眼影妆容模板的RGB颜色空间域通道信息是否符合设定的范围来对眼影妆容模板进行验证。
在一些非限制性实施例中,可以在上述公式(7)至(9)的基础上,采用如下公式(10)至(12)判断眼影妆容模板的RGB颜色空间域通道信息是否符合设定的范围。其中设定的范围可以为[0,255]。
R d=CLIP(255·R s/R a,0,255);           (10)
G d=CLIP(255·G s/G a,0,255);           (11)
B d=CLIP(255·B s/B a,0,255);            (12)
其中,CLIP()用于限制取值范围,CLIP(x,0,255)表示将x的取值范围限制在[0,255],也即将R d、G d及B d的取值范围限制在[0,255]。
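公式(10)至(12)的模板提取可用如下NumPy草图表示（其中防止除零的处理为额外添加的假设，专利原文未提及）：

```python
import numpy as np

def extract_template(drawn, standard, max_depth=255):
    """Extract the eye shadow template channels per formulas (10)-(12):
    d = CLIP(max_depth * s / a, 0, max_depth), per RGB channel.

    drawn:    image sample with eye shadow drawn on the standard face;
    standard: the standard face image without eye shadow;
    max_depth: 255 for 8-bit images; adjust for other color depths.
    """
    s = drawn.astype(np.float64)
    a = np.maximum(standard.astype(np.float64), 1.0)  # avoid div by zero
    d = np.clip(max_depth * s / a, 0, max_depth)
    return d.astype(np.uint8)
```

在未绘制眼影的区域，drawn 与 standard 相同，提取出的模板通道值为255（即融合时不改变原图）。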
需要说明的是,上述公式以及实施例中出现的255,为图像的色深采用8bit时,对应的色深范围为[0,255],255即为最大色深。可以理解的是,当采用其他取值的比特(bit)表示时,色深的范围不同,相应地最大色深取值也不同,只需相应地将255调整为其他比特下对应的最大色深即可。
为了便于本领域技术人员更好的理解和实现本发明实施例,参照图3,给出了本发明实施例提供的另一种人脸图像处理方法的流程图,具体可以包括如下步骤:
步骤S301,对待处理人脸图像进行面部识别。
步骤S302，判断最大人脸与摄像头的距离是否符合要求。
当判断结果为是时,执行步骤S303;当判断结果为否时,执行步骤S304。
步骤S303,不进行眼影试妆,并结束流程。
步骤S304,对待处理人脸图像进行人脸关键点检测。
步骤S305,获取眼影感兴趣区域。
具体而言,可以根据人脸关键点检测结果确定眼影感兴趣区域。
步骤S306,生成眼影辅助关键点。
步骤S307，将眼影妆容模板变换至与待处理人脸图像的人脸姿态一致。
在具体实施中,可以通过步骤S310制作眼影妆容模板。并通过步骤S311,生成眼影数据库。存储于眼影数据库中的眼影妆容模板可以供用户使用。且步骤S307中的眼影妆容模板来自于眼影数据库。
步骤S308,在图像融合时,控制眼影效果强度。
在具体实施中,在通过步骤S312,对待处理人脸图像进行人脸属性分析。通过步骤S313,根据人脸属性分析结果,确定眼影效果强度系数。进而在步骤S308中,根据眼影效果强度系数,控制姿态转换后的眼影妆容模板与待处理人脸图像在图像融合时,眼影效果强度。
步骤S309,输出眼影试妆图像。
图像融合后得到的眼影试妆图像可以显示在显示终端,以供用户直观的获知眼影试妆效果。
在具体实施中,上述步骤S301至S313的具体实现过程,可以参照上述实施例中提供的人脸图像处理方法中的相关描述,此处不做赘述。
采用本发明上述方案,根据当前人脸属性信息,确定眼影试妆效果强度系数,实现自适应眼影融合算法,可实现自然的眼影效果,且效果强度可调。
进一步,通过对标准人脸图像绘制眼影数据库,可以将丰富的眼影妆容迁移到任意人脸图像上,眼影妆容模板的拓展性强,后续只需丰富眼影数据库,便可将数据库中的眼影妆容迁移给任意人脸图像。此外,眼影妆容模板可以仅保存眼影ROI区域,可有效降低数据内存使用率。
进一步,使用自动面部识别和高精度人脸关键点对齐技术,可对人脸眼睛位置准确定位,以提高眼影感兴趣区域的确定的精度,为后续提高图像融合精度提供支撑。
进一步，将标准人脸图像与目标人脸图像进行人脸关键点对齐后，使用插值映射方法迁移眼影妆容模板，实现对眼影妆容模板的姿态变换，可提高大姿态人脸眼影的效果。
进一步,在RGB颜色空间进行转换后的眼影妆容模板与目标人脸图像的融合,可大大降低算法复杂度,以能够用于终端上。
本发明实施例还提供一种人脸图像处理装置,参照图4,给出了本发明实施例中的一种人脸图像处理装置的结构示意图。人脸图像处理装置40可以包括:
眼影感兴趣区域确定单元41,用于对待处理人脸图像进行面部识别,得到眼影感兴趣区域;
姿态变换单元计算单元42,用于根据获取的眼影妆容模板的标准人脸姿态以及所述待处理人脸图像的人脸姿态,计算姿态变换信息;
对齐单元43,用于根据所述姿态变换信息对所述眼影妆容模板进行姿态变换,使得姿态变换后的眼影妆容模板对齐至所述眼影感兴趣区域;
融合单元44,用于采用眼影试妆效果强度系数对所述姿态变换后的眼影妆容模板及所述待处理人脸图像进行图像融合,得到眼影试妆图像,所述眼影试妆效果强度系数根据所述待处理人脸图像的人脸属性信息确定。
在具体实施中,人脸图像处理装置40的具体工作原理及工作流程,可以参考本发明上述任一实施例中提供的人脸图像处理方法中的描述,此处不再赘述。
在具体实施中,人脸图像处理装置40可以对应于终端中具有人脸图像处理功能的芯片;或者对应于具有数据处理功能的芯片;或者对应于终端包括具有人脸图像处理功能的芯片的芯片模组;或者对应于具有数据处理功能芯片的芯片模组,或者对应于终端。
在具体实施中,关于上述实施例中描述的各个装置、产品包含的各个模块/单元,其可以是软件模块/单元,也可以是硬件模块/单元,或者也可以部分是软件模块/单元,部分是硬件模块/单元。
例如,对于应用于或集成于芯片的各个装置、产品,其包含的各个模块/单元可以都采用电路等硬件的方式实现,或者,至少部分模 块/单元可以采用软件程序的方式实现,该软件程序运行于芯片内部集成的处理器,剩余的(如果有)部分模块/单元可以采用电路等硬件方式实现;对于应用于或集成于芯片模组的各个装置、产品,其包含的各个模块/单元可以都采用电路等硬件的方式实现,不同的模块/单元可以位于芯片模组的同一组件(例如芯片、电路模块等)或者不同组件中,或者,至少部分模块/单元可以采用软件程序的方式实现,该软件程序运行于芯片模组内部集成的处理器,剩余的(如果有)部分模块/单元可以采用电路等硬件方式实现;对于应用于或集成于终端的各个装置、产品,其包含的各个模块/单元可以都采用电路等硬件的方式实现,不同的模块/单元可以位于终端内同一组件(例如,芯片、电路模块等)或者不同组件中,或者,至少部分模块/单元可以采用软件程序的方式实现,该软件程序运行于终端内部集成的处理器,剩余的(如果有)部分模块/单元可以采用电路等硬件方式实现。
本发明实施例还提供一种计算机可读存储介质,所述计算机可读存储介质为非易失性存储介质或非瞬态存储介质,其上存储有计算机程序,所述计算机程序被处理器运行时执行上述任一实施例提供的人脸图像处理方法中的步骤。
本发明实施例还提供一种终端,包括存储器和处理器,所述存储器上存储有能够在所述处理器上运行的计算机程序,所述处理器运行所述计算机程序时执行上述任一实施例提供的人脸图像处理方法中的步骤。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于任一计算机可读存储介质中,存储介质可以包括:ROM、RAM、磁盘或光盘等。
虽然本发明披露如上,但本发明并非限定于此。任何本领域技术人员,在不脱离本发明的精神和范围内,均可作各种更动与修改,因此本发明的保护范围应当以权利要求所限定的范围为准。

Claims (16)

  1. 一种人脸图像处理方法,其特征在于,包括:
    对待处理人脸图像进行面部识别,得到眼影感兴趣区域;
    将获取的眼影妆容模板进行姿态变换,以对齐至所述眼影感兴趣区域;
    根据所述待处理人脸图像的人脸属性信息,确定眼影试妆效果强度系数;
    采用所述眼影试妆效果强度系数对所述姿态变换后的眼影妆容模板及所述待处理人脸图像进行图像融合,得到眼影试妆图像。
  2. 如权利要求1所述的人脸图像处理方法,其特征在于,所述对待处理人脸图像进行面部识别,得到眼影感兴趣区域,包括:
    对所述待处理人脸图像进行面部识别,得到人脸关键点的位置信息;
    根据所述人脸关键点的位置信息确定所述眼影感兴趣区域。
  3. 如权利要求2所述的人脸图像处理方法,其特征在于,所述根据人脸关键点的位置信息确定所述眼影感兴趣区域,包括:
    根据所述人脸关键点中的眼睛关键点的位置信息以及眉毛关键点的位置信息,确定所述眼影感兴趣区域,所述眼影感兴趣区域位于所述眼睛关键点周围的预设区域,且不越过所述眉毛关键点。
  4. 如权利要求1所述的人脸图像处理方法,其特征在于,所述将获取的眼影妆容模板进行姿态变换,以对齐至所述眼影感兴趣区域,包括:
    根据获取的眼影妆容模板的标准人脸姿态以及所述待处理人脸图像的人脸姿态,计算姿态变换信息;
    根据所述姿态变换信息对所述眼影妆容模板进行姿态变换,使得 姿态变换后的眼影妆容模板对齐至所述眼影感兴趣区域。
  5. 如权利要求4所述的人脸图像处理方法,其特征在于,所述根据获取的眼影妆容模板的人脸姿态以及所述待处理人脸图像的人脸姿态,计算姿态变换信息,包括:
    根据人脸关键点的位置信息,在距离所述人脸关键点中的眼部关键点设定距离的位置处,生成眼影辅助关键点,所述眼影辅助关键点用于限制眼影上妆的边界;
    采用预设的三角化数组,计算用于将所述眼影妆容模板对应的标准人脸的各人脸关键点与所述待处理人脸图像中对应的人脸关键点对齐的姿态变换信息,所述姿态变换信息用于表征所述标准人脸的各人脸关键点与所述待处理人脸图像中对应的人脸关键点的偏移量。
  6. 如权利要求5所述的人脸图像处理方法,其特征在于,所述计算用于将所述眼影妆容模板对应的各标准人脸的人脸关键点与所述待处理人脸图像中对应的人脸关键点之间对齐的姿态变换信息,包括:
    当所述眼影妆容模板对应的标准人脸的尺寸与所述待处理人脸图像的尺寸不同时,对所述标准人脸或者所述待处理人脸图像中的任一个进行尺寸变换,使得变换后的尺寸相同;
    基于变换后的标准人脸图像或者待处理人脸图像,计算所述姿态变换信息。
  7. 如权利要求1至6任一项所述的人脸图像处理方法,其特征在于,所述根据所述姿态变换信息对所述眼影妆容模板进行姿态变换,包括:
    根据所述姿态变换信息,结合所述眼影妆容模板,采用变形插值算法计算得到所述姿态变换后的眼影妆容模板。
  8. 如权利要求1至6任一项所述的人脸图像处理方法,其特征在 于,所述采用所述眼影试妆效果强度系数对所述姿态变换后的眼影妆容模板及所述待处理人脸图像进行图像融合,包括:
    采用所述眼影试妆效果强度系数,根据所述姿态变换后的眼影妆容模板以及所述待处理人脸图像在RGB颜色空间域的通道信息,对所述姿态变换后的眼影妆容模板及所述待处理人脸图像进行图像融合,其中,所述眼影试妆效果强度系数的取值范围为[0,1]。
  9. 如权利要求8所述的人脸图像处理方法,其特征在于,还包括:
    得到所述眼影试妆图像之后,判断所述眼影试妆图像的RGB颜色空间域的通道信息是否符合设定的范围;
    如不符合,则丢弃所述眼影试妆图像。
  10. 如权利要求1所述的人脸图像处理方法,其特征在于,所述眼影妆容模板基于用户的选择确定,或者所述眼影妆容模板根据所述人脸属性信息确定。
  11. 如权利要求10所述的人脸图像处理方法,其特征在于,所述眼影妆容模板根据所述人脸属性信息确定,包括:
    根据所述人脸属性信息,得到所述待处理人脸图像中的人脸肤色信息,根据所述肤色信息,选择与所述肤色相适配的眼影妆容模板,所述肤色信息包括:肤色亮度和/或肤色颜色。
  12. 如权利要求1所述的人脸图像处理方法,其特征在于,在对待处理人脸图像进行面部识别,得到眼影感兴趣区域之前,还包括:
    对所述待处理人脸图像进行尺寸缩放,计算缩放后的待处理人脸图像距离采集所述待处理人脸图像的终端设备的距离是否满足设定距离;
    若满足,则对所述待处理人脸图像进行人脸识别,得到所述眼影感兴趣区域;
    若不满足,则丢弃所述待处理人脸图像,继续获取下一图像作为所述待处理人脸图像。
  13. 如权利要求1所述的人脸图像处理方法,其特征在于,所述眼影妆容模板采用如下方式得到:
    获取绘制的眼影图像样本以及标准人脸图像;
    根据所述眼影图像样本在RGB颜色空间域的通道信息与所述标准人脸图像在所述RGB颜色空间域的通道信息,提取得到所述眼影妆容模板在RGB颜色空间域的通道信息。
  14. 一种人脸图像处理装置,其特征在于,包括:
    眼影感兴趣区域确定单元,用于对待处理人脸图像进行面部识别,得到眼影感兴趣区域;
    姿态变换单元计算单元,用于根据获取的眼影妆容模板的标准人脸姿态以及所述待处理人脸图像的人脸姿态,计算姿态变换信息;
    对齐单元,用于根据所述姿态变换信息对所述眼影妆容模板进行姿态变换,使得姿态变换后的眼影妆容模板对齐至所述眼影感兴趣区域;
    融合单元,用于采用眼影试妆效果强度系数对所述姿态变换后的眼影妆容模板及所述待处理人脸图像进行图像融合,得到眼影试妆图像,所述眼影试妆效果强度系数根据所述待处理人脸图像的人脸属性信息确定。
  15. 一种计算机可读存储介质,所述计算机可读存储介质为非易失性存储介质或非瞬态存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器运行时执行权利要求1至13中任一项所述的人脸图像处理方法的步骤。
  16. 一种终端,包括存储器和处理器,所述存储器上存储有能够在所述处理器上运行的计算机程序,其特征在于,所述处理器运行 所述计算机程序时执行权利要求1至13中任一项所述的人脸图像处理方法的步骤。
PCT/CN2021/141467 2021-06-28 2021-12-27 人脸图像处理方法及装置、计算机可读存储介质、终端 WO2023273247A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110722304.9 2021-06-28
CN202110722304.9A CN113344837B (zh) 2021-06-28 2021-06-28 人脸图像处理方法及装置、计算机可读存储介质、终端

Publications (1)

Publication Number Publication Date
WO2023273247A1 true WO2023273247A1 (zh) 2023-01-05

Family

ID=77481145

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/141467 WO2023273247A1 (zh) 2021-06-28 2021-12-27 人脸图像处理方法及装置、计算机可读存储介质、终端

Country Status (2)

Country Link
CN (1) CN113344837B (zh)
WO (1) WO2023273247A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036157A (zh) * 2023-10-09 2023-11-10 易方信息科技股份有限公司 Editable simulated digital human figure design method, system, apparatus, and medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344837B (zh) * 2021-06-28 2023-04-18 展讯通信(上海)有限公司 Face image processing method and device, computer-readable storage medium, and terminal

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063560A (zh) * 2018-06-28 2018-12-21 北京微播视界科技有限公司 Image processing method and device, computer-readable storage medium, and terminal
CN111369644A (zh) * 2020-02-28 2020-07-03 北京旷视科技有限公司 Makeup-trial processing method and device for face images, computer apparatus, and storage medium
CN111583102A (zh) * 2020-05-14 2020-08-25 北京字节跳动网络技术有限公司 Face image processing method and device, electronic apparatus, and computer storage medium
CN111783511A (zh) * 2019-10-31 2020-10-16 北京沃东天骏信息技术有限公司 Beauty makeup processing method and device, terminal, and storage medium
CN112784773A (zh) * 2021-01-27 2021-05-11 展讯通信(上海)有限公司 Image processing method and device, storage medium, and terminal
CN113344837A (zh) * 2021-06-28 2021-09-03 展讯通信(上海)有限公司 Face image processing method and device, computer-readable storage medium, and terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584180A (zh) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Face image processing method and device, electronic apparatus, and computer storage medium
CN112669233A (zh) * 2020-12-25 2021-04-16 北京达佳互联信息技术有限公司 Image processing method and device, electronic apparatus, storage medium, and program product
CN112819718A (zh) * 2021-02-01 2021-05-18 深圳市商汤科技有限公司 Image processing method and device, electronic apparatus, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036157A (zh) * 2023-10-09 2023-11-10 易方信息科技股份有限公司 Editable simulated digital human figure design method, system, apparatus, and medium
CN117036157B (zh) * 2023-10-09 2024-02-20 易方信息科技股份有限公司 Editable simulated digital human figure design method, system, apparatus, and medium

Also Published As

Publication number Publication date
CN113344837A (zh) 2021-09-03
CN113344837B (zh) 2023-04-18

Similar Documents

Publication Publication Date Title
CN108229278B Face image processing method and device, and electronic apparatus
WO2019128508A1 Image processing method and device, storage medium, and electronic apparatus
WO2019228473A1 Beautification method and device for face images
WO2019100282A1 Face skin color recognition method and device, and smart terminal
US9691136B2 Eye beautification under inaccurate localization
CN106056064B Face recognition method and face recognition device
WO2018188534A1 Face image processing method and device, and electronic apparatus
JP3779570B2 Makeup simulation device, makeup simulation control method, and computer-readable recording medium storing a makeup simulation program
JP2022528128A Skin quality measurement method, skin quality classification method, skin quality measurement device, electronic apparatus, and storage medium
WO2023273247A1 Face image processing method and device, computer-readable storage medium, and terminal
CN110390632B Makeup-template-based image processing method and device, storage medium, and terminal
JP4998637B1 Image processing device, information generation device, image processing method, information generation method, control program, and recording medium
CN106326823B Method and system for obtaining a head portrait from a picture
WO2022161009A1 Image processing method and device, storage medium, and terminal
CN109952594A Image processing method, device, terminal, and storage medium
US9135726B2 Image generation apparatus, image generation method, and recording medium
WO2023273246A1 Face image processing method and device, computer-readable storage medium, and terminal
WO2023066120A1 Image processing method and device, electronic apparatus, and storage medium
JP2009211151A Face image processing device
WO2022135574A1 Skin color detection method and device, mobile terminal, and storage medium
CN110866139A Makeup processing method, device, and apparatus
JP2014194617A Gaze direction estimation device, gaze direction estimation method, and gaze direction estimation program
CN107153806B Face detection method and device
CN114187166A Image processing method, smart terminal, and storage medium
JP6098133B2 Facial component extraction device, facial component extraction method, and program

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21948172

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE