WO2022237081A1 - Makeup transfer method and apparatus, device, and computer-readable storage medium


Info

Publication number
WO2022237081A1
WO2022237081A1 (application PCT/CN2021/126184, CN2021126184W)
Authority
WO
WIPO (PCT)
Prior art keywords
makeup
image
face
original
organ
Prior art date
Application number
PCT/CN2021/126184
Other languages
English (en)
French (fr)
Inventor
吴文岩
郑程耀
甘世康
钱晨
Original Assignee
北京市商汤科技开发有限公司
Application filed by 北京市商汤科技开发有限公司
Publication of WO2022237081A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
        • G06T3/00: Geometric image transformations in the plane of the image
            • G06T3/02: Affine transformations
        • G06T5/00: Image enhancement or restoration
            • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
        • G06T7/00: Image analysis
            • G06T7/10: Segmentation; Edge detection
                • G06T7/11: Region-based segmentation
            • G06T7/70: Determining position or orientation of objects or cameras
        • G06T2207/00: Indexing scheme for image analysis or image enhancement
            • G06T2207/20: Special algorithmic details
                • G06T2207/20212: Image combination
                    • G06T2207/20221: Image fusion; Image merging
            • G06T2207/30: Subject of image; Context of image processing
                • G06T2207/30196: Human being; Person
                    • G06T2207/30201: Face

Definitions

  • the present disclosure relates to image processing technology, and in particular to a makeup transfer method, device, equipment and computer-readable storage medium.
  • Through a beauty-makeup application, a user can transfer the makeup of a model in a makeup reference image onto the face of a target object, realizing automatic makeup processing of that face. However, in related makeup migration processes, usually only the model's makeup color is transferred to the target face (for example, the user's face); that is, only color is transferred, while the texture of makeup areas of the user's face, such as the lip texture and eyebrow texture, still differs from the model's. This degrades the detail and naturalness of transferring the shapes of makeup areas such as lipstick and eyebrows on the face.
  • Embodiments of the present disclosure provide a makeup transfer method, apparatus, device, and computer-readable storage medium, which improve the detail and naturalness of makeup transfer.
  • An embodiment of the present disclosure provides a makeup transfer method, including:
  • the shape of the second organ region is deformed to match the first organ region of the same organ type in the original face image, obtaining the second deformed organ region; based on the second deformed organ region, color transfer and texture transfer are then performed on the first organ region, which improves the detail and naturalness of the makeup transfer and thereby the quality of the makeup effect image.
  • An embodiment of the present disclosure provides a makeup transfer device, including:
  • the obtaining part is configured to obtain an original human face image comprising an original human face and a makeup human face image comprising a target makeup;
  • the deformation part is configured to perform image deformation processing on the second organ area in the makeup face image based on the first organ area in the original face image to obtain a second deformed organ area; the first organ region and the second organ region correspond to the same type of organ;
  • the migration part is configured to perform color migration and texture migration on the first organ region in the original face image based on the second deformed organ region to obtain the original face image after makeup migration.
  • the acquisition part is further configured to extract the original human face image from the user image containing the target object, and extract the makeup face image from a makeup reference image containing the target makeup.
  • the acquiring part is further configured to perform face key point detection on the user image to obtain first face key points of the user image; based on the first face key points, face alignment is performed on the user image to obtain the original face image.
  • the acquisition part is further configured to perform face key point detection on the makeup reference image to obtain second face key points of the makeup reference image; based on the second face key points, face alignment is performed on the makeup reference image to obtain the makeup face image.
  • the acquisition part is further configured to acquire a first transformation matrix based on first original position information of the first face key points and target position information of target-aligned face key points; the first transformation matrix characterizes the positional relationship between the first original position information and the target position information; based on the first transformation matrix, the first original position information is adjusted to obtain a user-aligned image, and the original face image is extracted from the user-aligned image.
  • the acquisition part is further configured to acquire a second transformation matrix based on the second original position information of the second face key point and the target position information of the target-aligned face key point; the second A transformation matrix characterizes the positional relationship between the second original position information and the target position information; based on the second transformation matrix, adjust the second original position information to obtain a makeup alignment image; from the makeup alignment image Extract the face image with makeup.
  • the deformation part is further configured to perform organ segmentation on the original face image to obtain the first organ region, and perform organ segmentation on the makeup face image to obtain the second organ region.
  • the deformation part is further configured to determine a second triangle mesh based on second aligned face key points in the second organ region, and to determine a corresponding first triangle mesh based on first aligned face key points in the first organ region; through affine transformation, the shape of each second triangle in the second triangle mesh is adjusted to the shape of the corresponding first triangle, obtaining the second deformed organ region.
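The per-triangle affine adjustment described above can be sketched as follows. This is a minimal NumPy illustration (the function names are illustrative, not from the disclosure): each affine matrix is solved from the three vertex correspondences of a triangle pair, and can then map points of a second triangle onto the corresponding first triangle.

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Solve the 2x3 affine matrix M mapping the three vertices of
    src_tri onto dst_tri (each a 3x2 array of (x, y) points)."""
    # Build the 6x6 linear system A @ params = b for the six coefficients.
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i, (x, y) in enumerate(src_tri):
        A[2 * i] = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = dst_tri[i]
    params = np.linalg.solve(A, b)
    return params.reshape(2, 3)

def warp_points(M, pts):
    """Apply a 2x3 affine matrix to an N x 2 array of points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]
```

In a full pipeline, the pixels inside each second triangle would be resampled through the corresponding matrix; libraries such as OpenCV offer equivalents (e.g. `cv2.getAffineTransform` for the three-point solve).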
  • the transfer part is further configured to perform color transfer on the corresponding first organ region based on the second deformed organ region to obtain a first color-transferred region, and to migrate the texture of the second deformed organ region to the corresponding first color-transferred region, obtaining the transferred original face image.
  • the migration part is further configured to subtract, from the pixel value of each channel of each pixel in the first organ region, the pixel mean of the corresponding channel of the first organ region, and to add the pixel mean of the corresponding channel of the second deformed organ region, obtaining a shifted pixel value for each channel of each pixel; based on these shifted pixel values, the first color-transferred region is obtained.
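The per-channel mean shift just described can be sketched as follows (illustrative NumPy code, not the disclosure's implementation): each channel of the first organ region is shifted by the difference between the two regions' channel means.

```python
import numpy as np

def mean_shift_transfer(first_region, second_region):
    """Per-channel mean transfer: subtract the first region's channel
    mean, add the deformed second region's channel mean.
    Both inputs are H x W x C arrays."""
    first = first_region.astype(float)
    src_mean = first.mean(axis=(0, 1))                    # mean of each channel
    ref_mean = second_region.astype(float).mean(axis=(0, 1))
    return first - src_mean + ref_mean
```

After the shift, the result has the second deformed region's per-channel mean while retaining the first region's local variation.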
  • the migration part is further configured, after performing color transfer and texture transfer on the first organ region in the original face image based on the second deformed organ region to obtain the original face image after makeup migration, to obtain a makeup effect image based on the transferred original face image and the user image.
  • the migration part is further configured to obtain the inverse of the first transformation matrix as a first inverse transformation matrix; based on the first inverse transformation matrix, the transferred original face image is inversely adjusted to obtain an inversely adjusted original face image, which replaces the original face image in the user image to obtain the makeup effect image.
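Assuming the first transformation matrix is a 2×3 affine matrix (as the alignment step described later suggests), its inverse can be computed by promoting it to a 3×3 homogeneous matrix; a minimal illustrative sketch:

```python
import numpy as np

def inverse_affine(M):
    """Invert a 2x3 affine matrix by promoting it to 3x3 homogeneous
    form, inverting, and dropping the homogeneous row."""
    H = np.vstack([M, [0.0, 0.0, 1.0]])
    return np.linalg.inv(H)[:2, :]
```

Applying the inverse matrix to the aligned face image moves the transferred face back to its original position in the user image, where it replaces the original face region.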
  • the migration part is further configured, after replacing the original face image in the user image with the inversely adjusted original face image to obtain the makeup effect image, to identify a first torso skin area of the makeup effect image and a second torso skin area of the makeup reference image; based on the second torso skin area, color transfer is performed on the first torso skin area to obtain a natural makeup effect image, which is displayed on the makeup migration interface.
  • the deformation part is further configured to perform image deformation processing on the corresponding second organ region based on the first organ region when the area of the first organ region is greater than a preset target area, obtaining the second deformed organ region.
  • An embodiment of the present disclosure provides a makeup transfer device, the device includes:
  • a memory configured to store a computer program
  • the processor is configured to implement the above makeup transfer method when executing the computer program stored in the memory.
  • An embodiment of the present disclosure provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the above makeup transfer method.
  • An embodiment of the present disclosure provides a computer program including computer-readable code; when the code is executed on an electronic device, a processor in the electronic device implements the above method.
  • Embodiments of the present disclosure provide a makeup transfer method, apparatus, device, and computer-readable storage medium: acquire an original face image containing an original face and a makeup face image containing a target makeup; based on the first organ area in the original face image, perform image deformation processing on the second organ area in the makeup face image to obtain a second deformed organ area, where the first organ area and the second organ area correspond to the same type of organ; and, based on the second deformed organ area, perform color transfer and texture transfer on the first organ area in the original face image to obtain the original face image after makeup transfer. That is, the makeup transfer device can deform the second organ region of the makeup face to the same shape as the corresponding first organ region and then, based on the second deformed organ region, perform color transfer and texture transfer on the first organ region, thereby improving the naturalness and detail of the makeup transfer.
  • Fig. 1 is a schematic structural diagram of an optional makeup transfer system architecture provided by an embodiment of the present disclosure
  • Fig. 2 is a flow chart of an optional makeup transfer method provided by an embodiment of the present disclosure
  • Fig. 3 is a schematic diagram of an optional makeup transfer interface provided by an embodiment of the present disclosure.
  • Fig. 4 is a schematic diagram of an original human face image and a makeup human face image provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of an image deformation processing effect provided by an embodiment of the present disclosure.
  • Fig. 6a is a flow chart of an optional makeup transfer method provided by an embodiment of the present disclosure.
  • Fig. 6b is a flow chart of an optional makeup transfer method provided by an embodiment of the present disclosure.
  • Fig. 6c is a flow chart of an optional makeup transfer method provided by an embodiment of the present disclosure.
  • Fig. 6d is a flow chart of an optional makeup transfer method provided by an embodiment of the present disclosure.
  • Fig. 7 is a flow chart of an optional makeup transfer method provided by an embodiment of the present disclosure.
  • Fig. 8 is a flow chart of an optional makeup transfer method provided by an embodiment of the present disclosure.
  • Fig. 9a is a schematic triangulation diagram of an optional first organ region provided by an embodiment of the present disclosure.
  • Fig. 9b is a schematic triangulation diagram of an optional second organ region provided by an embodiment of the present disclosure.
  • Fig. 10 is a flowchart of an optional makeup transfer method provided by an embodiment of the present disclosure.
  • Fig. 11 is a flow chart of an optional makeup transfer method provided by an embodiment of the present disclosure.
  • Fig. 12 is a flow chart of an optional makeup transfer method provided by an embodiment of the present disclosure.
  • Fig. 13 is a flow chart of an optional makeup transfer method provided by an embodiment of the present disclosure.
  • Fig. 14 is a schematic diagram of the composition and structure of a makeup transfer device provided by an embodiment of the present disclosure.
  • Fig. 15 is a schematic diagram of the composition and structure of a makeup transfer device provided by an embodiment of the present disclosure.
  • the terms "comprising", "including", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a method or apparatus comprising a series of elements includes not only the explicitly stated elements but also other elements not explicitly listed, as well as elements inherent to implementing the method or apparatus.
  • an element defined by the phrase "comprising a ..." does not exclude the presence of additional related elements (such as a step in the method or a unit in the apparatus; for example, a unit may be part of a circuit, part of a processor, part of a program or software, etc.).
  • the display method provided by the embodiment of the present disclosure includes a series of steps, but the display method provided by the embodiment of the present disclosure is not limited to the steps described.
  • the display device provided by the embodiments of the present disclosure includes a series of modules; however, it is not limited to the explicitly recorded modules and may also include modules required for obtaining relevant information or for processing based on that information.
  • RGB image: a color image encoded in red, green, and blue; the color of each pixel is a mixture of red, green, and blue, i.e. each pixel includes three color components: red, green, and blue.
  • LAB image: a color image in the LAB encoding, where L represents lightness (luminance or luma) and "A" and "B" represent the two opponent-color dimensions, i.e. the two color channels.
  • Makeup transfer is an important direction in the image generation field of computer vision. In the traditional approach, designers make makeup stickers and users obtain makeup effects by selecting stickers to apply to the original image; makeup transfer technology, by contrast, provides a higher degree of freedom, supporting the extraction of makeup from any reference makeup image of interest, such as obtaining a model's makeup from the reference makeup image and migrating it to the original image.
  • Makeup transfer methods of the related art mainly transfer the makeup color in the reference makeup image to the user's original image through a simple Platts transformation. This is easily affected by factors such as lighting differences between the reference makeup image and the user's original image, differences in the positions of facial features, differences in face angles, and makeup textures; as a result, the makeup of the reference image looks strongly incongruous when transferred to the user's face, and the naturalness and detail of the makeup transfer are low.
  • Embodiments of the present disclosure provide a makeup transfer method, device, device, and computer-readable storage medium, which can improve the detail and naturalness of makeup transfer.
  • the makeup transfer method provided by the embodiments of the present disclosure is applied to a makeup transfer device; exemplary applications of the makeup transfer device provided by the embodiments of the present disclosure are described below.
  • the makeup migration device provided by the embodiments of the present disclosure can be implemented as AR glasses, a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (such as a mobile phone, portable music player, personal digital assistant, dedicated messaging device, or portable game device), or another type of user terminal, and can also be implemented as a server.
  • When the makeup migration device is implemented as a terminal, it can transfer the makeup of the makeup face in the makeup reference image to the user's face in the user image. Here, the terminal can interact with a cloud server and obtain at least one of the makeup reference image and the user image through the cloud server; the user image may also be acquired in real time, which is not limited in the embodiments of the present disclosure.
  • The following describes the makeup migration system, taking as an example the terminal obtaining a makeup reference image by interacting with the server and performing makeup migration.
  • FIG. 1 is a schematic diagram of an optional architecture of a makeup migration system 100 provided by an embodiment of the present disclosure.
  • a terminal 400 (terminal 400-1 and terminal 400-2 ) is connected to the server 200 through the network 300, and the network 300 may be a wide area network or a local area network, or a combination of both.
  • the terminal 400 is configured to acquire an original face image containing the original face and a makeup face image containing target makeup; based on the first organ area in the original face image, image the second organ area in the makeup face image Deformation processing to obtain a second deformed organ region; the first organ region and the second organ region correspond to the same type of organ; based on the second deformed organ region, color migration and texture migration are performed on the first organ region in the original face image, Get the original face image after makeup migration.
  • the preset makeup migration application 410 on the mobile phone can be started; on the makeup migration interface of the preset makeup migration application, after receiving a picture instruction, the terminal initiates a picture request to the server 200.
  • the server 200 acquires the makeup reference image from the database 500 ; and sends the makeup reference image back to the terminal 400 .
  • the terminal 400 extracts the makeup face image from the makeup reference image, migrates the makeup in the makeup face image to the original face image extracted from the user image, obtains the original face image after makeup migration, and displays the migrated original face image on the display interface of the preset makeup migration application 410.
  • An embodiment of the present disclosure provides a makeup transfer method, as shown in FIG. 2 , the method includes: S101-S103.
  • the terminal obtains the original face image and the makeup face image, where the makeup face image contains the target makeup; in this way, the terminal can migrate the target makeup to the original face and obtain the original face image after makeup migration.
  • the original face image and the target makeup face image may be images collected by the terminal through an image acquisition device, or images downloaded by the terminal from a server through the network; this is not limited in the embodiments of the present disclosure.
  • the original face may have original makeup or no makeup; this is not limited in the embodiment of the present disclosure.
  • the makeup of the original face in the original face image after makeup migration may be a superimposed makeup in which the target makeup is overlaid on the original makeup, or may be the target makeup replacing the original makeup.
  • the original face image includes multiple organ regions
  • the makeup face image may include corresponding multiple organ regions; in this way, the terminal may perform makeup migration for each organ region.
  • the organ region in the original face image is the first organ region
  • the organ region in the makeup face image is the second organ region.
  • the first organ region may include at least one of the following: a left eyebrow region, a right eyebrow region, a left eye makeup region, a right eye makeup region, a lipstick region, and a base region.
  • the second organ area may include at least one of: a left eyebrow makeup area, a right eyebrow makeup area, a left eye makeup area, a right eye makeup area, a lipstick area, and a foundation area.
  • the base area is the area of the original face other than the left eyebrow makeup area, the right eyebrow makeup area, the left eye makeup area, the right eye makeup area, and the lipstick area; the foundation area is the area of the makeup face other than the left eyebrow makeup area, the right eyebrow makeup area, the left eye makeup area, the right eye makeup area, and the lipstick area.
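If the organ regions are available as segmentation masks, such a base/foundation region can be obtained as the complement of the organ masks within the face mask. The following is a hypothetical boolean-mask sketch, not the disclosure's implementation:

```python
import numpy as np

def foundation_mask(face_mask, organ_masks):
    """Foundation/base region = face pixels not covered by any organ
    mask. All masks are boolean H x W arrays; organ_masks is a list."""
    covered = np.zeros_like(face_mask)
    for m in organ_masks:
        covered |= m          # union of all organ regions
    return face_mask & ~covered
```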
  • the terminal may perform image deformation processing on the shape of the second organ region based on the shape of the first organ region of the original face to obtain the second deformed organ region, so that the shape of the second deformed organ region is the same as that of the corresponding first organ region.
  • the second organ regions on which the terminal performs image deformation processing may be multiple organ regions of the makeup face image, or a single organ region of the makeup face image; which second organ regions undergo image deformation processing may be set according to actual requirements, which is not limited in the embodiments of the present disclosure.
  • based on the first eye makeup area, first eyebrow makeup area, first lipstick area, and first foundation area in the original face image 3A, the terminal may perform image deformation processing on the second eye makeup area, second eyebrow makeup area, second lipstick area, and second foundation area in the makeup face image 3B, obtaining the second deformed eye makeup area, second deformed eyebrow makeup area, second deformed lipstick area, and second deformed foundation area, and thus an image 3B1.
  • the facial features in the image 3B1 are basically consistent with the facial features in the original facial image 3A.
  • the image deformation processing may use a moving least squares (MLS) deformation algorithm, a line-based deformation algorithm, or a triangular mesh affine transformation algorithm; the method of image deformation processing may be set according to actual requirements, which is not limited in the embodiments of the present disclosure.
  • the terminal may transfer the color and texture in the second deformed organ region to the first organ region to obtain the original face image after makeup migration.
  • the second deformed organ areas include the second deformed eye makeup area, the second deformed eyebrow makeup area, and the second deformed lipstick area; the terminal can migrate the color and texture of the second deformed eye makeup area to the first eye makeup area, the color and texture of the second deformed eyebrow makeup area to the first eyebrow makeup area, and the color and texture of the second deformed lipstick area to the first lipstick area, thereby transferring the eye makeup, eyebrow makeup, and lipstick of the makeup face image to the original face image.
  • the terminal may perform color transfer on the first organ region based on the second deformed organ region through a color transfer algorithm, and migrate the texture of the second deformed organ region to the first organ region through a texture transfer method; in this way, the original face image is converted into the original face image after makeup migration.
  • the color transfer algorithm may be the Reinhard algorithm, the Welsh algorithm, an adaptive transfer algorithm, or a fuzzy C-means (FCM) based algorithm; this is not limited in the embodiments of the present disclosure.
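For reference, the Reinhard algorithm named above matches per-channel mean and standard deviation between a source and a reference image. A minimal illustrative NumPy sketch (Reinhard et al. originally apply this in a decorrelated color space such as lαβ/LAB, which is assumed here, not shown):

```python
import numpy as np

def reinhard_transfer(source, reference):
    """Reinhard-style statistics matching: scale each channel of
    `source` by the ratio of standard deviations, then shift to the
    reference mean. Inputs are H x W x C float arrays."""
    src = source.astype(float)
    ref = reference.astype(float)
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-8
    ref_mean, ref_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    return (src - src_mean) * (ref_std / src_std) + ref_mean
```

After the transfer, the result's per-channel mean and standard deviation match the reference region's, which is what gives the overall color of the reference makeup.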
  • the texture transfer method may be an attention mechanism-based texture transfer method, or a structure-guided image texture transfer method; it may also be an image fusion algorithm; this embodiment of the present disclosure makes no limitation.
  • the shape of the second organ region is deformed to match the first organ region of the same organ type in the original face image, obtaining the second deformed organ region; based on the second deformed organ region, color transfer and texture transfer are performed on the first organ region, which improves the detail and naturalness of the makeup transfer and thereby the quality of the makeup effect image.
  • the acquisition of the original face image containing the original face and the makeup face image containing the target makeup in S101 may include:
  • the original face image is a face image extracted from a user image containing the target object
  • the makeup face image is a face image extracted from a makeup reference image including a target makeup
  • the terminal may first acquire the user image and the makeup reference image, and then extract the original face image from the user image, and extract the makeup face image from the makeup reference image. In this way, the terminal can migrate the makeup of any makeup reference image to any user image, which improves the flexibility of makeup migration.
  • the makeup migration interface of the makeup migration application may be displayed on the display interface of the terminal.
  • a picture upload control is displayed on the makeup migration interface, so that upon receiving a trigger operation on the picture upload control, the terminal can acquire the user image and the makeup reference image through a preset interface in response to the operation.
  • the picture upload control includes a user image upload control 41 and a makeup reference image upload control 42.
  • when the terminal receives a trigger operation on the user image upload control 41 or the makeup reference image upload control 42, it can open a picture upload interface and display a picture library control and a shooting control on it; upon a trigger operation on the picture library control, the user image or makeup reference image can be obtained from the picture library, and when the terminal receives a trigger operation on the shooting control, the user image or makeup reference image can be collected through the image acquisition device.
  • the makeup reference image and the user image may be images collected by the terminal through an image acquisition device, or images downloaded by the terminal through the network.
  • the makeup reference image and user image may be set according to actual requirements; this is not limited in the embodiments of the present disclosure.
  • when the terminal has acquired the makeup reference image and the user image, it may display a makeup migration control on the makeup migration interface.
  • when the terminal receives a trigger operation on the makeup migration control, it determines that a makeup migration instruction has been received and, in response, extracts the original face image from the user image and the makeup face image from the makeup reference image, so that makeup transfer from the makeup face image to the original face image is realized according to the method of the embodiments of the present disclosure.
  • the terminal may perform at least one preprocessing such as face angle adjustment and scaling on the makeup reference image and the user image respectively, so that the angle and size of the makeup face image and the original face image are the same.
  • the terminal can perform face image extraction on the makeup face image and the original face image according to the same preset size, so that the size of the makeup face in the makeup face image and that of the original face in the original face image are the same.
  • the terminal can adjust the face angle of the face in the makeup reference image and that of the target object's face in the user image to obtain a frontal makeup face and a frontal face of the target object; then scale the two frontal faces to obtain a makeup face and an original face of the same size; and finally, centered on the original face and the makeup face respectively, obtain the original face image from the user image and the makeup face image from the makeup reference image according to the preset size. This yields a makeup face image and an original face image of the same size, in which the makeup face and the original face are also the same size.
  • when the terminal receives the makeup reference image 5B and the user image 5A, it preprocesses the user image 5A and extracts the original face image 5A1, and preprocesses the makeup reference image 5B and extracts the makeup face image 5B1; the sizes of the original face image 5A1 and the makeup face image 5B1 are both 400 × 400.
  • the implementation of extracting the original face image from the user image containing the target object in S1011, as shown in FIG. 6a, may include S201-S202.
  • after the terminal obtains the user image and the makeup reference image, it can perform face key point detection on both to obtain the first face key points of the user image and the second face key points of the makeup reference image.
  • the terminal may first perform skin smoothing and whitening processing on the user image and the makeup reference image, and then perform face key point detection, thereby improving detection accuracy.
  • the terminal may perform face alignment on the user image according to the first face key points to obtain the original face image.
  • the terminal may perform an affine transformation according to the location information of the key points of the first human face, so as to implement face alignment on the user image.
• face alignment is performed on the user image to obtain the original face image, as shown in FIG. 6b, including: S2021-S2023.
• the terminal adjusts the face of the target object to a target aligned face, where the target aligned face is a frontal face of a preset face size; the terminal can obtain the target position information of the target aligned face key points, and obtain the first transformation matrix from the first original position information of the first face key points and the target position information.
• the first transformation matrix represents the positional relationship between the first original position information and the target position information; thus, the terminal can transform the positions of the first face key points to the positions of the target aligned face key points according to the first transformation matrix, realizing face alignment of the target object's face and obtaining a user-aligned image; the aligned face of the target object in the user-aligned image is a frontal face of the preset face size.
• there are 240 first face key points, and correspondingly 240 target aligned face key points.
• the position information of a key point is represented by two-dimensional coordinates: the first original position information of any first face key point is expressed as (x_i, y_i), and the target position information of the corresponding target aligned face key point is expressed as (x_i', y_i'), where 1 ≤ i ≤ 240; thus, formula (1) can be obtained:

x_i' = a·x_i + b·y_i + c,  y_i' = d·x_i + e·y_i + f    (1)

• a, b, c, d, e and f are affine transformation coefficients.
• the first transformation matrix λ can be obtained by formula (3), see formula (3):

λ = [[a, b, c], [d, e, f]]    (3)
• after the terminal adjusts the first face key points to the target aligned face key points through the first transformation matrix, it can, centering on the aligned face of the target object and according to a preset image size, extract the original face image from the user-aligned image, with the aligned face of the target object in the original face image serving as the original face.
• the implementation of extracting the makeup face image from the makeup reference image containing the target makeup in S1011, as shown in FIG. 6c, may include: S301-S302.
• face key point detection may be performed on the makeup reference image to obtain the second face key points of the makeup reference image.
  • the terminal may first perform skin smoothing and whitening treatment on the makeup reference image, and then perform face key point detection, thereby improving detection accuracy.
• the terminal may perform face alignment on the makeup reference image according to the second face key points to obtain the makeup face image.
• the terminal may perform an affine transformation according to the position information of the second face key points, so as to realize face alignment of the makeup reference image.
  • the terminal can perform face alignment through face key points, and perform makeup transfer based on the face-aligned image, which improves the accuracy of makeup transfer, thereby improving the effect of makeup transfer.
• the terminal may obtain the target position information of the target aligned face key points, and obtain the second transformation matrix from the second original position information of the second face key points and the target position information.
• the terminal adjusts the makeup face in the makeup reference image to the target aligned face, which is a frontal face of the preset face size.
• the second transformation matrix represents the positional relationship between the second original position information and the target position information; the terminal can adjust the positions of the second face key points to the positions of the target aligned face key points according to the second transformation matrix, realizing face alignment of the makeup face and obtaining a makeup-aligned image; the makeup-aligned face in the makeup-aligned image is a frontal face of the preset face size.
  • the manner of obtaining the second transformation matrix is the same as that of the first transformation matrix.
  • the manner of obtaining the first transformation matrix has been described in detail in S2022 and will not be repeated here.
• the terminal adjusts the second face key points to the positions of the target aligned face key points through the second transformation matrix, and extracts, centering on the makeup-aligned face, the makeup face image from the makeup-aligned image, with the makeup-aligned face in the makeup face image serving as the makeup face.
• the makeup face image can be extracted from the makeup-aligned image according to a preset image size.
• the makeup face image extracted according to the preset image size is the same size as the original face image; the makeup-aligned face in the makeup face image is the same size as the aligned face of the target object in the original face image.
• the size of the target aligned face is 400×400 and the preset image size is 512×512, so the sizes of the original face image and the makeup face image are both 512×512; in the original face image, the size of the adjusted face of the target object is 400×400, and the size of the adjusted makeup face in the makeup face image is 400×400; the center position of the original face image coincides with that of the adjusted face of the target object, and the center position of the makeup face image coincides with that of the adjusted makeup face.
• after the terminal obtains the first face key points and the second face key points, it can use them to keep the angle and size of both faces consistent with the target aligned face, thereby obtaining the original face image and the makeup face image; because these two images have the same size, and the original face in the original face image and the makeup face in the makeup face image are also the same size, the terminal can perform makeup migration based on them, which improves the accuracy of the makeup migration.
• image deformation processing is performed on the second organ region in the makeup face image to obtain the second deformed organ region; as shown in FIG. 7, this may include: S401-S402.
  • the terminal may perform organ segmentation on the original face image and the makeup face image respectively, so as to obtain the first organ area of the original face image and the second organ area of the makeup face image.
  • the terminal may perform organ segmentation on the original face image and the face image with makeup through the facial features segmentation algorithm.
• the facial features segmentation algorithm may be a Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation (BiSeNetV2) algorithm, an Effective Hierarchical Aggregation Network for Face Parsing (EHANet) algorithm, or a Weakly-supervised Caricature Face Parsing through Domain Adaptation (CariFaceParsing) algorithm; the embodiments of the present disclosure do not limit this.
• the terminal may perform organ segmentation on the original face image according to the face key points in the original face image to obtain the first organ region, and perform organ segmentation on the makeup face according to the face key points in the makeup face image to obtain the second organ region.
  • S402. Perform image deformation processing on the second organ region, and adjust the shape information of the second organ region to the corresponding shape information of the first organ region to obtain a second deformed organ region.
• the terminal may adjust the shape information of the corresponding second organ region according to the shape information of the first organ region, setting the shape information of the second organ region to that of the first organ region, to obtain the second deformed organ region.
  • the shape information may include: information such as outline and area; this may be set according to actual requirements, and the embodiments of the present disclosure make no limitation thereto.
• image deformation processing is performed on the second organ region in S402, adjusting the shape information of the second organ region to the corresponding shape information of the first organ region to obtain the second deformed organ region; as shown in FIG. 8, this may include: S501-S502.
• S501. Determine a plurality of second triangular meshes based on the second aligned face key points in the second organ region, and determine the corresponding first triangular meshes based on the first aligned face key points in the first organ region.
• the original face image is the user image after face alignment, in which the first face key points have been adjusted to become the first aligned face key points; the makeup face image is the makeup reference image after face alignment, in which the second face key points have been adjusted to become the second aligned face key points; the first organ region in the original face image may include a plurality of first aligned face key points, and the second organ region in the makeup face image may include a plurality of second aligned face key points.
• the terminal may connect the multiple second aligned face key points in each second organ region according to a preset triangulation method to obtain a second triangle mesh, which includes a plurality of disjoint second triangles; and the terminal may connect the plurality of first aligned face key points in the first organ region according to the preset triangulation method to obtain a first triangle mesh, which includes a plurality of disjoint first triangles.
• each first triangle in the first triangle mesh is in one-to-one correspondence with a second triangle in the second triangle mesh.
• the terminal can obtain a triangular affine transformation matrix from each second triangle and the corresponding first triangle; through this matrix, an affine transformation is performed on the second triangle, adjusting its shape information to that of the corresponding first triangle, so that the shape information of each second triangle matches the shape information of the corresponding first triangle, thereby obtaining each second deformed triangle; the second deformed triangles compose the deformed second triangle mesh, thereby yielding the second deformed organ region.
• the first organ region includes 9 first aligned face key points, which are connected according to the preset triangulation method to obtain 8 first triangles T1_1-T8_1; the second organ region includes 9 second aligned face key points, which are connected according to the preset triangulation method to obtain 8 second triangles T1_2-T8_2 corresponding to the 8 first triangles; thus, by adjusting the shape information of the 8 second triangles in FIG. 9b to the shape information of the 8 first triangles in FIG. 9a, the shape information of the second organ region can be adjusted to that of the first organ region, obtaining the second deformed organ region.
• the more face key points there are, the more first and second triangles are obtained, and the higher the consistency between the shapes of the second deformed organ regions and the corresponding first organ regions.
• the terminal may first adjust the shape information of the second organ region in the makeup face image to be the same as that of the corresponding first organ region in the original face image, and then perform makeup transfer, which makes the target makeup conform better to the original face, thereby improving the detail of the makeup transfer.
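The per-triangle warp described above can be sketched in NumPy: since three vertex pairs determine an affine map exactly, each second triangle's matrix comes from solving a 3×3 linear system (an illustrative sketch, not the disclosed implementation):

```python
import numpy as np

def triangle_affine(tri_src: np.ndarray, tri_dst: np.ndarray) -> np.ndarray:
    """2x3 affine matrix mapping the three vertices of tri_src (3, 2)
    exactly onto tri_dst (3, 2); one such matrix per triangle pair."""
    A = np.hstack([tri_src, np.ones((3, 1))])
    # Three vertex correspondences determine the six coefficients exactly.
    return np.linalg.solve(A, tri_dst).T

def warp_points(M: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    return pts @ M[:, :2].T + M[:, 2]
```

In a full implementation the matrix is then used to resample the pixels inside the triangle (e.g. with `cv2.warpAffine` plus a triangle mask); the sketch shows only the geometric mapping.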
• color migration and texture migration are performed on the first organ region in the original face image to obtain the original face image after makeup migration; as shown in FIG. 10, this may include: S601-S602.
  • the terminal may transfer the color of the second deformed organ region to the corresponding first organ region to obtain the first color transferred region.
  • the terminal may replace pixels in the first organ region with pixels in the second deformed organ region, thereby changing the color in the first organ region to obtain the first color shift region.
• color transfer is performed on the corresponding first organ region to obtain the first color transfer region, which may include: S5011-S5012.
• the terminal can convert the original face image and the makeup face image from RGB images to LAB images, so that each pixel in the first organ region and the second deformed organ region includes an L-channel pixel value, an A-channel pixel value and a B-channel pixel value.
• for each pixel, the terminal can subtract from the pixel's L-channel value the mean L-channel value of the first organ region, and add the mean L-channel value of the corresponding second deformed organ region, to obtain the migrated L-channel value of each pixel; in the same way, the migrated A-channel and B-channel values of each pixel can be obtained.
• after the terminal obtains the migrated L-channel, A-channel, and B-channel values of each pixel, it obtains the first color migration LAB area, and then converts this LAB area into RGB format to obtain the first color transfer region.
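The per-channel operation in S5011-S5012 amounts to a mean shift in LAB space. A minimal sketch, assuming the regions have already been converted to LAB and flattened to (N, 3) arrays (the conversion itself is omitted):

```python
import numpy as np

def mean_shift_transfer(region_lab: np.ndarray, ref_lab: np.ndarray) -> np.ndarray:
    """For each pixel and each of the L, A, B channels, subtract the
    first organ region's own channel mean and add the second deformed
    organ region's channel mean.  Inputs are float arrays of shape (N, 3)."""
    return region_lab - region_lab.mean(axis=0) + ref_lab.mean(axis=0)
```

After the shift the region's channel means equal the reference's, while the per-pixel variation around the mean (the original shading) is preserved.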
• after obtaining the first color transfer region, the terminal can transfer the gradient information of the second deformed organ region to the corresponding first color transfer region through the Poisson fusion algorithm, so as to transfer the texture of the second deformed organ region in the makeup face image to the corresponding first color transfer region, changing the skin texture of the multiple first color transfer regions in the original face image and obtaining the migrated original face image.
  • the first organ area includes: the user's eyebrow makeup area, the user's lipstick area, the user's eye makeup area, and the user's foundation area; the terminal performs color migration on the user's eyebrow makeup area, the user's lipstick area, the user's eye makeup area, and the user's foundation area After that, texture migration can be performed on the user's eyebrow makeup area, user's lipstick area, user's eye makeup area, and user's foundation area.
• by performing color migration first and then Poisson fusion, the terminal realizes the texture change based on the pixels in the first color migration region, which improves the naturalness of the makeup migration.
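Poisson fusion solves for pixels whose Laplacian matches the source's (the texture) while the target fixes the boundary values (the color). A didactic single-channel Jacobi solver; production code would use a sparse solver or OpenCV's `seamlessClone`:

```python
import numpy as np

def poisson_blend(target: np.ndarray, source: np.ndarray,
                  mask: np.ndarray, iters: int = 500) -> np.ndarray:
    """Inside `mask` (which must not touch the image border), iterate
    toward pixels whose Laplacian equals the source's, with Dirichlet
    boundary values taken from the target."""
    out = target.astype(float).copy()
    src = source.astype(float)
    ys, xs = np.where(mask)
    for _ in range(iters):
        new = out.copy()
        for y, x in zip(ys, xs):
            neigh = out[y - 1, x] + out[y + 1, x] + out[y, x - 1] + out[y, x + 1]
            # The source Laplacian is the guidance (texture) term.
            lap = 4 * src[y, x] - (src[y - 1, x] + src[y + 1, x]
                                   + src[y, x - 1] + src[y, x + 1])
            new[y, x] = (neigh + lap) / 4.0
        out = new
    return out
```

Because the color was already matched in S5011-S5012, the gradients carried over here change only the apparent skin texture, not the overall tone.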
• performing color transfer and texture transfer on the first organ region in the original face image to obtain the original face image after makeup transfer may be realized as follows:
• after the terminal obtains the migrated original face image, face makeup migration is realized; since the original face image was extracted from the user image, the terminal can apply the inverse of the extraction process to the migrated face image to obtain, as the makeup effect image, the user image in which the target object's face carries the target makeup.
  • the makeup effect image is obtained based on the transferred original face image and user image, as shown in FIG. 11 , which may include: S1041-S1043.
• the original face image is obtained by adjusting the user image based on the first transformation matrix; therefore, after the terminal performs makeup migration to obtain the migrated original face image, it can inversely adjust the migrated original face image to obtain the inversely adjusted original face image, which has the same size and angle as the face of the target object in the user image; that is, the inversely adjusted original face image is the face of the target object carrying the target makeup.
• the terminal can obtain the inverse matrix of the first transformation matrix as the first transformation inverse matrix, and through it adjust the positions of the first aligned face key points in the original face image back to the positions of the first face key points, obtaining the inversely adjusted original face image.
• after the terminal obtains the inversely adjusted original face image, it pastes it back into the user image, replacing the original face image extracted from the user image, thereby obtaining the makeup effect image.
• in this way, the terminal can perform makeup migration on user images and makeup reference images of various angles and sizes, which improves the naturalness and detail of the makeup migration and also increases its flexibility.
  • the inversely adjusted original face image is used to replace the original face image in the user image to obtain the makeup effect image, as shown in FIG. 12 , which may include: S701-S702 .
• after obtaining the makeup effect image, the terminal can identify the torso skin areas of the makeup effect image and the makeup reference image, obtaining the first torso skin area of the makeup effect image and the second torso skin area of the makeup reference image.
• the torso skin area is an exposed skin area other than the face; here, the terminal can use a facial features segmentation algorithm, such as a semantic segmentation method, with the torso skin area as the recognition object, to obtain the first torso skin area and the second torso skin area.
• the terminal may perform color migration on the first torso skin area based on the second torso skin area to obtain a natural makeup effect image, and the natural makeup effect image is displayed on the makeup migration interface.
• the terminal may also perform color migration on the first torso skin area of the target object in the makeup effect image based on the second torso skin area in the makeup reference image to obtain a natural makeup effect image; in this way, the face and torso skin colors of the target object in the natural makeup effect image are more natural and coordinated, thereby improving the naturalness of the makeup migration.
• image deformation processing is performed on the second organ region in the makeup face image to obtain the second deformed organ region.
  • the method includes: when the area of the first organ region is larger than the preset target area, based on the first organ region, performing image deformation processing on the corresponding second organ region to obtain a second deformed organ region.
• the terminal may compare the area of the first organ region with the corresponding preset target area; if the area of the first organ region is smaller than the corresponding preset target area, it is determined that the occluded part of the first organ region is too large, so the terminal does not perform image deformation processing for that organ.
  • the area of the first organ region is represented by pixels.
• the first organ region is the right eyebrow makeup area; the area of the right eyebrow makeup area is 40×10 and the corresponding preset right-eyebrow makeup target area is 50×10; in this way, the terminal can judge that the right eyebrow makeup area is occluded, and the terminal does not perform image deformation processing on the right eyebrow makeup area in the second organ region.
  • different first organ regions may correspond to different preset target areas, and the preset target area may be set according to actual requirements; this is not limited by the embodiments of the present disclosure.
• after the terminal obtains the first organ region, it can determine whether the first organ region is occluded according to its area and the corresponding preset target area, so that makeup migration is not performed on over-occluded first organ regions, which saves resource consumption and improves the efficiency of the makeup migration.
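The occlusion test reduces to a per-organ area comparison. A sketch using the eyebrow figures from the example above; the other preset areas and the organ names are illustrative assumptions, not from the disclosure:

```python
# Illustrative preset target areas in pixels per organ type.
PRESET_TARGET_AREA = {
    "right_eyebrow": 50 * 10,  # from the example above
    "lipstick": 60 * 30,       # assumed value, not from the disclosure
}

def should_transfer(organ: str, region_area: int) -> bool:
    """Deform and transfer an organ only when the first organ region's
    pixel area exceeds its preset target area, i.e. when the organ is
    not judged occluded."""
    return region_area > PRESET_TARGET_AREA[organ]
```

With the example numbers, a 40×10 right eyebrow region (area 400) falls below the 500-pixel preset, so it is skipped.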
  • Fig. 13 is a schematic diagram of a makeup transfer method provided by an embodiment of the present disclosure. As shown in Fig. 13, the method may include:
  • the angle of the human face indicates the angle at which the human face deviates from the frontal face.
• there may be 240 first face key points and 240 second face key points.
• the terminal may obtain the first transformation matrix based on the first face key points and the target aligned face key points.
• the terminal may obtain an affine matrix based on the face angle information as the first transformation matrix.
• the terminal adjusts the face in the user image to be processed to a frontal face according to the first face key points and the first transformation matrix, and then, centering on the frontal face, extracts the original face image according to a preset image size; likewise, it adjusts the face in the makeup reference image to be processed to a frontal face according to the second face key points, and then, centering on that frontal face, extracts the makeup face image according to the preset image size.
  • the size of the original face image and the makeup face image is 512 ⁇ 512, and the size of the front face in the original face image and the front face in the makeup face image is 400 ⁇ 400.
  • the terminal may implement organ segmentation through a facial features segmentation algorithm; the first organ region and the second organ region may be image regions in the form of a facial features segmentation map.
• the terminal can obtain, from the above 240 face key points and according to the semantics of each key point, the M key points of the second organ area, such as the foundation area, the eye makeup area, or the lipstick area.
  • M is a positive integer greater than or equal to 3.
• the terminal connects the M key points according to preset triangle connection rules to obtain N triangles as the second triangle mesh.
  • N is a positive integer greater than or equal to 1.
• the terminal performs the same processing on the original face image to obtain the N corresponding triangles as the first triangle mesh.
• each first triangle in the first triangle mesh is in one-to-one correspondence with a second triangle in the second triangle mesh.
• by traversing each pair of corresponding first and second triangles, the terminal calculates the affine transformation matrix from each second triangle to the corresponding first triangle.
• the terminal performs an affine transformation on each second triangle in the makeup face image so that each second triangle fits the corresponding first triangle, making the shape of the second organ region in the makeup face image consistent with the shape of the first organ region in the original face image, to obtain the second deformed organ region.
  • the terminal may convert the original face image and the makeup face image from an RGB image to a LAB image, so that the makeup migration effect is more in line with the subjective perception of human eyes.
• the terminal can separately calculate the mean values of the pixels of the makeup face image and the original face image on the three LAB channels; then, for each pixel in the original face image and each of its three LAB channels, it subtracts the original face image's mean for that channel and adds the makeup face image's mean for that channel, obtaining the color migration area in LAB space.
• the terminal transforms the color migration area in LAB space back into the RGB color space to obtain the first color transfer region.
• the terminal uses the Poisson fusion algorithm to transfer the texture of the second deformed organ region to the corresponding first color transfer region, so as to realize the transfer of the makeup texture, obtaining the migrated original face image.
• through color migration, the color of the original face image is matched to the color of the makeup face image.
• the gradient information of the makeup face image (structural information with the color removed) can be migrated to the original face image, so that the visual effect of the migrated original face image reflects the change in skin texture.
• the terminal may perform inverse face-alignment processing on the migrated original face image, that is, paste the migrated original face image back onto the face of the user image to be processed.
• the inverse process of face alignment includes restoring the size of the migrated original face image, as well as the angle and size of the face; in this way, the terminal can paste the inversely adjusted original face image back into the user image to be processed, at which point the face in the user image to be processed is a face with makeup.
• the terminal may calculate the inverse matrix of the first transformation matrix obtained in S804, and use it to perform an affine transformation on the migrated original face image to restore it to the original face size, e.g. from 512×512 back to the original size.
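The first transformation inverse matrix can be computed by inverting the linear part of the 2×3 affine and mapping the translation accordingly; a NumPy sketch (function names are illustrative):

```python
import numpy as np

def invert_affine(M: np.ndarray) -> np.ndarray:
    """Inverse of a 2x3 affine matrix M = [A | t]: for y = A x + t,
    the inverse maps y back via x = A_inv y - A_inv t, i.e. it carries
    aligned-face coordinates back to original user-image coordinates."""
    A, t = M[:, :2], M[:, 2]
    A_inv = np.linalg.inv(A)
    return np.hstack([A_inv, (-A_inv @ t)[:, None]])

def apply_affine(M: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    return pts @ M[:, :2].T + M[:, 2]
```

Applying the forward matrix and then its inverse returns every keypoint to its original position, which is exactly the paste-back step.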
  • the terminal can migrate the makeup of any makeup reference image to any user image, and perform color migration and texture migration on the human face based on organ deformation.
• through color migration of the torso skin, the terminal can match the colors of the face and torso of the user image to be processed, improving the naturalness and detail of the makeup transfer.
  • the terminal may perform skin segmentation operations on the user image to be processed and the makeup reference image to be processed, respectively, to obtain the first torso skin area corresponding to the user image to be processed, and the second torso skin area corresponding to the makeup reference image to be processed area.
  • the terminal performs color transfer on the first torso skin area according to the second torso skin area through the above color transfer method to obtain a natural makeup effect image.
• by using face key point detection, a triangulation algorithm and affine transformation, the terminal in the embodiment of the present disclosure can correct the positions of the facial features in the reference makeup image to be consistent with the user photo before migrating the makeup, thus overcoming the problem in related technologies of unnatural migration effects caused by differing angles and facial features between the reference makeup image and the user image, and supporting makeup migration for any face angle, any facial features, and any face shape.
• by performing color migration first and then Poisson fusion, the texture details of the reference makeup are transferred while the naturalness of the migration result is guaranteed.
  • FIG. 14 is a schematic diagram of an optional composition structure of the makeup transfer device provided in the embodiment of the present disclosure.
  • the makeup transfer device 20 includes:
  • the acquisition part 2001 is configured to acquire an original human face image containing the original human face and a makeup face image containing the target makeup;
  • the deformation part 2002 is configured to perform image deformation processing on the second organ area in the makeup face image based on the first organ area in the original face image to obtain a second deformed organ area; the first the organ region and the second organ region correspond to the same type of organ;
  • the migration part 2003 is configured to perform color migration and texture migration on the first organ region in the original face image based on the second deformed organ region, to obtain the original face image after makeup migration.
• the acquisition part 2001 is further configured to extract the original human face image from the user image containing the target object, and extract the makeup face image from the makeup reference image containing the target makeup.
  • the acquisition part 2001 is further configured to detect the key points of the face of the user image to obtain the first key point of the face of the user image; based on the first key point of the face , performing face alignment on the user image to obtain the original face image.
  • the acquisition part 2001 is further configured to perform face key point detection on the makeup reference image to obtain a second face key point of the makeup reference image; based on the second face The key point is to perform face alignment on the makeup reference image to obtain the makeup face image.
  • the acquisition part 2001 is further configured to acquire a first transformation matrix based on the first original position information of the first face key point and the target position information of the target-aligned face key point;
  • the first transformation matrix represents the positional relationship between the first original position information and the target position information; based on the first transformation matrix, adjust the first original position information to obtain a user-aligned image; from the The original face image is extracted from the user-aligned image.
  • the acquiring part 2001 is further configured to acquire a second transformation matrix based on the second original position information of the second facial key point and the target position information of the target aligned human face key point;
  • the second transformation matrix characterizes the positional relationship between the second original position information and the target position information; based on the second transformation matrix, adjust the second original position information to obtain a makeup alignment image; from the The face image of the makeup is extracted from the makeup alignment image.
• the deformation part 2002 is further configured to perform organ segmentation on the original face image to obtain the first organ region, and perform organ segmentation on the makeup face image to obtain the second organ region; and to perform image deformation processing on the second organ region, adjusting the shape information of the second organ region to the corresponding shape information of the first organ region, to obtain the second deformed organ region.
  • the deformation part 2002 is further configured to determine a second triangle mesh based on the second aligned face key points in the second organ region, and to determine a corresponding first triangle mesh based on the first aligned face key points in the first organ region, where the second triangles in the second triangle mesh are pairwise disjoint and the first triangles in the first triangle mesh are pairwise disjoint; and to adjust, through affine transformation, the shape information of each second triangle in the second triangle mesh to the shape information of the corresponding first triangle, to obtain the second deformed organ region.
  • the transfer part 2003 is further configured to perform color transfer on the corresponding first organ region based on the second deformed organ region to obtain a first color transfer region, and to transfer the texture of the second deformed organ region into the corresponding first color transfer region to obtain the transferred original face image.
  • the transfer part 2003 is further configured to subtract, from the pixel value of each channel of each pixel in the first organ region, the pixel mean of the corresponding channel in the first organ region, and then add the pixel mean of the corresponding channel of the second deformed organ region, to obtain a shifted pixel value for each channel of each pixel; and to obtain the first color transfer region based on the shifted pixel value of each channel of each pixel.
  • the transfer part 2003 is further configured to, after color transfer and texture transfer are performed on the first organ region in the original face image based on the second deformed organ region to obtain the transferred original face image, obtain a makeup effect image based on the transferred original face image and the user image.
  • the transfer part 2003 is further configured to acquire the inverse of the first transformation matrix as a first inverse transformation matrix; to perform inverse adjustment on the transferred original face image based on the first inverse transformation matrix to obtain an inversely adjusted original face image; and to replace the original face image in the user image with the inversely adjusted original face image to obtain the makeup effect image.
  • the transfer part 2003 is further configured to, after the original face image in the user image is replaced with the inversely adjusted original face image to obtain the makeup effect image, identify a first torso skin region of the makeup effect image and a second torso skin region of the makeup reference image; and to perform color transfer on the first torso skin region based on the second torso skin region to obtain a natural makeup effect image, and display the natural makeup effect image on the makeup transfer interface.
  • the deformation part 2002 is further configured to, in the case that the area of the first organ region is larger than a preset target area, perform image deformation processing on the corresponding second organ region based on the first organ region, to obtain the second deformed organ region.
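Taken together, the acquisition, deformation, and transfer parts enumerated above form a three-stage pipeline. The sketch below only illustrates that flow; the class name, the nearest-neighbor resize standing in for the triangle-mesh warp, and the use of NumPy arrays are assumptions for illustration, not the device's actual implementation.

```python
import numpy as np

class MakeupTransferPipeline:
    """Illustrative sketch of the acquisition -> deformation -> transfer flow."""

    def acquire(self, user_region, makeup_region):
        # In the described device this step would run face key-point
        # detection and alignment; here the crops are passed through as-is.
        return user_region, makeup_region

    def deform(self, source_region, target_shape):
        # Stand-in for the triangle-mesh affine warp: nearest-neighbor
        # resize of the makeup organ region to the user's organ shape.
        h, w = target_shape
        ys = np.linspace(0, source_region.shape[0] - 1, h).astype(int)
        xs = np.linspace(0, source_region.shape[1] - 1, w).astype(int)
        return source_region[np.ix_(ys, xs)]

    def transfer(self, user_region, deformed_region):
        # Channel-mean color shift (the color-transfer step).
        shifted = (user_region - user_region.mean(axis=(0, 1))
                   + deformed_region.mean(axis=(0, 1)))
        return np.clip(shifted, 0, 255)

    def run(self, user_region, makeup_region):
        u, m = self.acquire(user_region, makeup_region)
        warped = self.deform(m, u.shape[:2])
        return self.transfer(u, warped)
```

The texture-transfer (Poisson fusion) step is omitted from the sketch, since it requires a gradient-domain solver beyond a few lines.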
  • FIG. 15 is a schematic diagram of an optional composition structure of the makeup transfer device provided in the embodiment of the present disclosure.
  • the makeup transfer device 21 includes: a processor 2101 and a memory 2102.
  • the memory 2102 stores a computer program that can run on the processor 2101.
  • when the processor 2101 executes the computer program, the steps of any one of the methods presented in the embodiments of the present disclosure are implemented.
  • the memory 2102 is configured to store computer programs and applications for the processor 2101, and may also cache data to be processed or already processed by the processor 2101 and by each module in the device (for example, image data, audio data, voice communication data, and video communication data); it may be implemented by a flash memory (FLASH) or a random access memory (RAM).
  • when the processor 2101 executes the program, the steps of any one of the aforementioned makeup transfer methods are realized.
  • the processor 2101 generally controls the overall operation of the makeup transfer device 21.
  • the above-mentioned processor may be at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a central processing unit (CPU), a controller, a microcontroller, or a microprocessor. Understandably, the electronic device implementing the above processor functions may also be another device, which is not limited in the embodiments of the present disclosure.
  • the computer-readable storage medium/memory may be a volatile or non-volatile storage medium, such as a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); it may also be any of various terminals including one of, or any combination of, the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual requirements to achieve the purpose of the solutions of the embodiments of the present disclosure.
  • each functional unit in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist separately, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
  • if the above integrated units of the present disclosure are implemented in the form of software function modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions for causing a device to execute all or part of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include various media capable of storing program code, such as removable storage devices, ROMs, magnetic disks, or optical discs.
  • the terminal in the embodiments of the present disclosure can, by using face key point detection, a triangulation algorithm, and affine transformation, correct the positions of the facial features in the makeup reference image to be consistent with the user's photo, and perform the makeup transfer on this basis.


Abstract

A makeup transfer method, apparatus, device, and computer-readable storage medium. The method includes: acquiring an original face image containing an original face and a makeup face image containing a target makeup (S101); performing, based on a first organ region in the original face image, image deformation processing on a second organ region in the makeup face image to obtain a second deformed organ region, the first organ region and the second organ region corresponding to the same type of organ (S102); and performing, based on the second deformed organ region, color transfer and texture transfer on the first organ region in the original face image to obtain a makeup-transferred original face image (S103). The method improves the naturalness and level of detail of makeup transfer.

Description

Makeup transfer method, apparatus, device, and computer-readable storage medium
Cross-reference to related applications
The present disclosure is filed on the basis of, and claims priority to, Chinese patent application No. 202110530429.1, filed on May 14, 2021 and entitled "Makeup transfer method, apparatus, device and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to image processing technology, and in particular to a makeup transfer method, apparatus, device, and computer-readable storage medium.
Background
At present, a user can use a beauty application to transfer the makeup of a model in a makeup reference image onto the face of a target object, thereby automatically applying makeup to the target object's face. However, makeup transfer usually only transfers the color of the model's makeup to the target object's face (e.g., the user's face); the textures of makeup regions such as the lip texture and eyebrow texture of the user's face still differ from those of the model, which degrades the detail and naturalness of the makeup transfer in regions such as the lipstick and eyebrows of the user's face.
Summary
Embodiments of the present disclosure provide a makeup transfer method, apparatus, device, and computer-readable storage medium, which improve the detail and naturalness of makeup transfer.
The technical solution of the present disclosure is implemented as follows:
An embodiment of the present disclosure provides a makeup transfer method, including:
acquiring an original face image containing an original face and a makeup face image containing a target makeup; performing, based on a first organ region in the original face image, image deformation processing on a second organ region in the makeup face image to obtain a second deformed organ region, the first organ region and the second organ region corresponding to the same type of organ; and performing, based on the second deformed organ region, color transfer and texture transfer on the first organ region in the original face image to obtain a makeup-transferred original face image.
In this way, by performing image deformation processing on the second organ region of the makeup face image, the shape of the second organ region is deformed into that of the first organ region of the same organ type in the original face image, obtaining the second deformed organ region; color transfer and texture transfer are then performed on the first organ region based on the second deformed organ region, which improves the detail and naturalness of the makeup transfer and thus the quality of the makeup effect image.
本公开实施例提供一种妆容迁移装置,包括:
获取部分,被配置为获取包含原始人脸的原始人脸图像和包含目标妆容的妆容人脸图像;
变形部分,被配置为基于所述原始人脸图像中的第一器官区域,对所述妆容人脸图像中的第二器官区域进行图像变形处理,得到第二变形器官区域;所述第一器官区域和所述第二器官区域对应同一类型的器官;
迁移部分,被配置为基于所述第二变形器官区域,对所述原始人脸图像中的第一器官区域进行颜色迁移和纹理迁移,得到妆容迁移后的原始人脸图像。
上述装置中,所述获取部分,还被配置为从包含目标对象的用户图像中提取所述原始人脸图像,以及从包含所述目标妆容的妆容参考图像中提取所述妆容人脸图像。
上述装置中,所述获取部分,还被配置为对所述用户图像进行人脸关键点检测,得到所述用户图像的第一人脸关键点;基于所述第一人脸关键点,对所述用户图像进行人脸对齐,得到所述原始人脸图像。
上述装置中,所述获取部分,还被配置为对所述妆容参考图像进行人脸关键点检测,得到所述妆容参考图像的第二人脸关键点;基于所述第二人脸关键点,对所述妆容参考图像进行人脸对齐,得到所述妆容人脸图像。
上述装置中,所述获取部分,还被配置为基于所述第一人脸关键点的第一原始位置信息和目标对齐人脸关键点的目标位置信息,获取第一变换矩阵;所述第一变换矩阵表征所述第一原始位置信息和所述目标位置信息之间的位置关系;基于所述第一变换矩阵,调整所述第一原始位置信息,得到用户对齐图像;从所述用户对齐图像中提取所述原始人脸图像。
上述装置中,所述获取部分,还被配置为基于所述第二人脸关键点的第二原始位置信息和目标对齐人脸关键点的目标位置信息,获取第二变换矩阵;所述第二变换矩阵表征所述第二原始位置信息和所述目标位置信息之间的位置关系;基于所述第二变换矩阵,调整所述第二原始位置信息,得到妆容对齐图像;从所述妆容对齐图像中提取所述妆容人脸图像。
上述装置中,所述变形部分,还被配置为对所述原始人脸图像进行器官分割,得到所述第一器官区域,以及对所述妆容人脸图像进行器官分割,得到所述第二器官区域;对所述第二器官区域进行图像变形处理,将所述第二器官区域的形状信息调整为对应的第一器官区域的形状信息,得到所述第二变形器官区域。
上述装置中,所述变形部分,还被配置为基于所述第二器官区域中的第二对齐人脸关键点,确定第二三角形网格,以及,基于所述第一器官区域中的第一对齐人脸关键点,确定对应的第一三角形网格;通过仿射变换,将所述第二三角形网格中的每个第二三角形的形状信息调整为对应的第一三角形的形状信息,得到所述第二变形器官区域。
上述装置中,所述迁移部分,还被配置为基于所述第二变形器官区域,对对应的第一器官区域进行颜色迁移,得到第一颜色迁移区域;将所述第二变形器官区域的纹理迁移至所述对应的第一颜色迁移区域中,得到所述迁移后的原始人脸图像。
上述装置中,所述迁移部分,还被配置为采用所述第一器官区域中每个像素的每个通道的像素值减去所述第一器官区域中对应通道的像素均值,再加上所述第二变形器官区域的对应通道的像素均值,得到每个像素的每个通道的迁移像素值;基于所述每个像素的每个通道的迁移像素值,得到所述第一颜色迁移区域。
上述装置中,所述迁移部分,还被配置为基于所述第二变形器官区域,对所述原始人脸图像中的第一器官区域进行颜色迁移和纹理迁移,得到妆容迁移后的原始人脸图像之后,基于所述迁移后的原始人脸图像和所述用户图像,得到妆容效果图像。
上述装置中,所述迁移部分,还被配置为获取所述第一变换矩阵的逆矩阵,作为第一变换逆矩阵;基于所述第一变换逆矩阵,对所述迁移后的原始人脸图像进行逆调整,得到逆调整后的原始人脸图像;利用所述逆调整后的原始人脸图像替换所述用户图像中的原始人脸图像,得到所述妆容效果图像。
上述装置中,所述迁移部分,还被配置为在利用所述逆调整后的原始人脸图像替换所述用户图像中的原始人脸图像,得到所述妆容效果图像之后,识别所述妆容效果图像的第一躯干皮肤区域和所述妆容参考图像的第二躯干皮肤区域;基于所述第二躯干皮肤区域,对所述第一躯干皮肤区域进行颜色迁移,得到自然妆容效果图像,并在所述妆容迁移界面上显示所述自然妆容效果图像。
上述装置中,所述变形部分,还被配置为在所述第一器官区域的面积大于预设目标面积的情况下,基于所述第一器官区域,对对应的第二器官区域进行图像变形处理,得到所述第 二变形器官区域。
本公开实施例提供一种妆容迁移设备,所述设备包括:
存储器,被配置为存储计算机程序;
处理器,被配置为执行所述存储器中存储的计算机程序时,实现其上述妆容迁移方法。
本公开实施例提供一种计算机可读存储介质,存储有计算机程序,被配置为被处理器执行时,实现上述妆容迁移方法。
本公开实施例提供一种计算机程序,包括计算机可读代码,在所述计算机可读代码在电子设备中运行的情况下,所述电子设备中的处理器执行时实现上述方法。
本公开实施例具有以下有益效果:
本公开实施例提供了一种妆容迁移方法、装置、设备和计算机可读存储介质;获取包含原始人脸的原始人脸图像和包含目标妆容的妆容人脸图像;基于原始人脸图像中的第一器官区域,对妆容人脸图像中的第二器官区域进行图像变形处理,得到第二变形器官区域第一器官区域和第二器官区域对应同一类型的器官;基于第二变形器官区域,对原始人脸图像中的第一器官区域进行颜色迁移和纹理迁移,得到妆容迁移后的原始人脸图像;也就是说,妆容迁移装置可以将妆容人脸的第二器官区域变换为与对应的第一器官区域形状相同第二变形器官区域,基于第二变形器官区域对第一器官区域进行了颜色迁移和纹理迁移,从而提高了妆容迁移的自然度和细节度。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本公开的实施例,并于说明书一起用于说明本公开的技术方案。
图1是本公开实施例提供的一个可选的妆容迁移系统架构的结构示意图;
图2为本公开实施例提供的一种可选的妆容迁移方法流程图;
图3为本公开实施例提供的一种可选的妆容迁移界面的示意图;
图4为本公开实施例提供的一种原始人脸图像和妆容人脸图像的示意图;
图5为本公开实施例提供的一种图像变形处理的效果示意图;
图6a为本公开实施例提供的一种可选的妆容迁移方法流程图;
图6b为本公开实施例提供的一种可选的妆容迁移方法流程图;
图6c为本公开实施例提供的一种可选的妆容迁移方法流程图;
图6d为本公开实施例提供的一种可选的妆容迁移方法流程图;
图7为本公开实施例提供的一种可选的妆容迁移方法流程图;
图8为本公开实施例提供的一种可选的妆容迁移方法流程图;
图9a为本公开实施例提供的一种可选的第一器官区域的三角剖分示意图;
图9b为本公开实施例提供的一种可选的第二器官区域的三角剖分示意图;
图10为本公开实施例提供的一种可选的妆容迁移方法流程图;
图11为本公开实施例提供的一种可选的妆容迁移方法流程图;
图12为本公开实施例提供的一种可选的妆容迁移方法流程图;
图13为本公开实施例提供的一种可选的妆容迁移方法流程图;
图14为本公开实施例提供的一种妆容迁移装置的组成结构示意图;
图15为本公开实施例提供的一种妆容迁移设备的组成结构示意图。
具体实施方式
为了使本公开的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本公 开进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用于解释本公开,并不用于限定本公开。
以下结合附图及实施例,对本公开进行进一步详细说明。应当理解,此处所提供的实施例仅仅用以解释本公开,并不用于限定本公开。另外,以下所提供的实施例是用于实施本公开的部分实施例,而非提供实施本公开的全部实施例,在不冲突的情况下,本公开实施例记载的技术方案可以任意组合的方式实施。
在本公开实施例中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的方法或者装置不仅包括所明确记载的要素,而且还包括没有明确列出的其他要素,或者是还包括为实施方法或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个......”限定的要素,并不排除在包括该要素的方法或者装置中还存在另外的相关要素(例如方法中的步骤或者装置中的单元,例如的单元可以是部分电路、部分处理器、部分程序或软件等等)。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,U和/或W,可以表示:单独存在U,同时存在U和W,单独存在W这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括U、W、V中的至少一种,可以表示包括从U、W和V构成的集合中选择的任意一个或多个元素。
例如,本公开实施例提供的展示方法包含了一系列的步骤,但是本公开实施例提供的展示方法不限于所记载的步骤,同样地,本公开实施例提供的展示装置包括了一系列模块,但是本公开实施例提供的展示装置不限于包括所明确记载的模块,还可以包括为获取相关信息、或基于信息进行处理时所要求设置的模块。
除非另有定义,本文所使用的所有的技术和科学术语与属于本公开的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本公开实施例的目的,不是旨在限制本公开。
对本公开实施例进行进一步详细说明之前,对本公开实施例中涉及的名词和术语进行说明,本公开实施例中涉及的名词和术语适用于如下的解释。
1) RGB image: a color image encoded in red, green, and blue, in which the color of each pixel is a mixture of red, green, and blue; that is, each pixel includes red, green, and blue color components.
2) LAB image: a LAB-encoded color image, in which L denotes lightness (Luminance or Luma), and "A" and "B" denote the opponent-color dimensions, i.e., the two color channels.
目前,妆容迁移是计算机视觉中图像生成领域的一个重要方向。相比于传统的由设计师制作好妆容贴纸,用户在原图上通过选择妆容贴纸来得到妆容效果的方式,妆容迁移技术提供了更高的自由度,支持从任意感兴趣的参考妆容图中获取妆容,如从参考妆容图中获取模特的妆容,并迁移到原图中。然而,相关技术的妆容迁移方式主要是通过简单的普氏变换,将参考妆容图中的妆容颜色迁移到用户原图上,容易受到参考妆容图与用户原图之间光照差异、人物五官位置差异、人脸角度差异、妆容纹理差异等因素的影响,造成参考妆容图中的妆容在迁移到用户原图的人脸上时存在强烈的违和感,使得妆容迁移的自然度和细节还原度较低。
本公开实施例提供一种妆容迁移方法、装置、设备和计算机可读存储介质,能够提高妆容迁移的细节度和自然度,本公开实施例提供的妆容迁移方法应用于妆容迁移设备中,下面说明本公开实施例提供的妆容迁移设备的示例性应用,本公开实施例提供的妆容迁移设备可以实施为AR眼镜、笔记本电脑,平板电脑,台式计算机,机顶盒,移动设备(例如,移动电话,便携式音乐播放器,个人数字助理,专用消息设备,便携式游戏设备)等各种类型的用户终端,也可以实施为服务器。
下面,将说明妆容迁移设备实施为终端时示例性应用。当妆容迁移设备实施为终端时, 可以将妆容参考图像中妆容人脸的妆容迁移至用户图像中的用户人脸中;这里,终端可以与云端服务器进行交互,通过云端服务器获取妆容参考图像与用户图像中的至少一个。其中,用户图像也可以为实时采集得到的,本公开实施例不作限制。下面结合在实际应用场景中,终端通过与服务器交互的方式获取妆容参考图像,以进行妆容迁移为例进行妆容迁移系统的说明。
参见图1,图1是本公开实施例提供的妆容迁移系统100的一个可选的架构示意图,为实现支撑一个妆容迁移应用,终端400(示例性示出了终端400-1和终端400-2)通过网络300连接服务器200,网络300可以是广域网或者局域网,又或者是二者的组合。
终端400被配置为获取包含原始人脸的原始人脸图像和包含目标妆容的妆容人脸图像;基于原始人脸图像中的第一器官区域,对妆容人脸图像中的第二器官区域进行图像变形处理,得到第二变形器官区域;第一器官区域和第二器官区域对应同一类型的器官;基于第二变形器官区域,对原始人脸图像中的第一器官区域进行颜色迁移和纹理迁移,得到妆容迁移后的原始人脸图像。
示例性地,当终端400实施为手机时,可以启动手机上的预设妆容迁移应用410,在预设妆容迁移应用的妆容迁移界面上,在接收到图片指令后,向服务器200发起图片请求,服务器200接收到图片请求后,从数据库500中获取妆容参考图像;并将妆容参考图像发回给终端400。终端400得到服务器反馈的妆容参考图像之后,从妆容参考图像中提取妆容人脸图像,将妆容人脸图像中的妆容迁移到从用户图像中提取的原始人脸图像中,得到妆容迁移后的原始人脸图像,并在预设妆容迁移应用410的显示界面上显示迁移后的原始人脸图像。
本公开实施例提供一种妆容迁移方法,如图2所示,该方法包括:S101-S103。
S101、获取包含原始人脸的原始人脸图像和包含目标妆容的妆容人脸图像。
在本公开实施例中,终端获取原始人脸图像和妆容人脸图像,其中,妆容人脸图像中包含目标妆容;如此,终端可以将目标妆容迁移到原始人脸中,得到妆容迁移后的原始人脸图像。
在本公开实施例中,原始人脸图像和目标妆容人脸图像可以为终端通过图像采集装置采集到的图像,也可以为终端通过网络从服务器下载的图像;对此,本公开实施例不作限制。
在本公开实施例中,原始人脸中可以有原始妆容,也可以没有妆容;对此,本公开实施例不作限制。示例性的,在原始人脸有原始妆容的情况下,妆容迁移后的原始人脸图像中原始人脸的妆容,可以为原始妆容叠加目标妆容的叠加妆容,也可以为目标妆容代替了原始妆容的妆容。
S102、基于原始人脸图像中的第一器官区域,对妆容人脸图像中的第二器官区域进行图像变形处理,得到第二变形器官区域;第一器官区域和第二器官区域对应同一类型的器官。
在本公开实施例中,原始人脸图像包括多个器官区域,妆容人脸图像可以包括对应的多个器官区域;如此,终端可以针对每个器官区域,进行妆容迁移。其中,原始人脸图像中的器官区域为第一器官区域,妆容人脸图像中的器官区域为第二器官区域。
在本公开的一些实施例中,第一器官区域可以包括以下至少一个:左眉毛区域、右眉毛区域、左眼妆区域、右眼妆区域、口红区域和基底区域。第二器官区域可以包括以下至少一个:左眉妆区域、右眉妆区域、左眼妆区域、右眼妆区域、口红区域和粉底区域。
在一些实施例中,基底区域为原始人脸中除左眉妆区域、右眉妆区域、左眼妆区域、右眼妆区域和口红区域以外的其他区域;粉底区域为左眉妆区域、右眉妆区域、左眼妆区域、右眼妆区域和口红区域以外的其他区域。
在本公开的一些实施例中,终端可以基于原始人脸中的第一器官区域的形状,对第二器官区域的形状进行图像变形处理,得到第二变形器官区域,以使第二变形器官区域的形状与对应的第一器官区域的形状相同。这样,在形状相同的基础上进行妆容迁移处理,可以大大提高迁移后妆容的自然度与细节贴合度。
在本公开实施例中,终端进行图像变形处理的第二器官区域可以为妆容人脸图像的多个器官区域,也可以为妆容人脸图像的一个器官区域;这里,进行图像变形处理的第二器官区域可以根据实际要求设置,对此,本公开实施例不作限制。
示例性的,如图3所示,终端可以根据原始人脸图像3A中的第一眼妆区域、第一眉妆区域、第一口红区域,以及第一粉底区域,对妆容人脸图像3B中的第二眼妆区域、第二眉妆区域、第二口红区域,以及第二粉底区域进行图像变形处理,得到第二变形眼妆区域、第二变形眉妆区域、第二变形口红区域,以及第二变形粉底区域,从而得到图像3B1,可以看出,图像3B1中的人脸五官与原始人脸图像3A中的人脸五官形状基本一致。
在本公开实施例中,图像变形处理可以为基于移动的最小二乘变形算法,也可以为基于线的变形算法,还可以为三角网格仿射变换算法;对于图像变形处理的方法,可以根据实际要求设置,本公开实施例不作限制。
S103、基于第二变形器官区域,对原始人脸图像中的第一器官区域进行颜色迁移和纹理迁移,得到妆容迁移后的原始人脸图像。
在本公开实施例中,终端在得到第二变形器官区域后,可以将第二变形器官区域中的颜色和纹理迁移到第一器官区域中,得到妆容迁移后的原始人脸图像。
示例性的,第二变形器官区域包括:第二变形眼妆区域、第二变形眉妆区域、第二变形口红区域;终端可以将第二变形眼妆区域中的颜色和纹理迁移到第一眼妆区域,第二变形眉妆区域中的颜色和纹理迁移到第一眉妆区域,以及,第二变形口红区域中的颜色和纹理迁移到第一口红区域,从而将妆容人脸图像中的眼妆、眉妆和口红迁移到原始人脸图像中。
在本公开实施例中,终端可以通过颜色迁移算法,基于第二变形器官区域,第一器官区域进行颜色迁移;并且,通过纹理迁移方法,将第二变形器官区域的纹理迁移到第一器官区域中,从而将原始人脸图像转换为妆容迁移后的原始人脸图像。
在本公开实施例中,颜色迁移算法可以为Reinhard算法,也可以为Welsh算法,还可以为自适应迁移算法,或者,模糊C均值(Fuzzy C-means,FCM)算法;对此,本公开实施例不作限制。
在本公开实施例中,纹理迁移方法可以为基于注意力机制的纹理迁移方法,也可以是基于结构引导的图像纹理迁移方法;还可以为图像融合算法;对此,本公开实施例不作限制。
可以理解的是,通过对妆容人脸图像的第二器官区域进行图像变形处理,将第二器官区域的形状变形为原始人脸图像中相同器官类型的第一器官区域,得到第二变形器官区域,基于第二变形器官区域,再对第一器官区域进行颜色迁移和纹理迁移,提高了妆容迁移的细节度和自然度,从而提高了妆容效果图像的效果。
在本公开的一些实施例中,S101中获取包含原始人脸的原始人脸图像和包含目标妆容的妆容人脸图像的实现,可以包括:
S1011、从包含目标对象的用户图像中提取原始人脸图像,以及从包含目标妆容的妆容参考图像中提取妆容人脸图像。
在本公开实施例中,原始人脸图像是从包含用户目标对象的用户图像中提取的人脸图像;妆容人脸图像中是从包含目标妆容的妆容参考图像中提取的人脸图像。
在本公开实施例中,终端可以先获取用户图像和妆容参考图像,再从用户图像中提取原始人脸图像,以及从妆容参考图像中提取妆容人脸图像。这样,终端可以将任意妆容参考图像的妆容迁移到任意用户图像中,提高了妆容迁移的灵活性。
在本公开的一些实施例中,终端运行妆容迁移应用时,在终端的显示界面上可以显示妆容迁移应用的妆容迁移界面。妆容迁移界面上显示有图片上传控件,如此,终端可以在接收到对图片上传控件的触发操作的情况下,响应于触发操作,通过预设接口获取用户图像和妆容参考图像。
示例性的,如图4所示,图片上传控件包括用户图像上传控件41和妆容参考图像上传控 件42。终端接收到对用户图像上传控件41或妆容参考图像上传控件42的触发操作时,可以打开图片上传控制界面,并在图片上传控制界面显示图片库控件和拍摄控件;在终端接收到对图片库控件的触发操作的情况下,可以从图片库中获取用户图像或妆容参考图像;在终端接收到对拍摄控件的触发操作的情况下,可以通过图像采集装置采集用户图像或妆容参考图像。
在本公开实施例中,妆容参考图像和用户图像可以是终端通过图像采集装置采集的图像,也可以终端通过网络下载的图像,对于妆容参考图像和用户图像,可以根据实际要求设置,对此,本公开实施例不作限制。
在本公开实施例中,终端在获取到妆容参考图像和用户图像的情况下,终端可以在妆容迁移界面上显示妆容迁移控件。终端在接收到妆容迁移控件的触发操作的情况下,确定接收到妆容迁移指令,并响应于妆容迁移指令,从用户图像中提取原始人脸图像,以及,从妆容参考图像中提取妆容人脸图像,从而根据本公开实施例中的方法,实现从妆容人脸图像向原始人脸图像的妆容迁移。
在一些实施例中,终端可以通过对妆容参考图像与用户图像分别进行人脸角度调整与缩放等至少一种预处理,以使妆容人脸图像和原始人脸图像的角度与尺寸相同。并且,终端可以根据相同的预设尺寸对妆容人脸图像和原始人脸图像进行人脸图像提取,以使妆容人脸图像中的妆容人脸和原始人脸图像中的原始人脸的尺寸也相同。
在本公开的一些实施例中,终端可以分别对妆容参考图像中的人脸和用户图像中的目标对象人脸进行人脸角度调整,得到妆容正脸和目标对象正脸;再对妆容正脸和目标对象正脸进行缩放处理,得到尺寸相同的妆容人脸和原始人脸;最后,分别以原始人脸和妆容人脸为中心,按照预设尺寸,从用户图像中获取原始人脸图像,以及,从妆容参考图像中获取妆容人脸图像;从而得到尺寸相同的妆容人脸图像和原始人脸图像;其中,妆容人脸图像中的妆容人脸和原始人脸图像中的原始人脸尺寸也相同。
示例性的,如图5所示,终端在接收到妆容参考图像5B和用户图像5A的情况下,对用户图像5A进行预处理并提取出原始人脸图像5A1,以及,对妆容参考图像5B中进行预处理并提取出妆容人脸图像5B1;原始人脸图像5A1和妆容人脸图像5B1的尺寸均为400×400。
在本公开的一些实施例中,S1011中从包含目标对象的用户图像中提取原始人脸图像的实现,如图6a所示,可以包括:S201-S202。
S201、对用户图像进行人脸关键点检测,得到用户图像的第一人脸关键点。
在本公开实施例中,终端在获取用户图像和妆容参考图像后,可以对用户图像以及妆容图像进行人脸关键点检测,得到用户图像的第一人脸关键点以及妆容图像的第二人脸关键点。
在本公开的一些实施例中,终端可以先对用户图像和妆容参考图像进行磨皮美白处理后,再进行人脸关键点检测,从而提高检测精度。
S202、基于第一人脸关键点,对用户图像进行人脸对齐,得到原始人脸图像。
在本公开实施例中,终端在得到第一人脸关键点后,可以根据第一人脸关键点,对用户图像进行人脸对齐,得到原始人脸图像。
在本公开实施例中,终端可以通过第一人脸关键点的位置信息,进行仿射变换,从而实现对用户图像的人脸对齐。
在本公开的一些实施例中,基于图6a,S202中基于第一人脸关键点,对用户图像进行人脸对齐,得到原始人脸图像的实现,可以如图6b所示,包括:S2021-S2023。
S2021、基于第一人脸关键点的第一原始位置信息和目标对齐人脸关键点的目标位置信息,获取第一变换矩阵;第一变换矩阵表征第一原始位置信息和目标位置信息之间的位置关系。
S2022、基于第一变换矩阵,调整第一原始位置信息,得到用户对齐图像。
在本公开实施例中,终端将目标对象人脸调整为目标对齐人脸,目标对齐人脸为正脸,且尺寸为预设人脸尺寸;终端可以获取目标对齐人脸的目标对齐人脸关键点的目标位置信息, 通过第一人脸关键点的第一原始位置信息以及目标位置信息,获取第一变换矩阵。
在一些实施例中,第一变换矩阵表征第一原始位置信息和目标位置信息之间的位置关系;如此,终端可以按照第一变换矩阵将第一人脸关键点的位置变换到目标对齐人脸关键点的位置,实现对目标对象人脸的人脸对齐,得到用户对齐图像;用户对齐图像中的目标对象对齐人脸为预设人脸尺寸的正脸。
Exemplarily, there are 240 first face key points and 240 target-aligned face key points, with the position information of each key point represented by two-dimensional coordinates. Denote the first original position information of any first face key point as $(x_i, y_i)$, and the target position information of the corresponding target-aligned face key point as $(x_i', y_i')$, where $1 \le i \le 240$. From this, formula (1) can be obtained:

$$\begin{pmatrix} x_i' \\ y_i' \end{pmatrix} = \begin{pmatrix} a & b \\ d & e \end{pmatrix} \begin{pmatrix} x_i \\ y_i \end{pmatrix} + \begin{pmatrix} c \\ f \end{pmatrix} \tag{1}$$

where a, b, c, d, e, and f are the affine transformation coefficients.

Expressing formula (1) for all key points as a single matrix equation yields formula (2):

$$\underbrace{\begin{pmatrix} x_1' & y_1' \\ \vdots & \vdots \\ x_n' & y_n' \end{pmatrix}}_{B} = \underbrace{\begin{pmatrix} x_1 & y_1 & 1 \\ \vdots & \vdots & \vdots \\ x_n & y_n & 1 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} a & d \\ b & e \\ c & f \end{pmatrix}}_{\Omega^{\mathsf{T}}} \tag{2}$$

where n = 240. The first transformation matrix $\Omega$ can then be obtained as the least-squares solution of formula (2), see formula (3):

$$\Omega^{\mathsf{T}} = \left(A^{\mathsf{T}} A\right)^{-1} A^{\mathsf{T}} B \tag{3}$$
S2023、从用户对齐图像中提取原始人脸图像。
在本公开实施例中,终端通过第一变换矩阵,将第一人脸关键点调整至目标对齐人脸关键点后,可以按照预设的图像尺寸,以目标对象对齐人脸为中心,从用户对齐图像中提取原始人脸图像,原始人脸图像中的目标对象对齐人脸为原始人脸。
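A minimal sketch of fitting the first transformation matrix from key-point pairs by least squares, as formulas (1)-(3) describe. The function names and the use of NumPy's `lstsq` solver are illustrative assumptions; the patent does not prescribe a concrete implementation.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts onto dst_pts.

    src_pts, dst_pts: (n, 2) arrays of key-point coordinates.
    Returns a 2x3 matrix [[a, b, c], [d, e, f]] such that
    x' = a*x + b*y + c and y' = d*x + e*y + f.
    """
    n = len(src_pts)
    A = np.hstack([np.asarray(src_pts, float), np.ones((n, 1))])  # rows [x, y, 1]
    # Solve A @ coef = dst_pts in the least-squares sense; coef is (3, 2).
    coef, *_ = np.linalg.lstsq(A, np.asarray(dst_pts, float), rcond=None)
    return coef.T  # (2, 3)

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to an (n, 2) array of points."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]
```

With exact correspondences the fit recovers the transform exactly; with noisy detections it gives the best fit in the least-squares sense, which is what formula (3) computes.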
在本公开的一些实施例中,S1011中从包含目标妆容的妆容参考图像中提取妆容人脸图像的实现,如图6c所示,可以包括:S301-S302。
S301、对妆容参考图像进行人脸关键点检测,得到妆容图像的第二人脸关键点。
在本公开实施例中,终端妆容参考图像后,可以对妆容参考图像进行人脸关键点检测,得到用户图像的第一人脸关键点以及妆容参考图像的第二人脸关键点。
在本公开的一些实施例中,终端可以先对妆容参考图像进行磨皮美白处理后,再进行人脸关键点检测,从而提高检测精度。
S302、基于第二人脸关键点,对妆容图像进行人脸对齐,得到妆容人脸图像。
在本公开实施例中,终端在得到第二人脸关键点后,可以根据第二人脸关键点,对妆容参考图像进行人脸对齐,得到妆容人脸图像。
在本公开实施例中,终端可以通过第二人脸关键点的位置信息,进行仿射变换,从而实现对妆容参考图像的人脸对齐。终端可以通过人脸关键点进行人脸对齐,基于人脸对齐后的图像进行妆容迁移,提高了妆容迁移的精度,从而提高了妆容迁移的效果。
在本公开的一些实施例中,基于图6c,S302中基于第二人脸关键点,对妆容图像进行人脸对齐,得到妆容人脸图像的实现,可以如图6d所示,包括:S3021-S3023。
S3021、基于第二人脸关键点的第二原始位置信息和目标对齐人脸关键点的目标位置信息,获取第二变换矩阵;第二变换矩阵表征第二原始位置信息和目标位置信息之间的位置关系。
S3022、基于第二变换矩阵,调整第二原始位置信息,得到妆容对齐图像。
在本公开实施例中,终端可以获取目标对齐人脸的人脸关键点的目标位置信息,通过第二人脸关键点的第二原始位置信息以及目标位置信息,获取第二变换矩阵。
其中,终端将妆容参考图像中的妆容人脸调整为目标对齐人脸,目标对齐人脸为正脸,且尺寸为预设人脸尺寸。
在一些实施例中,第二变换矩阵表征第二原始位置信息和目标位置信息之间的位置关系;终端可以按照第二变换矩阵将第二人脸关键点的位置调整到目标对齐人脸的位置,实现对妆容人脸的人脸对齐,得到妆容对齐图像;妆容对齐图像中的妆容对齐人脸为预设人脸尺寸的正脸。
这里,第二变换矩阵的获取方式与第一变换矩阵的获取方式相同,第一变换矩阵的获取方式在S2022中已做详细描述,在此不再赘述。
S3023、从妆容对齐图像中提取妆容人脸图像。
在本公开实施例中,终端通过第二变换矩阵,将第二人脸关键点调整至目标对齐人脸关键点的位置后,以妆容对齐人脸为中心,从妆容对齐图像中提取妆容人脸图像,妆容人脸图像中的妆容对齐人脸为妆容人脸。
在一些实施例中,可以按照预设的图像尺寸从妆容对齐图像中提取妆容人脸图像。示例性的,按照预设的图像尺寸提取的妆容人脸图像与原始人脸图像的尺寸相同;妆容人脸图像中的妆容对齐人脸的尺寸与原始人脸图像中的目标对象对齐人脸的尺寸相同。
示例性的,目标对齐人脸的尺寸为400×400,预设的图像尺寸为512×512,则原始人脸图像的尺寸和妆容人脸图像的尺寸均为512×512,原始人脸图像中的调整后的目标对象人脸的尺寸为400×400;妆容人脸图像中的调整后的妆容人脸的尺寸为400×400;其中,原始人脸图像的中心位置与调整后的目标对象人脸的中心位置相同;妆容人脸图像的中心位置与调整后的妆容人脸的中心位置相同。
可以理解的是,终端获取第一人脸关键点和第二人脸关键点后,可以通过第一人脸关键点和第二人脸关键点,将妆容人脸和目标对象人脸的角度、尺寸保持与目标对齐人脸一致,从而得到原始人脸图像和妆容人脸图像,由于,原始人脸图像和妆容人脸图像的尺寸相同,且原始人脸图像中的原始人脸以及妆容人脸图像中妆容人脸的尺寸也相同,如此,终端可以基于原始人脸图像和妆容人脸图像进行妆容迁移,可以提高妆容迁移的精度。
在本公开的一些实施例中,S102中基于原始人脸图像中的第一器官区域,对妆容人脸图像中的第二器官区域进行图像变形处理,得到第二变形器官区域的实现,如图7所示,可以包括:S401-S402。
S401、对原始人脸图像进行器官分割,得到第一器官区域,以及对妆容人脸图像进行器官分割,得到第二器官区域。
在本公开实施例中,终端可以对原始人脸图像和妆容人脸图像分别进行器官分割,从而得到原始人脸图像的第一器官区域以及妆容人脸图像的第二器官区域。
在本公开的一些实施例中,终端可以通过五官分割算法对原始人脸图像和妆容人脸图像 分别进行器官分割。其中,五官分割算法可以为基于引导聚合的双边实时语义分割网络(Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation,BiSeNetV2)算法,也可以为有效人脸分析分层聚合网络(An Effective Hierarchical Aggregation Network for Face Parsing,EHANet)算法,还可以为自适应的弱监督漫画人脸分析(Weakly-supervised Caricature Face Parsing through Domain Adaptation,Cari Face Parsing)算法;对此,本公开实施例不作限制。
在本公开的一些实施例中,终端可以根据原始人脸图像中的人脸关键点对原始人脸图像进行器官分割,得到第一器官区域;以及根据妆容人脸图像中的人脸关键点对妆容人脸进行器官分割,得到第二器官区域。
S402、对第二器官区域进行图像变形处理,将第二器官区域的形状信息调整为对应的第一器官区域的形状信息,得到第二变形器官区域。
在本公开实施例中,终端得到第一器官区域和第二器官区域后,可以按照第一器官区域的形状信息,调整对应的第二器官区域的形状信息,将第二器官区域的形状信息调整为第一器官区域的形状信息,得到第二变形器官区域。
这里,形状信息可以包括:轮廓和面积等信息;对此,可以根据实际要求设置,本公开实施例不作限制。
在本公开的一些实施例中,S402中对第二器官区域进行图像变形处理,将第二器官区域的形状信息调整为对应的第一器官区域的形状信息,得到第二变形器官区域的实现,如图8所示,可以包括:S501-S502。
S501、基于第二器官区域中的第二对齐人脸关键点,确定多个第二三角形网格,以及,基于第一器官区域中的第一对齐人脸关键点,确定对应的第一三角形网格。
在本公开实施例中,原始人脸图像是经过人脸对齐的用户图像,用户图像中的第一人脸关键点调整后,成为第一对齐人脸关键点;妆容人脸图像是经过人脸对齐的妆容参考图像,妆容图像中的第二人脸关键点调整后,成为第二对齐人脸关键点;如此,原始人脸图像中的第一器官区域可以包括多个第一对齐人脸关键点,妆容人脸图像中的第二器官区域可以包括多个第二对齐人脸关键点。
在本公开实施例中,终端可以按照预设三角剖分方法,对每个第二器官区域中的多个第二对齐人脸关键点进行连接,得到第二三角形网格;第二三角形网格包括多个不相交的第二三角形;以及,终端可以按照预设三角剖分方法对第一器官区域中的多个第一对齐人脸关键点进行连接,得到第一三角形网格,第一三角形网格包括多个不相交的第一三角形。
在一些实施例中,由于人脸关键点的检测方式一致,第一对齐人脸关键点和第二对齐人脸关键点的数量一致;并且,终端基于预设三角剖分方法对第一对齐人脸关键点和第二人脸关键点进行连接,连接方式一致,因此,第一三角形网络中的每个第一三角形和第二三角形网格中的每个第二三角形一一对应。
S502、通过仿射变换,将第二三角形网格中的每个第二三角形的形状信息调整为对应的第一三角形的形状信息,得到第二变形器官区域。
在本公开实施例中,终端可以基于每个第二三角形和对应的第一三角形,得到对应的一个三角仿射变换矩阵;通过三角仿射变换矩阵,对第二三角形进行仿射变换,将第二三角形的形状信息调整为对应的第一三角形的形状信息,使每个第二三角形的形状信息与对应的第一三角形的形状信息相同,从而得到每个第二变形三角形;每个第二变形三角形组成了第二三角形网格,从而得到第二变形器官区域。
示例性的,如图9a所示,第一器官区域包括9个第一对齐人脸关键点,按照预设三角剖分法,对9个第一对齐人脸关键点进行连接,得到8个第一三角形T1 1-T8 1;如图9b所示,为第二器官区域包括9个第二对齐人脸关键点,按照预设三角剖分法,对9个第二对齐人脸关键点进行连接,得到与8个第一三角形对应的8个第二三角形T1 2-T8 2;如此,将图9b中 的8个第二三角形的形状信息调整为图9a中8个第一三角形的形状信息,则可以将第二器官区域的形状信息调整为第一器官区域的形状信息,得到第二变形器官区域。
其中,人脸关键点数量越多,第一三角形和第二三角形的网格越多,则多个第二变形器官区域与多个第一器官区域的形状的一致性越高;人脸关键点数量越少,则第一三角形和第二三角形的网络越少,终端在做图像变形处理时的资源消耗越少;这里,人脸关键点数量可以根据实际要求设置,对此,本公开实施例不作限制。
可以理解的是,终端可以先将妆容人脸图像中的第二器官区域的形状信息调整至与原始人脸图像中对应的第一器官区域的形状信息相同,如此,对相同形状的器官区域进行妆容迁移,能够使得目标妆容与原始人脸更加吻合,从而提高妆容迁移的细节度。
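The per-triangle step above can be sketched as follows: three corresponding, non-collinear vertex pairs determine the affine matrix that maps a second triangle onto its first triangle exactly. The function names and the NumPy-based linear solve are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def triangle_affine(tri_src, tri_dst):
    """Affine matrix mapping triangle tri_src onto triangle tri_dst.

    tri_src, tri_dst: (3, 2) arrays of vertex coordinates.
    Returns a 2x3 matrix M with [x', y'] = M @ [x, y, 1].
    Three non-collinear point pairs determine the affine map exactly.
    """
    A = np.hstack([np.asarray(tri_src, float), np.ones((3, 1))])  # (3, 3)
    M = np.linalg.solve(A, np.asarray(tri_dst, float)).T          # (2, 3)
    return M

def warp_points(M, pts):
    """Apply the 2x3 affine matrix to points (any shape ending in 2)."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]
```

In the described method, one such matrix is computed for every corresponding triangle pair of the two meshes, and each second triangle is warped by its own matrix so that the whole second organ region takes the shape of the first organ region.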
在本公开的一些实施例中,S103中基于第二变形器官区域,对原始人脸图像中的第一器官区域进行颜色迁移和纹理迁移,得到妆容迁移后的原始人脸图像的实现,如图10所示,可以包括:S601-S602。
S601、基于第二变形器官区域,对对应的第一器官区域进行颜色迁移,得到第一颜色迁移区域。
在本公开实施例中,终端在得到第二变形器官区域后,可以将第二变形器官区域的颜色迁移至对应的第一器官区域中,得到第一颜色迁移区域。
在本公开的一些实施例中,终端可以利用第二变形器官区域的像素替换第一器官区域中的像素,从而改变第一器官区域中的颜色,得到第一颜色迁移区域。
In some embodiments of the present disclosure, the implementation of S601, performing color transfer on the corresponding first organ region based on the second deformed organ region to obtain the first color transfer region, may include: S6011-S6012.
S6011. For each pixel in the first organ region, subtract from the pixel value of each channel the pixel mean of the corresponding channel in the first organ region, and then add the pixel mean of the corresponding channel of the second deformed organ region, to obtain the shifted pixel value of each channel of each pixel.
In the embodiments of the present disclosure, the terminal may convert the original face image and the makeup face image from RGB images into LAB images, so that each pixel in the first organ region and in the second deformed organ region includes an L-channel pixel value, an A-channel pixel value, and a B-channel pixel value.
In the embodiments of the present disclosure, for each pixel in any first organ region, the terminal may subtract the mean of the L-channel pixel values of that first organ region from the pixel's L-channel pixel value, and then add the mean of the L-channel pixel values of the corresponding second deformed organ region, to obtain the shifted L-channel pixel value of the pixel; following the same method, the shifted A-channel and B-channel pixel values of each pixel can be obtained.
S6012. Obtain the first color transfer region based on the shifted pixel value of each channel of each pixel.
In the embodiments of the present disclosure, after the terminal obtains the shifted L-channel, A-channel, and B-channel pixel values of each pixel, it has obtained the first color transfer region in LAB space; converting this LAB region back into RGB format yields the first color transfer region.
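The channel-mean shift of S6011-S6012 can be sketched as below, assuming the two regions have already been converted to LAB (or any per-channel) float arrays; the function name is an illustrative assumption, and the RGB-to-LAB conversion itself is omitted.

```python
import numpy as np

def channel_mean_transfer(user_region, ref_region):
    """Shift each channel of user_region so that its per-channel mean
    matches ref_region's: p' = p - mean(user) + mean(ref).

    user_region, ref_region: (H, W, C) arrays; the two regions need
    not have the same spatial size, only the same channel count.
    """
    user = np.asarray(user_region, dtype=float)
    ref = np.asarray(ref_region, dtype=float)
    return user - user.mean(axis=(0, 1)) + ref.mean(axis=(0, 1))
```

Because only a constant per-channel offset is added, the spatial contrast of the user's organ region is preserved while its average color becomes that of the deformed makeup region.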
S602、将第二变形器官区域的纹理迁移至对应的第一颜色迁移区域中,得到迁移后的原始人脸图像。
在本公开实施例中,终端在得到第一颜色迁移区域后,可以通过泊松融合算法,将第二变形器官区域的梯度信息迁移至对应的第一颜色迁移区域中,从而实现将妆容人脸图像中第二变形器官区域的纹理迁移到对应的第一颜色迁移区域中,改变了原始人脸图像中多个第一颜色迁移区域的皮肤纹理和质地,得到迁移后的原始人脸图像。
示例性的,第一器官区域包括:用户眉妆区域、用户口红区域、用户眼妆区域和用户粉底区域;终端对用户眉妆区域、用户口红区域、用户眼妆区域和用户粉底区域进行颜色迁移后,可以再对用户眉妆区域、用户口红区域、用户眼妆区域和用户粉底区域进行纹理迁移。
可以理解的是,终端通过先颜色迁移,再泊松融合的方式,实现在第一颜色迁移区域的像素基础上的纹理改变,提高了妆容迁移的自然度。
在本公开的一些实施例中,S103中基于第二变形器官区域,对原始人脸图像中的第一器官区域进行颜色迁移和纹理迁移,得到妆容迁移后的原始人脸图像之后的实现,可以包括:
S104、基于迁移后的原始人脸图像和用户图像,得到妆容效果图像。
在本公开实施例中,终端在得到迁移后的原始人脸图像后,就实现了对人脸的妆容迁移;由于原始人脸图像是从用户图像中提取的,终端可以基于迁移后的原始人脸图像进行提取处理的逆处理,得到带有目标妆容的目标对象人脸的用户图像,作为妆容效果图像。
在本公开的一些实施例中,S104中基于迁移后的原始人脸图像和用户图像,得到妆容效果图像的实现,如图11所示,可以包括:S1041-S1043。
S1041、获取第一变换矩阵的逆矩阵,作为第一变换逆矩阵。
S1042、基于第一变换逆矩阵,对迁移后的原始人脸图像进行逆调整,得到逆调整后的原始人脸图像。
在本公开实施例,原始人脸图像是基于第一变换矩阵对用户图像进行调整后得到的,因此,终端在对前处理人脸图像进行妆容迁移,得到迁移后的原始人脸图像后,可以对迁移后的原始人脸图像进行逆调整,得到逆调整后的原始人脸图像;逆调整后的原始人脸图像与用户图像中的目标对象人脸的大小相同,角度相同,也就是说,逆调整后的原始人脸图像为带有目标妆容的目标对象人脸。
在本公开实施例中,终端可以获取第一变换矩阵的逆矩阵,作为第一变换逆矩阵,通过第一变换逆矩阵将原始人脸图像中的第一对齐人脸关键点的位置调整至第一人脸关键点的位置,得到逆调整后的原始人脸图像。
S1043、利用逆调整后的原始人脸图像替换用户图像中的原始人脸图像,得到妆容效果图像。
在本公开实施例中,终端在得到逆调整后的原始人脸图像后,将逆调整后的原始人脸图像反帖回用户图像中,替换从用户图像中提取的原始人脸图像,从而得到妆容效果图像。
可以理解的是,通过第一变换矩阵及其逆矩阵,使终端可以对各种角度、各种尺寸的用户图像和妆容参考图像进行妆容迁移,在提高了妆容迁移的自然度和细节度的同时,还增加了妆容迁移的灵活性。
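The inverse adjustment above can be sketched in a few lines: lifting the 2x3 affine matrix to a 3x3 homogeneous matrix turns the inverse transform into a plain matrix inverse. The function name is an illustrative assumption.

```python
import numpy as np

def invert_affine(M):
    """Inverse of a 2x3 affine matrix M, via its 3x3 homogeneous form.

    If M maps user-image key points to aligned-face positions, the
    inverse maps aligned-face coordinates back into the user image.
    """
    H = np.vstack([M, [0.0, 0.0, 1.0]])  # lift to a 3x3 homogeneous matrix
    return np.linalg.inv(H)[:2, :]       # drop the [0, 0, 1] row again
```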
在本公开的一些实施例中,S603中利用逆调整后的原始人脸图像替换用户图像中的原始人脸图像,得到妆容效果图像之后的实现,如图12所示,可以包括:S701-S702。
S701、识别妆容效果图像的第一躯干皮肤区域和妆容参考图像的第二躯干皮肤区域。
在本公开实施例中,终端在得到妆容效果图像后,可以识别妆容效果图像和妆容参考图像的躯干皮肤区域,得到第妆容效果图像的第一躯干皮肤区域和妆容参考图像的第二躯干皮肤区域。
其中,躯干皮肤区域为人脸以外的其他暴露的皮肤区域;这里,终端可以五官分割算法,例如语义分割法,将躯干皮肤区域作为识别对象进行识别,得到第一躯干皮肤区域和第二躯干皮肤区域。
S702、基于第二躯干皮肤区域,对第一躯干皮肤区域进行颜色迁移,得到自然妆容效果图像,并在妆容迁移界面上显示自然妆容效果图像。
在本公开实施例中,终端在识别出第一躯干皮肤区域和第二躯干皮肤区域后,可以基于第二躯干皮肤区域,对第一躯干皮肤区域进行颜色迁移,得到自然妆容效果图像,并在妆容迁移界面上显示自然妆容效果图像。
其中,颜色迁移的方法在S103中详细描述,此处不再赘述。
可以理解的是,终端在对目标对象人脸进行妆容迁移,得到妆容效果图像后,还可以基于妆容参考图像中的第二躯干皮肤区域,对妆容效果图像中目标对象的第一躯干皮肤区域进行颜色迁移,得到自然妆容效果图像;如此,自然妆容效果图像中目标对象的人脸和躯干皮肤颜色更加自然协调,从而提高了妆容迁移的自然度。
在本公开的一些实施例中,S102中基于原始人脸图像中的第一器官区域,对妆容人脸图像中的第二器官区域进行图像变形处理,得到第二变形器官区域的实现,还可以包括:在第一器官区域的面积大于预设目标面积的情况下,基于第一器官区域,对对应的第二器官区域进行图像变形处理,得到第二变形器官区域。
在本公开实施例中,终端在得到第一器官区域后,可以对比第一器官区域的面积与对应的预设目标面积,在第一器官区域的面积小于对应的预设目标面积的情况下,确定第一器官区域被遮挡的面积太大,如此,终端将不对第一器官区域进行图像变形处理。
示例性的,第一器官区域的面积通过像素来表征。第一器官区域为右眉妆区域;右眉妆区域的面积为40×10,对应的预设右眉妆目标面积为50×10;如此,终端可以判断右眉妆区域被遮挡,终端可以不对第二器官区域中的右眉妆区域进行图像变形处理。
这里,不同的第一器官区域的可以对应不同的预设目标面积,预设目标面积可以根据实际要求来设置;对此,本公开实施例不作限制。
可以理解的是,终端在得到第一器官区域后,可以根据第一器官区域的面积和对应的预设目标面积,确定第一器官区域被遮挡的情况,从而不对被遮挡过多的第一器官区域进行妆容迁移,节省了资源消耗,提高了妆容迁移效率。
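The occlusion gate described above reduces to a simple area comparison on the segmentation mask; the function name and the boolean-mask representation below are assumptions for illustration.

```python
import numpy as np

def should_warp(organ_mask, preset_target_area):
    """Decide whether to run the organ warp: skip organs whose visible
    (segmented) pixel count does not exceed the preset target area,
    treating them as too heavily occluded for a reliable transfer."""
    visible_area = int(np.count_nonzero(organ_mask))
    return visible_area > preset_target_area
```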
图13为本公开实施例提供的一种妆容迁移方法的过程示意图,如图13所示,该方法可以包括:
S801、对包含目标对象的用户图像和包含目标妆容的妆容参考图像进行磨皮美白,得到待处理用户图像和待处理妆容参考图像。
S802、判断用户图像中的人脸角度和妆容参考图像中的人脸角度是否小于预设人脸角度;在判断为是的情况下,执行S803;否则,停止处理。
在本公开实施例中,人脸角度表示人脸偏离正脸的角度。
S803、对待处理用户图像和待处理妆容参考图像进行人脸关键点检测,得到用户图像的第一人脸关键点和妆容参考图像的第二人脸关键点。
在一些实施例中,第一人脸关键点与第二人脸关键点可以是240点人脸关键点。
S804、根据第一人脸关键点对待处理用户图像进行人脸对齐,得到原始人脸图像;以及,根据第二人脸关键点,对待处理妆容参考图像进行人脸对齐,得到妆容人脸图像。
在本公开实施例中,终端可以基于第一人脸关键点与目标对齐人脸关键点,得到第一变换矩阵。示例性地,对于240点人脸关键点,终端可以得到人脸角度信息仿射矩阵作为第一变换矩阵。终端根据第一人脸关键点与第一变换矩阵,将待处理用户图像中的人脸调整为正脸,再以正脸为中心,按照预设的图像尺寸提取原始人脸图像,以及,根据第二人脸关键点将待处理妆容参考图像中的人脸调整为正脸,再以正脸为中心,按照预设的图像尺寸提取妆容人脸图像。
在一些实施例中,原始人脸图像和妆容人脸图像的尺寸为512×512,原始人脸图像中的正脸和妆容人脸图像中的正脸的尺寸为400×400。
S805、对原始人脸图像进行器官分割,得到第一器官区域;以及,对妆容人脸图像进行器官分割,得到第二器官区域;第一器官区域和第二器官区域对应同一类型的器官。
在一些实施例中,终端可以通过五官分割算法实现器官分割;第一器官区域与第二器官区域可以是五官分割图的形式的图像区域。
S806、判断第一器官区域的面积是否大于预设目标面积;在判断为是的情况下,执行S807,否则,停止处理。
S807、对第二器官区域进行图像变形处理,将第二器官区域的形状信息调整为对应的第一器官区域的形状信息,得到第二变形器官区域。
在一些实施例中,对于妆容人脸图像,终端可以从上述的240个人脸关键点中,根据各个关键点的语义,获取表征第二器官区域,如基底区域,或眼妆区域,或口红区域的M个关 键点。其中,M为大于或等于3的正整数。终端通过三角剖分算法,根据其预设的三角连接规则,将M个关键点连接得到N个三角形网格,作为第二三角形网格。其中N为大于或等于1的正整数。终端对原始人脸图像进行相同过程的处理,得到原始人脸图像对应的N个三角形网格,作为第一三角形网格。其中,第一三角形网络中的每个第一三角形和第二三角形网格中的每个第二三角形一一对应。
在本公开实施例中,终端通过遍历每个第一三角形网格与每个对应的第二三角形网格,计算得到第二三角形网格到第一三角形网格的仿射变换矩阵。终端对妆容人脸图像中的每个第二三角形网格执行仿射变换,使得每个第二三角形网格与每个第一三角形网格更贴合,从而使得妆容人脸图像中的第二器官区域与原始人脸图像中的第一器官区域的形状相一致,得到第二变形器官区域。
S808、基于第二变形器官区域,对对应的第一器官区域进行颜色迁移,得到第一颜色迁移区域。
在一些实施例中,终端可以将原始人脸图像和妆容人脸图像从RGB图像转换为LAB图像,以使妆容迁移的效果更符合人眼的主观感知。终端可以在LAB空间中,分别计算妆容人脸图像和原始人脸图像在LAB三个通道上各自对应的像素均值。并且,对于原始人脸图像中每个像素,对于该像素对应的LAB三个通道,在每个通道上减去该通道对应的原始人脸图像的像素均值,再加上该通道对应的妆容人脸图像的像素均值,从而得到LAB空间中的颜色迁移区域。终端将LAB空间中的颜色迁移区域,转化回到RGB颜色空间中,得到第一颜色迁移区域。
S809、将第二变形器官区域的纹理迁移至对应的第一颜色迁移区域中,得到迁移后的原始人脸图像。
在本公开实施例中,终端通过泊松融合算法,将第二变形器官区域的纹理迁移至对应的第一颜色迁移区域中,实现妆容纹理质地的迁移,得到迁移后的原始人脸图像。
由于在S808中,已经通过颜色迁移,使得原始人脸图像的颜色与妆容人脸图像相吻合。在S809中,通过泊松融合,可以将妆容人脸图像的梯度信息(去除了颜色的结构信息)迁移到原始人脸图像,使得迁移后的原始人脸图像的视觉效果体现出皮肤纹理和质地的改变。
S810、对迁移后的原始人脸图像进行人脸对齐的逆处理,得到逆调整后的原始人脸图像。
S811、利用逆调整后的原始人脸图像替换待处理用户图像中的原始人脸图像,得到妆容效果图像。
在本公开实施例中,由于原始人脸图像是终端通过执行S804,从待处理用户图像中提取得到的,在完成对原始人脸图像的颜色与纹理迁移,得到迁移后的原始人脸图像的情况下,终端可以对迁移后的原始人脸图像进行人脸对齐的逆处理,也即使用迁移后的原始人脸图像对待处理用户图像进行人脸反贴。
在本公开实施例中,人脸对齐的逆处理包括:对迁移后的原始人脸图像的尺寸、以及其中的人脸角度和人脸尺寸进行还原;如此,终端可以将人脸对齐逆处理得到的逆调整后的原始人脸图像反贴回待处理用户图像中,此时,待处理用户图像中的人脸为带妆人脸。
在本公开实施例中,终端可以根据S804中得到的第一变换矩阵,计算其逆矩阵,并使用逆矩阵,对迁移后的原始人脸图像进行仿射变换,以还原至原始人脸尺寸,如从512*512的尺寸还原到原始尺寸。
S812、识别妆容效果图像的第一躯干皮肤区域和妆容参考图像的第二躯干皮肤区域;
S813、基于第二躯干皮肤区域,对第一躯干皮肤区域进行颜色迁移,得到自然妆容效果图像。
在本公开实施例中,终端可以将任意妆容参考图像的妆容迁移到任意用户图像中,基于器官变形,对人脸进行颜色迁移和纹理迁移。同时,考虑到此时待处理用户图像中的人脸部分已经实现了妆容迁移,颜色与原始肤色可能不同,为了减少妆容迁移后脸部与躯干的违和 感,终端可以对待处理用户图像中的躯干皮肤进行颜色迁移,使待处理用户图像的人脸和躯干颜色匹配,提高了妆容迁移的自然度和细节度。
在本公开实施例中,终端可以分别对待处理用户图像与待处理妆容参考图像进行皮肤分割操作,得到待处理用户图像对应的第一躯干皮肤区域,以及待处理妆容参考图像对应的第二躯干皮肤区域。终端通过上述颜色迁移的方法,根据第二躯干皮肤区域对第一躯干皮肤区域进行颜色迁移,得到自然妆容效果图像。
可以理解的是,本公开实施例中的终端利用人脸关键点检测、三角剖分算法与仿射变换方法,可以将参考妆容图中的人脸五官位置矫正到与用户照片一致,在此技术上进行妆容的迁移,从而克服了相关技术中参考妆容图与用户图的角度、五官位置不同造成的迁移效果不自然的问题,支持并实现了任意人脸角度,任意五官,任意脸型的妆容迁移。并且,通过先颜色迁移再泊松融合的方法,在迁移参考妆容纹理质地细节的同时,保证迁移结果的自然度。
本公开实施例还提供一种妆容迁移设备,图14为本公开实施例提供的妆容迁移设备的一个可选的组成结构示意图,如图14所示,该妆容迁移装置20包括:
获取部分2001,被配置为获取包含原始人脸的原始人脸图像和包含目标妆容的妆容人脸图像;
变形部分2002,被配置为基于所述原始人脸图像中的第一器官区域,对所述妆容人脸图像中的第二器官区域进行图像变形处理,得到第二变形器官区域;所述第一器官区域和所述第二器官区域对应同一类型的器官;
迁移部分2003,被配置为基于所述第二变形器官区域,对所述原始人脸图像中的第一器官区域进行颜色迁移和纹理迁移,得到妆容迁移后的原始人脸图像。在一些实施例中,所述获取部分2001,还被配置为从包含目标对象的用户图像中提取所述原始人脸图像,以及从包含所述目标妆容的妆容参考图像中提取所述妆容人脸图像。
在一些实施例中,所述获取部分2001,还被配置为对所述用户图像进行人脸关键点检测,得到所述用户图像的第一人脸关键点;基于所述第一人脸关键点,对所述用户图像进行人脸对齐,得到所述原始人脸图像。
在一些实施例中,所述获取部分2001,还被配置为对所述妆容参考图像进行人脸关键点检测,得到所述妆容参考图像的第二人脸关键点;基于所述第二人脸关键点,对所述妆容参考图像进行人脸对齐,得到所述妆容人脸图像。
在一些实施例中,所述获取部分2001,还被配置为基于所述第一人脸关键点的第一原始位置信息和目标对齐人脸关键点的目标位置信息,获取第一变换矩阵;所述第一变换矩阵表征所述第一原始位置信息和所述目标位置信息之间的位置关系;基于所述第一变换矩阵,调整所述第一原始位置信息,得到用户对齐图像;从所述用户对齐图像中提取所述原始人脸图像。
在一些实施例中,所述获取部分2001,还被配置为基于所述第二人脸关键点的第二原始位置信息和目标对齐人脸关键点的目标位置信息,获取第二变换矩阵;所述第二变换矩阵表征所述第二原始位置信息和所述目标位置信息之间的位置关系;基于所述第二变换矩阵,调整所述第二原始位置信息,得到妆容对齐图像;从所述妆容对齐图像中提取所述妆容人脸图像。
在一些实施例中,所述变形部分2002,还被配置为对所述原始人脸图像进行器官分割,得到所述第一器官区域,以及对所述妆容人脸图像进行器官分割,得到所述第二器官区域;对所述第二器官区域进行图像变形处理,将所述第二器官区域的形状信息调整为对应的第一器官区域的形状信息,得到所述第二变形器官区域。
在一些实施例中,所述变形部分2002,还被配置为基于所述第二器官区域中的第二对齐人脸关键点,确定第二三角形网格,以及,基于所述第一器官区域中的第一对齐人脸关键点,确定对应的第一三角形网格;其中,所述第二三角形网格中每个第二三角形不相交;所述第 一三角形网格中每个第一三角形不相交;通过仿射变换,将所述第二三角形网格中的每个第二三角形的形状信息调整为对应的第一三角形的形状信息,得到所述第二变形器官区域。
在一些实施例中,所述迁移部分2003,还被配置为基于所述第二变形器官区域,对对应的第一器官区域进行颜色迁移,得到第一颜色迁移区域;将所述第二变形器官区域的纹理迁移至所述对应的第一颜色迁移区域中,得到所述迁移后的原始人脸图像。
在一些实施例中,所述迁移部分2003,还被配置为采用所述第一器官区域中每个像素的每个通道的像素值减去所述第一器官区域中对应通道的像素均值,再加上所述第二变形器官区域的对应通道的像素均值,得到每个像素的每个通道的迁移像素值;基于所述每个像素的每个通道的迁移像素值,得到所述第一颜色迁移区域。
在一些实施例中,所述迁移部分2003,还被配置为基于所述第二变形器官区域,对所述原始人脸图像中的第一器官区域进行颜色迁移和纹理迁移,得到妆容迁移后的原始人脸图像之后,基于所述迁移后的原始人脸图像和所述用户图像,得到妆容效果图像。
在一些实施例中,所述迁移部分2003,还被配置为获取所述第一变换矩阵的逆矩阵,作为第一变换逆矩阵;基于所述第一变换逆矩阵,对所述迁移后的原始人脸图像进行逆调整,得到逆调整后的原始人脸图像;利用所述逆调整后的原始人脸图像替换所述用户图像中的原始人脸图像,得到所述妆容效果图像。
在一些实施例中,所述迁移部分2003,还被配置为在利用所述逆调整后的原始人脸图像替换所述用户图像中的原始人脸图像,得到所述妆容效果图像之后,识别所述妆容效果图像的第一躯干皮肤区域和所述妆容参考图像的第二躯干皮肤区域;基于所述第二躯干皮肤区域,对所述第一躯干皮肤区域进行颜色迁移,得到自然妆容效果图像,并在所述妆容迁移界面上显示所述自然妆容效果图像。
在一些实施例中,所述变形部分2002,还被配置为在所述第一器官区域的面积大于预设目标面积的情况下,基于所述第一器官区域,对对应的第二器官区域进行图像变形处理,得到所述第二变形器官区域。
本公开实施例还提供一种妆容迁移设备,图15为本公开实施例提供的妆容迁移设备的一个可选的组成结构示意图,如图15所示,该妆容迁移设备21包括:处理器2101和存储器2102,存储器2102存储有可在处理器2101上运行的计算机程序,处理器2101执行所述计算机程序被执行时,实现本公开实施例的任意一种展示方法的步骤。
存储器2102配置为存储由处理器2101计算机程序和应用,还可以缓存待处理器2101以及展示设备中各模块待处理或已经处理的数据(例如,图像数据、音频数据、语音通信数据和视频通信数据),可以通过闪存(FLASH)或随机访问存储器(Random Access Memory,RAM)实现。
处理器2101执行程序时实现上述任一项妆容迁移方法的步骤。处理器2101通常控制展示设备21的总体操作。
上述处理器可以为特定用途集成电路(Application Specific Integrated Circuit,ASIC)、数字信号处理器(Digital Signal Processor,DSP)、数字信号处理装置(Digital Signal Processing Device,DSPD)、可编程逻辑装置(Programmable Logic Device,PLD)、现场可编程门阵列(Field Programmable Gate Array,FPGA)、中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器中的至少一种。可以理解地,实现上述处理器功能的电子器件还可以为其它,本公开实施例不作限制。
上述计算机可读存储介质/存储器可为易失性存储介质或非易失性存储介质,可以是只读存储器(Read Only Memory,ROM)、可编程只读存储器(Programmable Read-Only Memory,PROM)、可擦除可编程只读存储器(Erasable Programmable Read-Only Memory,EPROM)、电可擦除可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、磁性随机存取存储器(Ferromagnetic Random Access Memory,FRAM)、快闪存 储器(Flash Memory)、磁表面存储器、光盘、或只读光盘(Compact Disc Read-Only Memory,CD-ROM)等存储器;也可以是包括上述存储器之一或任意组合的各种终端,如移动电话、计算机、平板设备、个人数字助理等。
It is noted here that the above descriptions of the storage medium and device embodiments are similar to the description of the method embodiments above, and have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the storage medium and device embodiments of the present disclosure, refer to the description of the method embodiments of the present disclosure.
It should be understood that reference throughout the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of "in one embodiment" or "in an embodiment" in various places throughout the specification do not necessarily refer to the same embodiment. Furthermore, these particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present disclosure, the magnitude of the sequence numbers of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present disclosure. The above sequence numbers of the embodiments of the present disclosure are merely for description and do not represent the superiority or inferiority of the embodiments.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation, for example: multiple units or components may be combined, or may be integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual requirements to achieve the purpose of the solutions of the embodiments of the present disclosure.
In addition, the functional units in the embodiments of the present disclosure may all be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Alternatively, if the above integrated unit of the present disclosure is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present disclosure, in essence, or the part thereof contributing to the related art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a device to execute all or part of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.
The methods disclosed in the several method embodiments provided in the present disclosure may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several method or device embodiments provided in the present disclosure may be combined arbitrarily without conflict to obtain new method or device embodiments.
The above are merely implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present disclosure, and these shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Industrial Applicability
In the embodiments of the present disclosure, image deformation processing is performed on the second organ region of the makeup face image to deform the shape of the second organ region into that of the first organ region of the same organ type in the original face image, obtaining a second deformed organ region; based on the second deformed organ region, color transfer and texture transfer are then performed on the first organ region, which improves the detail and naturalness of the makeup transfer and thereby the quality of the makeup effect image. Moreover, the terminal in the embodiments of the present disclosure uses face key point detection, a triangulation algorithm, and an affine transformation method to correct the positions of the facial features in the makeup reference image to be consistent with the user photo, and performs the makeup transfer on this basis, thereby overcoming the unnatural transfer results caused in the related art by differences in angle and facial feature positions between the makeup reference image and the user image, and supporting and achieving makeup transfer for any face angle, any facial features, and any face shape. Furthermore, by performing color transfer first and then Poisson fusion, the texture details of the reference makeup are transferred while the naturalness of the transfer result is preserved.
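The "color transfer first, then Poisson fusion" step can be illustrated with a toy gradient-domain solver. This is a didactic stand-in, not the disclosed implementation (production code would typically use a sparse linear solver or an API such as OpenCV's seamlessClone): inside the mask, the result is relaxed so its Laplacian matches the source's, while the destination pixels act as fixed boundary conditions.

```python
import numpy as np

def poisson_blend_gray(src, dst, mask, iters=2000):
    """Toy single-channel Poisson fusion by iterative relaxation:
    pixels where mask is True are solved so their Laplacian matches the
    source's, with the surrounding destination pixels held fixed."""
    out = dst.astype(np.float64).copy()
    src = src.astype(np.float64)
    ys, xs = np.where(mask)  # mask must not touch the image border
    for _ in range(iters):
        for y, x in zip(ys, xs):
            # Guidance field: the Laplacian of the source patch
            lap = (4 * src[y, x] - src[y - 1, x] - src[y + 1, x]
                   - src[y, x - 1] - src[y, x + 1])
            out[y, x] = (out[y - 1, x] + out[y + 1, x]
                         + out[y, x - 1] + out[y, x + 1] + lap) / 4.0
    return out
```

With a flat (zero-gradient) source, the interior relaxes to the destination's boundary values; with a textured source, the gradients of the makeup are reproduced while the overall tone follows the destination, which is what keeps the transferred region free of visible seams.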

Claims (19)

  1. A makeup transfer method, comprising:
    acquiring an original face image containing an original face and a makeup face image containing a target makeup;
    performing image deformation processing on a second organ region in the makeup face image based on a first organ region in the original face image to obtain a second deformed organ region, wherein the first organ region and the second organ region correspond to an organ of the same type; and
    performing color transfer and texture transfer on the first organ region in the original face image based on the second deformed organ region to obtain a makeup-transferred original face image.
  2. The method according to claim 1, wherein the acquiring the original face image containing the original face and the makeup face image containing the target makeup comprises:
    extracting the original face image from a user image containing a target object, and extracting the makeup face image from a makeup reference image containing the target makeup.
  3. The method according to claim 2, wherein the extracting the original face image from the user image containing the target object comprises:
    performing face key point detection on the user image to obtain first face key points of the user image; and
    performing face alignment on the user image based on the first face key points to obtain the original face image.
  4. The method according to claim 2, wherein the extracting the makeup face image from the makeup reference image containing the target makeup comprises:
    performing face key point detection on the makeup reference image to obtain second face key points of the makeup reference image; and
    performing face alignment on the makeup reference image based on the second face key points to obtain the makeup face image.
  5. The method according to claim 3, wherein the performing face alignment on the user image based on the first face key points to obtain the original face image comprises:
    acquiring a first transformation matrix based on first original position information of the first face key points and target position information of target aligned face key points, the first transformation matrix characterizing a positional relationship between the first original position information and the target position information;
    adjusting the first original position information based on the first transformation matrix to obtain a user-aligned image; and
    extracting the original face image from the user-aligned image.
  6. The method according to claim 4, wherein the performing face alignment on the makeup reference image based on the second face key points to obtain the makeup face image comprises:
    acquiring a second transformation matrix based on second original position information of the second face key points and target position information of target aligned face key points, the second transformation matrix characterizing a positional relationship between the second original position information and the target position information;
    adjusting the second original position information based on the second transformation matrix to obtain a makeup-aligned image; and
    extracting the makeup face image from the makeup-aligned image.
  7. The method according to any one of claims 1 to 6, wherein the performing image deformation processing on the second organ region in the makeup face image based on the first organ region in the original face image to obtain the second deformed organ region comprises:
    performing organ segmentation on the original face image to obtain the first organ region, and performing organ segmentation on the makeup face image to obtain the second organ region; and
    performing image deformation processing on the second organ region to adjust shape information of the second organ region to shape information of the corresponding first organ region, to obtain the second deformed organ region.
  8. The method according to claim 7, wherein the performing image deformation processing on the second organ region to adjust the shape information of the second organ region to the shape information of the corresponding first organ region, to obtain the second deformed organ region, comprises:
    determining a second triangular mesh based on second aligned face key points in the second organ region, and determining a corresponding first triangular mesh based on first aligned face key points in the first organ region; and
    adjusting, by affine transformation, shape information of each second triangle in the second triangular mesh to shape information of the corresponding first triangle, to obtain the second deformed organ region.
  9. The method according to any one of claims 1 to 8, wherein the performing color transfer and texture transfer on the first organ region in the original face image based on the second deformed organ region to obtain the makeup-transferred original face image comprises:
    performing color transfer on the corresponding first organ region based on the second deformed organ region to obtain a first color-transferred region; and
    transferring a texture of the second deformed organ region into the corresponding first color-transferred region to obtain the transferred original face image.
  10. The method according to claim 9, wherein the performing color transfer on the corresponding first organ region based on the second deformed organ region to obtain the first color-transferred region comprises:
    subtracting, from a pixel value of each channel of each pixel in the first organ region, a pixel mean of the corresponding channel in the first organ region, and then adding a pixel mean of the corresponding channel of the second deformed organ region, to obtain a transferred pixel value of each channel of each pixel; and
    obtaining the first color-transferred region based on the transferred pixel value of each channel of each pixel.
  11. The method according to any one of claims 2 to 10, wherein after the performing color transfer and texture transfer on the first organ region in the original face image based on the second deformed organ region to obtain the makeup-transferred original face image, the method further comprises:
    obtaining a makeup effect image based on the transferred original face image and the user image.
  12. The method according to claim 11, wherein the obtaining the makeup effect image based on the transferred original face image and the user image comprises:
    acquiring an inverse of the first transformation matrix as a first inverse transformation matrix;
    inversely adjusting the transferred original face image based on the first inverse transformation matrix to obtain an inversely adjusted original face image; and
    replacing the original face image in the user image with the inversely adjusted original face image to obtain the makeup effect image.
  13. The method according to claim 12, wherein after the replacing the original face image in the user image with the inversely adjusted original face image to obtain the makeup effect image, the method further comprises:
    identifying a first torso skin region of the makeup effect image and a second torso skin region of the makeup reference image; and
    performing color transfer on the first torso skin region based on the second torso skin region to obtain a natural makeup effect image, and displaying the natural makeup effect image on a makeup transfer interface.
  14. The method according to any one of claims 1 to 13, wherein the performing image deformation processing on the second organ region in the makeup face image based on the first organ region in the original face image to obtain the second deformed organ region comprises:
    in a case where an area of the first organ region is greater than a preset target area, performing image deformation processing on the corresponding second organ region based on the first organ region to obtain the second deformed organ region.
  15. The method according to any one of claims 1 to 14, wherein
    the first organ region comprises at least one of:
    a left eyebrow region, a right eyebrow region, a left eye region, a right eye region, a lip region, and a base region; and
    the second organ region comprises at least one of:
    a left eyebrow makeup region, a right eyebrow makeup region, a left eye makeup region, a right eye makeup region, a lipstick region, and a foundation region.
  16. A makeup transfer apparatus, comprising:
    an acquisition part configured to acquire an original face image containing an original face and a makeup face image containing a target makeup;
    a deformation part configured to perform image deformation processing on a second organ region in the makeup face image based on a first organ region in the original face image to obtain a second deformed organ region, wherein the first organ region and the second organ region correspond to an organ of the same type; and
    a transfer part configured to perform color transfer and texture transfer on the first organ region in the original face image based on the second deformed organ region to obtain a makeup-transferred original face image.
  17. A makeup transfer device, comprising:
    a memory configured to store a computer program; and
    a processor configured to implement the method according to any one of claims 1 to 15 when executing the computer program stored in the memory.
  18. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 15.
  19. A computer program comprising computer-readable code which, when run in an electronic device, causes a processor in the electronic device to implement the method according to any one of claims 1 to 15.
PCT/CN2021/126184 2021-05-14 2021-10-25 Makeup transfer method and apparatus, device, and computer-readable storage medium WO2022237081A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110530429.1A CN113313660A (zh) 2021-05-14 2021-05-14 Makeup transfer method and apparatus, device, and computer-readable storage medium
CN202110530429.1 2021-05-14

Publications (1)

Publication Number Publication Date
WO2022237081A1 true WO2022237081A1 (zh) 2022-11-17

Family

ID=77373258

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/126184 WO2022237081A1 (zh) 2021-05-14 2021-10-25 Makeup transfer method and apparatus, device, and computer-readable storage medium

Country Status (3)

Country Link
CN (1) CN113313660A (zh)
TW (1) TW202244841A (zh)
WO (1) WO2022237081A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036157A (zh) * 2023-10-09 2023-11-10 易方信息科技股份有限公司 Editable simulated digital human image design method, system, device, and medium
CN117195286A (zh) * 2023-09-04 2023-12-08 北京超然聚力网络科技有限公司 User privacy protection method and system based on big data
CN117241064A (zh) * 2023-11-15 2023-12-15 北京京拍档科技股份有限公司 Method, device, and storage medium for real-time face replacement in live streaming

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN113313660A (zh) * 2021-05-14 2021-08-27 北京市商汤科技开发有限公司 妆容迁移方法、装置、设备和计算机可读存储介质
CN114445543A (zh) * 2022-01-24 2022-05-06 北京百度网讯科技有限公司 处理纹理图像的方法、装置、电子设备及存储介质
CN114418837B (zh) * 2022-04-02 2023-06-13 荣耀终端有限公司 一种妆容迁移方法及电子设备

Citations (5)

Publication number Priority date Publication date Assignee Title
CN107622472A (zh) * 2017-09-12 2018-01-23 北京小米移动软件有限公司 Facial makeup transfer method and apparatus
CN111815534A (zh) * 2020-07-14 2020-10-23 厦门美图之家科技有限公司 Real-time skin makeup transfer method and apparatus, electronic device, and readable storage medium
CN111950430A (zh) * 2020-08-07 2020-11-17 武汉理工大学 Multi-scale makeup style difference measurement and transfer method and system based on color and texture
US20210019503A1 (en) * 2018-09-30 2021-01-21 Tencent Technology (Shenzhen) Company Limited Face detection method and apparatus, service processing method, terminal device, and storage medium
CN113313660A (zh) * 2021-05-14 2021-08-27 北京市商汤科技开发有限公司 Makeup transfer method and apparatus, device, and computer-readable storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN108509846B (zh) * 2018-02-09 2022-02-11 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer device, storage medium, and computer program product
CN109949216B (zh) * 2019-04-19 2022-12-02 中共中央办公厅电子科技学院(北京电子科技学院) Complex makeup transfer method based on facial parsing and illumination transfer
CN112528707A (zh) * 2019-09-18 2021-03-19 广州虎牙科技有限公司 Image processing method, apparatus, device, and storage medium


Cited By (6)

Publication number Priority date Publication date Assignee Title
CN117195286A (zh) * 2023-09-04 2023-12-08 北京超然聚力网络科技有限公司 User privacy protection method and system based on big data
CN117195286B (zh) * 2023-09-04 2024-05-07 河南中信科大数据科技有限公司 User privacy protection method and system based on big data
CN117036157A (zh) * 2023-10-09 2023-11-10 易方信息科技股份有限公司 Editable simulated digital human image design method, system, device, and medium
CN117036157B (zh) * 2023-10-09 2024-02-20 易方信息科技股份有限公司 Editable simulated digital human image design method, system, device, and medium
CN117241064A (zh) * 2023-11-15 2023-12-15 北京京拍档科技股份有限公司 Method, device, and storage medium for real-time face replacement in live streaming
CN117241064B (zh) * 2023-11-15 2024-03-19 北京京拍档科技股份有限公司 Method, device, and storage medium for real-time face replacement in live streaming

Also Published As

Publication number Publication date
CN113313660A (zh) 2021-08-27
TW202244841A (zh) 2022-11-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21941646

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21941646

Country of ref document: EP

Kind code of ref document: A1