WO2022179215A1 - Image processing method and apparatus, electronic device and storage medium

Image processing method and apparatus, electronic device and storage medium

Info

Publication number
WO2022179215A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
color
pixel
face image
face
Prior art date
Application number
PCT/CN2021/133045
Other languages
English (en)
Chinese (zh)
Inventor
苏柳
Original Assignee
北京市商汤科技开发有限公司
Priority date
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司
Publication of WO2022179215A1

Classifications

    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T7/90 Determination of colour characteristics
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06T2207/10024 Color image
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30201 Face

Definitions

  • the present disclosure relates to the field of computer vision, and in particular, to an image processing method and apparatus, an electronic device and a storage medium.
  • Foundation makeup can recolor the face, adjust skin tone, and conceal skin blemishes, so that the skin looks smoother and the visual experience is improved.
  • Accordingly, foundation processing of face images is more and more widely used in daily life.
  • an image processing method comprising:
  • an image processing apparatus including:
  • an original color extraction module, configured to extract the original color of at least one pixel in a target object of a face image in response to a makeup operation on the target object;
  • a target color determination module, configured to determine the target color of at least one pixel in the target object according to the color selected in the makeup operation and the original color of at least one pixel in the target object;
  • a fusion module, configured to fuse the original color of at least one pixel in the target object with the target color to obtain a fused face image.
  • a computer-readable storage medium having computer program instructions stored thereon, the computer program instructions implementing the above-mentioned image processing method when executed by a processor.
  • a computer program product comprising computer-readable code which, when executed in an electronic device, causes a processor in the electronic device to perform the above method.
  • the target face area on which the foundation processing operation is to be performed can be located more accurately in the face image, improving the accuracy of the foundation processing operation;
  • the target colors obtained for multiple pixels correspond to their original colors, so that the color transitions in the fused face image combining the original colors and target colors are more realistic and natural, improving the natural effect of the fused face image.
  • FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • FIG. 2 shows a schematic diagram of a preset face material according to an embodiment of the present disclosure.
  • FIG. 4 shows a schematic diagram of a constructed triangular mesh according to an embodiment of the present disclosure.
  • FIG. 5 shows a schematic diagram of a target material according to an embodiment of the present disclosure.
  • FIG. 6 shows a schematic diagram of a color lookup table according to an embodiment of the present disclosure.
  • FIG. 7 shows a schematic diagram of a face image according to an embodiment of the present disclosure.
  • FIG. 8 shows a schematic diagram of a fused face image according to an embodiment of the present disclosure.
  • FIG. 9 shows a schematic diagram of a fused face image according to an embodiment of the present disclosure.
  • FIG. 10 shows a schematic diagram of a fused face image according to an embodiment of the present disclosure.
  • FIG. 12 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 13 shows a schematic diagram of an application example according to the present disclosure.
  • FIG. 14 shows a schematic diagram of an application example according to the present disclosure.
  • the method can be applied to an image processing apparatus or an image processing system, and the image processing apparatus can be a terminal device, a server, or other processing devices.
  • the terminal device may be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the image processing method may be applied to a cloud server or a local server
  • the cloud server may be a public cloud server or a private cloud server, which can be flexibly selected according to actual conditions.
  • the image processing method can also be implemented by the processor calling computer-readable instructions stored in the memory.
  • the image processing method may include:
  • Step S11: in response to the makeup operation on the target object of the face image, extract the original color of at least one pixel in the target object of the face image.
  • the face image may be any image including a face; it may contain one face or multiple faces, and its implementation form can be flexibly determined according to the actual situation, which is not limited in the embodiments of the present disclosure.
  • the operation content included in the operation can be flexibly determined according to the actual situation, and is not limited to the following disclosed embodiments.
  • the makeup operation may include an operation instructing that makeup processing be performed on the face image; in a possible implementation, the makeup operation may further include selecting a makeup color, etc.; in a possible implementation, the makeup operation may further include an operation indicating the processing type of the makeup, and the like.
  • the target object can be any object in the face image on which makeup is to be applied.
  • in a possible implementation, the target object can be one or more target parts in the face image; which parts the target part includes can be flexibly determined according to the actual situation of the makeup operation.
  • in an example, the target part may include the lip region.
  • in a possible implementation, the target object may also be a target face area in the face image, where the target face area may be any area of the face image on which a makeup operation is to be performed. The target face area may include one or more part areas of the face, the implementation form of which can be flexibly determined according to the actual situation; for example, it can include one or more of the cheek area, the bridge of the nose, the chin area, the forehead area, and the area around the eyes.
  • in a possible implementation, the makeup operation may include any one or more of multiple makeup types. For example, it may include makeup operations on a target area of the face, such as foundation operations, highlight operations, contouring operations, and the like, in which case the target area can be understood as the area of the face to which makeup is to be applied; it may also include makeup operations on target parts of the face, such as lip makeup operations, eye makeup operations, eye shadow operations, and the like, in which case the target area can be understood as the facial organ part to which makeup is to be applied.
  • in a possible implementation, the processing types included in the makeup operation can also change flexibly: the makeup operation can include one processing type, or multiple processing types at the same time.
  • in an example, the processing type may include natural processing and/or metallic light effect processing; the natural processing may include naturally modifying the color of the lips while retaining their original glossy effect, and the metallic light effect processing may include modifying the color of the lips and changing the light effect, so as to obtain a lip makeup effect with a metallic luster.
  • the original color may be the unprocessed color of the target object in the face image.
  • in a possible implementation, the method of extracting the original color of at least one pixel from the target object of the face image is not limited in the embodiments of the present disclosure and can be flexibly determined according to the actual situation.
  • in an example, the area where the target object is located in the face image may be determined, and the colors of one or more pixels contained in that area extracted, to obtain the original color of at least one pixel in the target object of the face image.
  • Step S12: determine the target color of at least one pixel in the target object according to the color selected in the makeup operation and the original color of at least one pixel in the target object.
  • after a color is selected, the target color of at least one pixel in the target object can be determined, where the target color is related to the selected color and corresponds to the original color.
  • in a possible implementation, the selected color can be fused with the original color of at least one pixel to obtain the target color, or the target color corresponding to the original color can be looked up within a color range around the selected color. How exactly to obtain the target color of at least one pixel in the target object from the selected color and the original color can be flexibly determined according to the actual situation; see the following disclosed embodiments for details, which will not be expanded here.
  • Step S13: fuse the original color of at least one pixel in the target object with the target color to obtain a fused face image.
  • in a possible implementation, fusing the original color of at least one pixel in the target object with the target color may mean performing fusion processing separately on multiple pixels of the target object: for each pixel, the target color determined from its original color is fused with that original color to obtain the fused color of the pixel, and thereby the fused face image.
  • the manner of fusion in step S13 can be flexibly changed according to the actual situation.
  • through the above process, the target color of at least one pixel in the target object is determined according to the selected color and the original color, and the original color and target color of each such pixel are fused to obtain the fused face image.
  • in this way, the obtained target colors of multiple pixels correspond to their respective original colors, so that the color transitions in the fused face image combining the original and target colors are more realistic and natural, improving the natural effect and realism of the fused face image (see the blending sketch below).
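  • As an illustration of the fusion in step S13, the sketch below blends each pixel's original color with its target color. It is a minimal sketch only: the disclosure leaves the fusion manner open, and the fixed blending weight `strength` is an assumption, not part of the patent.

```python
import numpy as np

def fuse_colors(original, target, strength=0.6):
    """Blend each pixel's original color with its target color.

    original, target: float arrays of shape (N, 3) with values in [0, 1].
    strength: hypothetical weight of the target color; the disclosure does
    not fix the fusion manner, so this is one simple choice.
    """
    return (1.0 - strength) * original + strength * target
```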
  • the target object may include the target face area to be subjected to the foundation processing operation.
  • step S11 may include:
  • the original color of at least one pixel in the target face region of the face image is extracted.
  • the preset face material may be a relevant material used to perform foundation processing on the face image, and the preset face material may indicate the area and/or part included in the target face area through the preset pixel color
  • the color type of the preset pixel color can be flexibly set according to the actual situation and can include one color or multiple colors.
  • in a possible implementation, the preset pixel color may be a color clearly different from the colors contained in the face image, such as red, green, blue, or another color rarely found in human faces.
  • FIG. 2 shows a schematic diagram of a preset face material according to an embodiment of the present disclosure.
  • in an example, the preset face material may be a face mask in which red pixel colors (not visible in the figure after grayscale processing) indicate that the target face area includes the face regions other than the eyes, eyebrows, and lips.
  • in a possible implementation, the preset face material can be a preset material that is automatically called when the foundation processing operation is selected; in a possible implementation, the preset face material can also be a material selected by the user as part of the foundation processing operation.
  • since the preset face material can indicate the area and/or part included in the target face area through the preset pixel color, the position of the target face area in the face image can be determined based on the pixel colors of the preset face material.
  • the determination process can be flexibly chosen according to the actual situation. For example, the preset face material can be fused directly with the face image, and the position of the target face area in the face image determined from the pixel colors of the fused image; alternatively, the preset face material can be fused with a standard preset face image, and the position of the target face area in the face image then determined from the correspondence between the fused image and the face image together with the pixel colors of the fused image. The specific determination method is detailed in the following disclosed embodiments and will not be expanded here.
  • the method of extracting the original color of at least one pixel in the target face area of the face image is not limited in the embodiments of the present disclosure and can be flexibly determined according to the actual situation.
  • in a possible implementation, the colors of one or more pixels contained in the target face area may be extracted according to the position of the target face area in the face image, so as to obtain the original color of at least one pixel in the target face area of the face image.
  • through the above process, the position in the face image of the target face area on which the foundation processing operation is to be performed is determined based on the pixel colors of the preset face material, so that the original color of at least one pixel in the target face area of the face image can be extracted according to that position.
  • determining the position of the target face region in the face image based on the pixel color of the preset face material may include:
  • the preset face material and a preset face image are fused to obtain a standard material image, where the pixel color of the target face area in the standard material image matches the preset pixel color of the preset face material;
  • the position of the target face area in the face image is determined according to the pixel colors of the standard material image and the position mapping relationship between the standard material image and the face image.
  • the preset face image may be a standard face image template, which may include complete and comprehensive face parts, and the positions of each face part in the preset face image are standard.
  • the realization form of the preset face image can be flexibly determined according to the actual situation, and any standard face used in the field of face image processing can be used as the realization form of the preset face image.
  • FIG. 3 shows a schematic diagram of a preset face image according to an embodiment of the present disclosure (to protect the subject, part of the face in the figure is mosaicked). As can be seen from the figure, in an example, the face parts included in the preset face image are clear, complete, and consistent with the objective distribution of facial parts in a human face.
  • the preset face material can be directly fused with the preset face image to obtain a standard material image.
  • the manner in which the preset face material and the preset face image are fused is not limited in the embodiments of the present disclosure.
  • in some possible implementations, the corresponding pixels of the preset face material and the preset face image may be directly superimposed to obtain the standard material image; in some possible implementations, the preset face material and the preset face image may also be superimposed and fused according to preset weights.
  • since the preset face material can use preset pixel colors to indicate the area and/or part included in the target face area, in some possible implementations the color of the pixels located in the target face area of the fused standard material image also matches the preset pixel color of the preset face material, so that the position of the target face area in the standard material image can be determined according to pixel color. In some possible implementations, matching colors may be identical or merely similar.
  • in an example, the pixel color of the target face area in the standard material image may be identical to the preset pixel color in the preset face material, for example both red; in another example, the pixel color of the target face area in the standard material image may belong to the same color family as the preset pixel color in the preset face material while differing somewhat, e.g. the preset pixel color in the preset face material may be dark red while the pixel color of the target face area in the fused standard material image is light red.
  • in a possible implementation, the position mapping relationship between the standard material image and the face image can be further obtained, where the position mapping relationship indicates how the position of the same pixel maps between the standard material image and the face image.
  • the determination method can be flexibly selected according to the actual situation; for example, the position mapping relationship may be calculated from key point recognition results, or determined according to the image sizes and vertex coordinates of the standard material image and the face image.
  • in a possible implementation, the position of the target face area in the face image can be determined according to the pixel colors of the standard material image and the position mapping relationship.
  • the specific determination method can be flexibly chosen according to the actual situation: for example, the position of the target face area in the standard material image can first be determined according to the pixel colors of the standard material image, and the position mapping relationship can then be used to map that position into the face image. For details, please refer to the following disclosed embodiments, which will not be expanded here.
  • that is, the pixel colors of the preset face material are first fused onto the standard preset face image, and the position of the target face area in the face image is then determined using the position mapping relationship between the fused standard material image and the face image. Through this process, the standard preset face image serves as an intermediate medium, and the pixel colors of the preset face material can be used to locate the target face area in the face image more accurately, further improving the accuracy of the foundation processing operation.
  • acquiring the position mapping relationship between the standard material image and the face image may include:
  • the position mapping relationship between the standard material image and the face image is determined according to the positional correspondence between the same key point in the first key point identification result and the second key point identification result.
  • the first key point recognition result may be the result obtained by performing key point recognition on a preset face image or standard material image
  • the second key point recognition result may be the result obtained by performing key point recognition on a face image.
  • "first" and "second" are only used to distinguish the objects on which key point recognition is performed, and do not limit the order or method of the recognition.
  • the first key point recognition result may be obtained by performing key point recognition on a preset face image, or the first key point recognition result may be obtained by performing key point recognition on a standard material image.
  • the first key point identification result may include the identified key points, and may also include interpolation key points obtained by performing interpolation based on the identified key points, and the like.
  • the second key point identification result is the same, and will not be repeated here.
  • the identified key points may be related key points for locating the positions of key regions in the face, such as eye key points, mouth key points, eyebrow key points or nose key points.
  • the identified key points specifically include which key points and the number of included key points are not limited in the embodiments of the present disclosure, and can be flexibly selected according to actual conditions.
  • in some possible implementations, all relevant key points in the face image can be identified, such as the 106 whole-face key points (Face106); in some possible implementations, only some key points may be identified, such as key points related to the target face area, e.g. key points of the cheeks, chin, or forehead.
  • the method for identifying key points is not limited in the embodiments of the present disclosure, and any method that can identify key points in an image can be used as an implementation method for identifying key points.
  • the method of performing key point recognition on the preset face image or the standard material image and the face image may be the same or different, which is also not limited in the embodiment of the present disclosure.
  • a neural network with a key point recognition function can be used to perform key point recognition on a preset face image or a standard material image and a face image, respectively.
  • in a possible implementation, according to the positional correspondence between the same key points in the first and second key point recognition results, a position transformation relationship between the standard material image and the face image can be determined, and this position transformation relationship can be used as the position mapping relationship between the standard material image and the face image.
  • a triangular mesh may be constructed in the standard material image and the face image, respectively, based on the first key point identification result and the second key point identification result.
  • the manner of constructing the triangular mesh is not limited in the embodiments of the present disclosure. Taking the triangular mesh in the standard material image as an example, in a possible implementation, among the identified key points and/or interpolated key points, every three adjacent points are connected to obtain multiple triangular meshes.
  • the constructed triangular mesh can be used for subsequent fusion or rendering, and the positional mapping relationship between the standard material image and the face image can also be determined through the vertex coordinates of the triangular mesh.
  • FIG. 4 shows a schematic diagram of a triangular mesh constructed according to an embodiment of the present disclosure (as above, part of the face in the figure is mosaicked to protect the subject). By connecting the identified key points and interpolated points in this way, multiple triangular meshes can be obtained; one possible construction is sketched below.
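  • The sketch below shows one way to build such a triangular mesh from key points: Delaunay triangulation via OpenCV's Subdiv2D, which connects neighboring points into triangles. This is an assumed construction; the disclosure only requires that every three adjacent points be connected and does not mandate a specific algorithm.

```python
import cv2
import numpy as np

def build_triangle_mesh(points, width, height):
    """Triangulate face key points (and any interpolated points).

    points: iterable of (x, y) coordinates inside a width x height image.
    Returns an (M, 3, 2) array of triangle vertex coordinates.
    """
    subdiv = cv2.Subdiv2D((0, 0, width, height))
    for x, y in points:
        subdiv.insert((float(x), float(y)))
    tris = subdiv.getTriangleList().reshape(-1, 3, 2)
    # Subdiv2D adds virtual vertices far outside the image; drop those triangles.
    inside = np.all((tris[..., 0] >= 0) & (tris[..., 0] < width) &
                    (tris[..., 1] >= 0) & (tris[..., 1] < height), axis=1)
    return tris[inside]
```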
  • in a possible implementation, the first key point recognition result is obtained by performing key point recognition on the preset face image or the standard material image, and the second key point recognition result is obtained by performing key point recognition on the face image, so that the position mapping relationship between the standard material image and the face image is determined according to the first key point recognition result and the second key point recognition result.
  • the position of the target face region in the face image is determined based on the pixel color and position mapping relationship of the standard material image, including:
  • the position of the target face region in the standard material image is mapped to the face image, and the position of the target face region in the face image is determined.
  • as described above, the pixel color of the target face area in the fused standard material image matches the preset pixel color of the preset face material; therefore, in a possible implementation, pixels in the standard material image whose color matches the preset pixel color of the preset face material can be confirmed as pixels belonging to the target face area.
  • for example, in one example, the preset pixel color in the preset face material is red and, after the preset face material is fused with the preset face image, the color of the fused pixels is also red; in this case, the pixels whose color is red in the standard material image can be confirmed as belonging to the target face area, giving the position of the target face area in the standard material image. In another example, the preset pixel color in the preset face material is dark red and, after fusion, the color of the fused pixels is a light red that matches the dark red; in this case, the pixels whose color is light red in the standard material image can be confirmed as belonging to the target face area, again giving the position of the target face area in the standard material image.
  • in a possible implementation, the position of the target face area in the standard material image can then be mapped into the face image according to the position mapping relationship. The mapping method is not limited in the embodiments of the present disclosure and can be flexibly determined according to the realization form of the position mapping relationship; in a possible implementation, the position coordinates of the target face area in the standard material image can be transformed through the position mapping relationship to obtain the position coordinates of the target face area in the face image.
  • through the above process, the position of the target face area in the standard material image can be determined based on the pixel colors of the standard material image, and the position of the target face area in the face image can be further determined based on the position mapping relationship.
  • in this way, pixel screening and position transformation can be used to locate the target face area in the face image simply and quickly, improving the speed and convenience of the image processing; a sketch of this screening-and-mapping step follows.
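  • A minimal sketch of the screening-and-mapping step follows. It assumes a red preset pixel color and approximates the mesh-based position mapping with a single global affine matrix; both are simplifying assumptions for illustration.

```python
import cv2
import numpy as np

def locate_target_region(standard_material, face_shape, warp_matrix,
                         preset_color=(0, 0, 255), tol=40):
    """Screen pixels matching the preset color, then map them into the face image.

    standard_material: BGR uint8 standard material image.
    warp_matrix: 2x3 float32 affine matrix from standard-image coordinates to
    face-image coordinates (a stand-in for the per-triangle mapping).
    preset_color, tol: assumed red color and matching tolerance.
    Returns a binary mask of the target face area in face-image coordinates.
    """
    diff = standard_material.astype(np.int32) - np.array(preset_color)
    mask = (np.abs(diff).sum(axis=2) < tol).astype(np.uint8) * 255
    h, w = face_shape[:2]
    return cv2.warpAffine(mask, warp_matrix, (w, h))
```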
  • the makeup operation may include a beautification operation on a target part of the human face, and the target object may include a target part to be beautified;
  • step S11 may include:
  • the original color of at least one pixel in the target part of the face image is extracted.
  • the implementation of the beautification operation may refer to various implementation forms of the cosmetic operation in the above disclosed embodiments, and the implementation forms of the target parts may also refer to the implementation forms of the target parts in the above disclosed embodiments, which will not be repeated here.
  • the target material can be a related material used to realize beauty makeup on the face image, and the realization form of the target material can be flexibly determined according to the actual situation of the beauty operation.
  • the target material may be a lip makeup material, such as a lip mask.
  • the target material may also be the preset face material mentioned in the above disclosed embodiments.
  • in some possible implementations, the target material may be a material selected by the user in the makeup operation; in some possible implementations, the target material may also be a preset material that is called automatically when selected in the makeup operation; in some possible implementations, the target material may also be obtained by processing an original target material based on the face image. How the target material is obtained, and its implementation, can be found in the following disclosed embodiments and will not be expanded here.
  • the original color of at least one pixel in the target part of the face image can be extracted according to the transparency of at least one pixel in the target material.
  • the extraction method can be flexibly determined according to the actual situation. In a possible implementation, for pixels of the target material whose transparency falls within a preset transparency range, the area corresponding to those pixels' positions in the face image can be taken as the image area where the target part is located, and the original colors of the pixels in that image area extracted.
  • the specific range of the preset transparency range can be flexibly determined according to the actual situation.
  • in a possible implementation, the preset transparency range can be set to transparency lower than 100%, i.e. the pixels of the target material are not fully transparent; in this case, the area corresponding to the positions of those pixels in the face image can be taken as the image area where the target part is located, and the original colors of the pixels in this image area extracted.
  • the preset transparency range may also be set to be lower than other transparency values, or within a certain transparency range, etc.
  • the embodiment of the present disclosure does not limit the range value of the preset transparency range.
  • by setting the value of the preset transparency range, the image area of the target part can be determined in a more targeted manner, so that more accurate original colors of the target part are extracted from the face image, which in turn improves the reliability and authenticity of the subsequent fused face image; see the sketch below.
  • in a possible implementation, the target material may be a lip mask in which the transparency varies between pixels, which better represents a natural and realistic lip shape; the original colors extracted from the face image based on the lip mask are therefore also more accurate and reliable.
  • the method proposed by the embodiment of the present disclosure may further include: recognizing the target part in the face image to obtain the initial position of the target part in the face image.
  • the initial position may be an approximate position of the target part in the face image, determined from the face image.
  • the method for determining the initial position of the target part is not limited in the embodiments of the present disclosure, and can be flexibly selected according to the actual situation, and is not limited to the following disclosed embodiments.
  • in a possible implementation, the initial position of the target part can be determined by identifying key points of the target part; for example, the initial position can be determined from the coordinates of the identified key points in the face image, or the range of the target part in the face image can be determined from those key points to obtain the initial position.
  • in a possible implementation, identifying the target part in the face image to obtain the initial position of the target part in the face image may include:
  • acquiring at least one face key point of the face image, and determining the initial position of the target part in the face image according to the at least one face key point.
  • the face key points may be relevant key points for locating the positions of key regions in the face, such as eye key points, mouth key points, eyebrow key points or nose key points.
  • the acquired face key points specifically include which key points and the number of key points included are not limited in the embodiments of the present disclosure, and can be flexibly selected according to the actual situation.
  • in some possible implementations, all relevant face key points in the face image can be obtained, such as the 106 whole-face key points (Face106); in some possible implementations, only some key points of the face image may be obtained, such as those related to the target part.
  • the manner of obtaining the key points of the face is not limited in the embodiments of the present disclosure, and any method that can identify the key points of the face in the image can be used as an implementation manner of obtaining the key points of the face.
  • the key points of the face and the method of obtaining the key points of the face may all refer to the methods of identifying the key points in the above disclosed embodiments, which will not be repeated here.
  • in a possible implementation, after acquiring at least one face key point, a triangular mesh can be constructed in the face image according to the face key points.
  • the manner of constructing the triangular mesh is not limited in the embodiments of the present disclosure, and reference may also be made to the above disclosed embodiments, which will not be repeated here.
  • a triangular mesh corresponding to the target part can also be constructed in the face image according to the key points of the face.
  • the difference is that The face key points and interpolation points related to the target part can be obtained to construct a triangular mesh corresponding to the target part, and the construction of the triangular meshes of other parts in the face image is omitted.
  • the initial position of the target part in the face image can be determined according to the position coordinates of the triangular mesh in the face image.
  • the expression form of the initial position is not limited in the embodiment of the present disclosure.
  • the position of the center point of one or more triangular meshes corresponding to the target part may be used as the initial position of the target part;
  • the coordinates of each vertex of one or more triangular meshes corresponding to the target part can also be used as the initial position of the target part, etc., which can be flexibly selected according to the actual situation.
  • in a possible implementation, a triangular mesh corresponding to the target part is constructed in the face image, and the initial position of the target part is determined from the position coordinates of the triangular mesh in the face image. Through the above process, key point recognition and mesh construction allow efficient and accurate initial positioning of the target part, which facilitates the subsequent acquisition of a target material matching the target part and thereby improves the accuracy and authenticity of the image processing.
  • acquiring the target material corresponding to the target part may include:
  • according to the target part, obtain the original target material corresponding to the target part;
  • fuse the original target material with the target part in a preset face image to obtain a standard material image, and extract the target material from the standard material image based on the initial position.
  • the original target material may be a preset material bound to the beauty makeup operation, for example, the original lip mask corresponding to the lip makeup operation may be used as the original target material.
  • the manner of obtaining the original target material is not limited in the embodiments of the present disclosure.
  • the material selected in the beauty makeup operation may be used as the original target material, or the corresponding original target material may be automatically read according to the beauty makeup operation.
  • the original target material can be directly fused with the position corresponding to the target part in the preset face image to obtain a standard material image.
  • the manner in which the original target material and the target part in the preset face image are fused is not limited in the embodiments of the present disclosure.
  • in some possible implementations, the corresponding pixels of the original target material and of the target part in the preset face image can be directly added to obtain the standard material image; in some possible implementations, the original target material and the target part in the preset face image can also be added and fused according to preset weights.
  • by fusing the original target material with the target part in the preset face image, a standard material image can be obtained.
  • the target material may be extracted from the standard material image based on the initial position in the above disclosed embodiments.
  • in a possible implementation, the method of extracting the target material based on the initial position may include: acquiring the color value and transparency of each pixel within the range corresponding to the initial position in the standard material image, and taking the image composed of these pixels as the target material.
  • by fusing the original target material with the target part in the preset face image to obtain a standard material image, and extracting the target material from that standard material image based on the initial position, the obtained target material corresponds more closely to the position of the target part in the face image, since the initial position is determined from the target part in the face image; the original color of at least one pixel extracted from the target part is therefore more realistic and reliable. A minimal crop sketch follows.
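  • The sketch below represents the initial position as a bounding box of the target part's mesh, which is only one of the representations the text allows.

```python
def extract_target_material(standard_material_rgba, initial_position):
    """Extract the target material from the fused standard material image.

    standard_material_rgba: (H, W, 4) array holding color values and transparency.
    initial_position: assumed (x0, y0, x1, y1) bounding box of the target part.
    Returns the patch whose pixels (color value plus transparency) form the
    target material.
    """
    x0, y0, x1, y1 = initial_position
    return standard_material_rgba[y0:y1, x0:x1].copy()
```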
  • step S12 may include:
  • a corresponding color search is performed on the original color of at least one pixel in the target object of the face image, and the target color of at least one pixel in the target object is obtained.
  • the target color may be a color determined by performing a corresponding search based on the original color in the range of the selected colors, and the target color belongs to the selected color range and corresponds to the original color.
  • the search method can be flexibly determined according to the actual situation; for details, please refer to the following disclosed embodiments, which will not be expanded here.
  • the target color of at least one pixel point in the target object is obtained.
  • through the above process, color lookup can be used to obtain a target color that lies within the range of the selected color and corresponds to the original color, so that the target color is more realistic and the color transitions between different pixels are more natural, which improves the naturalness of the fused face image and enhances the makeup effect.
  • a corresponding color search is performed on the original color of at least one pixel in the target object of the face image, and the target color of at least one pixel in the target object is obtained.
  • the output color corresponding to the original color of at least one pixel in the target object of the face image is respectively searched in the color lookup table, as the target color of at least one pixel in the target object.
  • in a possible implementation, the color lookup table may include correspondences between multiple input colors and output colors, where the input color is the color used to search the color lookup table and the output color is the color found in it; for example, searching the color lookup table with input color A finds the output color B corresponding to A.
  • the corresponding relationship between the colors in the color lookup table can be flexibly set according to the actual situation, which is not limited in this embodiment of the present disclosure.
  • the output colors in the color lookup table may be arranged in a gradient form, and the specific arrangement manner is not limited in the embodiments of the present disclosure, and is not limited to the following disclosed embodiments.
  • the color lookup table corresponding to the selected color can be obtained according to the color selected in the beauty operation.
  • the output color in the color lookup table belongs to the corresponding selected color. Therefore, the target color found according to the color lookup table can be within the corresponding range of the selected color and correspond to the original color.
  • the output color corresponding to each pixel can be searched from the color lookup table according to the original colors of the pixels in the target object, and used as the target color of the pixels in the target object.
  • the search method can be flexibly determined according to the form of the color look-up table, which is not limited in this embodiment of the present disclosure.
  • FIG. 6 shows a schematic diagram of a color lookup table according to an embodiment of the present disclosure. In an example, the color lookup table includes, as output colors, a plurality of gradient colors with natural transitions (because the figure is displayed in grayscale, they appear as different shades; they are actually a gradient with color differences). After the original colors of multiple pixels in the target face area are obtained, the output color of each of these pixels can be looked up in the color lookup table and used as its target color.
  • through the above process, a color lookup table containing gradient output colors can be used to obtain target colors with natural color transitions, so that the transitions between the obtained target colors are also more natural, improving the naturalness and makeup effect of the resulting fused face image; a lookup sketch follows.
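  • The sketch below shows one way such a lookup could work: a gradient lookup table is built from the selected color, and each pixel's original color chooses a row via its luminance. The LUT construction and luminance indexing are assumptions for illustration; the text only specifies that the output colors form a gradient within the selected color's range.

```python
import numpy as np

def build_gradient_lut(selected_rgb, size=256):
    """Build a 1-D lookup table of gradient shades of the selected color.

    selected_rgb: (3,) float array in [0, 1]. The ramp from a darker to a
    lighter version of the selected color is a hypothetical construction.
    """
    t = np.linspace(0.0, 1.0, size)[:, None]
    dark = 0.25 * selected_rgb
    light = np.minimum(1.6 * selected_rgb, 1.0)
    return dark + t * (light - dark)              # shape (size, 3)

def lookup_target_colors(original_colors, lut):
    """Map each original color to an output color of the lookup table."""
    # Luminance of the original color selects the LUT row, so brighter
    # input pixels receive brighter shades of the selected color.
    lum = original_colors @ np.array([0.299, 0.587, 0.114])
    idx = np.clip((lum * (len(lut) - 1)).astype(int), 0, len(lut) - 1)
    return lut[idx]
```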
  • in a possible implementation, the target object may include the target part mentioned in the above disclosed embodiments, and the target color of at least one pixel of the target object obtained through color lookup may serve as an initial target color; in this case, step S12 may also include:
  • the target color of at least one pixel in the target part is determined according to the initial target color of at least one pixel in the target part.
  • the initial target color may be the target color mentioned in the above-mentioned disclosed embodiments, that is, in the range of the selected colors, the color determined by performing the corresponding search based on the original color, the initial target color belongs to the selected color range , and corresponds to the original color.
  • step S12 may further determine the target color based on the initial target color.
  • in a possible implementation, the initial target color can be used directly as the target color; in some possible implementations, the initial target color can also be processed, for example adjusted or fused with other colors, to obtain the target color; in some possible implementations, how to process the initial target color to obtain the target color can be chosen according to the processing type corresponding to the makeup operation. How the target color is further determined from the initial target color can also be found in the following disclosed embodiments and will not be expanded here.
  • the target color of at least one pixel in the target part is determined according to the initial target color.
  • determining the target color of at least one pixel in the target part according to the initial target color of at least one pixel in the target part may include:
  • in the case where the processing type corresponding to the makeup operation includes natural processing, the initial target color of at least one pixel in the target part is used as the target color of at least one pixel in the target part;
  • in the case where the processing type corresponding to the makeup operation includes metallic light effect processing, the initial target color of at least one pixel in the target part is adjusted based on randomly obtained noise values to obtain the target color of at least one pixel in the target part.
  • the noise value may be a noise value or noise information added to each pixel of the image; a randomly obtained noise value can be produced by generating random data, and the method of generating the random data is not limited in the embodiments of the present disclosure. For details, please refer to the following disclosed embodiments, which will not be expanded here.
  • in a possible implementation, when the processing type is natural processing, the initial target color may be used directly as the target color.
  • in a possible implementation, when the processing type is metallic light effect processing, the initial target color of at least one pixel in the target part may be adjusted based on randomly obtained noise values, changing the colors of different pixels so that the target part exhibits a metallic light effect.
  • through the above process, different methods can be selected, according to the processing type, to adjust the initial target color and determine the target color, which improves the flexibility of the makeup operation; when the processing type is metallic light effect processing, the initial target color of at least one pixel in the target part is adjusted based on randomly obtained noise values, so that the color is adjusted using random data and a more natural metallic light effect is obtained.
  • in a possible implementation, adjusting the initial target color of at least one pixel in the target part based on randomly obtained noise values to obtain the target color may include: separately acquiring the noise value corresponding to each pixel; when the noise value falls within a preset noise range, adjusting the initial target color of the pixel according to the noise value and the transparency of the corresponding pixel in the target material; and when the noise value falls outside the preset noise range, adjusting the initial target color of the pixel according to the brightness information of the pixel to obtain the target color of the pixel.
  • in a possible implementation, the noise value of each pixel can be obtained separately in a random manner, the acquisition method being flexibly determined according to the actual situation; in an example, the noise value of each pixel may be obtained by generating a random number within a certain numerical range.
  • separately acquiring the noise value corresponding to the pixel point may include:
  • sampling is performed at the corresponding position of the preset noise texture to obtain the noise value corresponding to the pixel point.
  • the preset noise texture may be an image whose shape matches the target part, and the noise value of each point in the image may be randomly generated in advance.
  • the corresponding noise value of each pixel in the target part in the preset noise texture may be determined according to the positional correspondence between the target part and the preset noise texture.
  • by sampling a pre-generated noise texture, the noise values corresponding to multiple pixels can be obtained conveniently: the obtained noise values remain random while the efficiency of acquiring them is improved, thereby improving the efficiency of the image processing; a sampling sketch follows.
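  • A sketch of the texture-sampling approach follows; the texture size and the use of numpy's random generator are assumptions.

```python
import numpy as np

def make_noise_texture(height, width, seed=0):
    """Pre-generate a random noise texture matched to the target part's shape."""
    return np.random.default_rng(seed).random((height, width))

def sample_noise(noise_texture, part_coords):
    """Sample the texture at each pixel's corresponding position.

    part_coords: (N, 2) integer (row, col) coordinates of target-part pixels,
    assumed already mapped into the texture's coordinate frame.
    """
    return noise_texture[part_coords[:, 0], part_coords[:, 1]]
```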
  • the processing modes corresponding to different pixel points can be determined by comparing the noise value with the preset noise range.
  • in a possible implementation, the value of the preset noise range can be flexibly set according to the actual situation and is not limited to the following disclosed embodiments; in an example, it may be 0.78 to 0.8, etc.
  • in a possible implementation, when the noise value falls within the preset noise range, the initial target color of the pixel can be adjusted according to the noise value corresponding to the pixel and the transparency of the corresponding pixel in the target material, to obtain the target color of the pixel.
  • the specific adjustment mode can be flexibly selected according to the actual situation, and is not limited to the following disclosed embodiments.
  • in a possible implementation, adjusting the initial target color of the pixel to obtain its target color may include: determining an adjustment coefficient according to the noise value and the transparency; and adjusting the initial target color of the pixel according to the adjustment coefficient and a preset light source value to obtain the target color of the pixel.
  • the adjustment coefficient may be a relevant parameter in the process of adjusting the initial target color.
  • the calculation method of the adjustment coefficient determined according to the noise value and transparency can be flexibly determined according to the actual situation, and is not limited to the following disclosed embodiments.
  • in an example, the adjustment coefficient can be determined from the noise value and the transparency as expressed by the following formula (1):
  • Adjustment coefficient = noise value × pow(transparency, 4.0) (1)
  • where pow(x, y) denotes x raised to the power y, so pow(transparency, 4.0) is the transparency raised to the power of 4.
  • in a possible implementation, the target color of the pixel can be determined according to the adjustment coefficient and a preset light source value, the value of which is not limited in the embodiments of the present disclosure.
  • in a possible implementation, the manner of adjusting the initial target color based on the adjustment coefficient and the preset light source value can also be flexibly set according to the actual situation and is not limited to the following disclosed embodiments.
  • in an example, the way the target color is determined from the adjustment coefficient and the preset light source value can be expressed by the following formula (2):
  • Target color = initial target color + adjustment coefficient × preset light source value (2)
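  • A sketch combining formulas (1) and (2) for a single pixel follows; the argument names and the scalar treatment of the light source value are assumptions.

```python
def metallic_target_color(initial_target, noise, transparency, light_source):
    """Apply formulas (1) and (2) to one pixel.

    initial_target: (3,) initial target color in [0, 1]; noise: the pixel's
    random noise value; transparency: the corresponding target-material
    transparency; light_source: the preset light source value.
    """
    coefficient = noise * transparency ** 4.0              # formula (1)
    return initial_target + coefficient * light_source     # formula (2)
```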
  • the target color of at least one pixel in the target part can be obtained when the noise value is within the preset noise range.
  • the noise value may also be outside the preset noise range.
  • in a possible implementation, when the noise value falls outside the preset noise range, the initial target color of the pixel can be adjusted according to the brightness information of the pixel to obtain the target color of the pixel.
  • in a possible implementation, the brightness information may be information determined from the colors of pixels in the target part of the face image, the content of which can be flexibly determined according to the actual situation; how to determine the brightness information of a pixel and how to adjust the initial target color according to it are detailed in the following disclosed embodiments and will not be expanded here.
  • through the above process, the noise value corresponding to each of at least one pixel in the target part is obtained separately; when the noise value falls within the preset noise range, the initial target color of the pixel is adjusted according to the noise value, and when the noise value falls outside the preset noise range, the initial target color is adjusted according to the brightness information of the pixel.
  • in a possible implementation, the brightness information may include a first brightness, a second brightness, and a third brightness, and adjusting the initial target color of the pixel according to its brightness information to obtain the target color of the pixel may include:
  • the first brightness of the pixel is determined according to the original color of the pixel.
  • the second brightness of the pixel point with the target brightness in the preset processing range is determined.
  • the pixel points are filtered through a preset convolution kernel, and the third brightness of the pixel points is determined according to the intermediate color obtained by the filtering of the pixel points, wherein the filtering range of the preset convolution kernel is consistent with the preset processing range .
  • the initial target color of the pixel is adjusted to obtain the target color of the pixel.
  • in a possible implementation, the first brightness may be a brightness value determined from the color value of the pixel's original color, where the brightness value can be calculated from the color value; in an example, it may be calculated from the values of the three color channels red (R), green (G), and blue (B).
  • the second brightness can likewise be determined from the color value of the pixel with the target brightness, where the pixel with the target brightness is the pixel in the target part of the face image that lies within the preset processing range of the current pixel and has the highest brightness.
  • the range size of the preset processing range can be flexibly set according to the actual situation, which is not limited in the embodiments of the present disclosure.
  • in a possible implementation, the third brightness may be a brightness value determined from the color value of the pixel's intermediate color, where the intermediate color of a pixel is the color obtained by filtering the pixel through a preset convolution kernel.
  • the form and size of the preset convolution kernel can be flexibly set according to the actual situation.
  • in a possible implementation, the filtering range of the preset convolution kernel is consistent with the preset processing range in the above disclosed embodiments. That is, a pixel may be filtered through the preset convolution kernel to obtain its filtered intermediate color, and the brightness value calculated from the color value of that intermediate color serves as the third brightness; the area covered by the preset convolution kernel when filtering the pixel can be used as the preset processing range, and the brightness of the pixel of the target part of the face image that lies within this preset processing range and has the highest brightness can be used as the second brightness.
  • the filtering method is also not limited in the embodiment of the present disclosure, and can be flexibly selected according to the actual situation.
  • In an example, Gaussian filtering can be performed on the pixels through the preset convolution kernel.
  • The order in which the first brightness, the second brightness, and the third brightness are determined is not limited in the embodiments of the present disclosure; they may be determined simultaneously or sequentially in a certain order, and the order can be selected flexibly according to the actual situation.
  • In a possible implementation, the initial target color of the pixel can be adjusted according to the determined first brightness, second brightness, and third brightness to obtain the target color of the pixel. How the adjustment is realized according to these three brightness values is described in the following disclosed embodiments and is not expanded here.
  • Adjusting the initial target color in this way fully takes into account the brightness information of the pixels in the face image within a certain range, so that the target color determined based on the brightness information is more realistic and reliable, improving the beautification effect and authenticity of the fused face image. A minimal sketch of how the three brightness values could be computed is given below.
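As an illustration only, the following Python sketch computes the three brightness values for every pixel of an RGB image normalized to [0, 1]. The Rec.601 luma weights, the window size, the sigma value, and all function names are assumptions made for the sketch; the embodiments above deliberately leave the exact brightness formula, kernel, and processing range open.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

# Assumed Rec.601 luma weights; the embodiments only require that
# brightness be computed from the R, G, B channel values.
LUMA = np.array([0.299, 0.587, 0.114])

def brightness_info(image, window=15, sigma=3.0):
    """Return (first, second, third) brightness maps for an RGB image
    in [0, 1] of shape (H, W, 3).

    first  - brightness of each pixel's original color
    second - highest brightness within the preset processing range
    third  - brightness of the Gaussian-filtered intermediate color
    The window doubles as the preset processing range so that it stays
    consistent with the filtering range of the convolution kernel.
    """
    first = image @ LUMA  # per-pixel brightness of the original color
    # Intermediate color: Gaussian-filter each channel with a kernel
    # whose support matches the processing window.
    intermediate = np.stack(
        [gaussian_filter(image[..., c], sigma,
                         truncate=(window // 2) / sigma)
         for c in range(3)],
        axis=-1)
    third = intermediate @ LUMA
    # Second brightness: maximum brightness inside the same window.
    second = maximum_filter(first, size=window)
    return first, second, third
```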
  • In a possible implementation, adjusting the initial target color of the pixel according to the first brightness, the second brightness, and the third brightness to obtain the target color of the pixel includes:
  • when the first brightness is less than the third brightness, adjusting the initial target color of the pixel according to the first brightness and the third brightness to obtain the target color of the pixel;
  • when the first brightness is greater than the third brightness, adjusting the initial target color of the pixel according to the first brightness, the second brightness, the third brightness, and the preset brightness radius to obtain the target color of the pixel.
  • In a possible implementation, the method of adjusting the initial target color can refer to formula (2) in the above disclosed embodiments: the adjustment coefficient of the pixel is determined according to the corresponding data, and the initial target color is then adjusted using the adjustment coefficient and the preset light source value.
  • the adjustment coefficient can be determined according to the first brightness and the third brightness, and the determination method can be flexibly selected according to the actual situation, and is not limited to the following disclosed embodiments.
  • the manner of determining the adjustment coefficient according to the first brightness and the third brightness can be expressed by the following formula (3):
  • Adjustment coefficient = (third brightness − first brightness) / (1.0 − first brightness)    (3)
  • In a possible implementation, the adjustment coefficient can also be determined according to the first brightness, the second brightness, the third brightness, and a preset brightness radius, wherein the preset brightness radius determines the radius of the bright metallic spot in the metal light effect; the value of the preset brightness radius can be flexibly set according to the actual situation and is not limited in the embodiments of the present disclosure.
  • the manner of determining the adjustment coefficient according to the first brightness, the second brightness, the third brightness and the preset brightness radius can be expressed by the following formula (4):
  • Adjustment coefficient = pow((first brightness − third brightness) / (second brightness − third brightness), shininess)    (4)
  • The calculation method of pow may refer to the above formula (1), which is not repeated here, and shininess is the preset brightness radius.
  • The adjustment coefficient can be calculated by the above formula (3) or by the above formula (4); whichever method is used, the obtained adjustment coefficient lies between 0 and 1, and together with the preset light source value it is used to flexibly adjust the initial target color of the pixel to obtain the target color of the pixel.
  • In this way, the manner of adjusting the initial target color can be changed flexibly according to the comparison of the brightness values, improving the flexibility and realism of the image processing process; a sketch combining formulas (3) and (4) is given below.
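Purely as an illustrative sketch, the two-branch adjustment could look as follows in Python. The light_source value, the shininess default, the clamping, and the zero-denominator guard are assumptions not fixed by the embodiments above.

```python
import numpy as np

def adjust_initial_target_color(initial, first, second, third,
                                shininess=8.0, light_source=1.0):
    """Adjust one pixel's initial target color using its first, second,
    and third brightness values (all scalars in [0, 1])."""
    if first < third:
        # Formula (3); first < third <= 1 keeps the denominator positive.
        coeff = (third - first) / (1.0 - first)
    else:
        # Formula (4); the zero guard and the clamp are assumptions for
        # the degenerate case second == third.
        denom = second - third
        ratio = (first - third) / denom if denom != 0 else 0.0
        coeff = np.clip(ratio, 0.0, 1.0) ** shininess
    # Shift the color toward the preset light source in proportion to
    # the coefficient (cf. the application example further below).
    return initial + coeff * light_source
```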
  • step S13 may include:
  • according to the preset fusion strength, respectively determine the first fusion ratio of the original color and the second fusion ratio of the target color;
  • according to the first fusion ratio and the second fusion ratio, fuse the original color and the target color to obtain a fused face image.
  • the preset fusion strength is used to indicate the respective fusion ratio or weight of the original color and the target color in the fusion process, and its value can be flexibly set according to the actual situation.
  • In a possible implementation, the fusion weights of the original color and the target color can be preset as the preset fusion strength; in another possible implementation, the beauty operation for the face image may also include selecting the fusion strength, in which case the fusion strength selected in the beauty operation can be used as the preset fusion strength.
  • The first fusion ratio may be the fusion ratio of the original color in the fusion process, and the second fusion ratio may be the fusion ratio of the target color in the fusion process.
  • the preset fusion strength may be a percentage value less than 1.
  • In an example, the preset fusion strength may be used as the second fusion ratio of the target color, and the difference between 1 and the preset fusion strength may be used as the first fusion ratio of the original color; the fusion is then realized according to the first fusion ratio and the second fusion ratio, and the fusion process can be expressed by the following formula (5):
  • Color = srcColor × (1 − strength) + lutColor × strength    (5)
  • where Color is the pixel value in the fused face image after fusion, srcColor is the pixel value of the original color, lutColor is the pixel value of the target color, and strength is the preset fusion strength.
  • In this way, according to the preset fusion strength, the first fusion ratio of the original color and the second fusion ratio of the target color are respectively determined, and the original color and the target color are fused according to the corresponding fusion ratios to obtain the fused face image.
  • The preset fusion strength can also be flexibly set according to actual needs, so that a fused face image whose fusion strength and effect meet the requirements is obtained, improving the flexibility of image processing; a minimal sketch of the fusion step follows.
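A minimal sketch of formula (5), assuming float RGB arrays in [0, 1]; the clamping of strength is an added safeguard, not something the embodiments prescribe.

```python
import numpy as np

def fuse(src_color, lut_color, strength=0.5):
    """Fuse original and target colors per formula (5): strength is the
    ratio of the looked-up target color, 1 - strength that of the
    original color."""
    strength = float(np.clip(strength, 0.0, 1.0))
    return src_color * (1.0 - strength) + lut_color * strength
```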
  • FIGS. 8 to 11 show schematic diagrams of fused face images according to an embodiment of the present disclosure (as in the above disclosed embodiments, part of the face in each figure has been mosaicked in order to protect the subject). Figures 8 and 9 are the fused face images obtained by performing foundation processing operations on Figure 7 with different selected colors; since the images have been converted to grayscale, the color difference between the two may not be obvious. Figure 10 is the fused face image obtained under natural lip makeup processing, and Figure 11 is the fused face image obtained under metallic light effect lip makeup processing. As can be seen from these images, the image processing methods proposed in the above disclosed embodiments yield more realistic and natural fused face images with a better fusion effect.
  • FIG. 12 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • the image processing apparatus 20 may include:
  • the original color extraction module 21 is configured to extract the original color of at least one pixel in the target object of the human face image in response to the cosmetic operation for the target object of the human face image.
  • the target color determination module 22 is configured to determine the target color of at least one pixel in the target object according to the color selected in the cosmetic operation and the original color of at least one pixel in the target object.
  • the fusion module 23 is used to fuse the original color of at least one pixel in the target object with the target color to obtain a fused face image.
  • the beauty operation includes a foundation processing operation
  • the target object includes a target face area on which the foundation processing operation is to be performed;
  • the original color extraction module is used to: determine the position of the target face area in the face image based on the pixel color of the preset face material; and extract, according to the position of the target face area in the face image, the original color of at least one pixel in the target face area of the face image.
  • In a possible implementation, the original color extraction module is further configured to: fuse the preset face material and the preset face image to obtain a standard material image, wherein the pixel color of the target face area in the standard material image matches the preset face material; obtain the position mapping relationship between the standard material image and the face image; and determine the position of the target face area in the face image based on the pixel colors of the standard material image and the position mapping relationship.
  • In a possible implementation, the original color extraction module is further used to: perform key point recognition on the preset face image or the standard material image to obtain a first key point recognition result; perform key point recognition on the face image to obtain a second key point recognition result; and determine the position mapping relationship between the standard material image and the face image according to the position correspondence between the same key points in the first and second key point recognition results.
  • In a possible implementation, the original color extraction module is further used to: determine the position of the target face area in the standard material image based on the pixel colors of the standard material image; and map the position of the target face area in the standard material image to the face image to determine the position of the target face area in the face image.
  • In a possible implementation, the makeup operation includes a beautification operation on a target part of the face; the target object includes the target part to be beautified; and the original color extraction module is used to: obtain the target material corresponding to the target part; and extract, according to the transparency of at least one pixel in the target material, the original color of at least one pixel in the target part of the face image.
  • In a possible implementation, the device is further used to: recognize the target part in the face image to obtain the initial position of the target part in the face image; and the original color extraction module is further used to: obtain the original target material corresponding to the target part; fuse the original target material with the target part in the preset face image to obtain a standard material image; and extract the target material from the standard material image based on the initial position.
  • In a possible implementation, the target color determination module is used to: according to the color selected in the beauty makeup operation, perform a corresponding color search on the original color of at least one pixel in the target object of the face image to obtain the target color of the at least one pixel in the target object.
  • In a possible implementation, the target color determination module is further configured to: obtain a color lookup table corresponding to the selected color according to the color selected in the beauty makeup operation, wherein the output colors in the color lookup table are in gradient form; and search the color lookup table for the output color corresponding to the original color of at least one pixel in the target object of the face image, as the target color of the at least one pixel in the target object.
  • In a possible implementation, the target object includes a target part, and the target color of at least one pixel of the target object obtained through the color search is an initial target color; the target color determination module is further configured to: determine the target color of at least one pixel in the target part according to the initial target color of the at least one pixel in the target part.
  • In a possible implementation, the target color determination module is further configured to: in the case that the processing type corresponding to the beauty operation includes natural processing, use the initial target color of at least one pixel in the target part as the target color of the at least one pixel in the target part.
  • In a possible implementation, the target color determination module is further configured to: for at least one pixel in the target part, obtain the noise value corresponding to the pixel respectively; when the noise value falls within the preset noise range, adjust the initial target color of the pixel according to the noise value and the corresponding transparency of the pixel in the target material to obtain the target color of the pixel; or, when the noise value falls outside the preset noise range, adjust the initial target color of the pixel according to the brightness information of the pixel to obtain the target color of the pixel.
  • In a possible implementation, the target color determination module is further configured to: obtain a preset noise texture; and, according to the position of at least one pixel in the target part, sample at the corresponding position of the preset noise texture to obtain the noise value corresponding to the pixel.
  • the brightness information includes a first brightness, a second brightness, and a third brightness
  • the target color determination module is further configured to: determine the first brightness of the pixel according to the original color of the pixel; determine, within the preset processing range of the pixel in the target part, the second brightness of the pixel with the target brightness in the preset processing range; filter the pixel through the preset convolution kernel, and determine the third brightness of the pixel according to the intermediate color obtained by the filtering, wherein the filtering range of the preset convolution kernel is consistent with the preset processing range; and adjust the initial target color of the pixel according to the first brightness, the second brightness, and the third brightness to obtain the target color of the pixel.
  • In a possible implementation, the target color determination module is further configured to: when the first brightness is less than the third brightness, adjust the initial target color of the pixel according to the first brightness and the third brightness to obtain the target color of the pixel; and when the first brightness is greater than the third brightness, adjust the initial target color of the pixel according to the first brightness, the second brightness, the third brightness, and the preset brightness radius to obtain the target color of the pixel.
  • In a possible implementation, the fusion module is used to: determine the first fusion ratio of the original color and the second fusion ratio of the target color according to the preset fusion strength; and fuse the original color and the target color according to the first fusion ratio and the second fusion ratio to obtain the fused face image.
  • In some embodiments, the functions or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments; for their specific implementation and technical effects, reference may be made to the description of the above method embodiments, which, for brevity, is not repeated here.
  • FIG. 13 shows a schematic diagram of an application example according to the present disclosure.
  • the application example of the present disclosure proposes an image processing method to obtain a more realistic and natural image of foundation makeup, including the following processes:
  • Step S31, collect a face image in real time to acquire an image of the face on which makeup is to be tried.
  • Step S32, determine the target face area in the face image that needs foundation processing, including:
  • Step S321, obtain a standard preset face image and a preset face material (mask) corresponding to the preset face image, wherein the mask indicates the target face area in the preset face image that needs foundation treatment, and the color of that target face area in the mask is set to a preset value, such as the color value of red;
  • Step S322, pre-identify face key points (106 or 240 face key points) in the preset face image, interpolate additional key points based on the face key points, and connect the key points to construct a triangular mesh;
  • Step S323 identify the key points on the face image, and use the same method to perform interpolation processing and build a triangular mesh on the face image;
  • Step S324, fuse the mask with the preset face image to obtain a standard material image; the pixel values of the pixels of the target face area that needs foundation processing in the standard material image are then consistent with the preset value set for those pixels in the mask;
  • Step S325, establish a mapping relationship between the standard material image and the face image according to the key points (or the interpolated key points) in the preset face image (or the standard material image) and the key points (or the interpolated key points) in the face image. Because the pixels belonging to the target face area in the standard material image can be determined according to the pixel colors in the standard material image, the pixels belonging to the target face area in the face image can also be determined according to the mapping relationship (a sketch of such a mapping over the triangular mesh is given below).
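One common way to realize this key-point-based position mapping is barycentric interpolation over corresponding triangles of the two meshes; the application example does not spell the mapping out, so the following Python sketch is an assumption-laden illustration and all names are hypothetical.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def map_point(p, tri_material, tri_face):
    """Map point p from the standard material image to the face image
    using the barycentric coordinates of its enclosing mesh triangle;
    tri_material and tri_face are corresponding triples of 2D vertices."""
    u = barycentric(p, *tri_material)
    return u[0] * tri_face[0] + u[1] * tri_face[1] + u[2] * tri_face[2]
```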
  • Step S33, obtain the color lookup table corresponding to the selected foundation color, pre-made in image processing software (such as Photoshop); different foundation models can correspond to different foundation effects, so corresponding color lookup tables can be customized for different foundation colors or color numbers.
  • Step S34, extract the original color of the pixels in the target face area of the face image, and use the original color to search the color lookup table for the corresponding target color (a lookup sketch is given below).
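The shape of the lookup table is not fixed by the application example; assuming a simple per-channel 256-entry table (real pipelines often export a 3D LUT, e.g. 64×64×64, from the image-editing software), the lookup could be sketched as:

```python
import numpy as np

def apply_color_lut(original, lut):
    """Look up the target color for each pixel of the target region.

    original - uint8 RGB array of original colors, shape (..., 3)
    lut      - assumed (256, 3) table mapping each 8-bit channel value
               of R, G, B to its output value
    """
    # result[..., c] = lut[original[..., c], c] for c in {R, G, B}
    return lut[original, np.arange(3)]
```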
  • Step S35, fuse the found target color and the original color according to the preset fusion strength (such as the effect strength given by the user) to obtain the fused face image, wherein the fusion process can refer to formula (1) in the above disclosed embodiments.
  • In this way, the target face area in the face image can be determined based on the pixel color of the preset face material, and the target color of each pixel can be determined by a corresponding lookup according to the original color of at least one pixel of the target face area in the face image, so as to obtain a fused face image that fuses the original color and the target color.
  • The fused target face region in the fused face image is accurately located, with high rendering accuracy and precision, and the fused color transitions naturally with a color gradient, giving high authenticity and a better cosmetic effect.
  • FIG. 14 shows a schematic diagram according to an application example of the present disclosure.
  • the application example of the present disclosure proposes an image processing method to obtain a more realistic and natural lip makeup processed image, including the following processes:
  • Step S41, in response to the lip makeup operation on the lips of the face image, place the original lip makeup material (the lip makeup mask in FIG. 5) at the position of the lips in the preset face image shown in FIG. 3 to obtain the standard material image;
  • Step S42, in the face image, determine the face key points through key point recognition, and use the face key points together with some points interpolated from them to construct the triangular mesh of the face area in the face image as shown in Figure 4;
  • Step S43, through the triangular mesh corresponding to the face key points, determine the position coordinates of the lips in the face image, and sample the standard material image to obtain the target material;
  • Step S44 according to the target material, determine the image area where the lips are located in the face image, and obtain an image of the lips in the face image;
  • Step S45, extract the original colors of a plurality of pixels in the image of the lip part, and use each original color to look up the corresponding initial target color in the color lookup table shown in Figure 6;
  • Step S46, perform Gaussian filtering on the image of the lip part through a preset convolution kernel to obtain the intermediate color of each pixel and the third brightness corresponding to the intermediate color, and determine the second brightness from the pixel with the highest brightness within the filtering range of the convolution kernel in the lip part;
  • Step S47, in the case of performing natural light effect processing on the lips in the face image, the initial target color of each pixel can be directly used as the target color, and the target color and the original color are fused according to the preset fusion strength given by the user to obtain the fused face image shown in Figure 10;
  • Step S48, in the case of performing metal light effect processing on the lips in the face image, the target color can be determined through the following process, and the target color and the original color are fused according to the preset fusion strength given by the user to obtain the fused face image shown in Figure 11.
  • The process of determining the target color may be: sample the noise texture through texture coordinates to obtain the random noise value corresponding to each pixel in the image of the lip part;
  • the preset noise range can consist of different sub-intervals between 0 and 1, such as 0.98 to 1.0 or 0.78 to 0.8;
  • if the random noise value of a pixel falls within the preset noise range, the following method A is used to determine the adjustment coefficient of the pixel; otherwise, the following method B is used:
  • Method A: the adjustment coefficient is first set equal to the noise value, and is then updated as:
  • Adjustment coefficient = adjustment coefficient × pow(transparency of the target material, 4.0).
  • Method B: the adjustment coefficient is calculated from the first brightness of the pixel (the brightness value corresponding to the color value of the pixel), the second brightness, the third brightness, and the preset brightness radius (which determines the radius of the highlight point):
  • Adjustment coefficient = pow((first brightness − third brightness) / (second brightness − third brightness), shininess), where shininess is the preset brightness radius above.
  • Finally, the initial target color can be adjusted according to the obtained adjustment coefficient and the preset light source value to obtain the target color (a sketch combining methods A and B follows):
  • Target color = initial target color + adjustment coefficient × light source value.
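As a rough illustration of steps S46 to S48 for the metal light effect, the per-pixel logic might be sketched as below; the noise sub-intervals, shininess, light_source, and the zero-denominator guard are assumed values, not prescribed by the application example.

```python
def metal_light_target_color(initial, noise, alpha, first, second, third,
                             noise_ranges=((0.78, 0.80), (0.98, 1.00)),
                             shininess=8.0, light_source=1.0):
    """Determine one pixel's target color for the metal light effect.

    initial - initial target color value from the color lookup table
    noise   - random noise value sampled from the noise texture
    alpha   - transparency of the target material at this pixel
    first/second/third - the pixel's three brightness values
    """
    if any(lo <= noise <= hi for lo, hi in noise_ranges):
        # Method A: sparse glitter driven by the noise value.
        coeff = noise * alpha ** 4.0
    else:
        # Method B: smooth highlight per formula (4).
        denom = second - third
        ratio = (first - third) / denom if denom != 0 else 0.0
        coeff = max(ratio, 0.0) ** shininess
    return initial + coeff * light_source
```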
  • In this way, the target color of each pixel can be determined correspondingly according to the original color of at least one pixel of the target part in the face image, so as to obtain a fused face image that fuses the original color and the target color.
  • The fused color in the fused face image transitions naturally with a color gradient, giving higher authenticity and a better cosmetic effect.
  • the image processing methods proposed in the above disclosed application examples can be extended to other beauty operations, such as blush or eye shadow, in addition to the foundation processing operation and/or lip makeup operation on the face image.
  • the image processing method proposed in the application example of the present disclosure can be flexibly expanded and modified accordingly.
  • It can be understood that the writing order of the steps does not imply a strict execution order, nor does it constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
  • the computer-readable storage medium may be a volatile computer-readable storage medium or a non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure also provide a computer program product, including computer-readable codes.
  • When the computer-readable codes run on a device, a processor in the device executes instructions for implementing the image processing method provided by any of the above embodiments.
  • Embodiments of the present disclosure further provide another computer program product for storing computer-readable instructions, which, when executed, cause the computer to perform the operations of the image processing method provided by any of the foregoing embodiments.
  • An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to perform the above method.
  • In practical applications, the above-mentioned memory can be a volatile memory, such as RAM; or a non-volatile memory, such as ROM, flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memory, and it provides instructions and data to the processor.
  • The above-mentioned processor may be at least one of an ASIC, DSP, DSPD, PLD, FPGA, CPU, controller, microcontroller, or microprocessor. It can be understood that, for different devices, the electronic component used to implement the functions of the processor may vary, which is not specifically limited in the embodiments of the present disclosure.
  • the electronic device may be provided as a terminal, server or other form of device.
  • an embodiment of the present disclosure further provides a computer program, which implements the above method when the computer program is executed by a processor.
  • FIG. 15 is a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • the electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922, which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922, such as applications.
  • An application program stored in memory 1932 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • The electronic device 1900 may also include a power supply assembly 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
  • The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • a non-volatile computer-readable storage medium such as memory 1932 comprising computer program instructions executable by processing component 1922 of electronic device 1900 to perform the above-described method.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in the respective computing/processing device.
  • Computer program instructions for carrying out operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium on which the instructions are stored comprises an article of manufacture including instructions for implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed thereon to produce a computer-implemented process, so that the instructions executed on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks therein, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: in response to a beautification operation for a target object in a face image, extracting the original color of at least one pixel point in the target object in the face image; determining a target color of the at least one pixel point in the target object according to a color selected in the beautification operation and the original color of the at least one pixel point in the target object; and fusing the original color and the target color of the at least one pixel point in the target object to obtain a fused face image.
PCT/CN2021/133045 2021-02-23 2021-11-25 Image processing method and apparatus, electronic device, and storage medium WO2022179215A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110203312.2A CN112801916A (zh) 2021-02-23 2021-02-23 Image processing method and apparatus, electronic device, and storage medium
CN202110203312.2 2021-02-23
CN202110571420.5 2021-05-25
CN202110571420.5A CN113160094A (zh) 2021-02-23 2021-05-25 Image processing method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022179215A1 true WO2022179215A1 (fr) 2022-09-01

Family

ID=75815416

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/133045 WO2022179215A1 (fr) 2021-02-23 2021-11-25 Image processing method and apparatus, electronic device, and storage medium

Country Status (3)

Country Link
CN (2) CN112801916A (fr)
TW (1) TW202234341A (fr)
WO (1) WO2022179215A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115348709A (zh) * 2022-10-18 2022-11-15 良业科技集团股份有限公司 Smart cloud-service lighting display method and system suitable for cultural tourism
CN116503933A (zh) * 2023-05-24 2023-07-28 北京万里红科技有限公司 Periocular feature extraction method and apparatus, electronic device, and storage medium

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801916A (zh) * 2021-02-23 2021-05-14 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113240760B (zh) * 2021-06-29 2023-11-24 北京市商汤科技开发有限公司 Image processing method and apparatus, computer device, and storage medium
CN113436284A (zh) * 2021-07-30 2021-09-24 上海商汤智能科技有限公司 Image processing method and apparatus, computer device, and storage medium
CN113570581A (zh) * 2021-07-30 2021-10-29 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113781359B (zh) * 2021-09-27 2024-06-11 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113763286A (zh) * 2021-09-27 2021-12-07 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113762212B (zh) * 2021-09-27 2024-06-11 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113763287A (zh) * 2021-09-27 2021-12-07 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN114723600A (zh) * 2022-03-11 2022-07-08 北京字跳网络技术有限公司 Method, apparatus, device, storage medium, and program product for generating beauty makeup special effects
CN117078685B (zh) * 2023-10-17 2024-02-27 太和康美(北京)中医研究院有限公司 Cosmetic efficacy evaluation method, apparatus, device, and medium based on image analysis

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191410A (zh) * 2018-08-06 2019-01-11 腾讯科技(深圳)有限公司 Face image fusion method, apparatus, and storage medium
CN109859098A (zh) * 2019-01-15 2019-06-07 深圳市云之梦科技有限公司 Face image fusion method and apparatus, computer device, and readable storage medium
US20200082607A1 (en) * 2018-09-11 2020-03-12 Apple Inc. Techniques for providing virtual lighting adjustments utilizing regression analysis and functional lightmaps
CN111047511A (zh) * 2019-12-31 2020-04-21 维沃移动通信有限公司 Image processing method and electronic device
CN111784568A (zh) * 2020-07-06 2020-10-16 北京字节跳动网络技术有限公司 Face image processing method and apparatus, electronic device, and computer-readable medium
CN112766234A (zh) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112767285A (zh) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112801916A (zh) * 2021-02-23 2021-05-14 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191410A (zh) * 2018-08-06 2019-01-11 腾讯科技(深圳)有限公司 Face image fusion method, apparatus, and storage medium
US20200082607A1 (en) * 2018-09-11 2020-03-12 Apple Inc. Techniques for providing virtual lighting adjustments utilizing regression analysis and functional lightmaps
CN109859098A (zh) * 2019-01-15 2019-06-07 深圳市云之梦科技有限公司 Face image fusion method and apparatus, computer device, and readable storage medium
CN111047511A (zh) * 2019-12-31 2020-04-21 维沃移动通信有限公司 Image processing method and electronic device
CN111784568A (zh) * 2020-07-06 2020-10-16 北京字节跳动网络技术有限公司 Face image processing method and apparatus, electronic device, and computer-readable medium
CN112766234A (zh) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112767285A (zh) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112801916A (zh) * 2021-02-23 2021-05-14 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113160094A (zh) * 2021-02-23 2021-07-23 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115348709A (zh) * 2022-10-18 2022-11-15 良业科技集团股份有限公司 Smart cloud-service lighting display method and system suitable for cultural tourism
CN116503933A (zh) * 2023-05-24 2023-07-28 北京万里红科技有限公司 Periocular feature extraction method and apparatus, electronic device, and storage medium
CN116503933B (zh) * 2023-05-24 2023-12-12 北京万里红科技有限公司 Periocular feature extraction method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN112801916A (zh) 2021-05-14
CN113160094A (zh) 2021-07-23
TW202234341A (zh) 2022-09-01

Similar Documents

Publication Publication Date Title
WO2022179215A1 (fr) 2022-09-01 Image processing method and apparatus, electronic device, and storage medium
WO2022179026A1 (fr) 2022-09-01 Image processing method and apparatus, electronic device, and storage medium
CN108229278B (zh) 2020-07-17 Face image processing method and apparatus, and electronic device
JP4862955B1 (ja) 2012-01-25 Image processing device, image processing method, and control program
JP4760999B1 (ja) 2011-08-31 Image processing device, image processing method, and control program
CN108229279B (zh) 2020-06-02 Face image processing method and apparatus, and electronic device
CN106056064B (zh) 2019-10-08 Face recognition method and face recognition apparatus
CN109829930A (zh) 2019-05-31 Face image processing method and apparatus, computer device, and readable storage medium
KR20200014842A (ko) 2020-02-11 Image lighting method and apparatus, electronic device, and storage medium
US10740959B2 (en) 2020-08-11 Techniques for providing virtual light adjustments to image data
CN108463823A (zh) 2018-08-28 Reconstruction method, apparatus, and terminal for a user hair model
JP2024500896A (ja) 2024-01-10 Methods and systems for generating a 3D head deformation model
CN109919030A (zh) 2019-06-21 Dark circle type recognition method and apparatus, computer device, and storage medium
JP2005276182A (ja) 2005-10-06 Method and apparatus for creating mask data of human skin and lip regions
CN112308944A (zh) 2021-02-02 Augmented reality display method for simulated lip makeup
Mould et al. 2018 Developing and applying a benchmark for evaluating image stylization
JP2021144582A (ja) 2021-09-24 Makeup simulation device, makeup simulation method, and program
KR20210032489A (ko) 2021-03-24 Method for simulating the rendering of a makeup product on a body area
JP2024503794A (ja) 2024-01-29 Method, system, and computer program for extracting color from a two-dimensional (2D) face image
CN107808372B (zh) 2020-11-27 Image crossing processing method and apparatus, computing device, and computer storage medium
KR20230110787A (ko) 2023-07-25 Methods and systems for forming personalized 3D head and face models
US10810775B2 (en) 2020-10-20 Automatically selecting and superimposing images for aesthetically pleasing photo creations
US20220028149A1 (en) 2022-01-27 System and method for automatically generating an avatar with pronounced features
US20200126314A1 (en) 2020-04-23 Method and system of automated facial morphing for eyebrow hair and face color detection
CN109447931A (zh) 2019-03-08 Image processing method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21927639

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21927639

Country of ref document: EP

Kind code of ref document: A1