CN112801916A - Image processing method and device, electronic equipment and storage medium - Google Patents
Image processing method and device, electronic equipment and storage medium
- Publication number: CN112801916A
- Application number: CN202110203312.2A
- Authority: CN (China)
- Prior art keywords: color, target, pixel point, brightness, face image
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/77—Retouching; Inpainting; Scratch removal
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
- G06T7/90—Determination of colour characteristics
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/56—Extraction of image or video features relating to colour
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
- G06T2207/10024—Color image
- G06T2207/20221—Image fusion; Image merging
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Processing (AREA)
Abstract
The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: in response to a makeup operation on a target part of a face image, extracting the original color of at least one pixel point in the target part of the face image; determining a target color for the at least one pixel point in the target part according to the color selected in the makeup operation and the original color of the at least one pixel point in the target part; and fusing the original color and the target color of the at least one pixel point in the target part to obtain a fused face image.
Description
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Lip makeup refers to cosmetic treatment of the lips; lips to which lip makeup has been applied generally appear evenly colored and glossy. With the development of computer vision technology, processing images that include lips to obtain lip-makeup-processed images has become increasingly common in daily life. However, how to give the lip-makeup-processed image a real and natural effect remains a problem to be solved.
Disclosure of Invention
The present disclosure proposes an image processing scheme.
According to an aspect of the present disclosure, there is provided an image processing method including:
in response to a makeup operation on a target part of a face image, extracting the original color of at least one pixel point in the target part of the face image; determining a target color for the at least one pixel point in the target part according to the color selected in the makeup operation and the original color of the at least one pixel point in the target part; and fusing the original color and the target color of the at least one pixel point in the target part to obtain a fused face image.
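For orientation, the three steps can be sketched in Python. This is a minimal illustration under stated assumptions, not the claimed implementation; `lookup_target_color` is a hypothetical placeholder for the color lookup detailed in the implementations below, and the mask-based region selection is likewise an assumption.

```python
import numpy as np

def lookup_target_color(original_colors: np.ndarray, selected_color) -> np.ndarray:
    # Hypothetical placeholder for the per-pixel color lookup (step 2);
    # a naive blend toward the selected color, for illustration only.
    sel = np.asarray(selected_color, dtype=np.float32)
    return 0.5 * original_colors + 0.5 * sel

def apply_makeup(face_img: np.ndarray, target_mask: np.ndarray,
                 selected_color, intensity: float = 0.5) -> np.ndarray:
    # Hedged end-to-end sketch of the three claimed steps.
    region = target_mask > 0
    original = face_img[region].astype(np.float32)          # step 1: extract original colors
    target = lookup_target_color(original, selected_color)  # step 2: determine target colors
    fused = face_img.astype(np.float32)
    fused[region] = (1 - intensity) * original + intensity * target  # step 3: fuse
    return np.clip(fused, 0, 255).astype(np.uint8)
```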
In a possible implementation manner, the determining a target color of at least one pixel point in the target portion according to the color selected in the makeup operation and the original color of at least one pixel point in the target portion includes: according to the selected color in the makeup operation, performing corresponding color search on the original color of at least one pixel point in the target part to obtain the initial target color of at least one pixel point in the target part; and determining the target color of at least one pixel point in the target part according to the initial target color of at least one pixel point in the target part.
In a possible implementation manner, the determining a target color of at least one pixel point in the target portion according to an initial target color of at least one pixel point in the target portion includes: taking the initial target color of the at least one pixel point in the target portion as the target color of the at least one pixel point in the target portion in the case that the processing type corresponding to the makeup operation includes natural processing; or, in the case that the processing type corresponding to the makeup operation includes metal light effect processing, adjusting the initial target color of the at least one pixel point in the target portion based on a randomly acquired noise value to obtain the target color of the at least one pixel point in the target portion.
In a possible implementation manner, the adjusting an initial target color of at least one pixel point in the target portion based on the randomly obtained noise value to obtain a target color of at least one pixel point in the target portion includes: for at least one pixel point in the target portion, respectively acquiring a noise value corresponding to the pixel point; in the case that the noise value is within a preset noise range, adjusting the initial target color of the pixel point according to the noise value and the corresponding transparency of the pixel point in a target material to obtain the target color of the pixel point; or, in the case that the noise value is outside the preset noise range, adjusting the initial target color of the pixel point according to the brightness information of the pixel point to obtain the target color of the pixel point.
In a possible implementation manner, the obtaining, for at least one pixel point in the target portion, a noise value corresponding to the pixel point respectively includes: acquiring a preset noise texture; and sampling at the corresponding position of the preset noise texture according to the position of the at least one pixel point in the target part to obtain a noise value corresponding to the pixel point.
In one possible implementation, the brightness information includes a first brightness, a second brightness, and a third brightness; the adjusting the initial target color of the pixel point according to the brightness information of the pixel point to obtain the target color of the pixel point comprises: determining the first brightness of the pixel point according to the original color of the pixel point; determining the second brightness as a target brightness within a preset processing range around the pixel point in the target portion; filtering the pixel point through a preset convolution kernel, and determining the third brightness of the pixel point according to the intermediate color obtained by the filtering, wherein the filtering range of the preset convolution kernel is consistent with the preset processing range; and adjusting the initial target color of the pixel point according to the first brightness, the second brightness, and the third brightness to obtain the target color of the pixel point.
In a possible implementation manner, the adjusting the initial target color of the pixel point according to the first brightness, the second brightness, and the third brightness to obtain the target color of the pixel point includes: under the condition that the first brightness is smaller than the third brightness, adjusting the initial target color of the pixel point according to the first brightness and the third brightness to obtain the target color of the pixel point; and under the condition that the first brightness is greater than the third brightness, adjusting the initial target color of the pixel point according to the first brightness, the second brightness, the third brightness and a preset brightness radius to obtain the target color of the pixel point.
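The excerpt does not give the adjustment formulas, but a hedged sketch can show how the three brightness values might interact, assuming the first brightness is a luma of the original color, the second is the maximum brightness within the preset processing range, and the third is the luma after box filtering; the two blending rules below are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def metal_highlight_adjust(original: np.ndarray, initial_target: np.ndarray,
                           radius: int = 4, luma_radius: float = 0.1) -> np.ndarray:
    # Hedged sketch; `original` and `initial_target` are (H, W, 3) floats in [0, 1].
    luma = original @ np.array([0.299, 0.587, 0.114])   # first brightness (assumed luma)
    k = 2 * radius + 1                                  # preset processing range
    second = maximum_filter(luma, size=k)               # second brightness (assumed max in range)
    third = uniform_filter(luma, size=k)                # third brightness (box-filtered color)
    out = initial_target.copy()
    darker = luma < third                               # first < third: tone down (assumed rule)
    out[darker] *= (luma[darker] / np.maximum(third[darker], 1e-6))[..., None]
    gain = np.clip((luma - third) / np.maximum(second - third + luma_radius, 1e-6), 0.0, 1.0)
    brighter = ~darker                                  # first > third: push toward highlight (assumed rule)
    out[brighter] += gain[brighter][..., None] * (1.0 - out[brighter])
    return np.clip(out, 0.0, 1.0)
```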
In a possible implementation manner, the performing, according to the color selected in the makeup operation, a corresponding color lookup on an original color of at least one pixel point in the target portion to obtain an initial target color of at least one pixel point in the target portion includes: acquiring a color lookup table corresponding to the selected color according to the selected color in the makeup operation, wherein the output colors in the color lookup table are arranged in a gradient form; and respectively searching an output color corresponding to the original color of at least one pixel point in the target part in the color lookup table to serve as the initial target color of at least one pixel point in the target part.
In a possible implementation manner, the fusing the original color and the target color of at least one pixel point in the target portion to obtain a fused face image includes: respectively determining a first fusion proportion of the original color and a second fusion proportion of the target color according to preset fusion intensity; and fusing the original color and the target color according to the first fusion proportion and the second fusion proportion to obtain a fused face image.
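A minimal sketch of this fusion step, assuming the two proportions are derived linearly from a preset fusion strength in [0, 1] (the text only states that both proportions follow from the preset fusion intensity):

```python
import numpy as np

def fuse_colors(original: np.ndarray, target: np.ndarray, strength: float = 0.6) -> np.ndarray:
    # Hedged sketch: a linear split is an assumption.
    w_target = strength            # second fusion proportion (target color)
    w_original = 1.0 - strength    # first fusion proportion (original color)
    return w_original * original + w_target * target
```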
In one possible implementation manner, the extracting, in response to a makeup operation for a target portion of a face image, an original color of at least one pixel point in the target portion of the face image includes: acquiring a target material corresponding to the target portion; and extracting the original color of at least one pixel point in the target portion of the face image according to the transparency of at least one pixel point in the target material.
In one possible implementation, the method further includes: identifying a target part in the face image to obtain an initial position of the target part in the face image; the acquiring of the target material corresponding to the target part includes: acquiring an original target material corresponding to the target part according to the target part; fusing the original target material with a target part in a preset face image to obtain a standard material image; and extracting the standard material image based on the initial position to obtain a target material.
In one possible implementation, the makeup operation includes a lip makeup operation, and the target part includes a lip part.
According to an aspect of the present disclosure, there is provided an image processing apparatus including:
an original color extraction module, configured to extract, in response to a makeup operation on a target part of a face image, the original color of at least one pixel point in the target part of the face image; a target color determination module, configured to determine a target color of at least one pixel point in the target part according to the color selected in the makeup operation and the original color of the at least one pixel point in the target part; and a fusion module, configured to fuse the original color and the target color of the at least one pixel point in the target part to obtain a fused face image.
In one possible implementation, the target color determination module is configured to: according to the selected color in the makeup operation, performing corresponding color search on the original color of at least one pixel point in the target part to obtain the initial target color of at least one pixel point in the target part; and determining the target color of at least one pixel point in the target part according to the initial target color of at least one pixel point in the target part.
In one possible implementation, the target color determination module is further configured to: taking the initial target color of the at least one pixel point in the target portion as the target color of the at least one pixel point in the target portion in the case that the processing type corresponding to the makeup operation includes natural processing; or, in the case that the processing type corresponding to the makeup operation includes metal light effect processing, adjusting the initial target color of the at least one pixel point in the target portion based on a randomly acquired noise value to obtain the target color of the at least one pixel point in the target portion.
In one possible implementation, the target color determination module is further configured to: for at least one pixel point in the target portion, respectively acquiring a noise value corresponding to the pixel point; in the case that the noise value is within a preset noise range, adjusting the initial target color of the pixel point according to the noise value and the corresponding transparency of the pixel point in a target material to obtain the target color of the pixel point; or, in the case that the noise value is outside the preset noise range, adjusting the initial target color of the pixel point according to the brightness information of the pixel point to obtain the target color of the pixel point.
In one possible implementation, the target color determination module is further configured to: acquiring a preset noise texture; and sampling at the corresponding position of the preset noise texture according to the position of the at least one pixel point in the target part to obtain a noise value corresponding to the pixel point.
In one possible implementation, the brightness information includes a first brightness, a second brightness, and a third brightness; the target color determination module is further configured to: determining the first brightness of the pixel point according to the original color of the pixel point; determining the second brightness as a target brightness within a preset processing range around the pixel point in the target portion; filtering the pixel point through a preset convolution kernel, and determining the third brightness of the pixel point according to the intermediate color obtained by the filtering, wherein the filtering range of the preset convolution kernel is consistent with the preset processing range; and adjusting the initial target color of the pixel point according to the first brightness, the second brightness, and the third brightness to obtain the target color of the pixel point.
In one possible implementation, the target color determination module is further configured to: under the condition that the first brightness is smaller than the third brightness, adjusting the initial target color of the pixel point according to the first brightness and the third brightness to obtain the target color of the pixel point; and under the condition that the first brightness is greater than the third brightness, adjusting the initial target color of the pixel point according to the first brightness, the second brightness, the third brightness and a preset brightness radius to obtain the target color of the pixel point.
In one possible implementation, the target color determination module is further configured to: acquiring a color lookup table corresponding to the selected color according to the selected color in the makeup operation, wherein the output colors in the color lookup table are arranged in a gradient form; and respectively searching an output color corresponding to the original color of at least one pixel point in the target part in the color lookup table to serve as the initial target color of at least one pixel point in the target part.
In one possible implementation, the fusion module is configured to: respectively determining a first fusion proportion of the original color and a second fusion proportion of the target color according to preset fusion intensity; and fusing the original color and the target color according to the first fusion proportion and the second fusion proportion to obtain a fused face image.
In one possible implementation, the raw color extraction module is configured to: acquiring a target material corresponding to the target part; and extracting the original color of at least one pixel point in the target part of the face image according to the transparency of at least one pixel point in the target material.
In one possible implementation, the apparatus is further configured to: identifying a target part in the face image to obtain an initial position of the target part in the face image; the raw color extraction module is further to: acquiring an original target material corresponding to the target part according to the target part; fusing the original target material with a target part in a preset face image to obtain a standard material image; and extracting the standard material image based on the initial position to obtain a target material.
In one possible implementation, the makeup operation includes a lip makeup operation, and the target part includes a lip part.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: the above-described image processing method is performed.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described image processing method.
In the embodiment of the present disclosure, in response to a makeup operation on a face image, the original color of at least one pixel point in the target part of the face image is extracted, and the target color of the at least one pixel point in the target part is determined according to the color selected in the makeup operation and the original color, so that the original color and the target color of the at least one pixel point in the target part can be fused to obtain a fused face image. Through this process, the target color obtained for each pixel point corresponds to that pixel point's own original color, so that the color transitions in the fused face image, which blends the original colors and the target colors, are realistic and natural, improving the effect and the authenticity of the fused face image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
FIG. 2 shows a schematic diagram of target material according to an embodiment of the present disclosure.
FIG. 3 shows a schematic diagram of a constructed triangular mesh in accordance with an embodiment of the present disclosure.
Fig. 4 illustrates a schematic diagram of a preset face image according to an embodiment of the present disclosure.
FIG. 5 shows a schematic diagram of a color lookup table according to an embodiment of the present disclosure.
Fig. 6 shows a schematic diagram of fused face images according to an embodiment of the present disclosure.
Fig. 7 shows a schematic diagram of fused face images according to an embodiment of the present disclosure.
Fig. 8 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 9 shows a schematic diagram of an application example according to the present disclosure.
FIG. 10 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
FIG. 11 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. The method may be applied to an image processing apparatus, such as a terminal device, a server, or other processing device, or to an image processing system. The terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In one example, the image processing method can be applied to a cloud server or a local server; the cloud server may be a public cloud server or a private cloud server, and can be flexibly selected according to actual conditions.
In some possible implementations, the image processing method may also be implemented by the processor calling computer readable instructions stored in the memory.
As shown in fig. 1, in one possible implementation, the image processing method may include:
Step S11, in response to a makeup operation on a target part of a face image, extracting the original color of at least one pixel point in the target part of the face image.
The face image may be any image including a face, the face image may include one face or may include a plurality of faces, and the implementation form of the face image may be flexibly determined according to an actual situation, which is not limited in the embodiment of the present disclosure.
For the makeup operation on the face image, the operation content it includes can be flexibly determined according to the actual situation, and is not limited to the following disclosed embodiments. In one possible implementation, the makeup operation may include an operation instructing a makeup process on the face image; in one possible implementation, the makeup operation may further include selecting a color for the makeup; in one possible implementation, the makeup operation may further include an operation indicating a treatment type of the makeup.
The form of the makeup operation may be flexibly determined according to actual conditions, and is not limited to the following embodiments. In one possible implementation, the cosmetic operation may include a lip cosmetic operation.
As the form of the makeup operation varies, the treatment types it includes may also change flexibly. In one possible implementation manner, the makeup operation may include one treatment type; in some possible implementation manners, it may include multiple treatment types at the same time. In one possible implementation, where the makeup operation includes a lip makeup operation, the treatment type may include a natural treatment and/or a metal light effect treatment. The natural treatment may comprise a natural modification of the lip color that preserves the lips' original glossy appearance; the metal light effect treatment may comprise modifying both the lip color and the light effect, so as to obtain a lip makeup effect with a metallic luster.
The target part can be any part of the face image to which makeup is to be applied; which parts it comprises can be flexibly determined according to the actual makeup operation. In one possible implementation mode, in the case that the makeup operation includes a lip makeup operation, the target part may include a lip part.
The original color may be an unprocessed color of the target portion in the face image, and a manner of extracting the original color of at least one pixel point from the target portion of the face image is not limited in the embodiment of the present disclosure, and may be flexibly determined according to an actual situation. In some possible implementation manners, a region where the target portion is located in the face image may be determined, and the color of one or more pixel points included in the region is extracted to obtain the original color of at least one pixel point in the target portion of the face image.
Step S12, determining the target color of at least one pixel point in the target part according to the color selected in the makeup operation and the original color of the at least one pixel point in the target part.
The color selected in the makeup operation may be a color chosen by the user for the makeup when selecting the makeup operation, or a preset color bound to the makeup operation and applied when the operation is selected. The specific color value can be flexibly determined according to the actual situation, which is not limited in the embodiments of the present disclosure.
According to the selected color and the original color of at least one pixel point of the target part in the face image, the target color of the at least one pixel point of the target part can be determined, where the target color is related to the selected color and corresponds to the original color. In some possible implementation manners, the selected color and the original color of the at least one pixel point may be fused to obtain the target color; in some possible implementation manners, a target color corresponding to the original color may be looked up within a certain color range around the selected color, based on the original color of the at least one pixel point. How the target color of at least one pixel point in the target part is obtained from the selected color and the original color can be flexibly determined according to the actual situation; this is described in detail in the following disclosed embodiments and is not expanded here.
Step S13, fusing the original color and the target color of at least one pixel point in the target part to obtain a fused face image.
The original color and the target color of at least one pixel point in the target part are fused; a plurality of pixel points in the target part can each be fused separately. When a pixel point is fused, the target color determined for it based on its original color is fused with its original color to obtain that pixel point's fused color, and the fused face image is thereby obtained.
The fusion mode in step S13 can be changed flexibly according to the actual situation, and is described in detail in the following disclosure embodiments, which are not first developed here.
In the embodiment of the present disclosure, in response to a makeup operation on a face image, the original color of at least one pixel point in the target part of the face image is extracted, and the target color of the at least one pixel point in the target part is determined according to the color selected in the makeup operation and the original color, so that the original color and the target color of the at least one pixel point in the target part can be fused to obtain a fused face image. Through this process, the target color obtained for each pixel point corresponds to that pixel point's own original color, so that the color transitions in the fused face image, which blends the original colors and the target colors, are realistic and natural, improving the effect and the authenticity of the fused face image.
In one possible implementation, step S11 may include:
acquiring a target material corresponding to a target part;
and extracting the original color of at least one pixel point in the target part of the face image according to the transparency of at least one pixel point in the target material.
The target material can be a related material for realizing makeup on the face image, and the realization form of the target material can be flexibly determined according to the actual condition of makeup operation. In one possible implementation, where the cosmetic operation includes a lip cosmetic operation, the target material may be a lip cosmetic material, such as a lip mask (mask) or the like.
In one possible implementation, the target material may be a material selected by the user in the makeup operation; in some possible implementations, the target material may also be a preset material that is automatically invoked when a makeup operation is selected. In some possible implementation manners, the target material may also be a material obtained by processing an original target material based on the face image. How the target material is obtained is described in the following disclosed embodiments and is not expanded here.
After a target material corresponding to the target part is obtained, the original color of at least one pixel point in the target part of the face image can be extracted according to the transparency of at least one pixel point in the target material. The extraction mode can be flexibly determined according to actual conditions. In one possible implementation mode, in the case that the transparency of a pixel point in the target material falls within a preset transparency range, the region corresponding to that pixel point's position in the face image is taken as the image region where the target part is located, and the original color of the pixel point in that region is extracted.
The specific bounds of the preset transparency range can be flexibly determined according to actual conditions. In one possible implementation mode, the preset transparency range may be set to below 100%; that is, when the transparency of a pixel point in the target material is below 100% (not fully transparent), the region corresponding to that pixel point's position in the face image can be taken as the image region where the target part is located, and the original color of the pixel point in that region extracted. In a possible implementation manner, the preset transparency range may also be set below some other transparency value, or within a certain transparency interval.
By extracting the original colors of the corresponding pixel points of the face image when the transparency of the pixel points in the target material falls within the preset transparency range, and by choosing the bounds of that range, the image region containing the target part can be determined more precisely. More accurate original colors of the target part are thus extracted from the face image, improving the reliability and authenticity of the subsequently obtained fused face image.
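A minimal sketch of this transparency-based extraction, assuming an RGBA material aligned with the face image and a preset range of "below 100% transparency":

```python
import numpy as np

def extract_original_colors(face_img: np.ndarray, material_rgba: np.ndarray):
    # Hedged sketch: `material_rgba` is assumed to be aligned with
    # `face_img`, with channels in [0, 1]; the range bound is an assumption.
    transparency = 1.0 - material_rgba[..., 3]
    region = transparency < 1.0              # not fully transparent
    coords = np.argwhere(region)             # positions of target-part pixels
    original_colors = face_img[region]       # (N, 3) original colors
    return coords, original_colors
```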
Fig. 2 is a schematic diagram of a target material according to an embodiment of the present disclosure, and as can be seen from the diagram, in an example, the target material may be a lip mask, and transparency of different pixel points in the lip mask is different, so that a natural and real lip shape can be better represented, and therefore, an original color in a face image extracted based on the lip mask is more accurate and reliable.
By acquiring a target material corresponding to the target part and extracting the original color of at least one pixel point in the target part of the face image according to the transparency of at least one pixel point in the target material, a more real and reliable original color, corresponding to the actual position of the lips in the face image, can be extracted, and the fused face image subsequently obtained from it is therefore more real and natural.
In a possible implementation manner, the method provided by the embodiment of the present disclosure may further include: and identifying the target part in the face image to obtain the initial position of the target part in the face image.
The initial position may be an approximate position of the target portion in the face image, which is determined according to the face image. The method for determining the initial position of the target portion is not limited in the embodiments of the present disclosure, and may be flexibly selected according to the actual situation, and is not limited in the following embodiments.
In a possible implementation manner, the initial position of the target portion may be determined by identifying a key point of the target portion, for example, the initial position may be determined according to coordinates of the identified key point of the target portion in the face image; or determining the range of the target part in the face image according to the key point of the target part to obtain the initial position of the target part, and the like.
In a possible implementation manner, recognizing a target portion in a face image to obtain an initial position of the target portion in the face image may include:
acquiring at least one face key point in a face image;
constructing a triangular mesh corresponding to the target part in the face image according to the key points of the face;
and determining the initial position of the target part in the face image according to the position coordinates of the triangular mesh.
The face key points may be related key points for locating key region positions in the face of the person, such as eye key points, mouth key points, eyebrow key points, nose key points, or the like. The specific key points included in the acquired face key points and the number of the included key points are not limited in the embodiment of the present disclosure, and can be flexibly selected according to actual situations. In some possible implementation manners, all relevant key points in the Face image may be obtained, such as 106 whole Face key points (Face106) of the Face; in some possible implementations, part of the key points in the face image, such as key points related to the target portion, such as key points related to the lip portion, may also be obtained.
The manner of obtaining the face key points is not limited in the embodiment of the present disclosure, and any manner that can identify the face key points in the image can be used as an implementation manner of obtaining the face key points.
After at least one face key point is acquired, a triangular mesh can be constructed in the face image according to the face key point. The method for constructing the triangular mesh is not limited in the embodiment of the present disclosure, and in a possible implementation manner, every three adjacent points in the obtained face key points may be connected to obtain a plurality of triangular meshes. In some possible implementation manners, interpolation processing may be performed according to the acquired face key points to obtain interpolation points, and then every three adjacent points are connected to obtain a plurality of triangular meshes in a point set formed by the face key points and the interpolation points.
Fig. 3 shows a schematic diagram of a constructed triangular mesh (part of a face in the diagram is subjected to mosaic processing in order to protect an object in an image), and it can be seen from the diagram that in one possible implementation manner, a plurality of triangular meshes can be obtained by connecting key points and interpolation points of the face in the face image.
In a possible implementation manner, a triangular mesh corresponding to the target portion may also be constructed in the face image according to the face key points, where the manner of constructing the triangular mesh may refer to the above-described embodiments, and the difference is that the face key points and the interpolation points related to the target portion may be obtained to construct the triangular mesh corresponding to the target portion, and the construction of the triangular meshes of other portions in the face image is omitted.
After the triangular mesh corresponding to the target portion is obtained, the initial position of the target portion in the face image can be determined according to the position coordinates of the triangular mesh in the face image. The expression form of the initial position is not limited in the embodiment of the present disclosure, and in a possible implementation manner, the central point position of one or more triangulation grids corresponding to the target portion may be used as the initial position of the target portion; in one possible implementation manner, the vertex coordinates of one or more triangular meshes corresponding to the target portion may be used as the initial position of the target portion, and the selection may be flexible according to actual situations.
The method comprises the steps of obtaining at least one face key point in a face image, and constructing a triangular mesh corresponding to a target part in the face image according to the face key point, so that the initial position of the target part in the face image is determined according to the position coordinates of the triangular mesh. Through the process, the position of the target part in the face image can be efficiently and accurately preliminarily positioned in a key point identification and grid construction mode, so that the target material matched with the target part can be conveniently and subsequently acquired, and the precision and the authenticity of image processing are improved.
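A hedged sketch of this step, assuming midpoint interpolation between consecutive key points and Delaunay triangulation (one way to connect every three adjacent points into triangles; the text does not mandate a specific triangulator), with the mesh bounding box taken as the initial position:

```python
import numpy as np
from scipy.spatial import Delaunay

def target_part_initial_position(landmarks: np.ndarray):
    # Hedged sketch; `landmarks` is a (K, 2) array of target-part key points.
    # Interpolate extra points between consecutive key points (assumed scheme).
    midpoints = (landmarks + np.roll(landmarks, -1, axis=0)) / 2.0
    points = np.vstack([landmarks, midpoints])
    mesh = Delaunay(points)                  # triangles over key + interpolation points
    x_min, y_min = points.min(axis=0)        # bounding box as the initial position (assumed)
    x_max, y_max = points.max(axis=0)
    return mesh.simplices, (x_min, y_min, x_max, y_max)
```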
In a possible implementation manner, the obtaining of the target material corresponding to the target portion may include:
acquiring an original target material corresponding to a target part according to the target part;
fusing an original target material with a target part in a preset face image to obtain a standard material image;
and extracting the standard material image based on the initial position to obtain a target material.
The original target material may be a preset material bound to the makeup operation, for example, an original lip mask corresponding to the lip makeup operation may be used as the original target material. The method for obtaining the original target material is not limited in the embodiment of the present disclosure, and the material selected in the makeup operation may be used as the original target material, or the corresponding original target material may be automatically read according to the makeup operation.
The preset face image can be a standard face image template and can comprise complete and comprehensive face parts, and the positions of the face parts in the preset face image are standard. The implementation form of the preset face image can be flexibly determined according to the actual situation, and a standard face (standard face) adopted in any face image processing field can be used as the implementation form of the preset face image. Fig. 4 is a schematic diagram of a preset face image according to an embodiment of the present disclosure (like the above-mentioned embodiment, in order to protect an object in the image, a part of a face in the image is subjected to mosaic processing), and as can be seen from the diagram, in an example, a face part included in the preset face image is clear and complete and conforms to an objective distribution of face parts in the face.
Because the positions of all human face parts in the standard human face image are standard, the original target material can be directly fused with the positions corresponding to the target parts in the preset human face image to obtain the standard material image. The method for fusing the original target material and the target part in the preset face image is not limited in the embodiment of the present disclosure, and in a possible implementation manner, the original target material and the corresponding pixel point in the target part in the preset face image may be directly added to obtain a standard material image; in some possible implementation manners, the original target material and the target portion in the preset face image may also be subjected to addition fusion and the like according to a preset weight.
The original target material is fused with the target part in the preset face image to obtain a standard material image. In one possible implementation, the target material may be extracted from the standard material image based on the initial position described in the above disclosed embodiments.
In one possible implementation, the extracting the target material based on the initial position may include: and obtaining the color value and the transparency of each pixel point in the range corresponding to the initial position in the standard material image, and taking the image formed by a plurality of pixel points containing the color value and the transparency as a target material.
A standard material image is obtained by fusing the original target material with the target part in the preset face image, and the target material is extracted from the standard material image based on the initial position. Because the initial position is obtained by identifying the target part in the face image, the target material obtained through this process corresponds more closely to the position of the target part in the face image, so that the extracted original color of at least one pixel point in the target part is more real and reliable.
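A minimal sketch of this step, under the assumption that the original target material is an RGBA canvas already aligned with the preset face (real use would first warp the material to the target part), using the alpha-weighted addition mentioned above as the fusion rule:

```python
import numpy as np

def build_target_material(original_material: np.ndarray, preset_face: np.ndarray,
                          bbox: tuple) -> np.ndarray:
    # Hedged sketch: blend the material onto the preset (standard) face to
    # form the standard material image, then crop at the initial position.
    rgb, alpha = original_material[..., :3], original_material[..., 3:4]
    standard = alpha * rgb + (1.0 - alpha) * preset_face.astype(np.float32)
    x0, y0, x1, y1 = bbox
    crop_rgb = standard[y0:y1, x0:x1]
    crop_alpha = original_material[y0:y1, x0:x1, 3:4]
    return np.concatenate([crop_rgb, crop_alpha], axis=-1)  # color values + transparency
```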
In one possible implementation, step S12 may include:
step S121, performing corresponding color search on the original color of at least one pixel point in the target part according to the selected color in the makeup operation to obtain the initial target color of at least one pixel point in the target part;
step S122, determining the target color of at least one pixel point in the target part according to the initial target color of at least one pixel point in the target part.
The initial target color is a color within the range of the selected color, found through a lookup corresponding to the original color; it therefore both belongs to the range of the selected color and corresponds to the original color.
How the initial target color of at least one pixel point in the target part is found through a corresponding color lookup based on the selected color and the original color can be flexibly determined according to the actual situation; the lookup is detailed in the following disclosed embodiments and is not expanded here.
After the initial target color is determined, the target color may be further determined based on the initial target color. In some possible implementations, the initial target color may be taken directly as the target color; in some possible implementations, some processing may be performed on the initial target color, such as adjusting it or fusing it with other colors, to obtain the target color; in some possible implementation manners, how the initial target color is processed to obtain the target color can be chosen according to the processing type corresponding to the makeup operation. How the target color is further determined from the initial target color is described in the following embodiments and is not expanded here.
A corresponding color lookup is performed on the original color of at least one pixel point in the target part according to the color selected in the makeup operation, obtaining an initial target color of the at least one pixel point in the target part, and the target color of the at least one pixel point in the target part is then determined from this initial target color. Through this process, a color lookup can produce, within the range of the selected color, a target color corresponding to each original color, so that the target colors look more realistic and the color transitions between different pixel points are more natural, improving the naturalness and the makeup effect of the fused face image.
In one possible implementation, step S121 may include:
acquiring a color lookup table corresponding to the selected color according to the selected color in the makeup operation, wherein the output colors in the color lookup table are arranged in a gradual change form;
and respectively searching an output color corresponding to the original color of at least one pixel point in the target part in a color lookup table to be used as the initial target color of at least one pixel point in the target part.
The color lookup table may include a corresponding relationship between a plurality of input colors and output colors, where an input color may be a color looked up in the color lookup table, and an output color may be a color looked up in the color lookup table. For example, an output color B corresponding to a may be found, such as by looking up in a color look-up table based on the input color a. The corresponding relationship between the colors in the color lookup table can be flexibly set according to the actual situation, and is not limited in the embodiment of the disclosure. In a possible implementation manner, the output colors in the color lookup table may be arranged in a gradient manner, and the specific arrangement manner is not limited in the embodiments of the present disclosure, and is not limited in the following disclosure embodiments.
In a possible implementation manner, the color lookup table corresponding to the selected color may be obtained according to the selected color in the cosmetic operation, in which case, the output color in the color lookup table belongs to the range of the corresponding selected color, and thus the initial target color found according to the color lookup table may be within the corresponding range of the selected color and correspond to the original color.
After the color lookup table is obtained, the output color corresponding to each pixel point in the color lookup table can be respectively looked up according to the original colors of the plurality of pixel points in the target part, and the output color is used as the initial target color of the plurality of pixel points in the target part. The searching manner can be flexibly determined according to the form of the color lookup table, and is not limited in the embodiment of the disclosure.
Fig. 5 is a schematic diagram of a color lookup table according to an embodiment of the present disclosure, and as can be seen from the diagram, in an example, the color lookup table includes multiple naturally transitional gradient colors as output colors (colors with different shades in the diagram are actually gradient colors with color differences due to a limitation of gray scale image display), and after obtaining original colors of multiple pixel points in a target portion, the output colors of the multiple pixel points can be respectively looked up from the color lookup table to serve as initial target colors.
By obtaining, according to the color selected in the makeup operation, a color lookup table containing a plurality of gradient output colors corresponding to the selected color, and looking up in that table the output color corresponding to the original color of at least one pixel point in the target part to serve as the initial target color of that pixel point, a lookup table with gradient output colors can yield initial target colors with natural color transitions. The subsequently obtained target colors therefore also transition more naturally, improving the naturalness and the makeup effect of the resulting fused face image.
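A hedged sketch of the lookup, assuming for illustration a one-dimensional gradient table keyed on the luma of the original color (the excerpt does not specify the table layout):

```python
import numpy as np

def lookup_initial_target_color(original_colors: np.ndarray, lut: np.ndarray) -> np.ndarray:
    # Hedged sketch; `original_colors` is (N, 3) in [0, 255], `lut` is (256, 3).
    luma = (original_colors @ np.array([0.299, 0.587, 0.114])).astype(np.int32)
    luma = np.clip(luma, 0, len(lut) - 1)
    return lut[luma]                         # (N, 3) initial target colors

# Example gradient LUT toward a selected dark-red lip color (values are illustrative):
lut = np.linspace([60.0, 0.0, 20.0], [255.0, 80.0, 120.0], 256)
```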
In one possible implementation, step S122 may include:
step S1221, in a case where the processing type corresponding to the cosmetic operation includes natural processing, taking the initial target color of the at least one pixel point in the target portion as the target color of the at least one pixel point in the target portion. Or,
step S1222, under the condition that the processing type corresponding to the cosmetic operation includes metal light effect processing, based on the randomly obtained noise value, adjusting the initial target color of at least one pixel point in the target portion to obtain the target color of at least one pixel point in the target portion.
The processing effects of the natural processing and the metal light effect processing can refer to the above disclosed embodiments, and are not described herein again. A noise value may be random information added to an individual pixel point in the image; randomly obtaining a noise value may be implemented by generating random data. The manner of generating the random data is not limited in the embodiments of the present disclosure and is described in detail in the following disclosed embodiments, so it is not expanded here.
In one possible implementation, as can be seen from step S1221, the initial target color can be directly taken as the target color in the case that the processing type is natural processing.
In a possible implementation manner, as can be seen from step S1222, in the case that the processing type is the metal light effect processing, the initial target color of at least one pixel point in the target portion is adjusted based on the randomly obtained noise value, so that the colors of different pixel points change and the target portion exhibits a metal light effect. The implementation can be flexibly varied according to actual requirements and is described in detail in the following disclosed embodiments, so it is not expanded here.
Through steps S1221 and S1222, the initial target color can be adjusted in different ways to determine the target color when the processing types corresponding to the cosmetic operation differ, which improves the flexibility of the cosmetic operation; moreover, adjusting the initial target color of at least one pixel point in the target part by a randomly acquired noise value allows the color to be adjusted based on random data, yielding a more natural metal light effect.
In one possible implementation, step S1222 may include:
for at least one pixel point in the target part, respectively acquiring the noise value corresponding to each pixel point;
under the condition that the noise value is within the preset noise range, adjusting the initial target color of the pixel point according to the noise value and the corresponding transparency of the pixel point in the target material to obtain the target color of the pixel point; or,
and under the condition that the noise value is out of the preset noise range, adjusting the initial target color of the pixel point according to the brightness information of the pixel point to obtain the target color of the pixel point.
The noise value of each pixel point can be acquired separately for the one or more pixel points included in the target part; each noise value can be obtained in a random manner, and the acquisition method can be flexibly selected according to the actual situation. In a possible implementation manner, the noise value of each pixel point may be obtained by generating a random number within a certain numerical range. In a possible implementation manner, for at least one pixel point in the target portion, respectively acquiring the noise value corresponding to each pixel point may include:
acquiring a preset noise texture;
and sampling at the corresponding position of the preset noise texture according to the position of at least one pixel point in the target part to obtain a noise value corresponding to the pixel point.
The preset noise texture may be an image with a shape matching the target portion, and the noise values of the points in the image may be randomly generated in advance. In a possible implementation manner, the noise values corresponding to the pixel points in the target portion in the preset noise texture may be respectively determined according to the position correspondence between the target portion and the preset noise texture.
By obtaining a preset noise texture and deriving the noise value of at least one pixel point in the target part from it, the noise values corresponding to multiple pixel points can be obtained conveniently; the obtained noise values are random, and at the same time the efficiency of acquiring them is improved, thereby improving the efficiency of image processing.
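A minimal sketch of this sampling step, assuming a pre-generated 512×512 texture and integer pixel coordinates already mapped into the texture's coordinate system:

```python
import numpy as np

# A preset noise texture: random values generated once, in advance.
noise_texture = np.random.rand(512, 512).astype(np.float32)

def sample_noise_values(noise_texture, pixel_positions):
    """Fetch one random noise value per target-part pixel.

    pixel_positions: (N, 2) integer (row, col) coordinates of the pixels
    in the texture's coordinate system; the modulo keeps out-of-range
    positions valid.
    """
    rows = pixel_positions[:, 0] % noise_texture.shape[0]
    cols = pixel_positions[:, 1] % noise_texture.shape[1]
    return noise_texture[rows, cols]
```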
After the noise value corresponding to each of the at least one pixel point is acquired, the processing mode for each pixel point can be determined by comparing its noise value with the preset noise range. The preset noise range can be set flexibly according to the actual situation and is not limited to the following disclosed embodiments; in one example, the preset noise range may be a sub-interval of 0 to 1, such as 0.98 to 1.0 or 0.78 to 0.8.
In the case that the noise value corresponding to a pixel point falls within the preset noise range, the initial target color of the pixel point can be adjusted according to that noise value and the transparency corresponding to the pixel point in the target material, so as to obtain the target color of the pixel point. The specific adjustment manner can be flexibly selected according to the actual situation, and is not limited to the following disclosed embodiments.
In a possible implementation manner, adjusting the initial target color of a pixel point according to the noise value and the transparency corresponding to the pixel point in the target material to obtain the target color of the pixel point may include:

determining an adjustment coefficient for adjusting the initial target color according to the noise value and the transparency corresponding to the pixel point in the target material;
and adjusting the initial target color of the pixel point according to the adjustment coefficient and the preset light source value to obtain the target color of the pixel point.
The implementation manner of the target material may be detailed in each of the above disclosed embodiments, and is not described herein again. The adjustment coefficient may be a relevant parameter in adjusting the initial target color. The calculation method of the adjustment coefficient determined according to the noise value and the transparency can be flexibly determined according to actual situations, and is not limited to the following disclosed embodiments, and in one example, the method of determining the adjustment coefficient according to the noise value and the transparency can be represented by the following formula (1):
adjustment coefficient = noise value × pow(transparency, 4.0)    (1)
Where pow(x, y) denotes x raised to the power y, so pow(transparency, 4.0) is the transparency raised to the fourth power.
After the adjustment coefficient is determined, the target color of the pixel point can be determined according to the adjustment coefficient and a preset light source value, wherein the preset light source value can be a light source value flexibly set for makeup operation according to actual conditions, and the size of the numerical value is not limited in the embodiment of the disclosure.
The method for adjusting the initial target color based on the adjustment coefficient and the preset light source value can also be flexibly set according to the actual situation, and is not limited to the following disclosed embodiments, and in one example, the method for determining the target color according to the adjustment coefficient and the preset light source value can be represented by the following formula (2):
target color = initial target color + adjustment coefficient × preset light source value    (2)
Through the process, the target color of at least one pixel point in the target part can be obtained under the condition that the noise value is within the preset noise range.
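Formulas (1) and (2) can be combined into a single per-pixel routine; in this sketch all colors are float RGB triples in [0, 1], and the preset light source value is an assumed example:

```python
import numpy as np

def adjust_with_noise(initial_target_color, noise_value, transparency,
                      light_source=np.array([1.0, 0.9, 0.8])):
    """Adjust an initial target color for a pixel whose noise value lies
    within the preset noise range.

    initial_target_color: (3,) RGB floats in [0, 1].
    noise_value: scalar sampled from the preset noise texture.
    transparency: the pixel's transparency in the target material, in [0, 1].
    light_source: assumed preset light source value (RGB).
    """
    coefficient = noise_value * transparency ** 4.0            # formula (1)
    return initial_target_color + coefficient * light_source  # formula (2)
```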
In a possible implementation manner, the noise value may instead fall outside the preset noise range; in this case, the initial target color of the pixel point may be adjusted according to the brightness information of the pixel point to obtain the target color of the pixel point. The brightness information can be information determined from the color and other properties of the pixel points in the target part of the face image, and its content can be flexibly determined according to the actual situation. How to determine the brightness information of a pixel point and how to adjust the initial target color according to it are described in detail in the following disclosed embodiments, so they are not expanded here.
In the embodiments of the present disclosure, the noise value corresponding to each of at least one pixel point in the target part is acquired separately; when the noise value falls within the preset noise range, the initial target color of the pixel point is adjusted according to the noise value, and when it falls outside the preset noise range, the initial target color is adjusted according to the brightness information of the pixel point. Through this process, comparing each pixel point's randomly generated noise value with the preset noise range allows different pixel points to receive different color adjustments, which better simulates the glinting seen in metallic lighting and yields a more natural and vivid metal light effect.
In a possible implementation manner, the luminance information may include first luminance, second luminance, and third luminance, and the initial target color of the pixel point is adjusted according to the luminance information of the pixel point to obtain the target color of the pixel point, which may include:
and determining the first brightness of the pixel point according to the original color of the pixel point.
And determining the second brightness of the pixel point with the target brightness in the preset processing range according to the preset processing range of the pixel point in the target part.
And filtering the pixel points through a preset convolution kernel, and determining the third brightness of the pixel points according to the intermediate color obtained by filtering the pixel points, wherein the filtering range of the preset convolution kernel is consistent with the preset processing range.
And adjusting the initial target color of the pixel point according to the first brightness, the second brightness and the third brightness to obtain the target color of the pixel point.
The first brightness may be a brightness value determined according to a color value of an original color of the pixel, where the brightness value may be determined by calculating the color value, and in an example, the brightness value may be obtained by calculating values of three color channels (red R, green G, and blue B) in the color value.
The second brightness can also be determined according to the color value of the pixel point with the target brightness, wherein the pixel point with the target brightness can be the pixel point which is located within the preset processing range of the pixel point and has the highest brightness in the target part of the face image. The range of the preset processing range may be flexibly set according to actual conditions, and is not limited in the embodiments of the present disclosure.
The third brightness may be a brightness value determined from the color value of the intermediate color of the pixel point, where the intermediate color is the color obtained by filtering the pixel point with the preset convolution kernel. The form and size of the preset convolution kernel can be set flexibly according to the actual situation. In a possible implementation manner, the filtering range of the preset convolution kernel is consistent with the preset processing range in the disclosed embodiments. That is, in one example, on the one hand, the pixel point can be filtered with the preset convolution kernel to obtain its filtered intermediate color, and the brightness value calculated from the color value of that intermediate color serves as the third brightness; on the other hand, the area covered by the preset convolution kernel at that pixel point can be taken as the preset processing range, and the brightness value of the pixel point with the highest brightness within that range in the target part of the face image then serves as the second brightness.
The filtering manner is not limited in the embodiments of the present disclosure and can be flexibly selected according to the actual situation; in one example, the pixel points may be filtered with the preset convolution kernel using Gaussian filtering.
In the above embodiments, the order in which the first brightness, the second brightness, and the third brightness are determined is not limited in the embodiments of the present disclosure; they may be determined simultaneously or sequentially in some order, as flexibly selected according to the actual situation.
In a possible implementation manner, the initial target color of the pixel point may be adjusted according to the determined first brightness, the determined second brightness, and the determined third brightness to obtain the target color of the pixel point, and how to implement the adjustment according to the three brightnesses may be described in detail in the following disclosure embodiments, which is not expanded here.
By adjusting the initial target color of a pixel point through the first brightness determined from its original color, the second brightness determined from its preset processing range in the target part, and the third brightness determined from its filtered color value, the brightness information of the pixel points within a certain range of the face image can be fully taken into account. The target color determined based on this brightness information is therefore more realistic and reliable, which improves the cosmetic effect and the authenticity of the fused face image.
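One way to compute the three brightnesses on a luminance map of the target part, assuming SciPy and taking the kernel window as the preset processing range (the window size and Gaussian width are illustrative values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def brightness_triple(luma, window=9, sigma=2.0):
    """Compute the first, second, and third brightness per pixel.

    luma: (H, W) float map of per-pixel brightness of the original colors.
    window: assumed side length of the preset processing range.
    sigma: assumed Gaussian width of the preset convolution kernel.
    """
    first = luma                                # brightness of the original color
    second = maximum_filter(luma, size=window)  # brightest pixel in the range
    # Truncate the Gaussian so its radius matches the processing range,
    # keeping the filtering range consistent with the preset range.
    third = gaussian_filter(luma, sigma=sigma,
                            truncate=(window // 2) / sigma)
    return first, second, third
```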
In a possible implementation manner, adjusting the initial target color of a pixel point according to the first brightness, the second brightness, and the third brightness to obtain the target color of the pixel point includes:
under the condition that the first brightness is smaller than the third brightness, adjusting the initial target color of the pixel point according to the first brightness and the third brightness to obtain the target color of the pixel point;
and under the condition that the first brightness is greater than the third brightness, adjusting the initial target color of the pixel point according to the first brightness, the second brightness, the third brightness and a preset brightness radius to obtain the target color of the pixel point.
Whether the first brightness is less than or greater than the third brightness, the initial target color may be adjusted as in formula (2) of the above disclosed embodiment: an adjustment coefficient for the pixel point is determined from the corresponding data, and the initial target color is then adjusted using the adjustment coefficient and the preset light source value.
In the case that the first brightness is smaller than the third brightness, the adjustment coefficient may be determined according to the first brightness and the third brightness; the manner of determination can be flexibly selected according to the actual situation and is not limited to the following disclosed embodiments. In one possible implementation manner, determining the adjustment coefficient according to the first brightness and the third brightness may be represented by the following formula (3):
adjustment coefficient = (third brightness - first brightness) / (1.0 - first brightness)    (3)
When the first brightness is greater than the third brightness, the adjustment coefficient may be determined according to the first brightness, the second brightness, the third brightness, and a preset brightness radius, where the preset brightness radius may determine a radius of a metal bright spot in the metal light effect, and a value of the preset brightness radius may be flexibly set according to an actual situation, which is not limited in the embodiment of the present disclosure. In one possible implementation manner, the manner of determining the adjustment coefficient according to the first brightness, the second brightness, the third brightness and the preset brightness radius can be represented by the following formula (4):
adjustment coefficient = pow((first brightness - third brightness) / (second brightness - third brightness), shininess)    (4)
The pow operation is as in formula (1) above and is not described again here; shininess is the preset brightness radius.
In the case where the first brightness is equal to the third brightness, the adjustment coefficient may be calculated by either formula (3) or formula (4); both yield an adjustment coefficient of 0.
In this way, the initial target color of the pixel point is flexibly adjusted according to the comparison between the first brightness and the third brightness, so as to obtain the target color of the pixel point.
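Formulas (3) and (4) reduce to a single branch; guards for degenerate denominators (a first brightness of 1.0, or equal second and third brightness) are omitted for brevity, and the default shininess value is an illustrative assumption:

```python
def luminance_adjustment_coefficient(first, second, third, shininess=8.0):
    """Choose the adjustment coefficient from the brightness comparison;
    shininess is the preset brightness radius."""
    if first < third:
        return (third - first) / (1.0 - first)                    # formula (3)
    if first > third:
        return ((first - third) / (second - third)) ** shininess  # formula (4)
    return 0.0  # first == third yields 0 under either formula
```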
After obtaining the target color, the original color and the target color may be fused through step S13, and in one possible implementation, step S13 may include:
respectively determining a first fusion proportion of the original color and a second fusion proportion of the target color according to the preset fusion intensity;
and fusing the original color and the target color according to the first fusion proportion and the second fusion proportion to obtain a fused face image.
The preset fusion intensity is used to indicate the respective fusion proportions, or weights, of the original color and the target color in the fusion process, and its value can be set flexibly according to the actual situation. In a possible implementation manner, fusion weights for the original color and the target color can be preset as the preset fusion intensity; in another possible implementation manner, the makeup operation on the face image may also include a selection of the fusion intensity, in which case the fusion intensity selected in the makeup operation may be used as the preset fusion intensity.
The first fusion proportion may be the proportion of the original color in the fusion process, and the second fusion proportion may be the proportion of the target color. After the first and second fusion proportions are determined from the preset fusion intensity, the original color and the target color can be fused according to their corresponding proportions to obtain the fused face image. The fusion may be performed by direct weighted addition, or by other image processing methods such as multiply or soft-light blending; which fusion method is used is not limited in the embodiments of the present disclosure.
Through the above process, the preset fusion intensity can be set flexibly according to actual requirements, yielding a fused face image whose fusion intensity matches the desired effect and improving the flexibility of image processing.
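For the direct weighted-addition case, the two proportions can be taken as complementary weights derived from the preset fusion intensity; the value 0.7 below is only an example, and weighted addition is just one of the blending options the embodiment names:

```python
import numpy as np

def fuse_colors(original, target, fusion_intensity=0.7):
    """Blend original and target colors with complementary proportions.

    fusion_intensity in [0, 1] acts as the preset fusion intensity:
    the first (original) proportion is 1 - fusion_intensity and the
    second (target) proportion is fusion_intensity.
    """
    first_proportion = 1.0 - fusion_intensity
    second_proportion = fusion_intensity
    fused = first_proportion * original + second_proportion * target
    return np.clip(fused, 0.0, 1.0)
```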
As the processing type in the makeup operation differs, the resulting fused face image changes accordingly. Fig. 6 and 7 are schematic diagrams of fused face images according to an embodiment of the present disclosure (as in the above embodiments, part of each face is mosaicked to protect the subject of the image), where fig. 6 is the fused face image obtained with the natural processing manner and fig. 7 is the fused face image obtained with the metal light effect processing manner. As these images show, the image processing method provided by the above disclosed embodiments can produce a fused face image that is more realistic and natural and has a better fusion effect.
Fig. 8 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown, the image processing apparatus 20 may include:
and the original color extraction module 21 is configured to extract an original color of at least one pixel point in the target portion of the face image in response to a makeup operation for the target portion of the face image.
And the target color determining module 22 is configured to determine the target color of at least one pixel point in the target portion according to the selected color in the makeup operation and the original color of at least one pixel point in the target portion.
And the fusion module 23 is configured to fuse the original color and the target color of at least one pixel point in the target portion to obtain a fused face image.
In one possible implementation, the target color determination module is configured to: according to the selected color in the makeup operation, carrying out corresponding color search on the original color of at least one pixel point in the target part to obtain the initial target color of at least one pixel point in the target part; and determining the target color of at least one pixel point in the target part according to the initial target color of at least one pixel point in the target part.
In one possible implementation, the target color determination module is further configured to: taking the initial target color of at least one pixel point in the target part as the target color of at least one pixel point in the target part under the condition that the processing type corresponding to the makeup operation comprises natural processing; or under the condition that the processing type corresponding to the makeup operation comprises metal light effect processing, adjusting the initial target color of at least one pixel point in the target part based on the randomly acquired noise value to obtain the target color of at least one pixel point in the target part.
In one possible implementation, the target color determination module is further configured to: respectively acquiring, for at least one pixel point in the target part, the noise value corresponding to each pixel point; under the condition that the noise value is within the preset noise range, adjusting the initial target color of the pixel point according to the noise value and the corresponding transparency of the pixel point in the target material to obtain the target color of the pixel point; or, under the condition that the noise value is outside the preset noise range, adjusting the initial target color of the pixel point according to the brightness information of the pixel point to obtain the target color of the pixel point.
In one possible implementation, the target color determination module is further configured to: acquiring a preset noise texture; and sampling at the corresponding position of the preset noise texture according to the position of at least one pixel point in the target part to obtain a noise value corresponding to the pixel point.
In one possible implementation, the luminance information includes a first luminance, a second luminance, and a third luminance; the target color determination module is further to: determining first brightness of the pixel points according to original colors of the pixel points; determining second brightness of the pixel point with target brightness in the preset processing range according to the preset processing range of the pixel point in the target part; filtering the pixel points through a preset convolution kernel, and determining third brightness of the pixel points according to intermediate colors obtained by filtering the pixel points, wherein the filtering range of the preset convolution kernel is consistent with the preset processing range; and adjusting the initial target color of the pixel point according to the first brightness, the second brightness and the third brightness to obtain the target color of the pixel point.
In one possible implementation, the target color determination module is further configured to: under the condition that the first brightness is smaller than the third brightness, adjusting the initial target color of the pixel point according to the first brightness and the third brightness to obtain the target color of the pixel point; and under the condition that the first brightness is greater than the third brightness, adjusting the initial target color of the pixel point according to the first brightness, the second brightness, the third brightness and a preset brightness radius to obtain the target color of the pixel point.
In one possible implementation, the target color determination module is further configured to: acquiring a color lookup table corresponding to the selected color according to the selected color in the makeup operation, wherein the output colors in the color lookup table are arranged in a gradient form; and respectively searching the color lookup table for an output color corresponding to the original color of at least one pixel point in the target part, to be used as the initial target color of at least one pixel point in the target part.
In one possible implementation, the fusion module is configured to: respectively determining a first fusion proportion of the original color and a second fusion proportion of the target color according to the preset fusion intensity; and fusing the original color and the target color according to the first fusion proportion and the second fusion proportion to obtain a fused face image.
In one possible implementation, the raw color extraction module is configured to: acquiring a target material corresponding to a target part; and extracting the original color of at least one pixel point in the target part of the face image according to the transparency of at least one pixel point in the target material.
In one possible implementation, the apparatus is further configured to: identifying a target part in the face image to obtain an initial position of the target part in the face image; the raw color extraction module is further to: acquiring an original target material corresponding to a target part according to the target part; fusing an original target material with a target part in a preset face image to obtain a standard material image; and extracting the standard material image based on the initial position to obtain a target material.
In one possible implementation, the cosmetic operation includes a lip cosmetic operation, and the target site includes a lip site.
Application scenario example
In the field of computer vision, how to obtain a more real and natural lip makeup processed image becomes a problem to be solved urgently at present.
Fig. 9 is a schematic diagram illustrating an application example according to the present disclosure, and as shown in the drawing, the application example of the present disclosure proposes an image processing method including the following processes:
step S31, in response to a lip makeup operation for the lips of the face image, placing an original lip makeup material (the lip makeup mask in fig. 2) at the position of the lips in the preset face image shown in fig. 4 to obtain a standard material image;
step S32, in the face image, determining face key points through key point identification, and constructing a triangular mesh of the face area in the face image, as shown in fig. 3, using the face key points and additional points interpolated from them;
step S33, determining the position coordinates of lips in the face image through a triangular mesh corresponding to the face key points to sample a standard material image so as to obtain a target material;
step S34, determining the image area where the lips in the face image are located according to the target material to obtain the image of the lips in the face image;
step S35, extracting the original colors of a plurality of pixel points in the lip image, and looking up the corresponding initial target colors in the color lookup table shown in fig. 5 according to those original colors;
step S36, after Gaussian filtering of the lip image with the convolution kernel, acquiring the intermediate color of each pixel point and the third brightness corresponding to that intermediate color, and, as the convolution kernel moves over the lip image, acquiring the second brightness as the brightness of the pixel point with the highest brightness in each covered area;
step S37, in the case of natural processing of the lips in the face image, directly taking the initial target color of each pixel point as the target color, and fusing the target color and the original color according to the preset fusion intensity given by the user to obtain the fused face image shown in fig. 6;
in step S38, when the metal light effect processing is performed on the lips in the face image, the target color may be determined through the following process, and the target color and the original color are fused according to the preset fusion intensity given by the user, so as to obtain the fused face image shown in fig. 7.
The process of determining the target color may be: sampling on the noise texture through texture coordinates to obtain random noise values corresponding to all pixel points in the image of the lip part;
for each pixel point, respectively judging whether the corresponding noise value is within the preset noise range (the preset noise range may be a sub-interval of 0 to 1, such as 0.98 to 1.0 or 0.78 to 0.8);
if the noise value is within the preset noise range, the adjustment coefficient of the pixel point is determined using method A below; otherwise, method B below is used:
A. Calculating the adjustment coefficient from the noise value and the transparency of the target material:

the adjustment coefficient is first set equal to the noise value;

adjustment coefficient = adjustment coefficient × pow(transparency of target material, 4.0).
B. Calculating the adjustment coefficient from the pixel point's first brightness (the brightness value corresponding to the pixel point's color value), second brightness, third brightness, and the preset brightness radius (which determines the radius of the highlight point):
if the first brightness is less than the third brightness:

adjustment coefficient = (third brightness - first brightness) / (1.0 - first brightness);
if the first brightness is greater than the third brightness:

adjustment coefficient = pow((first brightness - third brightness) / (second brightness - third brightness), shininess), where shininess is the above-mentioned preset brightness radius.
After the adjustment coefficient is determined by method A or B, the initial target color may be adjusted according to the obtained adjustment coefficient and the preset light source value to obtain the target color:

target color = initial target color + adjustment coefficient × preset light source value.
By the method provided in this application example, the target color of each pixel point can be looked up and determined from the original color of at least one pixel point of the target part in the face image, so as to obtain a fused face image that fuses the original color and the target color. The colors of the fused face image transition naturally and gradually, so the fused face image has high authenticity and a good makeup effect.
The image processing method provided in this application example of the present disclosure may be applied to lip makeup processing on the lips in a face image, and may also be applied to other makeup operations, such as blush or eye shadow; the method can be flexibly extended and adapted according to the type of makeup operation.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principle and logic; for brevity, the details are not repeated in the present disclosure.
It will be understood by those skilled in the art that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile computer readable storage medium or a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute the above method.
In practical applications, the memory may be a volatile memory (RAM); or a non-volatile memory (non-volatile memory) such as a ROM, a flash memory (flash memory), a Hard Disk (Hard Disk Drive, HDD) or a Solid-State Drive (SSD); or a combination of the above types of memories and provides instructions and data to the processor.
The processor may be at least one of ASIC, DSP, DSPD, PLD, FPGA, CPU, controller, microcontroller, and microprocessor. It is understood that the electronic devices for implementing the above-described processor functions may be other devices, and the embodiments of the present disclosure are not particularly limited.
The electronic device may be provided as a terminal, server, or other form of device.
Based on the same technical concept of the foregoing embodiments, the embodiments of the present disclosure also provide a computer program, which when executed by a processor implements the above method.
Fig. 10 is a block diagram of an electronic device 800 according to an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 10, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 11 is a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 11, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can execute computer-readable program instructions to implement various aspects of the present disclosure by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (15)
1. An image processing method, comprising:
in response to a cosmetic operation for a target part of a face image, extracting the original color of at least one pixel point in the target part of the face image;
determining the target color of at least one pixel point in the target part according to the selected color in the makeup operation and the original color of at least one pixel point in the target part;
and fusing the original color and the target color of at least one pixel point in the target part to obtain a fused face image.
2. The method of claim 1, wherein determining the target color of at least one pixel in the target portion according to the color selected in the cosmetic operation and the original color of at least one pixel in the target portion comprises:
according to the selected color in the makeup operation, performing corresponding color search on the original color of at least one pixel point in the target part to obtain the initial target color of at least one pixel point in the target part;
and determining the target color of at least one pixel point in the target part according to the initial target color of at least one pixel point in the target part.
3. The method of claim 2, wherein determining the target color of at least one pixel in the target region based on the initial target color of at least one pixel in the target region comprises:
taking the initial target color of at least one pixel point in the target part as the target color of at least one pixel point in the target part under the condition that the processing type corresponding to the makeup operation comprises natural processing; or,

and under the condition that the processing type corresponding to the makeup operation comprises metal light effect processing, adjusting the initial target color of at least one pixel point in the target part based on the randomly acquired noise value to obtain the target color of at least one pixel point in the target part.
4. The method of claim 3, wherein the adjusting the initial target color of at least one pixel in the target portion based on the randomly obtained noise value to obtain the target color of at least one pixel in the target portion comprises:
for at least one pixel point in the target part, respectively acquiring the noise value corresponding to the pixel point;
under the condition that the noise value is within a preset noise range, adjusting the initial target color of the pixel point according to the noise value and the corresponding transparency of the pixel point in a target material to obtain the target color of the pixel point; or,
and under the condition that the noise value is out of the preset noise range, adjusting the initial target color of the pixel point according to the brightness information of the pixel point to obtain the target color of the pixel point.
5. The method according to claim 4, wherein the obtaining, for at least one pixel in the target region, a noise value corresponding to the pixel respectively comprises:
acquiring a preset noise texture;
and sampling at the corresponding position of the preset noise texture according to the position of the at least one pixel point in the target part to obtain a noise value corresponding to the pixel point.
6. The method according to claim 4 or 5, wherein the luminance information comprises a first luminance, a second luminance, a third luminance;
the adjusting the initial target color of the pixel point according to the brightness information of the pixel point to obtain the target color of the pixel point comprises:
determining first brightness of the pixel point according to the original color of the pixel point;
determining second brightness of the pixel point with target brightness in the preset processing range according to the preset processing range of the pixel point in the target part;
filtering the pixel point through a preset convolution kernel, and determining third brightness of the pixel point according to an intermediate color obtained by filtering the pixel point, wherein the filtering range of the preset convolution kernel is consistent with the preset processing range;
and adjusting the initial target color of the pixel point according to the first brightness, the second brightness and the third brightness to obtain the target color of the pixel point.
7. The method of claim 6, wherein the adjusting the initial target color of the pixel according to the first brightness, the second brightness, and the third brightness to obtain the target color of the pixel comprises:
under the condition that the first brightness is smaller than the third brightness, adjusting the initial target color of the pixel point according to the first brightness and the third brightness to obtain the target color of the pixel point;
and under the condition that the first brightness is greater than the third brightness, adjusting the initial target color of the pixel point according to the first brightness, the second brightness, the third brightness and a preset brightness radius to obtain the target color of the pixel point.
8. The method according to any one of claims 2 to 7, wherein the performing, according to the color selected in the makeup operation, a corresponding color lookup on an original color of at least one pixel point in the target portion to obtain an initial target color of at least one pixel point in the target portion includes:
acquiring a color lookup table corresponding to the selected color according to the selected color in the makeup operation, wherein the output colors in the color lookup table are arranged in a gradient form;
and respectively searching an output color corresponding to the original color of at least one pixel point in the target part in the color lookup table to serve as the initial target color of at least one pixel point in the target part.
9. The method according to any one of claims 1 to 8, wherein the fusing the original color and the target color of at least one pixel point in the target part to obtain a fused face image comprises:
respectively determining a first fusion proportion of the original color and a second fusion proportion of the target color according to a preset fusion intensity;
and fusing the original color and the target color according to the first fusion proportion and the second fusion proportion to obtain a fused face image.
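Claim 9 is a weighted blend. A minimal sketch, assuming the two proportions are complementary shares of the single preset fusion intensity (the claim requires only that both are derived from it):

```python
def fuse(original, target, intensity=0.6):
    """Blend original and target colors as in claim 9.

    The complementary split of the preset fusion intensity into the two
    proportions is an assumption.
    """
    first_proportion = 1.0 - intensity   # share of the original color
    second_proportion = intensity        # share of the target color
    return first_proportion * original + second_proportion * target
```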
10. The method according to any one of claims 1 to 9, wherein the extracting an original color of at least one pixel point in the target part of the face image in response to a makeup operation for the target part of the face image comprises:
acquiring a target material corresponding to the target part;
and extracting the original color of at least one pixel point in the target part of the face image according to the transparency of at least one pixel point in the target material.
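Claim 10 reads the material's alpha channel as a mask over the face image. A sketch, assuming a zero alpha threshold (hypothetical; the claim leaves the exact use of transparency open):

```python
import numpy as np

def extract_original_colors(face_image, material_alpha, threshold=0.0):
    """Collect original colors where the target material covers the face.

    face_image is (H, W, 3), material_alpha is (H, W); the zero alpha
    threshold is an assumption.
    """
    mask = material_alpha > threshold
    coords = np.argwhere(mask)   # pixel positions inside the target part
    colors = face_image[mask]    # their original colors, shape (N, 3)
    return coords, colors
```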
11. The method of claim 10, further comprising:
identifying a target part in the face image to obtain an initial position of the target part in the face image;
the acquiring of the target material corresponding to the target part includes:
acquiring an original target material corresponding to the target part according to the target part;
fusing the original target material with a target part in a preset face image to obtain a standard material image;
and extracting the standard material image based on the initial position to obtain a target material.
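Claim 11 builds the target material in two steps: blend the stock material onto a standard face, then cut it out at the detected position. The sketch below assumes straight alpha blending and an axis-aligned crop box; a production pipeline would more likely warp the standard material image to the detected facial landmarks.

```python
import numpy as np

def build_target_material(original_material, standard_face, box):
    """Blend the stock material onto a standard face, then crop it.

    original_material is (H, W, 4) RGBA, standard_face is (H, W, 3) RGB,
    box is (top, left, height, width) from the detected initial position.
    The alpha blend and the axis-aligned crop are assumptions.
    """
    alpha = original_material[..., 3:4]
    standard_image = (alpha * original_material[..., :3]
                      + (1.0 - alpha) * standard_face)  # standard material image
    top, left, height, width = box
    return standard_image[top:top + height, left:left + width]  # target material
```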
12. The method of any one of claims 1 to 11, wherein the makeup operation comprises a lip makeup operation and the target part comprises a lip part.
13. An image processing apparatus, comprising:
an original color extraction module, configured to extract, in response to a makeup operation for a target part of a face image, an original color of at least one pixel point in the target part of the face image;
a target color determination module, configured to determine a target color of at least one pixel point in the target part according to the color selected in the makeup operation and the original color of at least one pixel point in the target part;
and a fusion module, configured to fuse the original color and the target color of at least one pixel point in the target part to obtain a fused face image.
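Structurally, the apparatus of claim 13 mirrors the method as three modules. A skeletal Python rendering with placeholder bodies (only the module boundaries come from the claim; the signatures are assumptions):

```python
class ImageProcessingApparatus:
    """Skeleton of claim 13's three modules as plain methods."""

    def extract_original_color(self, face_image, target_part):
        """Original color extraction module."""
        raise NotImplementedError

    def determine_target_color(self, selected_color, original_colors):
        """Target color determination module."""
        raise NotImplementedError

    def fuse(self, original_colors, target_colors):
        """Fusion module producing the fused face image."""
        raise NotImplementedError
```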
14. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 12.
15. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 12.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110203312.2A CN112801916A (en) | 2021-02-23 | 2021-02-23 | Image processing method and device, electronic equipment and storage medium |
CN202110571420.5A CN113160094A (en) | 2021-02-23 | 2021-05-25 | Image processing method and device, electronic equipment and storage medium |
PCT/CN2021/133045 WO2022179215A1 (en) | 2021-02-23 | 2021-11-25 | Image processing method and apparatus, electronic device, and storage medium |
TW110147368A TW202234341A (en) | 2021-02-23 | 2021-12-17 | Image processing method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110203312.2A CN112801916A (en) | 2021-02-23 | 2021-02-23 | Image processing method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112801916A true CN112801916A (en) | 2021-05-14 |
Family
ID=75815416
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110203312.2A Pending CN112801916A (en) | 2021-02-23 | 2021-02-23 | Image processing method and device, electronic equipment and storage medium |
CN202110571420.5A Pending CN113160094A (en) | 2021-02-23 | 2021-05-25 | Image processing method and device, electronic equipment and storage medium |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110571420.5A Pending CN113160094A (en) | 2021-02-23 | 2021-05-25 | Image processing method and device, electronic equipment and storage medium |
Country Status (3)
Country | Link |
---|---|
CN (2) | CN112801916A (en) |
TW (1) | TW202234341A (en) |
WO (1) | WO2022179215A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113762212B (en) * | 2021-09-27 | 2024-06-11 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN113763287B (en) * | 2021-09-27 | 2024-09-17 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN115348709B (en) * | 2022-10-18 | 2023-03-28 | Smart cloud service lighting display method and system suitable for cultural tourism
CN116503933B (en) * | 2023-05-24 | 2023-12-12 | 北京万里红科技有限公司 | Periocular feature extraction method and device, electronic equipment and storage medium |
CN117078685B (en) * | 2023-10-17 | 2024-02-27 | 太和康美(北京)中医研究院有限公司 | Cosmetic efficacy evaluation method, device, equipment and medium based on image analysis |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109191410B (en) * | 2018-08-06 | 2022-12-13 | 腾讯科技(深圳)有限公司 | Face image fusion method and device and storage medium |
US10467803B1 (en) * | 2018-09-11 | 2019-11-05 | Apple Inc. | Techniques for providing virtual lighting adjustments utilizing regression analysis and functional lightmaps |
CN109859098B (en) * | 2019-01-15 | 2022-11-22 | 深圳市云之梦科技有限公司 | Face image fusion method and device, computer equipment and readable storage medium |
CN111047511A (en) * | 2019-12-31 | 2020-04-21 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
CN111784568A (en) * | 2020-07-06 | 2020-10-16 | 北京字节跳动网络技术有限公司 | Face image processing method and device, electronic equipment and computer readable medium |
CN112767285B (en) * | 2021-02-23 | 2023-03-10 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic device and storage medium |
CN112801916A (en) * | 2021-02-23 | 2021-05-14 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112766234B (en) * | 2021-02-23 | 2023-05-12 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
2021
- 2021-02-23 CN CN202110203312.2A patent/CN112801916A/en active Pending
- 2021-05-25 CN CN202110571420.5A patent/CN113160094A/en active Pending
- 2021-11-25 WO PCT/CN2021/133045 patent/WO2022179215A1/en active Application Filing
- 2021-12-17 TW TW110147368A patent/TW202234341A/en unknown
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022179215A1 (en) * | 2021-02-23 | 2022-09-01 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, electronic device, and storage medium |
CN113240760A (en) * | 2021-06-29 | 2021-08-10 | 北京市商汤科技开发有限公司 | Image processing method and device, computer equipment and storage medium |
CN113240760B (en) * | 2021-06-29 | 2023-11-24 | 北京市商汤科技开发有限公司 | Image processing method, device, computer equipment and storage medium |
CN113436284A (en) * | 2021-07-30 | 2021-09-24 | 上海商汤智能科技有限公司 | Image processing method and device, computer equipment and storage medium |
CN113570581A (en) * | 2021-07-30 | 2021-10-29 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
WO2023005850A1 (en) * | 2021-07-30 | 2023-02-02 | 上海商汤智能科技有限公司 | Image processing method and apparatus, and electronic device, storage medium and computer program product |
CN113781359A (en) * | 2021-09-27 | 2021-12-10 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
WO2023045941A1 (en) * | 2021-09-27 | 2023-03-30 | 上海商汤智能科技有限公司 | Image processing method and apparatus, electronic device and storage medium |
CN113781359B (en) * | 2021-09-27 | 2024-06-11 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
WO2023169287A1 (en) * | 2022-03-11 | 2023-09-14 | 北京字跳网络技术有限公司 | Beauty makeup special effect generation method and apparatus, device, storage medium, and program product |
CN114972009A (en) * | 2022-03-28 | 2022-08-30 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2022179215A1 (en) | 2022-09-01 |
TW202234341A (en) | 2022-09-01 |
CN113160094A (en) | 2021-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112767285B (en) | Image processing method and device, electronic device and storage medium | |
CN112801916A (en) | Image processing method and device, electronic equipment and storage medium | |
CN112766234B (en) | Image processing method and device, electronic equipment and storage medium | |
CN111553864B (en) | Image restoration method and device, electronic equipment and storage medium | |
EP3208745B1 (en) | Method and apparatus for identifying picture type | |
CN112991553B (en) | Information display method and device, electronic equipment and storage medium | |
CN111091610B (en) | Image processing method and device, electronic equipment and storage medium | |
CN113194254A (en) | Image shooting method and device, electronic equipment and storage medium | |
CN109472738B (en) | Image illumination correction method and device, electronic equipment and storage medium | |
CN112219224B (en) | Image processing method and device, electronic equipment and storage medium | |
CN113570581A (en) | Image processing method and device, electronic equipment and storage medium | |
CN112767288A (en) | Image processing method and device, electronic equipment and storage medium | |
CN114463212A (en) | Image processing method and device, electronic equipment and storage medium | |
CN111815750A (en) | Method and device for polishing image, electronic equipment and storage medium | |
CN113763286A (en) | Image processing method and device, electronic equipment and storage medium | |
CN111935418A (en) | Video processing method and device, electronic equipment and storage medium | |
WO2023045946A1 (en) | Image processing method and apparatus, electronic device, and storage medium | |
CN113570583B (en) | Image processing method and device, electronic equipment and storage medium | |
WO2023045961A1 (en) | Virtual object generation method and apparatus, and electronic device and storage medium | |
CN113781359B (en) | Image processing method and device, electronic equipment and storage medium | |
CN105447829B (en) | Image processing method and device | |
EP3273437A1 (en) | Method and device for enhancing readability of a display | |
CN114266305A (en) | Object identification method and device, electronic equipment and storage medium | |
US20220270313A1 (en) | Image processing method, electronic device and storage medium | |
CN113254118B (en) | Skin color display device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20210514 |