CN111784568A - Face image processing method and device, electronic equipment and computer readable medium - Google Patents

Info

Publication number
CN111784568A
CN111784568A (application CN202010642253.4A)
Authority
CN
China
Prior art keywords: color value; special effect; processed; face image; pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010642253.4A
Other languages
Chinese (zh)
Inventor
袁知洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010642253.4A
Publication of CN111784568A
Legal status: Pending

Classifications

    • G — PHYSICS › G06 — COMPUTING; CALCULATING OR COUNTING › G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image › G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T 5/00 Image enhancement or restoration › G06T 5/77 Retouching; Inpainting; Scratch removal
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/10 Image acquisition modality › G06T 2207/10024 Color image
    • G06T 2207/00 › G06T 2207/20 Special algorithmic details › G06T 2207/20212 Image combination › G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/00 › G06T 2207/30 Subject of image; Context of image processing › G06T 2207/30196 Human being; Person › G06T 2207/30201 Face

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the disclosure provide a face image processing method and device, electronic equipment, and a computer-readable medium. The method comprises the following steps: acquiring a user's special-effect-adding instruction for a face image to be processed, the instruction comprising a special effect identifier of the special effect to be added; based on the special effect to be added corresponding to the special effect identifier, processing the original color value of each pixel point of at least one face part in the face image to be processed to obtain a processed color value; and fusing the original color value of each pixel point of the at least one face part with the processed color value to obtain a processed face image. Because the color values are adjusted in two passes, each final color value combines the original color value of the face image to be processed with the initially processed color value, so the colors in the processed face image look more natural and the user experience is improved.

Description

Face image processing method and device, electronic equipment and computer readable medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for processing a face image, an electronic device, and a computer-readable medium.
Background
In the prior art, before uploading an image to a social platform, a user usually adds special effects to the image with an image processing tool and then posts the edited image, for example to increase the user's popularity on the platform.
However, when a special effect is currently added to a face image, the image is usually processed directly to produce the result, so details such as skin texture and highlights are lost. The processing effect is therefore poor, and so is the user experience.
Disclosure of Invention
The purpose of this disclosure is to solve at least one of the above technical drawbacks and to improve the user experience. The technical scheme adopted by the disclosure is as follows:
in a first aspect, the present disclosure provides a method for processing a face image, including:
acquiring a special effect adding instruction of a user to the face image to be processed, wherein the special effect adding instruction comprises a special effect identifier of a special effect to be added;
based on the special effect adding instruction and the special effect to be added corresponding to the special effect identification, carrying out corresponding processing on the original color value of each pixel point of at least one face part in the face image to be processed to obtain a processed color value;
and fusing the original color value of each pixel point of at least one face part with the processed color value to obtain a processed face image.
In a second aspect, the present disclosure provides a face image processing apparatus, comprising:
the instruction acquisition module is used for acquiring a special effect adding instruction of a user to the face image to be processed, wherein the special effect adding instruction comprises a special effect identifier of a special effect to be added;
the preliminary adjustment module is used for carrying out corresponding processing on the original color value of each pixel point of at least one face part in the face image to be processed based on the special effect to be added corresponding to the special effect identification to obtain a processed color value;
and the fusion module is used for fusing the original color value of each pixel point of at least one face part with the processed color value to obtain a processed face image.
In a third aspect, the present disclosure provides an electronic device comprising:
a processor and a memory;
a memory for storing operating instructions;
a processor for executing the method as shown in any embodiment of the first aspect of the present disclosure by calling an operation instruction.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a computer, implement the method shown in any embodiment of the first aspect of the present disclosure.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
the face image processing method, device, electronic device and computer readable medium of the embodiments of the present disclosure, when receiving a special effect adding instruction from a user to a face image to be processed, perform corresponding processing on an original color value of each pixel point of at least one face part in the face image to be processed based on a special effect to be added corresponding to a special effect identifier in the instruction to obtain a processed color value, and then fuse the original color value of each pixel point of at least one face part with the processed color value to obtain a processed face image, so that after two times of color value processing, a color value in the obtained face image fuses an original color value in the face image to be processed and a color value after primary processing, so that details such as skin texture and highlight are retained in the processed face image, and the processed face image is more natural, so that the user experience of the user is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings that are required to be used in the description of the embodiments of the present disclosure will be briefly described below.
Fig. 1 is a schematic flow chart of a face image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a face image to be processed provided in an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a face image after color makeup special effect processing provided in the embodiment of the present disclosure;
fig. 4a is a schematic color value diagram of a spangle material provided in an embodiment of the present disclosure;
fig. 4b is a schematic color value diagram of a setting material provided in the embodiment of the present disclosure;
fig. 4c is a schematic diagram of a face image after paillette special effect processing provided in the embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a face image processing apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the disclosure. The drawings and embodiments of the disclosure are for illustration only and are not intended to limit its scope.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and its variations as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that terms such as "first" and "second" in the present disclosure are used only to distinguish devices, modules, or units; they neither require that those devices, modules, or units be different, nor limit the order or interdependence of the functions they perform.
It is noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
To solve the technical problems in the prior art, the disclosed embodiments provide a face image processing method. When a user's special-effect-adding instruction for a face image to be processed is received, the method processes the original color value of each pixel point of at least one face part according to the special effect corresponding to the special effect identifier, obtaining a processed color value, and then fuses the original color value of each pixel point with the processed color value to obtain the processed face image. After these two passes of color-value processing, each final color value combines the original color value with the initially processed one, so the processed face image retains details such as skin texture and highlights, looks more natural, and improves the user experience.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
The execution subject of the present disclosure may be any electronic device, such as a server or a user terminal. In particular, the method may be implemented as an application installed on the user terminal that offers the user a function for adding special effects to images: before publishing a photographed face image, the user can beautify the face image with this method so that the processed image looks more natural.
Fig. 1 shows a schematic flowchart of a face image processing method provided in an embodiment of the present disclosure, and as shown in the diagram, the present disclosure takes a user terminal as an execution subject for description, and the method may include steps S110 to S130, where:
step S110, a special effect adding instruction of the user to the face image to be processed is obtained, and the special effect adding instruction comprises a special effect identification of the special effect to be added.
The face image to be processed contains the corresponding face parts, such as the eyes, nose, and mouth. The embodiments of the present disclosure do not limit how the face image to be processed is acquired: it may be an image stored locally on the user terminal, or an image captured in real time or obtained in some other way, for example shot with the terminal's image capture device. The user terminal may be any electronic product with an image-capturing function, such as a smart phone or tablet computer running a beauty-camera application. The user can input a camera-start instruction through an input device such as the touch screen or a physical key, switch the terminal's camera into photographing mode, and obtain the face image captured by the camera; alternatively, the user can start the camera through an image-capture trigger button of an application on the terminal.
The camera may be a built-in camera of the terminal device, such as a front camera and a rear camera, or an external camera of the terminal device, such as a rotary camera, and optionally a front camera.
The special-effect-adding instruction indicates that the user wants to add a special effect to the face image. It may be generated from a special-effect-adding operation performed by the user on the terminal interface, i.e., the action by which the user selects, on the user interface of the terminal device, a special effect to add to the face image. The specific form of this operation can be configured as required; it may, for example, be a trigger action at a specific position on the interface of the client application.
In practical applications, the operation may be triggered through a relevant trigger of the client, such as a specified trigger button or an input box on a client interface, or may be a voice instruction of the user, specifically, for example, a virtual button displayed on a display interface of the client, and an operation of clicking the button by the user is a special effect adding operation of the user.
Optionally, the scheme of the present disclosure may be implemented as a functional plug-in of an application that has both an image-capturing function and an image-adjusting function. The user can invoke the terminal's image capture device (e.g., a camera) by triggering the image-capturing function (e.g., a capture button) to obtain a face image, which serves as the face image to be processed. The user can then trigger a special-effect-adding instruction through the special-effect-adding function among the image-adjusting functions, and on receiving the instruction the terminal adds the special effect to the face image to be processed.
Different special effects can correspond to different special effect marks, the special effect marks can be graphs, characters and the like displayed on a user interface, and the specific expression form of the special effect marks is not limited in the disclosure. The to-be-added special effect corresponding to the special effect identification is the special effect which is selected by the user and is to be added into the to-be-processed face image, and when the triggering operation of the user on the special effect identification on the user interface is received, the special effect adding instruction corresponding to the special effect identification is generated based on the operation.
And step S120, based on the special effect to be added corresponding to the special effect identification, performing corresponding processing on the original color value of each pixel point of at least one face part in the face image to be processed to obtain a processed color value.
Each pixel location in a color image carries a complete color value, which can be understood here as a position in a color space. The most commonly used color spaces are three-dimensional, such as the RGB (Red, Green, Blue), XYZ, YUV, and Lab color spaces. A color value therefore usually has three components giving its position in the three-dimensional space; for an RGB color space, for example, the color value of a pixel point can be understood as its values in the R, G, and B color channels.
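As a minimal illustration (not part of the disclosure itself), a color value can be read off a pixel of an RGB image as its position in the three-channel color space; the NumPy array and the chosen pixel below are hypothetical:

```python
import numpy as np

# A color image as an H x W x 3 array: each pixel location holds one
# complete color value, i.e. a position in the RGB color space.
image = np.zeros((4, 4, 3), dtype=np.uint8)
image[1, 2] = (200, 150, 100)  # set one pixel's (R, G, B) color value

r, g, b = image[1, 2]  # the three channel components of that color value
print(r, g, b)  # 200 150 100
```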
The color values of the pixels in a face image reflect details such as skin texture and highlights: for example, the pixels of a facial highlight region have lighter color values, while the pixels of a facial shadow region have deeper (darker) color values than those of the highlight region. Adding the special effect to the face image to be processed therefore amounts to adjusting the original color value of each pixel point of at least one face part in that image.
In practical applications, processing the original color value of each pixel point of at least one face part according to the special effect to be added means processing the original color values according to the special effect parameters of that special effect. The original color values of the parts in the face image to be processed may be adjusted based on a pre-configured adjustment strategy. To simplify the user's operations, the color values of all parts can be adjusted at once based on this strategy, i.e., with a single color-value adjustment operation by the user; for example, the original color value of each pixel point may be adjusted to a set color value.
Further, since different parts of the face may call for different effects, if the color values of different parts need to be adjusted differently, a separate adjustment strategy can be pre-configured for each part, and the color values of each part are then adjusted according to its own strategy so that the result better meets the user's needs. As an example, an adjustment strategy may be: treat the color value in the special effect parameters as a target color value and replace the original color value with it; or perform further processing, such as superposition, of that color value with the original color value.
As an example, suppose the adjustment strategy for the cheek is to adjust the original color values based on color value A in the special effect parameters, and the strategy for the nose is to adjust them based on color value B. Then, on receiving the special-effect-adding instruction, the original color value of each pixel point of the nose is adjusted according to color value B per the nose's strategy, and the original color value of each pixel point of the cheek is adjusted according to color value A per the cheek's strategy.
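The per-part adjustment strategies can be sketched as follows. The masks, the concrete color values standing in for A and B, and the blending strength are all hypothetical illustrations; the disclosure does not fix a particular adjustment formula:

```python
import numpy as np

def apply_part_strategies(image, masks, strategies):
    """Adjust each face part's original color values using that
    part's own strategy from the special effect parameters.

    image: H x W x 3 float array of original color values in [0, 1]
    masks: dict mapping part name -> H x W boolean region mask
    strategies: dict mapping part name -> (target_color, strength)
    """
    out = image.copy()
    for part, (color, strength) in strategies.items():
        m = masks[part]
        # Move the original color values toward the part's target color.
        out[m] = (1 - strength) * image[m] + strength * np.asarray(color)
    return out

# Hypothetical setup: color value A for the cheek, color value B for the nose.
img = np.full((4, 4, 3), 0.5)
masks = {"cheek": np.zeros((4, 4), bool), "nose": np.zeros((4, 4), bool)}
masks["cheek"][0, 0] = True
masks["nose"][3, 3] = True
styled = apply_part_strategies(img, masks, {
    "cheek": ((1.0, 0.6, 0.6), 0.5),  # color value A
    "nose":  ((0.9, 0.8, 0.7), 0.5),  # color value B
})
```

Pixels outside both masks keep their original color values, so each part is adjusted only by its own strategy.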
In practical application, one special effect adding instruction can adjust the color value of one part of the human face at one time, and can also adjust the color values of at least two parts of the human face at one time, and the special effect adding instruction can be configured based on actual requirements. For example, a user may be provided with an option to select a location to be adjusted, and the user may select which location to adjust as desired.
And step S130, fusing the original color value of each pixel point of at least one face part with the processed color value to obtain a processed face image.
In the scheme of the disclosure, the original color values of the pixels and the corresponding processed color values are fused to obtain the color values of the fused pixels, and in the processed face image, the color values of the pixels are the color values of the fused pixels. The processed face image refers to the face image to which the special effect is added.
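The fusion step can be sketched as a simple per-pixel weighted average. The disclosure does not fix the fusion formula; the weight here is a hypothetical parameter controlling how strongly the special effect shows through:

```python
import numpy as np

def fuse(original, processed, weight):
    """Fuse each pixel's original color value with its processed color
    value. Keeping a share of the original value preserves details such
    as skin texture and highlights in the final image."""
    return (1.0 - weight) * original + weight * processed

original = np.array([0.8, 0.7, 0.6])   # original color value of one pixel
processed = np.array([1.0, 0.2, 0.2])  # color value after the special effect
fused = fuse(original, processed, weight=0.5)  # halfway between the two
```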
According to the scheme of this embodiment, when a user's special-effect-adding instruction for a face image to be processed is received, the original color value of each pixel point of at least one face part is first processed according to the special effect corresponding to the special effect identifier, yielding a processed color value; the original color value of each pixel point of the at least one face part is then fused with the processed color value to obtain the processed face image. After these two passes of color-value processing, each final color value combines the original color value of the face image to be processed with the initially processed one, so the processed face image retains details such as skin texture and highlights, looks more natural, and improves the user experience.
In an embodiment of the present disclosure, the method further includes:
acquiring a face image to be processed; and determining a region corresponding to at least one face part in the face image to be processed based on the face image to be processed and the pre-configured region template.
In order to reduce data processing amount, before processing the original color value of each pixel point in at least one face part in the face image to be processed based on the special effect adding instruction, a region corresponding to at least one face part in the face image to be processed is determined, and then corresponding processing is only performed on the original color value of each pixel point in the region corresponding to at least one face part.
Wherein, one region template may be a template corresponding to at least one face part in the face. For example, the region template may be a template corresponding to all parts in the face, that is, each part in the face is included in the region template, and the region corresponding to each part in the face image may be determined at one time based on the region template. Each part in the face determined based on the region template is fixed, and when the face image to be processed is subjected to special effect processing based on the special effect adding instruction, the corresponding special effect processing is performed on each part in the face determined by the region template, which is equivalent to performing 'one-key processing' on all parts corresponding to the region template.
The region template may also be a template corresponding to each part of the face, that is, one region template includes only one part of the face, for example, the region template a corresponds to a nose, and the region template B corresponds to an eye. Before the special effect processing is performed on the face image to be processed based on the special effect adding instruction, a part corresponding to the region template in the face image to be processed is determined based on the region template corresponding to the part selected by the user, and then the determined part is subjected to corresponding special effect processing based on the special effect adding instruction.
How many parts a region template includes may be configured based on actual needs and is not limited by this disclosure. Optionally, a template may be configured in advance, and after the user selects a specific special effect to be added, the template corresponding to that special effect is used to determine the region to be adjusted.
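A region template can be sketched as a per-part boolean mask. The hand-built masks below are hypothetical stand-ins; in practice a template would be aligned to the detected face, and restricting processing to the masked pixels is what reduces the data processing amount:

```python
import numpy as np

# Minimal sketch of pre-configured region templates: each template
# names one face part and carries a boolean mask over the image grid.
templates = {
    "nose": np.zeros((6, 6), dtype=bool),  # region template A
    "eye":  np.zeros((6, 6), dtype=bool),  # region template B
}
templates["nose"][2:4, 2:4] = True  # a 2 x 2 block of nose pixels
templates["eye"][1, 1] = True       # a single eye pixel

def select_region(image, part):
    """Return only the pixels of the part selected by the user, so that
    later color-value processing touches just that region."""
    mask = templates[part]
    return image[mask]

img = np.arange(6 * 6 * 3, dtype=float).reshape(6, 6, 3)
nose_pixels = select_region(img, "nose")  # only the 4 nose-region pixels
```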
In the solution of the present disclosure, a determination manner for determining a region corresponding to at least one face part in a face image to be processed is not limited, for example, the region corresponding to at least one face part in the face image to be processed may be determined based on a pre-configured region template.
In an embodiment of the present disclosure, the method further includes:
responding to the special effect adding instruction, and displaying a special effect parameter setting interface;
and receiving the setting operation of a settable special effect parameter of a special effect to be added by a user through a special effect parameter setting interface to obtain the set special effect parameter, wherein the settable special effect parameter comprises at least one of a special effect processing region adjusting parameter, a feathering parameter, a color value and transparency.
Based on the special effect adding instruction, the original color value of each pixel point of at least one face part in the face image to be processed is subjected to special effect adding processing corresponding to the special effect identification, and the method comprises the following steps:
and based on the special effect adding instruction, adding a special effect corresponding to the special effect identifier to the original color value of each pixel point in at least one face part in the face image to be processed according to the set special effect parameters.
The special effect parameter setting interface can be understood as an editing tool, i.e., a tool used when performing special effect processing on the face image to be processed. The editing tool can be a virtual identifier whose presentation is not limited by this disclosure, such as a paintbrush or brush. The special effect parameters refer to the adjustment strength of the special effect to be added; the face image to be processed is processed based on these parameters, producing processed color values that correspond to them.
The user may configure the parameters of the special effect to be added, or directly use the system's default parameters; if the user configures nothing, the default parameters are used for subsequent processing. The special effect parameters may therefore be either user-set or system defaults, and different special effects may have different default parameters or share the same ones.
The settable special effect parameters may include, but are not limited to, a special-effect-processing-region adjustment parameter, a feathering parameter, a color value, and a transparency. The special effect processing region is the region of the face image to be processed that needs to be adjusted when the special effect is applied, and the region adjustment parameter controls properties such as the size and shape of the region processed at one time. Feathering blurs the edges of the selected image region; the feathering parameter determines the extent of this softening, for example as a feathering value, where the larger the value, the wider the processed transition region.
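Feathering can be sketched as blurring the binary selection mask so the effect fades out instead of ending abruptly. The box blur below is an illustrative choice; the disclosure does not specify a particular blur, and the radius plays the role of the feathering value:

```python
import numpy as np

def feather(mask, radius):
    """Soften the edges of a binary selection mask with a separable box
    blur of side 2 * radius + 1. A larger radius (feathering value)
    widens the soft transition band around the selected region."""
    soft = mask.astype(float)
    k = 2 * radius + 1
    padded = np.pad(soft, radius, mode="edge")
    # Average along rows, then along columns (separable box blur).
    rows = np.stack([padded[:, i:i + soft.shape[1]] for i in range(k)]).mean(0)
    out = np.stack([rows[i:i + soft.shape[0], :] for i in range(k)]).mean(0)
    return out

mask = np.zeros((9, 9))
mask[3:6, 3:6] = 1.0          # hard-edged selected region
soft = feather(mask, radius=1)  # values between 0 and 1 near the edges
```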
The color value in the special effect parameters is the color value according to which the original color value is processed. For example, if the color corresponding to the original color value is white and the color of the special effect to be added is red, then when performing the special effect processing on the face image, the color value in the special effect parameters may be superimposed on the original color value, or it may replace the original color value. As an example, the processed color value may be the superposition of the original color value and the color value in the special effect parameters. The manner of superimposing two colors is not limited here; any such manner falls within the scope of the disclosure.
The transparency of the special effect to be added refers to the degree of color change applied when the original color value of each pixel point in the face image to be processed is adjusted according to the color value of the special effect to be added: the greater the transparency, the darker the resulting color of the special effect to be added; the smaller the transparency, the lighter the resulting color. For example, suppose the color value of the special effect to be added is red; when the transparency is A, the corresponding red is H1, and when the transparency is B, where B is greater than A, the corresponding red is H2; comparing H1 with H2, the color of H1 is lighter than that of H2. Furthermore, the degree of change between the original color value and the processed color value is reflected by the transparency: the greater the transparency, the greater the degree of change.
The special effect parameters can be configured in advance, and when the special effect processing is performed on the face image to be processed, the processing can be carried out based on the pre-configured special effect parameters. The special effect parameters may also be configured in real time based on the user's requirements; for example, after the special effect to be added is selected for the face image to be processed, the special effect parameters are set on a special effect parameter setting interface. The set special effect parameters represent the special effect adjustment strength corresponding to the special effect to be added, that is, to what degree the user wants to adjust the original color value of each pixel point in the face image to be processed.
In practical application, the step of setting the special effect parameters of the special effect to be added may be performed before or after the user selects the special effect to be added. If the parameters are set before the special effect to be added is selected, the set special effect parameters apply to whatever special effect to be added is subsequently selected. If the parameters are set after the special effect to be added is selected, the set special effect parameters correspond only to that selected special effect to be added; when another special effect to be added is selected, its corresponding special effect parameters are not the ones that were set.
In an alternative of the present disclosure, the settable special effect parameters are set in response to a setting operation; specifically, they may be received through human-computer interaction elements such as an input box, a pull-down menu, a selection box, a button, or a sliding control, which are not specifically limited herein.
In the scheme of the present disclosure, fusing the original color value of each pixel point of the at least one face part with the processed color value includes:
and when the transparency corresponding to the special effect to be added is not larger than a set threshold, fusing the original color value of each pixel point of at least one face part with the processed color value.
As described above, the greater the transparency corresponding to the special effect to be added, the darker the color corresponding to its color value and the greater the degree of change between the original color value and the processed color value; the smaller the transparency, the lighter the color and the smaller the degree of change. When the transparency corresponding to the special effect to be added is not greater than the set threshold, the degree of change between the original color value and the processed color value is relatively small, so the original color value of each pixel point of the at least one face part can be fused with the processed color value to obtain the processed face image; fusing the original color value and the processed color value into the color value of the processed face image makes the color of the processed face image more natural.
The transparency corresponding to the special effect to be added may be the transparency set by the user or the transparency default by the system.
When the transparency corresponding to the special effect to be added is larger than the set threshold, the method further comprises the following steps:
and taking the processed color value as the color value of the processed face image.
Based on the foregoing description, if the transparency corresponding to the special effect to be added is greater than the set threshold, the degree of change between the original color value and the processed color value is relatively large. In this case, instead of fusing the original color value of each pixel point of the at least one face part with the processed color value, the original color value may be disregarded and the processed color value directly adopted as the color value of the processed face image; this not only embodies the processing effect in the processed face image, but also makes its color look more natural.
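The threshold rule above can be sketched as follows; the threshold value, the fusion rule (a simple average), and the function name are illustrative assumptions, since the disclosure fixes none of them:

```python
def resolve_output_color(original_rgb, processed_rgb, transparency, threshold=0.7):
    """Select the output color per the transparency-threshold rule.

    Hypothetical sketch: when the transparency does not exceed the set
    threshold, the original and processed colors are fused (a simple
    average stands in for the fusion); otherwise the processed color
    is adopted directly as the color of the processed face image.
    """
    if transparency <= threshold:
        # Fuse the original and processed color values
        return tuple(
            round((o + p) / 2) for o, p in zip(original_rgb, processed_rgb)
        )
    # Large transparency: adopt the processed color value outright
    return tuple(processed_rgb)
```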
In the scheme of the present disclosure, fusing the original color value of each pixel point of the at least one face part with the processed color value includes:
determining a first weight corresponding to an original color value of each pixel point of at least one face part and a second weight corresponding to a processed color value;
and fusing the original color value of each pixel point of the at least one face part with the processed color value based on the first weight and the second weight corresponding to each pixel point in the at least one face part.
For a given part, the original color value of each pixel point and the processed color value of each pixel point contribute to that part in the processed face image to different degrees. The first weight reflects the contribution of the original color value of each pixel point of the part to that part in the processed face image, and the second weight reflects the contribution of the processed color value of each pixel point of the part. When the original color value of each pixel point of the at least one face part is fused with the processed color value based on the first weight and the second weight, the color values of the processed face image are more natural.
In practical application, the first weight and the second weight may be preconfigured based on different contribution degrees of the original color value of each pixel in the part and the processed color value of each pixel to the part in the processed face image, and the first weight and the second weight may also be determined in real time based on the original color value of each pixel in the part and the processed color value of each pixel.
It can be understood that the first weights corresponding to the pixel points in each part may all be the same and the second weights may all be the same, or the first weights may differ and the second weights may differ. If the first weights corresponding to the pixel points in each part are the same and the second weights are also the same, the contributions of the original color value and the processed color value of each pixel point to the corresponding part in the processed face image are the same for all pixel points; if they differ, the contributions differ. The first weight or the second weight may be assigned per pixel point, that is, when a region contains multiple pixel points the weight is a weight matrix with one value per pixel point, or a single weight value may apply to all pixel points.
In the solution of the present disclosure, the first weight and the second weight corresponding to each pixel in each part may be determined based on the above weight determining manner, and then the original color value and the processed color value of each pixel in each part are fused based on the first weight and the second weight corresponding to each part, so as to obtain the processed face image.
As an example, suppose only the color values of the mouth part and the nose part in the face image are adjusted. After the processed color values corresponding to the mouth part and the nose part are obtained, determine the first weight A corresponding to the original color value of each pixel point of the mouth part and the second weight B corresponding to its processed color value, as well as the first weight C corresponding to the original color value of each pixel point of the nose part and the second weight D corresponding to its processed color value. Based on the first weight A and the second weight B, the original color value and the processed color value of each pixel point of the mouth part are fused to obtain the final color value of the adjusted mouth part; similarly, based on the first weight C and the second weight D, the original color value and the processed color value of each pixel point of the nose part are fused to obtain the final color value of the adjusted nose part. Finally, the processed face image is obtained based on the final color values of the adjusted mouth part and the adjusted nose part.
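The mouth/nose example above can be sketched as follows; the weight values and pixel colors are made up for illustration:

```python
def fuse_part(original_colors, processed_colors, first_weight, second_weight):
    """Fuse a part's original and processed pixel colors.

    first_weight weights the original color values, second_weight the
    processed ones; the two are expected to sum to 1.
    Each colors argument is a list of (r, g, b) tuples, one per pixel.
    """
    return [
        tuple(
            round(o * first_weight + p * second_weight)
            for o, p in zip(orig, proc)
        )
        for orig, proc in zip(original_colors, processed_colors)
    ]

# Mouth part with first weight A = 0.25, second weight B = 0.75;
# nose part with first weight C = 0.6, second weight D = 0.4.
mouth_final = fuse_part([(200, 80, 80)], [(255, 0, 0)], 0.25, 0.75)
nose_final = fuse_part([(210, 160, 140)], [(230, 120, 110)], 0.6, 0.4)
```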
In the scheme of the disclosure, the special effect adding instruction may be an adding instruction for all parts in the face image to be processed, or may also be an adjusting instruction for any part in the face image to be processed. If the special effect addition instruction is an adjustment instruction corresponding to all the parts, which is equivalent to "one-key adjustment", the first weight and the second weight corresponding to each part may be the same. If the special effect addition instruction is an addition instruction for any part, the first weight and the second weight corresponding to each part may be different or the same.
It is to be understood that the sum of the first weight and the second weight is 1. The larger the weight is, the larger the contribution degree is, for example, if the first weight is larger than the second weight, the influence of the original color value of each pixel point on the special effect in the processed face image is more considered.
For example, based on the above two cases, when the transparency is not greater than the set threshold, the determined first weight and the second weight may both be numbers greater than 0 and less than 1, and when the transparency is greater than the set threshold, the determined first weight may be 0 and the second weight may be 1.
In an alternative of the present disclosure, determining the first weight corresponding to the original color value of each pixel point of the at least one face part in the face image to be processed and the second weight corresponding to the processed color value includes:
determining the first weight and the second weight based on at least one of the following items of information:
adjustment intensity indication information contained in the special effect adding instruction;
the transparency of the special effect to be added;
the transparency of a region template used for determining the at least one face part in the face image to be processed.
That is, the first weight corresponding to the original color value of each pixel point of the at least one face part in the face image to be processed and the second weight corresponding to the processed color value are determined based on at least one of the above items of information.
Firstly, if the information includes adjustment intensity indication information in the special effect adding instruction, determining a first weight corresponding to an original color value of each pixel point of at least one face part in the face image to be processed based on the adjustment intensity indication information; based on the first weight, a second weight is determined.
The adjustment intensity indication information indicates whether the user wants to deepen or lighten the color value of the at least one face part in the face image to be processed, that is, whether color value deepening adjustment or color value lightening adjustment is performed on the basis of the original color value. Determining the first weight and the second weight in consideration of the user's adjustment intention makes the color values of the processed face image better reflect that intention, so the processed color values better match the user's preference. When the first weight and the second weight are determined based on the adjustment intensity indication information, the weight corresponding to the original color value of each pixel point is the first weight and the weight corresponding to the processed color value is the second weight.
If the special effect adding instruction is an instruction corresponding to all parts, the adjustment intensity indicating information of the instruction indicates the adjustment intensity corresponding to each part in the face image to be processed, namely the adjustment intensity of the original color value of each pixel point in each part is the same. If the special effect adding instruction is an instruction for any part, the adjustment intensity indicating information of the instruction indicates the adjustment intensity corresponding to the part in the face image to be processed, and the adjustment intensities corresponding to different parts can be the same or different.
In an alternative of the present disclosure, the weight determined corresponding to the adjustment strength indication information may be used as the second weight, and the larger the value corresponding to the adjustment strength indication information is, the larger the adjustment strength of the processed color value is, the larger the second weight is. As an example, if the adjustment strength indication information is 0.2, the first weight corresponding to the original color value of each pixel point is 0.8, and the second weight corresponding to the processed color value is 0.2.
It can be understood that, if the value range of the adjustment intensity indication information is -1 to 1, the adjustment intensities corresponding to indication values of 0.3 and -0.3 are the same for the processed color value; for values of the same sign, the larger the absolute value, the greater the adjustment intensity, for example, between the two positive values 0.6 and 0.3, the adjustment intensity corresponding to 0.6 is greater than that corresponding to 0.3.
In the scheme of the present disclosure, if the value of the adjustment intensity indication information is between -1 and 1, its absolute value can be directly used as the first weight or the second weight; if the first weight is α, the second weight is 1-α.
In an alternative of the present disclosure, the adjustment intensity indication information may also be used to determine whether the special effect adding instruction is a color value deepening adjustment instruction or a color value lightening adjustment instruction. For example, adjustment intensity indication information in a first setting range corresponds to a color value deepening adjustment instruction, and adjustment intensity indication information in a second setting range corresponds to a color value lightening adjustment instruction. As an example, suppose the value range of the adjustment intensity indication information is -1 to 1, the first setting range is 0 to 1, the second setting range is -1 to 0, and 0 indicates that the original color value of each pixel point of the corresponding part is not adjusted. Then, when the adjustment intensity indication information is within -1 to 0, the instruction is a color value lightening adjustment instruction, and when it is within 0 to 1, the instruction is a color value deepening adjustment instruction.
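The mapping from the adjustment intensity indication value to the weights and the adjustment direction can be sketched as follows; treating the absolute value as the second weight and the sign as the direction is one reading of the passages above, not a fixed rule of the disclosure:

```python
def interpret_strength(indication):
    """Interpret an adjustment intensity indication value in [-1, 1].

    Assumed convention: |indication| is the second weight (processed
    color value), 1 - |indication| is the first weight (original color
    value); a positive sign means color value deepening, a negative
    sign means lightening, and 0 leaves the original color unadjusted.
    """
    if not -1.0 <= indication <= 1.0:
        raise ValueError("indication must lie within [-1, 1]")
    second_weight = abs(indication)
    first_weight = 1.0 - second_weight
    if indication > 0:
        direction = "deepen"
    elif indication < 0:
        direction = "lighten"
    else:
        direction = "unchanged"
    return first_weight, second_weight, direction
```

Under this convention, indications of 0.3 and -0.3 yield the same weights (the same adjustment intensity) but opposite directions.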
Secondly, if the information includes the transparency of the region template used for determining the at least one face part in the face image to be processed, that is, if the at least one face part in the face image to be processed is determined based on a pre-configured region template, then the first weight corresponding to the original color value of each pixel point of each part in the at least one face part and the second weight corresponding to the processed color value can be determined based on the transparency corresponding to the region template.
The transparency of the region template can be pre-configured and fixed, so that the first weight corresponding to the original color value of each pixel of any face image to be processed is fixed, and the second weight corresponding to the processed color value is also fixed.
When the region corresponding to the at least one face part in the face image to be processed is determined based on the region template, the original color value of each pixel point of the at least one face part is adjusted within the corresponding region to obtain the processed color value. The transparency of the region template is the transparency of the adjusted region corresponding to the at least one face part; when the processed color value is fused with the original color value, the fusion strength between the two color values can be determined based on the transparency of the region template: the greater the transparency, the greater the second weight corresponding to the processed color value, and correspondingly, the smaller the first weight corresponding to the original color value.
Thirdly, if the information includes the transparency of the special effect to be added, a first weight corresponding to the original color value of each pixel point of at least one face part in the face image to be processed and a second weight corresponding to the processed color value can be determined based on the transparency.
Since the transparency can determine whether to fuse the original color value of each pixel point of the at least one face part with the processed color value, it follows from the foregoing that the greater the transparency corresponding to the special effect to be added, the darker the color corresponding to its color value, the greater the contribution of the processed color value, and the greater the corresponding second weight; the smaller the transparency, the lighter the color, the smaller the contribution of the processed color value, and the smaller the second weight. Fusing the original color value of each pixel point of the part with the processed color value based on the first weight and the second weight determined from the transparency of the special effect to be added makes the special effect on that part of the processed face image more natural.
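A minimal sketch of deriving the two weights from a transparency value, assuming a direct linear relation (the disclosure only states the monotonic relationship):

```python
def weights_from_transparency(transparency):
    """Derive fusion weights from a transparency in [0, 1].

    Assumed linear mapping: greater transparency gives the processed
    color value a larger second weight, and the original color value
    a correspondingly smaller first weight.
    """
    if not 0.0 <= transparency <= 1.0:
        raise ValueError("transparency must lie within [0, 1]")
    second_weight = transparency
    first_weight = 1.0 - transparency
    return first_weight, second_weight
```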
Fourthly, if the information comprises adjustment intensity indicating information and transparency of a region template used for determining at least one face part in the face image to be processed, determining a first weight corresponding to an original color value of each pixel point of the at least one face part in the face image to be processed and a second weight corresponding to a processed color value based on the transparency of the region template and the adjustment intensity indicating information.
In the scheme of the present disclosure, the first weight and the second weight may also be determined based on both the adjustment intensity indication information and the transparency of the region template; that is, the original color value of each pixel point is taken into account while the user's intention is considered, so that the determined first weight and second weight are more accurate.
When the first weight and the second weight are determined based on the adjustment intensity indication information and the transparency, a weight A may be determined based on the adjustment intensity indication information, a weight B may be determined based on the transparency, and the weight A and the weight B may be fused, for example averaged, to obtain the first weight. The manner of fusing the weight A and the weight B is not limited in this alternative of the present disclosure, and is within the scope of the present disclosure.
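One way the averaging described above could look; the mappings from the indication value to weight A and from the template transparency to weight B are assumptions layered on top of the averaging the passage suggests:

```python
def combined_first_weight(indication, template_transparency):
    """Combine a strength-derived weight A and a transparency-derived
    weight B into the first weight by averaging; the second weight is
    then 1 minus the first.

    Assumed mappings: weight A = 1 - |indication| and weight B =
    1 - template_transparency, so that both a stronger adjustment and
    a more transparent template shrink the original color's weight.
    """
    weight_a = 1.0 - abs(indication)
    weight_b = 1.0 - template_transparency
    first_weight = (weight_a + weight_b) / 2.0
    return first_weight, 1.0 - first_weight
```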
In practical application, the transparencies of the region templates corresponding to different face parts may be completely the same, completely different, or partially the same. Regardless of whether the transparencies of the region templates are the same, when determining the first weight and the second weight for each face part, they may be determined according to the transparency of the region template corresponding to that part.
It should be noted that the four descriptions for determining the first weight and the second weight are merely exemplary descriptions, and not exhaustive all ways for determining the first weight and the second weight, and other ways may be inferred based on the several examples, and are not described herein again.
In an alternative of the present disclosure, the special effect adding instruction includes at least one of a makeup special effect adding instruction, a skin special effect adding instruction, and a paillette special effect adding instruction;
if the special effect adding instruction comprises a paillette special effect adding instruction, the paillette special effect adding instruction comprises a paillette material selecting instruction, and the special effect to be added is a special effect corresponding to the paillette material targeted by the paillette material selecting instruction;
determining a first weight corresponding to an original color value of each pixel point of at least one face part in a face image to be processed and a second weight corresponding to a processed color value, wherein the determining comprises the following steps:
and determining a first weight corresponding to the original color value of each pixel point of at least one face part in the face image to be processed and a second weight corresponding to the processed color value based on the transparency of the paillette material.
The transparency of the paillette material can reflect the degree of difference between the processed color value and the original color value: the greater the transparency of the paillette material, the greater the difference. In the fusion process, the influence of the transparency of the paillette material on the processed color value is taken into account, and the first weight and the second weight are determined based on that transparency; the greater the transparency, the greater the contribution of the processed color value to the processed face image and the greater the corresponding second weight. Then, when the original color value of each pixel point of the at least one face part is fused with the processed color value based on the first weight and the second weight, the processed face image is more natural.
In the scheme of the present disclosure, when the special effect to be added selected by the user is a paillette special effect, the paillette special effect adding instruction includes a paillette material selecting instruction. That is, when the user selects the paillette special effect, a user interface is displayed in response to the paillette special effect adding instruction, and at least two paillette materials can be displayed on the user interface; based on the user's paillette material selecting operation (the operation corresponding to the paillette material selecting instruction) on the at least two paillette materials, one paillette material is selected, and the special effect corresponding to that paillette material is used as the special effect to be added. In practical application, if the user does not participate in the selection of the paillette material, the paillette material configured by default by the system can be used.
As an example, the special effect corresponding to the paillette material C is a red paillette effect, and the special effect corresponding to the paillette material D is a blue paillette effect. The color value of the special effect to be added may be the color value corresponding to the paillette material.
In an alternative of the present disclosure, the special effect adding instruction includes at least one of a makeup special effect adding instruction, a skin special effect adding instruction, and a paillette special effect adding instruction; correspondingly, the special effect to be added includes at least one of a makeup special effect, a skin special effect, and a paillette special effect.
The makeup special effect means that after the corresponding special effect processing is performed on the face image to be processed, the processed face image has a makeup effect on the basis of the face image to be processed, such as an eye shadow effect or a blush effect. The skin special effect means that the color values of the skin in the face image to be processed are correspondingly adjusted, for example, the skin is whitened or made ruddier on the basis of the face image to be processed. The paillette special effect means that after the corresponding special effect processing is performed on the face image to be processed, the processed face image has a paillette effect on the basis of the face image to be processed, for example, a glittering effect on the face.
When the special effect adding instruction comprises a makeup special effect adding instruction or a skin special effect adding instruction, the original color value of each pixel point of at least one face part in the face image to be processed is correspondingly processed based on the special effect to be added corresponding to the special effect identification, and the processed color value is obtained, and the method comprises the following steps:
and on the basis of the color value corresponding to the special effect to be added, overlapping the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added respectively to obtain the processed color value.
When the special effect adding instruction comprises a paillette special effect adding instruction, based on the special effect to be added corresponding to the special effect identification, the original color value of each pixel point of at least one face part in the face image to be processed is correspondingly processed to obtain a processed color value, and the method comprises the following steps:
and replacing the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added (namely, the color value corresponding to the special effect of the paillette) to obtain the processed color value.
Different special effects can correspond to different color values of the special effect to be added, or to the same color value. The color value corresponding to the special effect to be added can be set based on the user's requirement; if the user does not set it, the default color value of the special effect to be added is adopted.
In an alternative of the present disclosure, if the color values of the face image to be processed are in a first color space, replacing the original color value of each pixel point of the at least one face part in the face image to be processed with the color value corresponding to the paillette special effect targeted by the special effect adding instruction to obtain the processed color value includes:
converting the original color value of each pixel point in at least one face part in the face image to be processed into a second color space to obtain a first color value;
converting the color value corresponding to the special effect to be added into a second color space to obtain a second color value;
replacing color values of other channels except the brightness channel in the first color value with color values of corresponding channels in the second color value to obtain a replaced color value;
and converting the replaced color value into a first color space to obtain a processed color value.
That is, when the original color value of each pixel point of the at least one face part in the face image to be processed is replaced with the color value corresponding to the paillette special effect, the original color value and the color value corresponding to the paillette special effect are first converted into the second color space; then only the color values of the channels other than the brightness channel in the first color value are replaced with the color values of the corresponding channels in the second color value; after replacement, the resulting color value (comprising the replaced channel values and the unreplaced brightness channel value) is converted back into the first color space. The brightness channel reflects the highlight portions of the face image, so by replacing only the channels other than the brightness channel, the highlight portions of the face image to be processed can be retained.
In an alternative of the present disclosure, the first color space may be an RGB space, and the second color space may be an HSV (Hue, Saturation, Value) space, a color space created according to the intuitive characteristics of colors, where H represents hue, S represents saturation, and V represents lightness. The designated channels may be the HS channels. For the paillette special effect, in the HSV space the color value of the V channel embodies the highlight portions of the image, so by replacing only the color values corresponding to the HS channels, the highlight portions of the face image to be processed can be retained.
In an alternative of the present disclosure, the conversion between the HSV space and the RGB space may adopt a color space conversion manner in the prior art, and is not described herein again.
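With RGB as the first color space and HSV as the second, the channel replacement can be sketched with Python's standard `colorsys` module; the component ranges and the rounding are illustrative choices:

```python
import colorsys

def sequin_replace(original_rgb, sequin_rgb):
    """Replace the HS channels of a pixel with those of the sequin
    (paillette) color while keeping the original V (brightness)
    channel, so the highlight portions of the face image are retained.

    original_rgb, sequin_rgb: (r, g, b) tuples with components in 0..255.
    """
    # Convert the original color value into HSV (the second color space)
    h0, s0, v0 = colorsys.rgb_to_hsv(*(c / 255.0 for c in original_rgb))
    # Convert the sequin color value into HSV as well
    h1, s1, v1 = colorsys.rgb_to_hsv(*(c / 255.0 for c in sequin_rgb))
    # Take H and S from the sequin color, keep the original V,
    # then convert back into RGB (the first color space)
    r, g, b = colorsys.hsv_to_rgb(h1, s1, v0)
    return tuple(round(c * 255) for c in (r, g, b))

# A bright skin-toned pixel recolored with a pure red sequin color
# keeps its brightness (its maximum channel) while taking on the red hue.
out = sequin_replace((240, 200, 190), (255, 0, 0))
```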
In an alternative scheme of the present disclosure, if the special effect adding instruction includes a paillette special effect adding instruction, the paillette special effect adding instruction includes a paillette material selecting instruction, and the special effect to be added is the special effect corresponding to the paillette material targeted by the paillette material selecting instruction.
In this case, performing corresponding processing on the original color value of each pixel point of at least one face part in the face image to be processed based on the special effect to be added corresponding to the special effect identifier to obtain the processed color value, that is, performing the addition processing of the special effect corresponding to the special effect identifier on the original color value based on the special effect adding instruction, includes the following steps:
replacing the original color value of each pixel point of at least one face part in the face image to be processed with a color value corresponding to a special effect to be added (here, the special effect to be added is a paillette special effect), and obtaining a third color value;
and replacing the original color value of each pixel point of at least one face part in the face image to be processed with a set color value to obtain a fourth color value, wherein the processed color value is determined based on the third color value and the fourth color value.
When the special effect to be added is the paillette special effect, the corresponding paillette effect can be added to the face image to be processed based on the paillette material, and a paillette effect corresponding to the set color value can be added as well. Since the effect corresponding to the set color value differs from the effect corresponding to the paillette material (for example, the paillette colors differ), paillette effects in two different colors can be obtained after the paillette special effect processing is performed on the face image to be processed.
The set color value can be pre-configured, that is, the set color value is fixed and unchanged for any face image to be processed.
In an alternative, the set color value may be determined based on a set material; the color value corresponding to the set material is fixed, and in practical applications the set color value may be chosen according to the preference of most users.
The processed color value is determined based on the third color value and the fourth color value; for example, the third color value and the fourth color value may be fused to obtain the processed color value. The specific fusion manner is not limited in the scheme of the present disclosure.
In an alternative of the present disclosure, fusing the original color value of each pixel point of at least one face part with the processed color value to obtain the processed face image includes:
fusing the original color value and the third color value of each pixel point of at least one face part to obtain a color value after primary fusion;
fusing the primarily fused color value and the fourth color value to obtain a secondarily fused color value;
and fusing the original color value of each pixel point of at least one face part with the secondarily fused color value to obtain a processed face image.
When the original color value of each pixel point of at least one face part is fused with the processed color value, both the third color value and the fourth color value are fused in; that is, the original color value of each pixel point is fused with the third color value and the fourth color value, and the fusion order is not limited. The color values in the processed face image are thus richer and more natural.
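The fusion chain described above can be sketched as simple weighted interpolation; the concrete weights 0.5, 0.3 and 0.6 below are arbitrary placeholders for the weights the disclosure derives from material transparency and the adjustment intensity indication information:

```python
def lerp(color_a, color_b, weight_b):
    """Weighted fusion of two color tuples; weight_b is the weight of color_b."""
    return tuple((1.0 - weight_b) * a + weight_b * b
                 for a, b in zip(color_a, color_b))

original   = (0.8, 0.6, 0.5)   # original color value of a pixel point
third_val  = (1.0, 0.2, 0.2)   # third color value (from the paillette material)
fourth_val = (1.0, 1.0, 1.0)   # fourth color value (the set color value)

preliminary = lerp(original, third_val, 0.5)      # preliminarily fused color value
secondary   = lerp(preliminary, fourth_val, 0.3)  # secondarily fused color value
final       = lerp(original, secondary, 0.6)      # color value of the processed image
```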
In an alternative of the present disclosure, fusing the preliminarily fused color value with the fourth color value to obtain the secondarily fused color value includes:
determining a third weight of the fourth color value and a fourth weight corresponding to the color value after the preliminary fusion based on the transparency of the set material;
and fusing the primarily fused color value and the fourth color value based on the third weight and the fourth weight to obtain a secondarily fused color value.
Similar to the transparency of the paillette material, the transparency of the set material reflects the degree of difference between the preliminarily fused color value and the fourth color value; the larger the transparency of the set material, the larger the difference. Considering that the preliminarily fused color value and the fourth color value contribute differently to the secondarily fused color value, the larger the transparency, the larger the contribution of the fourth color value and the larger the corresponding third weight. Fusing the preliminarily fused color value with the fourth color value based on the third weight and the fourth weight therefore makes the secondarily fused color value more natural.
The third weight and the fourth weight may be preconfigured, or may be determined in real time based on the fourth color value and the preliminarily fused color value.
In an alternative of the present disclosure, when the original color value of each pixel point of at least one face part is fused with the secondarily fused color value to obtain the processed face image, the original color value of each pixel point and the secondarily fused color value contribute differently to the processed face image. A fifth weight corresponding to the original color value of each pixel point and a sixth weight corresponding to the secondarily fused color value may therefore be determined respectively, and the original color value of each pixel point and the secondarily fused color value are fused based on the fifth weight and the sixth weight, so that the processed face image is more natural.
In alternative examples of the present disclosure, the fifth weight and the sixth weight may be determined in any of the following three manners:
firstly, when the special effect adding instruction comprises adjustment intensity indicating information, determining a fifth weight corresponding to an original color value of each pixel point of at least one face part in the face image to be processed based on the adjustment intensity indicating information, and determining a sixth weight based on the fifth weight.
Secondly, at least one face part in the face image to be processed is determined based on a pre-configured area template, and the transparency of the area template corresponding to each part in the at least one face part is obtained; and determining a fifth weight corresponding to the original color value of each pixel point of each part and a sixth weight corresponding to the color value after secondary fusion based on the transparency corresponding to each region template.
Thirdly, the special effect adding instruction comprises adjustment intensity indicating information, at least one face part in the face image to be processed is determined based on a pre-configured region template, and the transparency of the region template corresponding to each part in the at least one face part is obtained; and determining a fifth weight corresponding to the original color value of each pixel point of at least one face part in the face image to be processed and a sixth weight corresponding to the color value after secondary fusion based on the transparency corresponding to each region template and the adjustment intensity indication information.
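The three manners above can be sketched as one small helper; the product combination in the third manner follows the example given later in this description, while the complementary relation (sixth = 1 − fifth) and all names are illustrative assumptions:

```python
def fifth_and_sixth_weight(intensity=None, template_alpha=None):
    """Return (fifth, sixth) weights for fusing the original color value
    with the secondarily fused color value.
    intensity: adjustment intensity indication information (0-1), if any;
    template_alpha: transparency of the region template (0-1), if any."""
    if intensity is not None and template_alpha is not None:
        fifth = intensity * template_alpha   # third manner: product of both
    elif template_alpha is not None:
        fifth = template_alpha               # second manner: template transparency only
    else:
        fifth = intensity                    # first manner: adjustment intensity only
    return fifth, 1.0 - fifth                # sixth weight assumed complementary
```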
To better explain the scheme of the present disclosure, the face image processing method of the present disclosure is described in detail below with reference to figs. 2 to 4c.
As shown in fig. 2, before the user publishes the face image to be processed, the original color values of the pixel points in at least one face part of the face image to be processed may be adjusted based on the method of the present disclosure, so that the processed face image is more natural and more beautiful.
Step 1, a special effect adding instruction of a user to a to-be-processed face image is obtained, in the example, the special effect adding instruction is a makeup special effect adding instruction, and the to-be-added special effect is a makeup special effect.
Step 2: in response to the special effect adding instruction, a special effect parameter setting interface corresponding to the editing tool is displayed, and the user sets the settable special effect parameters based on the interface to obtain the set special effect parameters.
In this example, the editing tool takes the form of a painting brush, and the set special effect parameters are: the width of the painting brush (the adjustment parameter of the special effect processing region) is 200 pixels; the feathering parameter is a feather value (the feather size), i.e., the size of the region to which the feathering processing is applied, here 100 pixels; the color value of the special effect to be added is red; and the transparency is 20 (decimal). If the color value and the transparency are represented as a hexadecimal number of the form #AARRGGBB, AA designates the transparency, where 00 is completely transparent and FF is completely opaque; the value range is 0x00000000 to 0xFFFFFFFF, with a default value of 0x00000000 (completely transparent black), and RRGGBB designates the color. The Color value and transparency Mask Color of the special effect to be added are expressed as (r, g, b, a), where r, g, and b represent the color values of the r, g, and b channels, and a represents the transparency.
It should be noted that the color value and the transparency of the special effect to be added can be represented as percentage-style values (0 to 100); for example, (r, g, b, a) is expressed as (92.94, 53.72, 65.49, 20), or normalized to the range 0 to 1, for example, (0.9294, 0.5372, 0.6549, 0.2).
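Reading the numeric example as percentage-style channel values, the normalization can be sketched as follows; the helper name is illustrative:

```python
def normalize_rgba(r, g, b, a):
    """Convert percentage-style (0-100) color and transparency values
    to the normalized 0-1 representation used elsewhere in the text."""
    return tuple(round(channel / 100.0, 4) for channel in (r, g, b, a))
```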
Step 3: corresponding special effect processing is performed on the original color value of each pixel point of at least one face part in the face image to be processed based on the makeup special effect instruction and the set special effect parameters to obtain the processed color value. The specific special effect processing is: superposing the original color value and the color value of the special effect to be added to obtain the processed color value.
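The superposition in step 3 can be sketched as an alpha-weighted additive overlay clamped to the valid range; the exact superposition formula is not fixed by this description, so the following is one plausible reading:

```python
def superimpose(original, effect, effect_alpha):
    """Superimpose the color value of the special effect to be added
    onto the original color value of a pixel point, weighting the
    effect by its transparency and clamping each channel to [0, 1]."""
    return tuple(min(1.0, o + e * effect_alpha)
                 for o, e in zip(original, effect))
```

With the red brush of this example (effect color (1.0, 0.0, 0.0), transparency 0.2), only the red channel of each pixel is raised, which tints the region while leaving the other channels, and therefore the skin texture, intact.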
In this example, at least one face part in the face image to be processed may be determined first based on the region template.
Step 4: when the transparency of the painting brush is larger than a set threshold (in this example, 0.6, represented as a number from 0 to 1), the original color value of each pixel point of at least one face part and the processed color value are not fused, and the processed color value is directly used as the color value of the processed face image.
When the transparency of the painting brush is not larger than 0.6, the original color value of each pixel point of at least one face part and the processed color value can be fused to obtain a processed face image.
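The threshold branch of step 4 can be sketched as follows; using the painting-brush transparency itself as the fusion weight is an illustrative simplification (the disclosure derives the weights from the region-template transparency and the adjustment intensity indication information):

```python
def apply_brush(original, processed, brush_alpha, threshold=0.6):
    """If the painting-brush transparency exceeds the set threshold,
    use the processed color value directly; otherwise fuse it with
    the original color value of the pixel point."""
    if brush_alpha > threshold:
        return processed
    return tuple((1.0 - brush_alpha) * o + brush_alpha * p
                 for o, p in zip(original, processed))
```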
In the present example, the transparency of the painting brush is set per stroke (i.e., per special effect processing). When the user paints the same place multiple times (performs multiple special effect processings), the transparencies of the processed face image are superimposed according to the number of strokes: if the transparency corresponding to one stroke in the current region is 0.2, the transparency corresponding to two strokes becomes 0.4, and so on. Once the transparency exceeds the set threshold, painting the region again no longer changes the transparency, while the other processing remains unchanged.
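The per-stroke accumulation just described can be sketched as follows; the stopping rule (transparency frozen once it exceeds the threshold) matches the example with 0.2 per stroke and a 0.6 threshold:

```python
def accumulated_alpha(stroke_alpha, strokes, threshold=0.6):
    """Superimpose the per-stroke transparency over repeated strokes in
    the same region; once the accumulated transparency exceeds the set
    threshold, further strokes leave it unchanged."""
    alpha = 0.0
    for _ in range(strokes):
        if alpha > threshold:
            break                # transparency no longer changes
        alpha += stroke_alpha
    return alpha
```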
In this example, the special effect adding instruction further includes adjustment intensity indication information, and when the transparency of the painting brush is not greater than 0.6, the original color value of each pixel point of the at least one face part and the processed color value may be fused to obtain a processed face image, which may further include:
when the transparency of the painting brush is not more than 0.6, obtaining the transparency of an area template corresponding to each part in at least one face part;
determining a first weight corresponding to an original color value of each pixel point of at least one face part in the face image to be processed and a second weight corresponding to a processed color value based on the transparency corresponding to each region template and the adjustment intensity indication information;
and fusing the original color value of each pixel point of at least one face part with the processed color value based on the corresponding first weight and second weight of each pixel point in at least one face part to obtain a processed face image.
The processed face image is shown in fig. 3, which is the makeup effect image obtained by adding the makeup special effect to the face image to be processed. The makeup special effect is correspondingly added to area A (cheek) and area B (left eye) in fig. 3; the highlight portions and texture of the skin are retained in the processed face image, so that the processed face image is more natural.
When the special effect adding instruction is a skin special effect adding instruction, namely when the special effect to be added is a skin special effect, the special effect parameters of the skin special effect are: the width of the painting brush is 200 pixels, the feathering parameter is 100 pixels, the color value of the special effect to be added is red, the transparency is 20 (decimal), and the Color value and transparency Mask Color of the special effect to be added are expressed as: (1.0, 0.0, 0.0, 0.2).
The processing procedure for the skin special effect is consistent with that of the makeup special effect and is not repeated herein. When (r, g, b, a) is (1.0, 0.0, 0.0, 0.2), the face image to be processed is processed based on the color value corresponding to the skin special effect; essentially, the color value corresponding to the skin special effect is superimposed on the original color value, making the skin more ruddy. In the face image processed based on the skin special effect, the skin is whiter and more ruddy than in the face image to be processed, and the highlight portions and texture of the skin are retained, so that the processed face image is more natural.
When the special effect adding instruction is a paillette special effect adding instruction, namely when the special effect to be added is a paillette special effect, the special effect parameters of the paillette special effect are: the width of the painting brush is 150 pixels, the feathering parameter is 100 pixels, the transparency is 100 (decimal), the color value of the special effect to be added is white, and the Color value and transparency Mask Color of the special effect to be added are expressed as: (0.0, 0.0, 1.0, 1.0).
Based on the special effect to be added, corresponding special effect processing is performed on the original color value of each pixel point of at least one face part in the face image to be processed; the process of obtaining the processed color value is as follows:
assume the color space of the original color values of the face image to be processed is the RGB space;
converting the original color value of each pixel point in at least one face part in the face image to be processed into the HSV space, the converted color value being denoted baseHSV (the first color value);
converting the color value corresponding to the paillette special effect into an HSV space, and recording the converted color value (second color value) as colorHSV; the color value corresponding to the paillette material is determined by the paillette material selected from the paillette material library, and the color value corresponding to the selected paillette material is shown in fig. 4 a.
Replacing the color value of the HS channel of the baseHSV with the color value of the HS channel of the colorHSV to obtain a replaced color value;
converting the replaced color value into an RGB space to obtain a third color value baseRGB;
and replacing the original color value of each pixel point in at least one face part in the face image to be processed by the set color value to obtain a fourth color value, and taking the third color value and the fourth color value as the processed color values.
As can be seen from the foregoing description, the set color values are determined based on the set material.
And after the processed color value is obtained, fusing the original color value of each pixel point of at least one face part with the processed color value to obtain a processed face image.
Wherein, the first weight and the second weight can be determined based on the transparency of the paillette material;
fusing the original color value of each pixel point of at least one face part with the processed color value to obtain a processed face image, which specifically comprises:
and fusing the original color value of each pixel point of at least one face part with the processed color value based on the corresponding first weight and second weight of each pixel point in at least one face part to obtain the color value colorVal after primary fusion.
Determining a third weight of the fourth color value and a fourth weight of the color value after the preliminary fusion based on the transparency of the set material; the color values corresponding to the setting material are shown in fig. 4 b.
Fusing the primarily fused color value and the fourth color value based on the third weight and the fourth weight to obtain a secondarily fused color value WhiteVal;
acquiring the transparency of a region template corresponding to at least one face part;
determining a fifth weight corresponding to the original color value of each pixel point of at least one face part in the face image to be processed and a sixth weight corresponding to the color value after secondary fusion based on the transparency corresponding to the region template and the adjustment intensity indication information;
and fusing the original color value of each pixel point of the at least one face part with the secondarily fused color value based on the fifth weight and the sixth weight corresponding to each pixel point in the at least one face part, so as to obtain a processed face image.
In this example, the fifth weight may be the product of the adjustment intensity indication information and the transparency corresponding to the region template.
Fig. 4c shows the effect map obtained after the paillette special effect is added to the face image to be processed. The effects corresponding to the paillette material and to the set material are both added to the processed face image: part C in the figure is the special effect corresponding to the paillette material, and part D is the special effect corresponding to the set material. After the paillette special effect is added, the highlight portions and texture of the skin are retained in the processed face image, so that the processed face image is more natural.
Based on the same principle as the method shown in fig. 1, the embodiment of the present disclosure further provides a face image processing apparatus 20, as shown in fig. 5, where the apparatus 20 may include: an instruction acquisition module 210, a preliminary adjustment module 220, and a fusion module 230, wherein,
the instruction obtaining module 210 is configured to obtain a special effect adding instruction of a user to a to-be-processed face image, where the special effect adding instruction includes a special effect identifier of a to-be-added special effect;
the preliminary adjustment module 220 is configured to perform corresponding processing on an original color value of each pixel point of at least one face part in the face image to be processed based on the special effect to be added corresponding to the special effect identifier, so as to obtain a processed color value;
and the fusion module 230 is configured to fuse the original color value of each pixel point in at least one face part and the processed color value to obtain a processed face image.
In an embodiment of the present disclosure, the apparatus further includes:
the special effect parameter setting module is used for responding to the special effect adding instruction and displaying a special effect parameter setting interface; receiving a setting operation of a user for setting a special effect parameter of a special effect to be added through a special effect parameter setting interface to obtain the set special effect parameter, wherein the set special effect parameter comprises at least one of a special effect processing region adjusting parameter, a feathering parameter, a color value and transparency;
the preliminary adjustment module 220 is specifically configured to, when performing addition processing of a special effect corresponding to the special effect identifier on the original color value of each pixel point of at least one face part in the face image to be processed based on the special effect addition instruction:
and based on the special effect adding instruction, adding a special effect corresponding to the special effect identifier to the original color value of each pixel point in at least one face part in the face image to be processed according to the set special effect parameters.
In the embodiment of the present disclosure, when the fusion module fuses the original color value of each pixel point of at least one face part and the processed color value, the fusion module is specifically configured to:
when the transparency corresponding to the special effect to be added is not larger than a set threshold, fusing the original color value of each pixel point of at least one face part with the processed color value;
when the transparency corresponding to the special effect to be added is larger than the set threshold, the method further comprises the following steps:
and taking the processed color value as the color value of the processed face image.
In the embodiment of the present disclosure, when the fusion module fuses the original color value of each pixel point of at least one face part and the processed color value, the fusion module is specifically configured to:
determining a first weight corresponding to an original color value of each pixel point of at least one face part and a second weight corresponding to a processed color value;
and fusing the original color value of each pixel point of the at least one face part with the processed color value based on the first weight and the second weight corresponding to each pixel point in the at least one face part.
In the embodiment of the present disclosure, when determining a first weight corresponding to an original color value and a second weight corresponding to a processed color value of each pixel point of at least one face part in a face image to be processed, the fusion module is specifically configured to:
determining the first weight and the second weight based on at least one of the following information:
adjustment intensity indication information contained in the special effect adding instruction;
the transparency of the special effect is added;
the method is used for determining the transparency of the region template of at least one face part in the face image to be processed.
In the embodiment of the disclosure, the special effect adding instruction comprises at least one of a makeup special effect adding instruction, a skin special effect instruction and a paillette special effect adding instruction;
if the special effect adding instruction comprises a paillette special effect adding instruction, the paillette special effect adding instruction comprises a paillette material selecting instruction, and the special effect to be added is a special effect corresponding to the paillette material targeted by the paillette material selecting instruction;
the fusion module is specifically used for determining a first weight corresponding to an original color value and a second weight corresponding to a processed color value of each pixel point of at least one face part in a face image to be processed:
and determining a first weight corresponding to the original color value of each pixel point of at least one face part in the face image to be processed and a second weight corresponding to the processed color value based on the transparency of the paillette material.
In the embodiment of the disclosure, the special effect adding instruction comprises at least one of a makeup special effect adding instruction, a skin special effect instruction and a paillette special effect adding instruction;
when the special effect adding instruction comprises a makeup special effect adding instruction or a skin special effect adding instruction, the initial adjusting module performs corresponding processing on the original color value of each pixel point of at least one face part in the face image to be processed based on the special effect to be added corresponding to the special effect identification, and when the processed color value is obtained, the initial adjusting module is specifically used for:
based on a color value corresponding to a special effect to be added, overlapping an original color value of each pixel point of at least one face part in a face image to be processed with a color value corresponding to the special effect to be added respectively to obtain a processed color value;
when the special effect adding instruction includes a paillette special effect adding instruction, the preliminary adjustment module performs corresponding processing on the original color value of each pixel point of at least one face part in the face image to be processed based on the special effect to be added corresponding to the special effect identifier, and when the processed color value is obtained, the preliminary adjustment module is specifically used for:
and replacing the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added to obtain the processed color value.
In the embodiment of the disclosure, if the color value corresponding to the to-be-processed face image is the first color space, the preliminary adjustment module is configured to replace the original color value of each pixel point of at least one face part in the to-be-processed face image with the color value corresponding to the paillette special effect corresponding to the special effect adding instruction, and when the processed color value is obtained, the preliminary adjustment module is specifically configured to:
converting the original color value of each pixel point in at least one face part in the face image to be processed into a second color space to obtain a first color value;
converting the color value corresponding to the special effect to be added into a second color space to obtain a second color value;
replacing color values of other channels except the brightness channel in the first color value with color values of corresponding channels in the second color value to obtain a replaced color value;
and converting the replaced color value into a first color space to obtain a processed color value.
In the embodiment of the disclosure, if the special effect adding instruction comprises a paillette special effect adding instruction, the paillette special effect adding instruction comprises a paillette material selecting instruction, and the special effect to be added is a special effect corresponding to the paillette material for which the paillette material selecting instruction is directed;
the preliminary adjustment module is based on the special effect adding instruction, and the original color value of each pixel point of at least one face part in the face image to be processed is added with the special effect corresponding to the special effect identifier, so that when the processed color value is obtained, the preliminary adjustment module is specifically used for:
replacing the original color value of each pixel point of at least one face part in the face image to be processed with a color value corresponding to the special effect to be added to obtain a third color value;
and replacing the original color value of each pixel point of at least one face part in the face image to be processed with a set color value to obtain a fourth color value, wherein the processed color value is determined based on the third color value and the fourth color value.
The image processing apparatus of the embodiment of the present disclosure may execute a face image processing method provided by the embodiment of the present disclosure, and the implementation principles thereof are similar, the actions executed by each module in the face image processing apparatus in each embodiment of the present disclosure correspond to the steps in the face image processing method in each embodiment of the present disclosure, and for the detailed functional description of each module of the face image processing apparatus, reference may be specifically made to the description in the corresponding face image processing method shown in the foregoing, and details are not repeated here.
Based on the same principle as the face image processing method in the embodiment of the present disclosure, an embodiment of the present disclosure further provides an electronic device, which may include but is not limited to: a processor and a memory; a memory for storing computer operating instructions; and the processor is used for executing the method shown in the embodiment by calling the computer operation instruction.
Based on the same principle as the face image processing method in the embodiment of the present disclosure, an embodiment of the present disclosure further provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method shown in the above embodiment, which is not described herein again.
Based on the same principle as the method in the embodiment of the present disclosure, reference is made to fig. 6, which shows a schematic structural diagram of an electronic device (e.g., a terminal device or a server in fig. 1) 600 suitable for implementing the embodiment of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device includes: a memory and a processor, wherein the processor may be referred to as the processing device 601 hereinafter, and the memory may include at least one of a Read Only Memory (ROM)602, a Random Access Memory (RAM)603 and a storage device 608 hereinafter, which are specifically shown as follows:
As shown in fig. 6, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage device 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a special effect adding instruction of a user to the face image to be processed, wherein the special effect adding instruction comprises a special effect identifier of a special effect to be added; based on the special effect to be added corresponding to the special effect identification, carrying out corresponding processing on the original color value of each pixel point of at least one face part in the face image to be processed to obtain a processed color value; and fusing the original color value of each pixel point of at least one face part with the processed color value to obtain a processed face image.
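The three steps the stored programs cause the electronic device to perform can be sketched as follows. This is a minimal illustration only: it assumes an additive (superimposing) effect and a fixed fusion weight, and the function and parameter names (`apply_effect`, `weight`) are hypothetical rather than taken from the disclosure.

```python
def apply_effect(original_pixels, effect_color, weight=0.5):
    """Process each pixel's original color with an effect, then fuse.

    original_pixels: list of (r, g, b) tuples for pixels of a face part.
    effect_color: (r, g, b) color of the special effect to be added.
    weight: fusion weight for the processed color (hypothetical).
    """
    result = []
    for r, g, b in original_pixels:
        # Step 1: superimpose the effect color to obtain the processed color value
        pr = min(r + effect_color[0], 255)
        pg = min(g + effect_color[1], 255)
        pb = min(b + effect_color[2], 255)
        # Step 2: fuse the original color value with the processed color value
        fr = round((1 - weight) * r + weight * pr)
        fg = round((1 - weight) * g + weight * pg)
        fb = round((1 - weight) * b + weight * pb)
        result.append((fr, fg, fb))
    return result
```

In this sketch the face-part pixels are assumed to have already been selected (e.g., via a region template); the disclosure's variants of the processing and fusion steps are elaborated in the embodiments below.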
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a module or unit does not, in some cases, constitute a limitation of the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example one ] there is provided a face image processing method, including:
acquiring a special effect adding instruction of a user for a face image to be processed, wherein the special effect adding instruction comprises a special effect identifier of a special effect to be added;
based on the special effect to be added corresponding to the special effect identification, carrying out corresponding processing on the original color value of each pixel point of at least one face part in the face image to be processed to obtain a processed color value;
and fusing the original color value of each pixel point of the at least one face part with the processed color value to obtain a processed face image.
According to one or more embodiments of the present disclosure, the method further comprises:
responding to the special effect adding instruction, and displaying a special effect parameter setting interface;
receiving a setting operation of the user on the settable special effect parameter of the special effect to be added through the special effect parameter setting interface to obtain the set special effect parameter, wherein the settable special effect parameter comprises at least one of a special effect processing region adjusting parameter, a feathering parameter, a color value and a transparency;
and based on the special effect adding instruction, adding a special effect corresponding to the special effect identifier to the original color value of each pixel point of at least one face part in the face image to be processed, wherein the adding comprises the following steps:
and based on the special effect adding instruction, adding a special effect corresponding to the special effect identifier to the original color value of each pixel point in at least one face part in the face image to be processed according to the set special effect parameters.
According to one or more embodiments of the present disclosure, the fusing the original color values of the pixels of the at least one face part and the processed color values includes:
when the transparency corresponding to the special effect to be added is not larger than a set threshold, fusing the original color value of each pixel point of the at least one face part with the processed color value;
when the transparency corresponding to the special effect to be added is greater than the set threshold, the method further comprises the following steps:
and taking the processed color value as the color value of the processed face image.
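This transparency-threshold branch can be sketched as follows, under the assumption that the transparency is expressed as an opacity-like value in [0, 1] and that the threshold value and blend rule shown here are illustrative choices, not fixed by the disclosure:

```python
def select_output_color(original, processed, alpha, threshold=0.95):
    """Choose the per-pixel output color based on the effect's transparency.

    When alpha does not exceed the threshold, the original and processed
    color values are fused; otherwise the processed color value is used
    directly as the color of the processed face image.
    original, processed: (r, g, b) tuples; alpha, threshold: floats in [0, 1].
    """
    if alpha <= threshold:
        # fuse: alpha-weighted blend of original and processed values
        return tuple(round((1 - alpha) * o + alpha * p)
                     for o, p in zip(original, processed))
    # transparency above the set threshold: take the processed color directly
    return processed
```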
According to one or more embodiments of the present disclosure, the fusing the original color values of the pixels of the at least one face part and the processed color values includes:
determining a first weight corresponding to an original color value of each pixel point of the at least one face part and a second weight corresponding to the processed color value;
and fusing the original color value of each pixel point of the at least one face part with the processed color value based on the first weight and the second weight corresponding to each pixel point in the at least one face part.
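The two-weight fusion can be sketched per pixel as follows. The scalar weights and the linear combination are illustrative; the disclosure allows the weights to differ from pixel to pixel (e.g., based on a region template's transparency).

```python
def fuse(original, processed, w1, w2):
    """Fuse one pixel's original and processed color values.

    original, processed: (r, g, b) tuples.
    w1: first weight, applied to the original color value.
    w2: second weight, applied to the processed color value.
    """
    return tuple(round(w1 * o + w2 * p) for o, p in zip(original, processed))
```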
According to one or more embodiments of the present disclosure, the determining a first weight corresponding to an original color value of each pixel point of at least one face part in the face image to be processed and a second weight corresponding to the processed color value includes:
determining the first weight and the second weight based on at least one of the following information:
adjusting intensity indication information contained in the special effect adding instruction;
the transparency of the special effect to be added;
the transparency of a region template of at least one face part in the face image to be processed.
According to one or more embodiments of the present disclosure, the special effect adding instruction includes at least one of a makeup special effect adding instruction, a skin special effect instruction, and a paillette special effect adding instruction;
if the special effect adding instruction comprises a paillette special effect adding instruction, the paillette special effect adding instruction comprises a paillette material selecting instruction, and the special effect to be added is a special effect corresponding to the paillette material for which the paillette material selecting instruction is directed;
the determining a first weight corresponding to an original color value of each pixel point of at least one face part in the face image to be processed and a second weight corresponding to the processed color value includes:
and determining a first weight corresponding to the original color value of each pixel point of at least one face part in the face image to be processed and a second weight corresponding to the processed color value based on the transparency of the paillette material.
According to one or more embodiments of the present disclosure, the special effect adding instruction includes at least one of a makeup special effect adding instruction, a skin special effect instruction, and a paillette special effect adding instruction;
when the special effect adding instruction includes the makeup special effect adding instruction or the skin special effect adding instruction, the processing of the original color value of each pixel point of at least one face part in the face image to be processed is performed correspondingly based on the special effect to be added corresponding to the special effect identification, so as to obtain a processed color value, and the processing includes:
based on the color value corresponding to the special effect to be added, overlapping the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added respectively to obtain a processed color value;
when the special effect adding instruction includes the paillette special effect adding instruction, the processing of the original color value of each pixel point of at least one face part in the face image to be processed is performed correspondingly based on the special effect to be added corresponding to the special effect identification, so as to obtain a processed color value, including:
and replacing the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added to obtain the processed color value.
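The two processing branches above (superimposing for makeup/skin effects, replacing for paillette effects) can be sketched per pixel as follows; the effect-type names are illustrative labels, not identifiers from the disclosure:

```python
def process_color(original, effect_color, effect_type):
    """Compute the processed color value for one pixel.

    'makeup' and 'skin' effects superimpose the effect color on the
    original color value; 'paillette' effects replace it outright.
    original, effect_color: (r, g, b) tuples with channels in 0..255.
    """
    if effect_type in ('makeup', 'skin'):
        # superimpose: clamp each per-channel sum to the valid range
        return tuple(min(o + e, 255) for o, e in zip(original, effect_color))
    if effect_type == 'paillette':
        # replace the original color value with the effect color value
        return effect_color
    raise ValueError(f'unknown effect type: {effect_type}')
```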
According to one or more embodiments of the present disclosure, if the color value corresponding to the to-be-processed face image is in the first color space, replacing the original color value of each pixel point of at least one face part in the to-be-processed face image with the color value corresponding to the paillette special effect corresponding to the special effect adding instruction to obtain the processed color value, including:
converting the original color value of each pixel point in at least one face part in the face image to be processed into a second color space to obtain a first color value;
converting the color value corresponding to the special effect to be added into the second color space to obtain a second color value;
replacing color values of other channels except the brightness channel in the first color value with color values of corresponding channels in the second color value to obtain a replaced color value;
and converting the replaced color value into the first color space to obtain the processed color value.
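A sketch of the channel-replacement steps above, using YIQ as an example second color space whose Y channel carries brightness (the disclosure does not name a specific color space) and Python's standard `colorsys` conversions:

```python
import colorsys

def replace_chroma_keep_luma(original_rgb, effect_rgb):
    """Replace a pixel's chroma with the effect's, keeping its brightness.

    Both colors are converted from the first color space (RGB, floats in
    [0, 1]) into a second color space (YIQ); the channels other than the
    brightness channel Y are replaced with the effect's channels, and the
    result is converted back into RGB.
    """
    # convert the original and the effect color into the second color space
    y_orig, _, _ = colorsys.rgb_to_yiq(*original_rgb)
    _, i_eff, q_eff = colorsys.rgb_to_yiq(*effect_rgb)
    # keep the original brightness, take the effect's remaining channels,
    # then convert the replaced color back into the first color space
    return colorsys.yiq_to_rgb(y_orig, i_eff, q_eff)
```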
According to one or more embodiments of the present disclosure, if the special effect adding instruction includes the paillette special effect adding instruction, the paillette special effect adding instruction includes a paillette material selecting instruction, and the special effect to be added is a special effect corresponding to a paillette material targeted by the paillette material selecting instruction;
the adding processing of the special effect corresponding to the special effect identification is carried out on the original color value of each pixel point of at least one face part in the face image to be processed based on the special effect adding instruction, so as to obtain a processed color value, and the adding processing comprises the following steps:
replacing the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added to obtain a third color value;
and replacing the original color value of each pixel point of at least one face part in the face image to be processed with a set color value to obtain a fourth color value, wherein the processed color value is determined based on the third color value and the fourth color value.
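Since the disclosure leaves open exactly how the processed color value is determined from the third and fourth color values, the sketch below assumes a simple weighted combination; the `mask_weight` parameter (e.g., derived from the paillette material) is hypothetical:

```python
def paillette_processed_color(effect_color, set_color, mask_weight):
    """Combine the third and fourth color values into the processed color.

    third color value: the effect color that replaced the original;
    fourth color value: a set color (e.g., a highlight base) that
    replaced the original. mask_weight in [0, 1] weights the two.
    """
    third = effect_color
    fourth = set_color
    return tuple(round((1 - mask_weight) * t + mask_weight * f)
                 for t, f in zip(third, fourth))
```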
According to one or more embodiments of the present disclosure, [ example two ] there is provided a face image processing apparatus, including:
the instruction acquisition module is used for acquiring a special effect adding instruction of a user for the face image to be processed, wherein the special effect adding instruction comprises a special effect identifier of the special effect to be added;
the preliminary adjustment module is used for correspondingly processing the original color value of each pixel point of at least one face part in the face image to be processed based on the special effect to be added corresponding to the special effect identification to obtain a processed color value;
and the fusion module is used for fusing the original color value of each pixel point of the at least one face part with the processed color value to obtain a processed face image.
According to one or more embodiments of the present disclosure, the apparatus further comprises:
the special effect parameter setting module is used for responding to the special effect adding instruction and displaying a special effect parameter setting interface; receiving a setting operation of the user on the settable special effect parameter of the special effect to be added through the special effect parameter setting interface to obtain the set special effect parameter, wherein the settable special effect parameter comprises at least one of a special effect processing region adjusting parameter, a feathering parameter, a color value and a transparency;
the preliminary adjustment module is specifically configured to, when performing addition processing of a special effect corresponding to the special effect identifier on an original color value of each pixel point of at least one face part in the face image to be processed based on the special effect addition instruction:
and based on the special effect adding instruction, adding a special effect corresponding to the special effect identifier to the original color value of each pixel point in at least one face part in the face image to be processed according to the set special effect parameters.
According to one or more embodiments of the present disclosure, when the fusion module fuses the original color value of each pixel point of the at least one face part and the processed color value, the fusion module is specifically configured to:
when the transparency corresponding to the special effect to be added is not larger than a set threshold, fusing the original color value of each pixel point of the at least one face part with the processed color value;
when the transparency corresponding to the special effect to be added is greater than the set threshold, the fusion module is further configured to:
and taking the processed color value as the color value of the processed face image.
According to one or more embodiments of the present disclosure, when the fusion module fuses the original color value of each pixel point of the at least one face part and the processed color value, the fusion module is specifically configured to:
determining a first weight corresponding to an original color value of each pixel point of the at least one face part and a second weight corresponding to the processed color value;
and fusing the original color value of each pixel point of the at least one face part with the processed color value based on the first weight and the second weight corresponding to each pixel point in the at least one face part.
According to one or more embodiments of the present disclosure, when determining a first weight corresponding to an original color value of each pixel point of at least one face part in the face image to be processed and a second weight corresponding to the processed color value, the fusion module is specifically configured to:
determining the first weight and the second weight based on at least one of the following information:
adjusting intensity indication information contained in the special effect adding instruction;
the transparency of the special effect to be added;
the transparency of a region template of at least one face part in the face image to be processed.
According to one or more embodiments of the present disclosure, the special effect adding instruction includes at least one of a makeup special effect adding instruction, a skin special effect instruction, and a paillette special effect adding instruction;
if the special effect adding instruction comprises a paillette special effect adding instruction, the paillette special effect adding instruction comprises a paillette material selecting instruction, and the special effect to be added is a special effect corresponding to the paillette material for which the paillette material selecting instruction is directed;
the fusion module is specifically configured to, when determining a first weight corresponding to an original color value of each pixel point of at least one face part in the face image to be processed and a second weight corresponding to the processed color value:
and determining a first weight corresponding to the original color value of each pixel point of at least one face part in the face image to be processed and a second weight corresponding to the processed color value based on the transparency of the paillette material.
According to one or more embodiments of the present disclosure, the special effect adding instruction includes at least one of a makeup special effect adding instruction, a skin special effect instruction, and a paillette special effect adding instruction;
when the special effect adding instruction includes the makeup special effect adding instruction or the skin special effect adding instruction, the preliminary adjustment module is configured to, based on the to-be-added special effect corresponding to the special effect identifier, perform corresponding processing on an original color value of each pixel point of at least one face part in the to-be-processed face image, and when a processed color value is obtained, specifically:
based on the color value corresponding to the special effect to be added, overlapping the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added respectively to obtain a processed color value;
when the special effect adding instruction comprises the paillette special effect adding instruction, the preliminary adjustment module is used for correspondingly processing the original color value of each pixel point of at least one face part in the face image to be processed based on the special effect to be added corresponding to the special effect identification, and when the processed color value is obtained, the preliminary adjustment module is specifically used for:
and replacing the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added to obtain the processed color value.
According to one or more embodiments of the present disclosure, if the color value corresponding to the face image to be processed is in a first color space, when replacing the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the paillette special effect corresponding to the special effect adding instruction to obtain the processed color value, the preliminary adjustment module is specifically configured to:
converting the original color value of each pixel point in at least one face part in the face image to be processed into a second color space to obtain a first color value;
converting the color value corresponding to the special effect to be added into the second color space to obtain a second color value;
replacing color values of other channels except the brightness channel in the first color value with color values of corresponding channels in the second color value to obtain a replaced color value;
and converting the replaced color value into the first color space to obtain the processed color value.
According to one or more embodiments of the present disclosure, if the special effect adding instruction includes the paillette special effect adding instruction, the paillette special effect adding instruction includes a paillette material selecting instruction, and the special effect to be added is a special effect corresponding to a paillette material targeted by the paillette material selecting instruction;
the preliminary adjustment module is used for performing special effect adding processing corresponding to the special effect identification on the original color value of each pixel point of at least one face part in the face image to be processed based on the special effect adding instruction, and specifically used for:
replacing the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added to obtain a third color value;
and replacing the original color value of each pixel point of at least one face part in the face image to be processed with a set color value to obtain a fourth color value, wherein the processed color value is determined based on the third color value and the fourth color value.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure — for example, technical solutions formed by replacing the above features with (but not limited to) features with similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (12)

1. A face image processing method is characterized by comprising the following steps:
acquiring a special effect adding instruction of a user to a to-be-processed face image, wherein the special effect adding instruction comprises a special effect identifier of a to-be-added special effect;
based on the special effect to be added corresponding to the special effect identification, carrying out corresponding processing on the original color value of each pixel point of at least one face part in the face image to be processed to obtain a processed color value;
and fusing the original color value of each pixel point of the at least one face part with the processed color value to obtain a processed face image.
2. The method of claim 1, further comprising:
responding to the special effect adding instruction, and displaying a special effect parameter setting interface;
receiving a setting operation of the user on the settable special effect parameter of the special effect to be added through the special effect parameter setting interface to obtain the set special effect parameter, wherein the settable special effect parameter comprises at least one of a special effect processing region adjusting parameter, a feathering parameter, a color value and a transparency;
and based on the special effect adding instruction, adding a special effect corresponding to the special effect identifier to the original color value of each pixel point of at least one face part in the face image to be processed, wherein the adding comprises the following steps:
and based on the special effect adding instruction, adding a special effect corresponding to the special effect identifier to the original color value of each pixel point in at least one face part in the face image to be processed according to the set special effect parameters.
3. The method of claim 1, wherein the fusing of the original color value of each pixel point of the at least one face part with the processed color value comprises:
when the transparency corresponding to the special effect to be added is not greater than a set threshold, fusing the original color value of each pixel point of the at least one face part with the processed color value;
and when the transparency corresponding to the special effect to be added is greater than the set threshold, the method further comprises:
taking the processed color value as the color value of the processed face image.
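The threshold branch of claim 3 can be sketched as below. The reading of "transparency" as an opacity coefficient in [0, 1], the 0.95 default threshold, and the function name are assumptions for illustration only.

```python
def fuse_with_threshold(original, processed, opacity, threshold=0.95):
    """Claim 3 sketch: when the effect's transparency (read here as opacity)
    exceeds the set threshold, the processed color is used directly as the
    output color; otherwise the original and processed colors are fused."""
    if opacity > threshold:
        return processed
    return tuple(round((1 - opacity) * o + opacity * p)
                 for o, p in zip(original, processed))
```

The threshold saves a blend per pixel for near-opaque effects, where the fused result would be visually indistinguishable from the processed color anyway.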
4. The method of claim 1, wherein the fusing the original color values of the pixels of the at least one face part with the processed color values comprises:
determining a first weight corresponding to an original color value of each pixel point of the at least one face part and a second weight corresponding to the processed color value;
and fusing the original color value of each pixel point of the at least one face part with the processed color value based on the first weight and the second weight corresponding to each pixel point in the at least one face part.
5. The method according to claim 4, wherein the determining of the first weight corresponding to the original color value of each pixel point of at least one face part in the face image to be processed and the second weight corresponding to the processed color value comprises:
determining the first weight and the second weight based on at least one of the following:
adjustment intensity indication information contained in the special effect adding instruction;
the transparency of the special effect to be added;
the transparency of a region template of at least one face part in the face image to be processed.
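The weighted fusion of claims 4 and 5 amounts to a per-pixel linear blend. The sketch below assumes the second weight is the product of the effect transparency, the part-template transparency at the pixel, and the user's intensity setting, with the first weight as its complement; the claims only list these as possible inputs and do not fix this formula.

```python
def pixel_weights(effect_alpha, template_alpha, intensity=1.0):
    """Claims 4/5 sketch: derive the second weight (for the processed color)
    from the effect transparency, the face-part template transparency at this
    pixel, and an intensity setting; the first weight is the complement."""
    w2 = effect_alpha * template_alpha * intensity
    return 1.0 - w2, w2

def fuse(original, processed, w1, w2):
    """Per-pixel weighted fusion of the original and processed color values."""
    return tuple(round(w1 * o + w2 * p) for o, p in zip(original, processed))
```

Because the template transparency enters the weight per pixel, a feathered template yields a soft falloff at the boundary of the face part instead of a hard edge.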
6. The method of claim 4, wherein the special effect adding instruction comprises at least one of a makeup special effect adding instruction, a skin special effect adding instruction and a paillette special effect adding instruction;
if the special effect adding instruction comprises the paillette special effect adding instruction, the paillette special effect adding instruction comprises a paillette material selecting instruction, and the special effect to be added is the special effect corresponding to the paillette material targeted by the paillette material selecting instruction;
wherein the determining of the first weight corresponding to the original color value of each pixel point of at least one face part in the face image to be processed and the second weight corresponding to the processed color value comprises:
determining, based on the transparency of the paillette material, the first weight corresponding to the original color value of each pixel point of at least one face part in the face image to be processed and the second weight corresponding to the processed color value.
7. The method of any one of claims 1 to 6, wherein the special effect adding instruction comprises at least one of a makeup special effect adding instruction, a skin special effect adding instruction and a paillette special effect adding instruction;
when the special effect adding instruction comprises the makeup special effect adding instruction or the skin special effect adding instruction, the processing, based on the special effect to be added corresponding to the special effect identifier, of the original color value of each pixel point of at least one face part in the face image to be processed to obtain a processed color value comprises:
superimposing, based on the color value corresponding to the special effect to be added, the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added to obtain the processed color value;
and when the special effect adding instruction comprises the paillette special effect adding instruction, the processing, based on the special effect to be added corresponding to the special effect identifier, of the original color value of each pixel point of at least one face part in the face image to be processed to obtain a processed color value comprises:
replacing the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added to obtain the processed color value.
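The two branches of claim 7 differ only in the per-pixel operation. The sketch below models "superimposing" as an additive blend clamped to 255, which is one plausible reading; the claim does not define the superposition formula, so that choice (and both function names) is an assumption.

```python
def overlay(original, effect_color):
    """Makeup/skin branch (claim 7): superimpose the effect color on the
    original color, modeled here as an additive blend clamped to 255."""
    return tuple(min(255, o + e) for o, e in zip(original, effect_color))

def replace(original, effect_color):
    """Paillette branch (claim 7): the effect color replaces the original."""
    return effect_color
```

Replacement suits glitter, whose color should not depend on the underlying skin tone, while superposition keeps skin texture visible under makeup; the later fusion step then restores realism in both cases.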
8. The method of claim 7, wherein, if the color value of the face image to be processed is in a first color space, the replacing of the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added to obtain the processed color value comprises:
converting the original color value of each pixel point of at least one face part in the face image to be processed into a second color space to obtain a first color value;
converting the color value corresponding to the special effect to be added into the second color space to obtain a second color value;
replacing the color values of the channels other than the luminance channel in the first color value with the color values of the corresponding channels in the second color value to obtain a replaced color value;
and converting the replaced color value back into the first color space to obtain the processed color value.
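Claim 8's channel replacement can be illustrated with RGB as the first color space and YCbCr as the second: keep the pixel's luminance (Y), take the chroma (Cb, Cr) from the effect color, and convert back. The choice of YCbCr and the JPEG/BT.601 coefficients below are assumptions for illustration; the claim names neither color space.

```python
def rgb_to_ycbcr(r, g, b):
    # Full-range JPEG/BT.601 conversion
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return tuple(max(0, min(255, round(v))) for v in (r, g, b))

def recolor_keep_luminance(original_rgb, effect_rgb):
    """Claim 8 sketch: keep the original pixel's luminance channel and replace
    the remaining (chroma) channels with those of the effect color."""
    y_orig, _, _ = rgb_to_ycbcr(*original_rgb)
    _, cb_eff, cr_eff = rgb_to_ycbcr(*effect_rgb)
    return ycbcr_to_rgb(y_orig, cb_eff, cr_eff)
```

Preserving the luminance channel keeps the shading and texture of the face while changing only its hue, which is why this pattern is common in virtual lipstick and hair-dye effects.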
9. The method of any one of claims 1 to 6, wherein, if the special effect adding instruction comprises the paillette special effect adding instruction, the paillette special effect adding instruction comprises a paillette material selecting instruction, and the special effect to be added is the special effect corresponding to the paillette material targeted by the paillette material selecting instruction;
wherein the processing, based on the special effect to be added corresponding to the special effect identifier, of the original color value of each pixel point of at least one face part in the face image to be processed to obtain a processed color value comprises:
replacing the original color value of each pixel point of at least one face part in the face image to be processed with the color value corresponding to the special effect to be added to obtain a third color value;
and replacing the original color value of each pixel point of at least one face part in the face image to be processed with a set color value to obtain a fourth color value, wherein the processed color value is determined based on the third color value and the fourth color value.
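Claim 9 leaves open how the third and fourth color values combine into the processed color. The sketch below assumes a per-pixel mix driven by a sparkle coefficient (e.g. the fourth value being a set highlight color), which is one plausible reading, not the claimed rule.

```python
def paillette_color(material_color, set_color, sparkle):
    """Claim 9 sketch: the third color value comes from the paillette material,
    the fourth from a set (e.g. highlight) color; the processed color is
    assumed to be a mix of the two driven by a sparkle coefficient in [0, 1]."""
    return tuple(round((1 - sparkle) * m + sparkle * s)
                 for m, s in zip(material_color, set_color))
```

Varying the sparkle coefficient per pixel (for example, from a noise texture) would make individual glitter flecks flash toward the set highlight color while the rest of the region keeps the material's base color.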
10. A face image processing apparatus, comprising:
the instruction acquisition module is used for acquiring a special effect adding instruction of a user to the face image to be processed, wherein the special effect adding instruction comprises a special effect identifier of a special effect to be added;
the preliminary adjustment module is used for performing, based on the special effect to be added corresponding to the special effect identifier, corresponding processing on the original color value of each pixel point of at least one face part in the face image to be processed to obtain a processed color value;
and the fusion module is used for fusing the original color value of each pixel point of the at least one face part with the processed color value to obtain a processed face image.
11. An electronic device, comprising:
a processor and a memory;
the memory is used for storing computer operation instructions;
the processor is used for executing the method of any one of claims 1 to 9 by calling the computer operation instructions.
12. A computer-readable medium having computer program instructions stored thereon for causing a computer to perform the method of any of claims 1 to 9.
CN202010642253.4A 2020-07-06 2020-07-06 Face image processing method and device, electronic equipment and computer readable medium Pending CN111784568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010642253.4A CN111784568A (en) 2020-07-06 2020-07-06 Face image processing method and device, electronic equipment and computer readable medium

Publications (1)

Publication Number Publication Date
CN111784568A true CN111784568A (en) 2020-10-16

Family

ID=72759060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010642253.4A Pending CN111784568A (en) 2020-07-06 2020-07-06 Face image processing method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN111784568A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404082A (en) * 2008-11-14 2009-04-08 深圳市迅雷网络技术有限公司 Portrait buffing method and apparatus
CN105956576A (en) * 2016-05-18 2016-09-21 广东欧珀移动通信有限公司 Image beautifying method and device and mobile terminal
CN107833178A (en) * 2017-11-24 2018-03-23 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal
CN109584180A (en) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN110223246A (en) * 2019-05-17 2019-09-10 杭州趣维科技有限公司 A kind of windy lattice portrait U.S. face mill skin method and device
CN110390632A (en) * 2019-07-22 2019-10-29 北京七鑫易维信息技术有限公司 Image processing method, device, storage medium and terminal based on dressing template
CN111046763A (en) * 2019-11-29 2020-04-21 广州久邦世纪科技有限公司 Portrait cartoon method and device
CN111063008A (en) * 2019-12-23 2020-04-24 北京达佳互联信息技术有限公司 Image processing method, device, equipment and storage medium
CN111127591A (en) * 2019-12-24 2020-05-08 腾讯科技(深圳)有限公司 Image hair dyeing processing method, device, terminal and storage medium


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112700385A (en) * 2020-12-31 2021-04-23 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2022179026A1 (en) * 2021-02-23 2022-09-01 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112767285A (en) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN113160094A (en) * 2021-02-23 2021-07-23 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
WO2022179215A1 (en) * 2021-02-23 2022-09-01 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112766234A (en) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112767285B (en) * 2021-02-23 2023-03-10 北京市商汤科技开发有限公司 Image processing method and device, electronic device and storage medium
CN113240760A (en) * 2021-06-29 2021-08-10 北京市商汤科技开发有限公司 Image processing method and device, computer equipment and storage medium
CN113469874A (en) * 2021-06-29 2021-10-01 展讯通信(上海)有限公司 Beauty treatment method and device, electronic equipment and storage medium
CN113240760B (en) * 2021-06-29 2023-11-24 北京市商汤科技开发有限公司 Image processing method, device, computer equipment and storage medium
CN113763287A (en) * 2021-09-27 2021-12-07 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN113902611A (en) * 2021-10-09 2022-01-07 Oppo广东移动通信有限公司 Image beautifying processing method and device, storage medium and electronic equipment
WO2023169287A1 (en) * 2022-03-11 2023-09-14 北京字跳网络技术有限公司 Beauty makeup special effect generation method and apparatus, device, storage medium, and program product
CN117455753A (en) * 2023-10-12 2024-01-26 书行科技(北京)有限公司 Special effect template generation method, special effect generation device and storage medium

Similar Documents

Publication Publication Date Title
CN111784568A (en) Face image processing method and device, electronic equipment and computer readable medium
CN112767285B (en) Image processing method and device, electronic device and storage medium
CN109584180A (en) Face image processing process, device, electronic equipment and computer storage medium
CN111127591B (en) Image hair dyeing processing method, device, terminal and storage medium
CN113160094A (en) Image processing method and device, electronic equipment and storage medium
CN111369644A (en) Face image makeup trial processing method and device, computer equipment and storage medium
CN104076928B (en) A kind of method for adjusting text importing image
JP2000134486A (en) Image processing unit, image processing method and storage medium
CN111583103A (en) Face image processing method and device, electronic equipment and computer storage medium
CN113450431B (en) Virtual hair dyeing method, device, electronic equipment and storage medium
CN113132696A (en) Image tone mapping method, device, electronic equipment and storage medium
CN113570581A (en) Image processing method and device, electronic equipment and storage medium
WO2022166907A1 (en) Image processing method and apparatus, and device and readable storage medium
CN114841853A (en) Image processing method, device, equipment and storage medium
CN112541955B (en) Image processing method, device and equipment
CN112634155B (en) Image processing method, device, electronic equipment and storage medium
CN111462158B (en) Image processing method and device, intelligent equipment and storage medium
CN113160099B (en) Face fusion method, device, electronic equipment, storage medium and program product
JP5896204B2 (en) Image processing apparatus and program
WO2023045946A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN111583102A (en) Face image processing method and device, electronic equipment and computer storage medium
CN115953597B (en) Image processing method, device, equipment and medium
CN113570583B (en) Image processing method and device, electronic equipment and storage medium
WO2023045961A1 (en) Virtual object generation method and apparatus, and electronic device and storage medium
CN116456148A (en) Method and device for determining similarity between video frames, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20201016