CN112184540A - Image processing method, image processing device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN112184540A
Authority
CN
China
Prior art keywords
color
eyebrow
area
hair
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910589702.0A
Other languages
Chinese (zh)
Inventor
王倩
成亦辉
杜俊增
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201910589702.0A priority Critical patent/CN112184540A/en
Publication of CN112184540A publication Critical patent/CN112184540A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a storage medium. The method includes: determining a hair region and an eyebrow region of a person in a target image on which a face is displayed; and matching the color value of the hair region with the color value of the eyebrow region, so that the difference between the color value of the hair region and the color value of the eyebrow region is within a first preset range.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Currently, many users dye their hair to a desired color in pursuit of fashion or a change of mood. However, because the eyebrow area is small and difficult to dye, or because users fear that the dyeing agent will damage facial skin, most users dye only their hair and leave their eyebrows undyed. The hair color and eyebrow color are then inconsistent, and the user's overall appearance may look jarring. As a result, the aesthetics of images captured of the user are also affected to some extent.
To address this problem, in the related art, eyebrow sticker materials of different color numbers are preset in an application (APP) with an image processing function. After the user selects an eyebrow sticker of the desired color number, the selected sticker is overlaid on the eyebrow portion of the current user image. This changes the eyebrow color in the user image and improves the aesthetics of the image to a certain extent.
However, the eyebrow shapes of the eyebrow stickers provided in the related art all correspond to the user's neutral expression. If the user raises or furrows the eyebrows, or makes other facial expressions, a sticker shaped for only one expression cannot match the user's actual eyebrow shape, and replacing the original eyebrows with the sticker makes the result look even more unnatural. Moreover, directly covering the user's eyebrow region with an eyebrow sticker produces a relatively rigid visual effect that is easily seen through; that is, a viewer readily notices that the eyebrow portion of the user image is fake.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method, apparatus, electronic device, and storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, the method including:
determining a hair area and an eyebrow area of a person in a target image on which a face is displayed;
and matching the color value of the hair area with the color value of the eyebrow area so as to enable the difference between the color value of the hair area and the color value of the eyebrow area to be within a first preset range.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
the first determining module is configured to determine a hair region and an eyebrow region of a person in a target image on which a face is displayed;
the first matching module is configured to match the color value of the hair region with the color value of the eyebrow region, so that the difference between the color value of the hair region and the color value of the eyebrow region is within a first preset range.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing a computer program executable by the processor;
wherein the processor implements the steps of the image processing method when executing the program.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
By matching the color values of the hair region and the eyebrow region of the person displayed in the image, so that the difference between the two color values is within a first preset range, the solutions prevent an overly large difference between the person's hair color and eyebrow color from impairing the person's appearance. They also solve the problems of the related art, in which an eyebrow sticker directly covering the eyebrow region produces a rigid visual effect and is easily seen through, and they help improve the authenticity, attractiveness, and naturalness of the person's face in the processed image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
FIG. 1 is a flow chart illustrating an image processing method according to an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an original target image shown in accordance with an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a target image obtained by processing the original target image shown in FIG. 2 with one of the image processing methods provided by the embodiments of the present disclosure;
FIG. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment of the present disclosure;
fig. 5 is a block diagram illustrating a configuration of an electronic device of an image processing apparatus according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
Current camera applications generally provide common image processing functions, such as filters, image color conversion, and stickers, so that a user can process an image obtained by shooting, downloading, or other means until it achieves a desired effect, such as a beautifying effect or a retro effect. For example, for a user whose hair color and eyebrow color are inconsistent, the difference between the two in a captured image may impair the image, so the user has to replace the original eyebrow portion with an eyebrow sticker provided by the camera application to reduce the difference between the hair color and the eyebrow color displayed in the image. However, the eyebrow shapes of the stickers provided in the related art all correspond to the user's neutral expression. If the user raises or furrows the eyebrows, or makes other facial expressions, a sticker shaped for only one expression cannot match the user's actual eyebrow shape, and replacing the original eyebrows with the sticker makes the result look even more unnatural. Moreover, directly covering the user's eyebrow region with an eyebrow sticker produces a relatively rigid visual effect that is easily seen through; that is, a viewer readily notices that the eyebrow portion of the user image is fake.
On this basis, the embodiments of the present disclosure provide a new image processing method: the color values of the hair region and the eyebrow region of the person displayed in an image are matched so that the difference between the two color values is within a first preset range. This not only prevents an overly large difference between the person's hair color and eyebrow color from impairing the person's appearance, but also solves the problems of the related art, in which an eyebrow sticker directly covering the eyebrow region produces a rigid visual effect and is easily seen through, and it helps improve the authenticity, attractiveness, and naturalness of the person's face in the processed image.
As shown in fig. 1, fig. 1 is a flowchart of an image processing method according to an exemplary embodiment of the present disclosure. The method may be applied to a terminal, for example to the camera or album system provided by the terminal itself, to a third-party camera or album application installed on the terminal, to an image processing function provided by the terminal system itself, or to a new application installed on the terminal. The method includes the following steps:
in step S011, a hair region and an eyebrow region of a person in a target image on which a face is displayed are determined;
in step S012, the color values of the hair region and the eyebrow region are matched so that a difference between the color values of the hair region and the eyebrow region is within a first preset range.
In the above, the target image may be obtained by performing face recognition on a currently input image. The currently input image may be detected automatically by the system; for example, when the system detects that the number of images stored on the terminal has increased, it may take the newly added image as the currently input image. Alternatively, the currently input image may be obtained from a user operation; for example, the user may first specify an image to be processed on the interface of an application that can execute the method, and then click a control on that interface for matching the hair color and the eyebrow color, which triggers a user instruction indicating that the hair color and the eyebrow color should be matched. The currently input image is then obtained according to the user instruction and processed accordingly.
On this basis, if the system automatically detected each currently input image and triggered the computation of steps S011 and S012, the resulting load could occupy excessive computing resources, causing the system to stall and degrading the user experience. To avoid this, in an embodiment, step S011 may be executed only when the instruction is received, and step S011 may accordingly be adjusted to: upon receiving a user instruction for indicating matching of the hair color and the eyebrow color, determining a hair region and an eyebrow region of a person in the currently input image.
In this way, the image currently input by the user is obtained according to the user instruction and processed in a targeted manner. The user's requirement to process a specified image is met, while the system neither has to detect newly added images nor has to process every newly added image to match the hair color and eyebrow color of the person it shows. The computational load on the system is thus greatly reduced, and the system is prevented from stalling because its computing resources are excessively occupied.
As can be seen from the above, when the user wants the eyebrow color and hair color of the person in a specified image to be adjusted to be roughly consistent, the image input to the system is generally one on which a face is displayed, so the currently input image is the target image on which a face is displayed. However, it cannot be ruled out that the user inadvertently selects an image without a face and triggers the user instruction. If the hair and eyebrow matching were still performed on that image, the system might crash, or might wrongly assume that a face exists and identify an eyebrow region and a hair region that do not actually exist, which would both impair the final image processing effect and cause unnecessary computation. Therefore, to solve this technical problem, in an embodiment, determining the hair region and the eyebrow region of the person in the target image on which a face is displayed in step S011 includes:
in step S0111, a currently input image is obtained, and face recognition is carried out on the currently input image;
in step S0112, whether a face exists in the currently input image is determined according to the face recognition result;
in step S0113, when a face exists in the currently input image, determining the currently input image as the target image, and identifying a hair region and an eyebrow region of a person in the target image;
in step S0114, when the currently input image does not have a face, prompt information is output to prompt the user to re-input the image on which the face is displayed.
In the foregoing, since the related technologies already have mature face recognition technologies, person hair area recognition technologies, and eyebrow area recognition technologies, for example, the face, person hair area, and eyebrow area recognition can be implemented by an AI face image recognition technology, which is not described in detail in this disclosure. In addition, the prompt message can be output through a screen or played through a voice module.
Therefore, in this embodiment, face recognition is performed on the currently input image; the hair region and the eyebrow region of the person are identified only when the currently input image is a target image on which a face is displayed, and when no face is displayed, prompt information is output directly without attempting to identify a hair region or an eyebrow region. This avoids erroneous system crashes, and avoids forcing recognition of hair and eyebrow regions in an input image that displays no face, which helps ensure stable system operation and reduces unnecessary computation.
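Steps S0111 to S0114 amount to a guard around segmentation. A minimal sketch follows, with `detect_face` and `segment` passed in as stand-ins for the AI face recognition technology the disclosure refers to; both names, and the exact prompt wording, are assumptions rather than part of the original.

```python
def determine_regions(image, detect_face, segment):
    """Steps S0111-S0114: run face recognition first; segment the hair and
    eyebrow regions only when a face exists, otherwise return a prompt
    asking the user to re-input an image on which a face is displayed."""
    if not detect_face(image):                    # S0112: no face found
        return None, "Please input an image on which a face is displayed."
    hair_mask, brow_mask = segment(image)         # S0113: identify regions
    return (hair_mask, brow_mask), None           # S0114 prompt not needed
```

For an image with no face, the second return value carries the prompt of step S0114, which the terminal could show on screen or play through a voice module.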
After obtaining the hair region and the eyebrow region of the person in the target image, the step S012 can be performed so that the difference between the color value of the hair region and the color value of the eyebrow region is within a first preset range. In order to achieve that the difference between the color value of the hair region and the color value of the eyebrow region is within a first preset range, the embodiments of the present disclosure provide various technical solutions for matching the color value of the hair region and the color value of the eyebrow region, please refer to the following contents:
in a first embodiment, in order to improve the consistency between the color and the eyebrow color after the matching process and achieve better image processing, the first preset range may be represented as a zero value, and based on this, the step of matching the color value of the hair region and the color value of the eyebrow region in step S012 may include:
determining a hair color value from the hair region in step S01211; the hair color value is used for representing the color of a person displayed in the current target image;
in step S01212, the color value of the eyebrow area is set to the hair color value.
As can be seen from steps S01211 and S01212, after the hair region and the eyebrow region are obtained, only the hair color value of the hair region needs to be extracted, and the color value of the eyebrow region is then simply set to that hair color value; no color value needs to be extracted from the eyebrow region. This not only ensures that the hair color and the eyebrow color of the person in the target image are consistent and achieves a good image processing effect, but also reduces the data to be extracted and the operation steps, improving the efficiency of matching the hair color and the eyebrow color.
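The first embodiment can be sketched as follows, with the image modeled as rows of (R, G, B) tuples and each region as a set of (x, y) pixel coordinates. Using the rounded mean as the single representative hair color value is an assumption, since the disclosure does not specify how the hair color value is derived from the region.

```python
def set_brow_to_hair_color(image, hair_mask, brow_mask):
    """First embodiment (S01211/S01212): extract one hair color value and
    write it into every eyebrow pixel; the eyebrow color is never read."""
    n = len(hair_mask)
    hair_color = tuple(
        round(sum(image[y][x][c] for (x, y) in hair_mask) / n) for c in range(3)
    )
    for (x, y) in brow_mask:
        image[y][x] = hair_color                  # difference becomes zero
    return hair_color
```

Because only the eyebrow pixels' values change, the eyebrow shape itself is untouched, matching the advantage over stickers described later in the text.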
Similarly to the first embodiment, in the second embodiment, the first preset range is also represented as a zero value, and based on this, the step of matching the color value of the hair region and the color value of the eyebrow region in step S012 may include:
in step S01221, an eyebrow color value is determined according to the eyebrow region; the eyebrow color value is used for representing the eyebrow color of the person displayed in the current target image;
in step S01222, the color value of the hair region is set to the eyebrow color value.
As can be seen from steps S01221 and S01222, after the hair region and the eyebrow region are obtained, only the eyebrow color value of the eyebrow region needs to be extracted, and the color value of the hair region is then set to that eyebrow color value; no color value needs to be extracted from the hair region. This likewise ensures that the hair color and the eyebrow color of the person in the target image are consistent and achieves a good image processing effect, while reducing the data to be extracted and the operation steps, improving the efficiency of matching the hair color and the eyebrow color.
In a third embodiment, the step of matching the color values of the hair region and the eyebrow region in step S012 may include:
in step S01231, when the difference between the color value of the hair region and the color value of the eyebrow region is not within the first preset range, the color value of the hair region or the color value of the eyebrow region is updated so that the difference between the color values calculated based on the updated color value is within the first preset range.
In the third embodiment, corresponding hair color values and eyebrow color values may be extracted from the hair region and the eyebrow region, respectively, and it is determined whether the difference between them is within the first preset range; if not, the color value of the eyebrow region or of the hair region may be updated according to the difference between the two. For example, suppose the difference between the hair color value and the eyebrow color value is a positive number a, the right-endpoint threshold of the preset range is b, and a exceeds b by 2. On this basis, the hair color value may be updated to the original hair color value minus 2, or the eyebrow color value may be updated to the original eyebrow color value plus 2. The value 2 is not the only choice, as long as the difference recalculated from the updated color value is within the first preset range.
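Per color channel, the update just described can be sketched as follows. Shifting the eyebrow value by exactly the excess, so that the recalculated difference lands on the boundary of the range, is one choice among many, as the text itself notes; the function name is hypothetical.

```python
def nudge_into_range(hair_value, brow_value, max_diff):
    """Third embodiment (S01231): if the hair/eyebrow difference exceeds the
    preset range, shift the eyebrow value toward the hair value by the
    excess, so that the recalculated difference is within range."""
    diff = hair_value - brow_value
    if abs(diff) <= max_diff:
        return brow_value                        # already in range: no update
    excess = abs(diff) - max_diff
    return brow_value + excess if diff > 0 else brow_value - excess
```

With max_diff equal to b and a difference a = b + 2, this reproduces the worked example in the text: the eyebrow value is raised by 2.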
It should be noted that the color values may be represented as RGB color values, or may be represented as hexadecimal color codes. The first preset range may be set by a user according to actual requirements, or may be preset by a developer in a development stage according to experience or experiments, for example, the first preset range may be represented by a numerical value, or may be represented by an interval, which is not limited in this disclosure. If the first preset range is represented by an interval, the difference between the hair color value and the eyebrow color value may be a positive number or a negative number, based on which the threshold value at the left end of the interval may be a negative number, the threshold value at the right end may be a positive number, and the absolute values of the negative number and the positive number may be the same or different.
In a scene where color values are represented as RGB color values, each color value consists of the values of the three RGB channels, namely an R value, a G value, and a B value. On this basis, the difference between the color value of the hair region and the color value of the eyebrow region includes an R-value difference, a G-value difference, and a B-value difference; accordingly, the difference between the color values being within the first preset range may be understood as each of the R-value difference, the G-value difference, and the B-value difference being within the first preset range. In a scene where color values are represented as hexadecimal color codes, each color value consists of a single hexadecimal color code; on this basis, the difference between the color value of the hair region and the color value of the eyebrow region may be expressed as the difference between the hexadecimal color code corresponding to the hair color and the hexadecimal color code corresponding to the eyebrow color.
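The interval form of the preset range described here, with a possibly negative left endpoint and a positive right endpoint, can be sketched per channel. The hexadecimal variant below simply converts a `#RRGGBB` code to channel values first; that conversion is an assumption of this sketch, since the disclosure does not spell out how hexadecimal codes are compared.

```python
def diff_in_interval(hair_rgb, brow_rgb, low, high):
    """Each of the R, G and B differences must lie inside [low, high];
    `low` may be negative, since the raw difference can be negative."""
    return all(low <= h - b <= high for h, b in zip(hair_rgb, brow_rgb))

def hex_to_rgb(code):
    """Turn a hexadecimal color code like '#7F00FF' into an (R, G, B) tuple."""
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))
```

For example, a hair color of (120, 100, 90) and an eyebrow color of (110, 95, 92) give channel differences of 10, 5, and -2, all inside the interval [-15, 15].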
In addition, if it is undesirable for the interval to include negative numbers, and the interval should instead consist only of non-negative numbers, then in another embodiment the scheme of step S012 may be adjusted accordingly; for example, the statement that the difference between the color value of the hair region and the color value of the eyebrow region is within a first preset range may be adjusted to: the absolute value of the difference between the color value of the hair region and the color value of the eyebrow region is within a first preset range.
In the fourth embodiment, the goal is to satisfy the user's requirement of setting the hair region and the eyebrow region of the same person in the target image to color values different from the original hair color and eyebrow color, while still ensuring that the hair color and the eyebrow color are consistent. In step S011, determining the hair region and the eyebrow region of the person in the target image on which a face is displayed is executed when a user instruction for indicating matching of the hair color and the eyebrow color is received, where the user instruction carries the target color values set by the user. In step S012, matching the color value of the hair region with the color value of the eyebrow region may include:
in step S01241, the target color values set by the user for the hair region and the eyebrow region are determined according to the user instruction;
in step S01242, the color values of the hair region and the eyebrow region are both set as the target color values.
Therefore, according to the fourth embodiment, the user can change the original hair color and eyebrow color of the person in the target image as needed, which satisfies more usage requirements and increases the user's interest in using the function.
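A sketch of the fourth embodiment, again modeling the image as rows of (R, G, B) tuples and each region as a set of (x, y) pixel coordinates; the target color value is assumed to arrive as an (R, G, B) tuple carried by the user instruction.

```python
def apply_target_color(image, hair_mask, brow_mask, target_color):
    """Fourth embodiment (S01241/S01242): write the user-chosen target color
    value into both the hair region and the eyebrow region, so that their
    difference is zero while both differ from the original colors."""
    for (x, y) in hair_mask | brow_mask:          # union of both regions
        image[y][x] = target_color
```

Pixels outside the two regions are left untouched, so only the hair and eyebrows are recolored.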
In the present disclosure, the hair region and the eyebrow region belong to the same person in the target image.
Therefore, the hair-and-eyebrow matching scheme of any of the above embodiments can reduce the difference between the hair color and the eyebrow color in the target image. For example, as shown in fig. 2 and 3, fig. 2 is a schematic diagram of an original target image according to an exemplary embodiment of the present disclosure, and fig. 3 is a schematic diagram of a target image obtained after the original target image shown in fig. 2 has been processed by one of the image processing methods provided by the embodiments of the present disclosure. As can be seen from fig. 2 and 3, after the hair color and the eyebrow color are matched, the eyebrow color and the hair color of the person in the updated target image are the same. A black border is left at the edge of the eyebrow region in fig. 3 only to clearly indicate the area occupied by the eyebrows. In practice, if the person's hair is purple and the eyebrows are black, then after processing by the method of any embodiment of the present disclosure, the eyebrow color and the hair color of the person in the updated target image may both be purple, or both be black, or both be a target color value set by the user.
In any case, the hair color and the eyebrow color of the person in the updated target image are similar or identical, so a good beautifying effect is achieved. Moreover, because the color values of the eyebrow region and/or the hair region are updated only after the eyebrow region and the hair region have been identified, changes to the eyebrow region or the hair region themselves are avoided, and the eyebrow shape of the person after the color value update remains consistent with the person's facial expression in the original image. This solves the problems of the related art, in which an eyebrow sticker directly covering the eyebrow region produces a rigid visual effect, is easily seen through, and cannot adapt to different expressions or different eyebrow shapes, and it helps improve the authenticity, attractiveness, and naturalness of the person's face in the processed image.
Beyond matching the hair color and the eyebrow color of the person in the image, in order to meet the user's beautification requirements for other areas of the person and to enhance the visual effect and aesthetics of the processed image, in an embodiment based on any of the above embodiments, the method may further include:
in step S021, an eyelash region in the target image is determined;
in step S022, color values of the eyelash regions are matched according to the color values of the hair regions or the eyebrow regions, so that a difference between the color values of the hair regions or the eyebrow regions and the color values of the eyelash regions is within a second preset range.
In the above, the eyelash area of the person in the target image can be identified and obtained through an AI face identification technology in the related art, which is not described herein again. Similarly, the eyelash area, the hair area, and the eyebrow area all belong to the same person in the target image. The value range of the second preset range may be the same as or different from the value range of the first preset range.
In addition, the principle of matching the color values of the eyelash area and the hair area or the eyebrow area is the same as the principle of matching the color values of the hair area and the eyebrow area, so the above matching scheme of the hair and the eyebrow can be referred to, and details are not repeated here.
Therefore, by further matching the color values of the eyelash area with those of the hair area or the eyebrow area, the hair color, eyebrow color, and eyelash color of the person in the target image become similar or identical. This improves the visual coordination among the person's hair, eyebrows, and eyelashes, and further improves the image processing effect and the aesthetics of the image.
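One way the eyelash matching of steps S021–S022 could be realized is with a uniform color shift rather than a flat fill, so that the texture of individual lashes survives the update. This is only a sketch under assumed inputs: the masks are placeholders for the areas an AI face recognition step would return, and the shift-based update is one possible matching scheme, not necessarily the one the disclosure intends.

```python
import numpy as np

def match_region_color(image, ref_mask, target_mask, preset_range=10.0):
    """Shift the target area's color values so that its mean matches the
    reference area's mean; a uniform per-channel shift keeps the texture
    inside the target area (e.g. individual lashes) intact."""
    ref_mean = image[ref_mask].mean(axis=0)
    target_mean = image[target_mask].mean(axis=0)
    diff = ref_mean - target_mean
    if np.abs(diff).max() <= preset_range:
        return image  # already within the second preset range
    out = image.astype(np.float32)
    out[target_mask] += diff
    return np.clip(out, 0, 255).round().astype(np.uint8)

# toy image: "hair" in the top row, one "eyelash" pixel at (2, 2)
img = np.zeros((3, 3, 3), dtype=np.uint8)
hair = np.zeros((3, 3), dtype=bool); hair[0] = True
lash = np.zeros((3, 3), dtype=bool); lash[2, 2] = True
img[hair] = (50, 30, 20)
img[lash] = (150, 150, 150)
matched = match_region_color(img, hair, lash)
```

Passing the eyebrow mask as `ref_mask` instead of the hair mask covers the "according to the color values of the eyebrow areas" branch with no other change.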
In addition, a user who applies makeup may apply it only to the face and not to the neck, resulting in inconsistent face and neck colors; the face area and neck area displayed in the captured target image then differ greatly in color, which affects the visual effect and aesthetics of the image. To solve this technical problem and meet the user's beautifying requirements for other areas of the person in the image, in an embodiment based on any of the above embodiments, the method may further include:
in step S031, a face skin region and a neck region in the target image are determined;
in step S032, color values of the face skin region and the neck region are matched, so that a difference between the color values of the face skin region and the neck region is within a third preset range.
In the above, the face skin area and the neck area of the person in the target image can be obtained by an AI face recognition technology in the related art, which is not described herein again. Similarly, the face skin area and the neck area belong to the same person in the target image. The value range of the third preset range may be the same as or different from that of the first preset range.
In addition, the principle of matching the color values of the face skin region and the neck region is the same as the principle of matching the color values of the hair region and the eyebrow region, so that reference may be made to the above matching scheme of hair color and eyebrow color, which is not described herein again.
Therefore, the color value of the face skin area is matched with the color value of the neck area, so that the face skin and the neck skin of a person in the target image are similar or identical, and the image processing effect and the image aesthetic degree can be improved.
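For skin, a per-channel gain (rather than an additive shift) is a common way to pull one area's tone toward another's, since it scales shadows and highlights proportionally. The sketch below shows this for steps S031–S032 under the same assumptions as before: the masks, the example tones, and the gain-based update are all illustrative choices, not details taken from the disclosure.

```python
import numpy as np

def match_neck_to_face(image, face_mask, neck_mask, preset_range=8.0):
    """Apply a per-channel gain to the neck area so that its mean color
    value matches the mean color value of the face skin area."""
    face_mean = image[face_mask].mean(axis=0)
    neck_mean = image[neck_mask].mean(axis=0)
    if np.abs(face_mean - neck_mean).max() <= preset_range:
        return image  # difference already within the third preset range
    gain = face_mean / np.maximum(neck_mean, 1e-6)  # avoid divide-by-zero
    out = image.astype(np.float64)
    out[neck_mask] *= gain
    return np.clip(out, 0, 255).round().astype(np.uint8)

# toy image: a made-up face tone in row 0 vs. a paler neck tone in row 1
img = np.zeros((2, 4, 3), dtype=np.uint8)
face = np.zeros((2, 4), dtype=bool); face[0] = True
neck = np.zeros((2, 4), dtype=bool); neck[1] = True
img[face] = (180, 140, 120)
img[neck] = (150, 140, 120)
corrected = match_neck_to_face(img, face, neck)
```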
In addition, for an individual user, color blocks of different colors and shadows may appear in the face area displayed in the captured target image because the face has a birthmark or blemish, or because the face color is uneven for physical reasons or due to the angle of the light, which affects the visual effect and aesthetics of the image. To solve this technical problem and meet the user's beautifying requirements for other areas of the person in the image, based on any of the above embodiments, in one embodiment the method may further include:
in step S041, color values of pixel points included in the face skin region in the target image are determined;
in step S042, determining whether the color of the face skin area is uniform according to the color values of the pixel points;
in step S043, if the color of the face skin region is not uniform, the color value of the face skin region is updated so that the color of the face skin region is uniform.
In the above, the face skin area of the person in the target image can be identified through an AI face recognition technology in the related art, which is not described herein again.
In step S042, the face skin region may be processed by an image region segmentation technique in the related art to determine whether there is a region with different pixel attributes, where in this embodiment, the pixel attributes may include color values of pixels. If there are areas with different pixel attributes, the color of the face skin area can be considered to be non-uniform. At this time, the color value of the face skin region may be updated.
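A very simple stand-in for the uniformity decision of step S042 is to compare each pixel's color value against the region mean: if any pixel deviates beyond a tolerance, the region is treated as containing areas with different pixel attributes. The tolerance value and the mean-deviation criterion are assumptions for illustration; the disclosure leaves the segmentation technique to the related art.

```python
import numpy as np

def is_uniform(image, skin_mask, tol=12.0):
    """Stand-in for the segmentation check of step S042: the face skin
    area counts as uniform when no pixel deviates from the area's mean
    color value by more than `tol` on any channel."""
    pixels = image[skin_mask].astype(np.float32)
    deviation = np.abs(pixels - pixels.mean(axis=0))
    return bool(deviation.max() <= tol)

mask = np.ones((4, 4), dtype=bool)
even = np.full((4, 4, 3), 200, dtype=np.uint8)          # uniform skin tone
blotchy = even.copy(); blotchy[0, 0] = (100, 100, 100)  # one dark patch
```

`is_uniform(even, mask)` is true and `is_uniform(blotchy, mask)` is false, which is exactly the branch condition that decides whether step S043 updates the color value of the face skin area.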
In the above, in order to make the color of the face skin area uniform, the embodiments of the present disclosure provide a variety of skin color value updating schemes for updating the color value of the face skin area; see the following examples.
In a first example, the color value of the largest region (the region with the greatest area) may be obtained, and the color values of the other regions of the face skin area may be set to that color value, so that the color of the updated face skin area is uniform.
In a second example, the color value of a target region with the highest, middle, or lowest brightness may be obtained, and the color values of the regions other than the target region in the face skin area may be set to that color value, so that the color of the updated face skin area is uniform.
In a third example, after the regions with different pixel attributes included in the face skin region are identified, second prompt information for prompting the user to select a designated region from the regions with different pixel attributes may be output. After the second prompt message is output, when a first specified instruction for indicating a specified area selected by a user is received, acquiring a color value of the specified area according to the first specified instruction, and setting color values of other areas except the specified area in the face skin area as the color value of the specified area, so that the color of the updated face skin area is uniform.
In a fourth example, after the regions with different pixel attributes included in the face skin region are identified, third prompting information for prompting the user to input a specified color value of the face skin region may be output, so that the user may change the color value of the face skin region to another color value in a unified manner as needed. After the third prompt message is output, when a second specified instruction for indicating the specified color value selected by the user is received, the specified color value set by the user is obtained according to the second specified instruction, and the color value of the face skin area is set as the specified color value.
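The first example above can be sketched as follows. The sketch assumes an integer label map has already been produced by the region-segmentation step of step S042 (how that segmentation works is left to the related art), and uses the mean RGB of the largest-area label as its "color value".

```python
import numpy as np

def unify_skin_color(image, skin_mask, labels):
    """First example above: take the mean color value of the label that
    covers the largest area of the skin region and paint the whole region
    with it. `labels` is assumed to come from a prior segmentation step
    that grouped pixels with similar attributes."""
    counts = np.bincount(labels[skin_mask])
    biggest = counts.argmax()
    color = image[skin_mask & (labels == biggest)].mean(axis=0)
    out = image.copy()
    out[skin_mask] = color.round().astype(image.dtype)
    return out

# toy setup: label 0 covers 12 pixels, label 1 (a darker patch) covers 4
img = np.zeros((4, 4, 3), dtype=np.uint8)
labels = np.zeros((4, 4), dtype=np.int64); labels[3] = 1
skin = np.ones((4, 4), dtype=bool)
img[labels == 0] = (210, 180, 160)
img[labels == 1] = (150, 120, 100)
evened = unify_skin_color(img, skin, labels)
```

The second through fourth examples differ only in how the target color value is chosen (by brightness, by a user-selected region, or by a user-specified value); the final paint-over step is the same assignment as here.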
Therefore, the human face skin area is further processed, so that the human face skin of a person in the target image is changed from an uneven state to an even state, the image processing effect and the image aesthetic degree can be improved, the use requirement of a user can be better met, and the user experience is improved.
Corresponding to the embodiment of the image processing method, the disclosure also provides an image processing device, and the device can be applied to a terminal. As shown in fig. 4, fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment of the present disclosure, where the image processing apparatus 400 includes:
a first determining module 401, configured to determine a hair region and an eyebrow region of a person in a target image in which a human face is displayed;
a first matching module 402, configured to match the color values of the hair region and the eyebrow region, so that a difference between the color values of the hair region and the eyebrow region is within a first preset range.
In one embodiment, the first matching module 402 comprises:
a first color determination unit for determining a hair color value from the hair region; the hair color value is used for representing the hair color of the person displayed in the current target image;
and the first color value setting unit is used for setting the color value of the eyebrow area as the hair color value.
In another embodiment, the first matching module 402 comprises:
the first eyebrow color determining unit is used for determining eyebrow color values according to the eyebrow areas; the eyebrow color values are used for representing the eyebrow color of a person displayed in the current target image;
and the second color value setting unit is used for setting the color value of the hair area as the eyebrow color value.
In yet another embodiment, the first matching module 402 includes:
and the first color value updating unit is used for updating the color values of the hair area or the eyebrow area when the difference between the color values of the hair area and the eyebrow area is not in the first preset range, so that the difference between the color values calculated based on the updated color values is in the first preset range.
In a further embodiment, the first determining module 401 is configured to determine a hair region and an eyebrow region of a person in a target image with a human face displayed thereon, when a user instruction for indicating matching hair color and eyebrow color is received; the user instruction carries a target color value set by a user;
the first matching module 402 comprises:
a target color value determining unit, configured to determine the target color values set by the user for the hair region and the eyebrow region according to the user instruction;
and the third color value setting unit is used for setting the color values of the hair area and the eyebrow area as the target color values.
In an embodiment, the apparatus 400 may further include:
a second determining module, configured to determine an eyelash region in the target image;
the second matching module is used for matching the color values of the eyelash regions according to the color values of the hair regions or the eyebrow regions so that the difference between the color values of the hair regions or the eyebrow regions and the color values of the eyelash regions is within a second preset range.
In an embodiment, the apparatus 400 may further include:
the third determining module is used for determining a human face skin area and a neck area in the target image;
and the third matching module is used for matching the color values of the face skin area and the neck area so as to enable the difference between the color values of the face skin area and the neck area to be in a third preset range.
In an embodiment, the apparatus 400 may further include:
the fourth determining module is used for determining color values of pixel points contained in the face skin area in the target image;
a fifth determining module, configured to determine whether the color of the face skin area is uniform according to the color values of the pixel points;
and the first updating module is used for updating the color value of the face skin area when the color of the face skin area is not uniform so as to ensure that the color of the face skin area is uniform.
The implementation process of the functions and actions of each module and unit in each device is specifically described in the implementation process of the corresponding steps in the method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units.
Corresponding to the embodiment of the foregoing image processing method, the present disclosure also provides an electronic device of an image processing apparatus, the electronic device including:
a processor;
a memory for storing a computer program executable by the processor;
wherein the processor implements the steps of the image processing method in any of the preceding embodiments when executing the program.
As shown in fig. 5, fig. 5 is a block diagram illustrating a structure of an electronic device of an image processing apparatus according to an exemplary embodiment of the present disclosure. The electronic device 500 may be a computer, a mobile phone, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or other terminal device.
Referring to fig. 5, electronic device 500 may include one or more of the following components: processing component 501, memory 502, power component 503, multimedia component 504, audio component 505, interface to input/output (I/O) 506, sensor component 507, and communication component 508.
The processing component 501 generally controls overall operations of the electronic device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 501 may include one or more processors 509 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 501 may include one or more modules that facilitate interaction between the processing component 501 and other components. For example, the processing component 501 may include a multimedia module to facilitate interaction between the multimedia component 504 and the processing component 501.
The memory 502 is configured to store various types of data to support operations at the electronic device 500. Examples of such data include instructions for any application or method operating on the electronic device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 502 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 503 provides power to the various components of the electronic device 500. The power components 503 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 500.
The multimedia component 504 includes a screen that provides an output interface between the electronic device 500 and a user. The screen may include a Touch Panel (TP), implemented as a touch screen, to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 504 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 500 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 505 is configured to output and/or input audio signals. For example, the audio component 505 may include a Microphone (MIC) configured to receive external audio signals when the electronic device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 502 or transmitted via the communication component 508. In some embodiments, audio component 505 further comprises a speaker for outputting audio signals.
The I/O interface 506 provides an interface between the processing component 501 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 507 includes one or more sensors for providing various aspects of status assessment for the electronic device 500. For example, the sensor assembly 507 may detect an open/closed state of the electronic device 500 and the relative positioning of components, such as the display and keypad of the electronic device 500. The sensor assembly 507 may also detect a change in the position of the electronic device 500 or of a component of the electronic device 500, the presence or absence of user contact with the electronic device 500, the orientation or acceleration/deceleration of the electronic device 500, and a change in the temperature of the electronic device 500. The sensor assembly 507 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 507 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 507 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, a temperature sensor, a photoelectric sensor, or a GPS sensor.
The communication component 508 is configured to facilitate wired or wireless communication between the electronic device 500 and other devices. The electronic device 500 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G LTE, 5G NR (5G New Radio), or a combination thereof. In an exemplary embodiment, the communication component 508 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 508 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
The implementation process of the functions and actions of each unit in the electronic device is specifically described in the implementation process of the corresponding step in the method, and is not described herein again.
For the electronic device embodiment, the above-described electronic device embodiment is only illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may also be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
In correspondence with the aforementioned embodiments of the image processing method, the present disclosure also provides a computer-readable storage medium on which a computer program is stored, which, when executed by the processor 509 of the aforementioned electronic device, implements the steps of the image processing method described in any of the aforementioned embodiments.
The present disclosure may take the form of a computer program product embodied on one or more storage media including, but not limited to, disk storage, CD-ROM, optical storage, and the like, having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of the storage medium of the computer include, but are not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by a computing device.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (18)

1. An image processing method, characterized in that the method comprises:
determining a hair area and an eyebrow area of a person in a target image on which a face is displayed;
and matching the color value of the hair area with the color value of the eyebrow area so as to enable the difference between the color value of the hair area and the color value of the eyebrow area to be within a first preset range.
2. The method of claim 1, wherein the step of matching the color values of the hair region and the eyebrow region comprises:
determining a hair color value from the hair region; the hair color value is used for representing the hair color of the person displayed in the current target image;
and setting the color value of the eyebrow area as the hair color value.
3. The method of claim 1, wherein the step of matching the color values of the hair region and the eyebrow region comprises:
determining an eyebrow color value according to the eyebrow area; the eyebrow color values are used for representing the eyebrow color of a person displayed in the current target image;
and setting the color value of the hair area as the eyebrow color value.
4. The method of claim 1, wherein the step of matching the color values of the hair region and the eyebrow region comprises:
when the difference between the color value of the hair region and the color value of the eyebrow region is not within the first preset range, updating the color value of the hair region or the color value of the eyebrow region so that the difference between the color values calculated based on the updated color values is within the first preset range.
5. The method according to claim 1, wherein the step of determining the hair region and the eyebrow region of the person in the target image on which the face is displayed is performed upon receiving a user instruction for indicating matching of the hair color and the eyebrow color, the user instruction carrying target color values set by the user;
matching the color values of the hair region and the eyebrow region, comprising:
determining the target color values set for the hair region and the eyebrow region by the user according to the user instruction;
setting the color values of the hair area and the eyebrow area as the target color values.
6. The method according to any one of claims 1 to 5, further comprising:
determining an eyelash region in the target image;
and matching the color values of the eyelash areas according to the color values of the hair areas or the eyebrow areas so that the difference between the color values of the hair areas or the eyebrow areas and the color values of the eyelash areas is within a second preset range.
7. The method of claim 1, further comprising:
determining a human face skin area and a neck area in the target image;
and matching the color values of the face skin area and the neck area so as to enable the difference between the color values of the face skin area and the neck area to be in a third preset range.
8. The method of claim 1, further comprising:
determining color values of pixel points contained in a face skin area in the target image;
determining whether the color of the human face skin area is uniform or not according to the color values of the pixel points;
and if the color of the face skin area is not uniform, updating the color value of the face skin area so as to make the color of the face skin area uniform.
9. An image processing apparatus, characterized in that the apparatus comprises:
the first determining module is used for determining a hair area and an eyebrow area of a person in a target image with a human face;
the first matching module is used for matching the color values of the hair area and the eyebrow area so that the difference between the color values of the hair area and the eyebrow area is within a first preset range.
10. The apparatus of claim 9, wherein the first matching module comprises:
a first color determination unit for determining a hair color value from the hair region; the hair color value is used for representing the hair color of the person displayed in the current target image;
and the first color value setting unit is used for setting the color value of the eyebrow area as the hair color value.
11. The apparatus of claim 9, wherein the first matching module comprises:
the first eyebrow color determining unit is used for determining eyebrow color values according to the eyebrow areas; the eyebrow color values are used for representing the eyebrow color of a person displayed in the current target image;
and the second color value setting unit is used for setting the color value of the hair area as the eyebrow color value.
12. The apparatus of claim 9, wherein the first matching module comprises:
and the first color value updating unit is used for updating the color values of the hair area or the eyebrow area when the difference between the color values of the hair area and the eyebrow area is not in the first preset range, so that the difference between the color values calculated based on the updated color values is in the first preset range.
13. The apparatus of claim 9, wherein the first determining module is configured to determine a hair region and an eyebrow region of a person in a target image with a face displayed thereon, upon receiving a user instruction indicating matching hair color and eyebrow color; the user instruction carries a target color value set by a user;
the first matching module includes:
a target color value determining unit, configured to determine the target color values set by the user for the hair region and the eyebrow region according to the user instruction;
and the third color value setting unit is used for setting the color values of the hair area and the eyebrow area as the target color values.
14. The apparatus of any one of claims 9 to 13, further comprising:
a second determining module, configured to determine an eyelash region in the target image;
the second matching module is used for matching the color values of the eyelash regions according to the color values of the hair regions or the eyebrow regions so that the difference between the color values of the hair regions or the eyebrow regions and the color values of the eyelash regions is within a second preset range.
15. The apparatus of claim 9, further comprising:
the third determining module is used for determining a human face skin area and a neck area in the target image;
and the third matching module is used for matching the color values of the face skin area and the neck area so as to enable the difference between the color values of the face skin area and the neck area to be in a third preset range.
16. The apparatus of claim 9, further comprising:
the fourth determining module is used for determining color values of pixel points contained in the face skin area in the target image;
a fifth determining module, configured to determine whether the color of the face skin area is uniform according to the color values of the pixel points;
and the first updating module is used for updating the color value of the face skin area when the color of the face skin area is not uniform so as to ensure that the color of the face skin area is uniform.
17. An electronic device, comprising:
a processor;
a memory for storing a computer program executable by the processor;
wherein the processor implements the steps of the method of any one of claims 1 to 8 when executing the program.
18. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN201910589702.0A 2019-07-02 2019-07-02 Image processing method, image processing device, electronic equipment and storage medium Pending CN112184540A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910589702.0A CN112184540A (en) 2019-07-02 2019-07-02 Image processing method, image processing device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112184540A true CN112184540A (en) 2021-01-05

Family

ID=73915600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910589702.0A Pending CN112184540A (en) 2019-07-02 2019-07-02 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112184540A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883821A (en) * 2021-01-27 2021-06-01 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN112883821B (en) * 2021-01-27 2024-02-20 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN114373057A (en) * 2021-12-22 2022-04-19 聚好看科技股份有限公司 Method and equipment for matching hair with head model
CN114758027A (en) * 2022-04-12 2022-07-15 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination