CN114663549A - Image processing method, device, equipment and storage medium


Info

Publication number
CN114663549A
CN114663549A
Authority
CN
China
Prior art keywords: hair, color, brightness, user image, target
Legal status: Pending
Application number: CN202210328130.2A
Other languages: Chinese (zh)
Inventors: 赵薇, 肖任意
Current Assignee: Spreadtrum Communications Tianjin Co Ltd
Original Assignee: Spreadtrum Communications Tianjin Co Ltd
Application filed by Spreadtrum Communications Tianjin Co Ltd
Priority to CN202210328130.2A
Publication of CN114663549A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application provides an image processing method, an image processing apparatus, image processing equipment and a storage medium. The method includes: acquiring a user image; acquiring brightness information of a hair region of the user image and color information of a target hair color; obtaining a first hair color according to the brightness information of the hair region of the user image and the color information of the target hair color; and acquiring a hair mask of the user image according to the user image, and fusing the first hair color and the original hair color of the hair region according to the hair mask to obtain a target hair dyeing image, wherein the hair mask is a mask for the hair region. The adaptability of virtual hair dyeing is thereby improved.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a storage medium.
Background
With the development of internet technology, various special effect processing functions based on images are popular. For example, the function of adjusting the color of the user's hair in the image to achieve virtual hair dyeing is provided in the related application program.
In the related art, the image virtual hair dyeing process may include the following steps: firstly, acquiring a target image; then, extracting a region to be adjusted from the target image; and then, carrying out color development adjustment processing on the area to be adjusted, thereby obtaining a target image after color development adjustment.
In the above related art, the hair color in the target image is directly adjusted to the target hair color without considering the brightness of the hair itself in the target image, so the same target hair color produces an identical hair dyeing effect for users with different original hair colors. In reality, the same hair dye produces slightly different results on users with different hair colors, so the virtual hair dyeing effect looks unnatural and fits the user poorly.
Disclosure of Invention
In view of this, the present application provides an image processing method, apparatus, device and storage medium, so as to solve the problem in the prior art that the virtual hair dyeing effect looks unnatural and fits the user poorly.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a user image;
acquiring brightness information of a hair region of the user image and color information of a target hair color;
obtaining a first hair color according to the brightness information of the hair region of the user image and the color information of the target hair color;
acquiring a hair mask of the user image according to the user image, and fusing the first hair color and the original hair color of the hair region according to the hair mask to obtain a target hair dyeing image; wherein the hair mask is a mask for the hair region.
Preferably, the acquiring brightness information of the hair region of the user image includes:
acquiring the original brightness of a hair area of the user image;
determining a brightness adjustment coefficient of the hair area according to the original brightness of the hair area of the user image and the preset brightness of the target hair color;
and adjusting the original brightness of the hair area of the user image according to the brightness adjustment coefficient of the hair area, and taking the adjusted brightness as the brightness information of the hair area of the user image.
Preferably, the determining the brightness adjustment coefficient of the hair region according to the original brightness of the hair region of the user image and the preset brightness of the target hair color comprises:
calculating the original average brightness of the hair area of the user image according to the original brightness of the hair area of the user image;
and determining a brightness adjustment coefficient of the hair area according to the original average brightness of the hair area of the user image and the preset brightness of the target hair color.
Preferably, the determining the brightness adjustment coefficient of the hair region according to the original average brightness of the hair region of the user image and the preset brightness of the target hair color comprises:
according to the original average brightness of the hair region of the user image and the preset brightness of the target hair color, determining the brightness adjustment coefficient of the hair region using the formula k = MAX(1.0 × Y1/(srcYmean + 1), 1), k ≥ 1; where k denotes the brightness adjustment coefficient, MAX() denotes taking the maximum value, Y1 denotes the preset brightness of the target hair color, and srcYmean denotes the original average brightness of the hair region.
Preferably, the acquiring of the hair mask of the user image from the user image comprises:
and carrying out hair segmentation processing on the user image by adopting a pre-trained hair segmentation model to obtain a hair mask image of the user image.
Preferably, the performing, by using a pre-trained hair segmentation model, hair segmentation processing on the user image to obtain a hair mask image of the user image includes:
performing hair segmentation processing on the user image by adopting a pre-trained hair segmentation model to obtain an initial hair mask image of the user image;
and carrying out edge feathering treatment on the initial hair mask image of the user image to obtain the hair mask image of the user image.
Preferably, before the acquiring the brightness information of the hair region of the user image and the color information of the target hair color, the method further includes:
detecting the user image, and acquiring face information in the user image;
determining whether the user image meets a preset hair dyeing condition or not according to the facial information;
the acquiring brightness information of a hair region of the user image and color information of a target hair color comprises:
and when the user image meets the preset hair dyeing condition, acquiring brightness information of a hair area of the user image and color information of target hair color.
Preferably, the fusing the first hair color and the original hair color of the hair region according to the hair mask to obtain the target hair dyeing image comprises:
acquiring a first fusion weight of the first hair color and a second fusion weight of the original hair color; wherein the first fusion weight is associated with the hair mask;
and fusing the first hair color and the original hair color of the hair region according to the first fusion weight of the first hair color and the second fusion weight of the original hair color to obtain a target hair dyeing image.
Preferably, the obtaining the first fusion weight of the first hair color and the second fusion weight of the original hair color comprises:
acquiring a preset hair dyeing effect intensity coefficient;
determining a first fusion weight of the first hair color according to the preset hair dyeing effect intensity coefficient and the hair mask;
calculating a second fusion weight of the original hair color according to the first fusion weight of the first hair color and a preset maximum weight value; wherein the preset maximum weight value is the maximum value of the preset fusion weight.
Preferably, the determining the first fusion weight of the first hair color according to the preset hair dyeing effect intensity coefficient and the hair mask comprises:
according to the preset hair dyeing effect intensity coefficient and the hair mask, determining the first fusion weight of the first hair color using the formula alpha = hairModeldst · σ, σ ∈ [0, 1]; where alpha denotes the first fusion weight, hairModeldst denotes the hair mask, and σ denotes the preset hair dyeing effect intensity coefficient;
the calculating a second fusion weight of the original color according to the first fusion weight of the first color and a preset maximum weight value includes:
calculating the second fusion weight of the original hair color from the first fusion weight of the first hair color and the preset maximum weight value using the formula beta = A - alpha; where A denotes the preset maximum weight value and beta denotes the second fusion weight.
Preferably, the acquiring brightness information of the hair region of the user image and color information of the target hair color comprises:
acquiring brightness information of a hair area of a user image in a Hue Saturation Value (HSV) color space and color information of target hair color;
the obtaining a first hair color according to the brightness information of the hair region of the user image and the color information of the target hair color comprises:
obtaining a first hair color in the HSV color space according to the brightness information of the hair region of the user image in the HSV color space and the color information of the target hair color in the HSV color space;
and converting the first hair color in the HSV color space into a first hair color in the red, green and blue (RGB) color space.
Preferably, the fusing the first hair color and the user image according to the hair mask to obtain a target hair color image includes:
acquiring a virtual hair dyeing type;
when the virtual hair dyeing type is a first type, fusing the first hair and the user image according to the hair mask to obtain a target hair dyeing image;
when the virtual hair dyeing type is a second type, acquiring a hair color brightening effect coefficient corresponding to the target hair color;
according to the hair mask, fusing the first hair color and the original hair color of the hair region to obtain a second hair color;
calculating the brightness adjustment amount of the hair region according to the hair color brightening effect coefficient corresponding to the target hair color, the hair mask, the original brightness of the hair region and the original average brightness of the hair region;
and adjusting the brightness of the second hair color according to the brightness adjustment amount of the hair region, and taking the image with the brightness-adjusted second hair color as the target hair dyeing image.
Preferably, the calculating the brightness adjustment amount of the hair region according to the hair color brightening effect coefficient corresponding to the target hair color, the hair mask, the original brightness of the hair region, and the original average brightness of the hair region includes:
according to the hair color brightening effect coefficient corresponding to the target hair color, the hair mask, the original brightness of the hair region, and the original average brightness of the hair region, calculating the brightness adjustment amount of the hair region using the formula B = δ · MAX(srcY - srcYmean, 0) · hairModeldst / A, δ ∈ (0, m], m > 0; where B denotes the brightness adjustment amount of the hair region, δ denotes the hair color brightening effect coefficient corresponding to the target hair color, srcY denotes the original brightness, srcYmean denotes the original average brightness of the hair region, hairModeldst denotes the hair mask, A denotes the preset maximum weight value, and m denotes the maximum value of the brightening effect coefficient corresponding to the target hair color;
the adjusting the brightness of the second hair color according to the brightness adjustment amount of the hair region includes:
according to the brightness adjustment amount of the hair region, adjusting the brightness of the second hair color using the formula Y3 = Y2 + B; where Y3 denotes the adjusted brightness of the second hair color and Y2 denotes the brightness of the second hair color before adjustment.
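For illustration only, the two formulas above might be realized as follows in Python with NumPy; the array names, the value range, and the final clipping are assumptions of this sketch, not part of the patent:

    import numpy as np

    def brighten_second_hair_color(src_y, hair_mask, y2, delta, A=255.0):
        # B = delta * MAX(srcY - srcYmean, 0) * hairModel_dst / A  (brightness adjustment amount)
        # Y3 = Y2 + B                                              (adjusted second hair color)
        src_y_mean = src_y[hair_mask > 0].mean()  # original average brightness of the hair region
        b = delta * np.maximum(src_y - src_y_mean, 0.0) * hair_mask / A
        return np.clip(y2 + b, 0.0, 255.0)        # clipping to [0, 255] is an added assumption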
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an acquisition unit configured to acquire a user image;
the acquiring unit is further used for acquiring brightness information of a hair area of the user image and color information of target hair color;
the processing unit is used for obtaining a first hair color according to the brightness information of the hair region of the user image and the color information of the target hair color;
the processing unit is further used for acquiring a hair mask of the user image according to the user image, and fusing the first hair color and the original hair color of the hair area according to the hair mask to obtain a target hair-dyeing image; wherein the hair mask is a mask for a hair region.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method of any one of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, which, when run on a computer, causes the computer to perform the method of any one of the above first aspects.
By adopting the scheme provided by the embodiment of the application, a user image is acquired; brightness information of the hair region of the user image and color information of the target hair color are acquired; a first hair color is obtained according to the brightness information of the hair region of the user image and the color information of the target hair color; a hair mask of the user image is acquired according to the user image; and the first hair color and the original hair color of the hair region are fused according to the hair mask to obtain a target hair dyeing image. That is, in the embodiment of the present application, when the user image is virtually dyed, the color of the target hair color is combined with the brightness of the hair region in the user image to form the first hair color, and the first hair color is fused with the original hair color of the hair region in the user image to obtain the target hair dyeing image. Therefore, in the application, the user image can be virtually dyed using the target hair color together with the brightness and the original hair color of the hair region in the user image, so that the hair in the output target hair dyeing image is closer to a real hair dyeing effect: the dyeing looks natural and fits the user better, and both the accuracy and the adaptability of virtual hair dyeing are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a scene schematic diagram of an image processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For better understanding of the technical solutions of the present application, the following detailed descriptions of the embodiments of the present application are provided with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Before specifically describing the embodiments of the present application, terms applied or likely to be applied to the embodiments of the present application will be explained first.
Face detection: given an image, the positions of all faces in the image are found.
The HSV (Hue, Saturation, Value) color space is one format of color space. Its color parameters are hue (H), saturation (S) and lightness (V). Hue is measured as an angle in the range 0-360°, counted counterclockwise from red: red is 0°, green is 120° and blue is 240°; their complementary colors are yellow at 60°, cyan at 180° and violet at 300°. Saturation indicates how closely a color approaches a pure spectral color: a color can be seen as a mixture of a spectral color and white, and the larger the proportion of the spectral color, the higher the saturation; a highly saturated color is deep and vivid. When the white-light component is zero, saturation is at its highest. Saturation usually ranges from 0% to 100%, and the larger the value, the more saturated the color. Lightness represents how bright a color is: for a light source color, the lightness value is related to the luminance of the illuminant; for an object color, it is related to the transmittance or reflectance of the object. Lightness typically ranges from 0% (black) to 100% (white). The HSV color space is a user-oriented color model.
The RGB (Red, Green, Blue) color space is an industry color standard in which colors are obtained by varying and superimposing the three color channels red (R), green (G) and blue (B). RGB represents the colors of these three channels, and the standard covers almost all colors perceivable by human vision; it is one of the most widely used color systems. In the RGB color space, each of the R, G and B components of every pixel in an image is assigned an intensity value in the range 0-255. The RGB color space is designed on the principle of mixing colored light: intuitively, it is as if there were three lamps, red, green and blue, whose lights mix when superimposed, with the mixed luminance equal to the sum of the individual luminances. The more light is mixed, the higher the luminance, which is why this is called additive mixing.
The YUV color space refers to a pixel format in which the luminance parameter and the chrominance parameters are expressed separately: "Y" represents luminance (luma), i.e., the gray value, while "U" and "V" represent chrominance (chroma), which describes the color and saturation of an image and specifies the color of a pixel. This design was originally intended for television systems and analog video. Because luminance information (Y) is separated from color information (UV), a complete image, merely in black and white, can be displayed without the UV information, which neatly solved the compatibility problem between color and black-and-white television. Moreover, YUV does not require three independent video signals to be transmitted simultaneously as RGB does, so with some compression, transmitting in YUV occupies much less bandwidth.
In the related art, the image virtual hair dyeing process may include the following steps: firstly, acquiring a target image; then, extracting a region to be adjusted from the target image; and then, carrying out color development adjustment processing on the area to be adjusted, thereby obtaining a target image after color development adjustment.
In the above related art, the hair color in the target image is directly adjusted to the target hair color without considering the brightness of the hair itself in the target image, so the same target hair color produces an identical hair dyeing effect for users with different original hair colors. In reality, the same hair dye produces slightly different results on users with different hair colors.
In view of the foregoing problems, an embodiment of the present application provides an image processing method, including: acquiring a user image; acquiring brightness information of a hair region of the user image and color information of a target hair color; obtaining a first hair color according to the brightness information of the hair region of the user image and the color information of the target hair color; acquiring a hair mask of the user image according to the user image; and fusing the first hair color and the original hair color of the hair region according to the hair mask to obtain a target hair dyeing image. That is, in the embodiment of the present application, when performing virtual hair dyeing on a user image, the color of the target hair color is combined with the brightness of the hair region in the user image to form a first hair color, and the first hair color is fused with the original hair color of the hair region in the user image to obtain a target hair dyeing image. Therefore, in the application, the user image can be virtually dyed using the target hair color together with the brightness and the original hair color of the hair region, so that the hair in the output target hair dyeing image is closer to a real hair dyeing effect: the dyeing looks natural and fits the user better, and both the accuracy and the adaptability of virtual hair dyeing are improved. The details are described below.
The image processing method can be applied to electronic equipment. The electronic device related to the embodiment of the application may be a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a vehicle-mounted device, a smart car, a smart speaker, a robot, smart glasses, a smart television, and the like.
It should be noted that, in some possible implementations, the electronic device may also be referred to as a terminal device, a User Equipment (UE), and the like, which is not limited in this embodiment of the present application.
Referring to fig. 1, a schematic flow chart of an image processing method according to an embodiment of the present application is shown. As shown in fig. 1, the method includes:
and step S101, acquiring a user image.
In the embodiment of the present application, there are many ways for the image processing apparatus to acquire the user image, and the embodiment of the present application is not limited in this respect. The user image can be acquired from the local gallery of the image processing apparatus on which the application client runs, or an image can be shot in real time, by starting the shooting function of the image processing apparatus, and used as the user image acquired this time. For example, the avatar setting page of an application may offer the two options "shoot" and "select from album". If the user taps "shoot", the image processing apparatus starts the shooting function and takes the captured image as the user image acquired in this step. If the user taps "select from album", the image processing apparatus opens the local gallery, displays the images in the gallery, and takes the image selected by the user as the user image acquired in this step.
And step S102, acquiring brightness information of a hair area of the user image and color information of target hair color.
In the embodiment of the application, in order to make the effect of the target hair dyeing image obtained by the image processing apparatus more real and natural, the first hair color can be determined using the brightness information of the hair region in the user image and the color information of the target hair color, and the first hair color can then be fused with the original hair color in the user image to obtain the target hair dyeing image. To obtain the first hair color, it is therefore necessary to acquire the brightness information of the hair region in the user image and the color information of the target hair color. The image processing apparatus can take the brightness of the original hair color of the hair region in the user image as the brightness information of the hair region, and acquire the color information of the target hair color directly from the target hair color.
As one possible implementation, virtual dyeing must consider both the brightness of the hair region itself in the user image and the brightness of the target hair color, because the brightness actually presented results from combining the two. Therefore, in order to obtain a more real and natural virtual hair dyeing effect, after the image processing apparatus separates brightness from color for the original hair color of the hair region in the user image to obtain the original brightness, it can adjust the original brightness according to the preset brightness of the target hair color and use the adjusted brightness as the brightness information of the hair region of the user image. That is, acquiring the brightness information of the hair region of the user image includes:
acquiring the original brightness of a hair area of the user image; determining a brightness adjustment coefficient of a hair area according to the original brightness of the hair area of the user image and the preset brightness of the target hair color; and adjusting the original brightness of the hair area of the user image according to the brightness adjustment coefficient of the hair area, and taking the adjusted brightness as the brightness information of the hair area of the user image.
That is, the original hair color of the hair region in the user image is separated into color and brightness to obtain the original brightness, and the target hair color is likewise separated into color and brightness to obtain the preset brightness of the target hair color. The brightness adjustment coefficient of the hair region is calculated from the original brightness of the hair region in the user image and the preset brightness of the target hair color; the original brightness of the hair region is then adjusted by this coefficient, and the adjusted brightness is used as the brightness information of the hair region of the user image.
As a possible implementation manner, determining the brightness adjustment coefficient of the hair region according to the original brightness of the hair region of the user image and the preset brightness of the target hair color includes:
calculating the original average brightness of the hair region of the user image according to the original brightness of the hair region of the user image; and determining the brightness adjustment coefficient of the hair area according to the original average brightness of the hair area of the user image and the preset brightness of the target hair color.
In the embodiment of the present application, the brightness of the individual pixels in the hair region of a user image does not differ much, and the preset brightness of the target hair color is the same for every pixel in the hair region, so when the original brightness of each pixel in the hair region is adjusted according to the brightness of the target hair color, the adjustment amounts are substantially the same. Therefore, to reduce the amount of computation, a single adjustment coefficient may be calculated as the brightness adjustment coefficient of the hair region, i.e., as the brightness adjustment coefficient of every pixel in the hair region. On this basis, the original average brightness of the hair region may be calculated from the original brightness of each pixel in the hair region, and the brightness adjustment coefficient of the hair region determined from this original average brightness and the preset brightness of the target hair color.
As a possible implementation manner, determining the brightness adjustment coefficient of the hair region according to the original average brightness of the hair region of the user image and the preset brightness of the target hair color includes:
according to the original average brightness of the hair region of the user image and the preset brightness of the target hair color, the brightness adjustment coefficient of the hair region is determined using the formula k = MAX(1.0 × Y1/(srcYmean + 1), 1), k ≥ 1; where k denotes the brightness adjustment coefficient, MAX() denotes taking the maximum value, Y1 denotes the preset brightness of the target hair color, and srcYmean denotes the original average brightness of the hair region.
In other words, the larger of 1.0 × Y1/(srcYmean + 1) and 1 is selected as the brightness adjustment coefficient of the hair region.
As a possible implementation manner, adjusting the original brightness of the hair region according to the brightness adjustment coefficient of the hair region, and using the adjusted brightness as the brightness information of the hair region of the user image includes:
and taking the product of the original brightness in the hair area and the brightness adjusting coefficient of the hair area as the adjusted brightness of the hair area, wherein the adjusted brightness is the brightness information of the hair area of the user image.
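A minimal sketch of these two steps in Python with NumPy (the variable names and value ranges are assumptions for illustration, not taken from the patent):

    import numpy as np

    def hair_brightness_info(src_y, hair_mask, y1):
        # k = MAX(1.0 * Y1 / (srcYmean + 1), 1), so k >= 1 by construction
        src_y_mean = src_y[hair_mask > 0].mean()  # original average brightness of the hair region
        k = max(1.0 * y1 / (src_y_mean + 1.0), 1.0)
        return k * src_y                          # adjusted brightness = coefficient * original brightness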
As a possible implementation manner, when acquiring the user image, the image processing apparatus may perform hair recognition or hair key point detection on the user image to determine the hair region in the user image, and thereby determine the original hair color of the hair region. Determining the original hair color of the hair region means determining the pixel value of each pixel in the hair region.
In a specific implementation, corresponding color attribute information may be configured for each hair color; the color attribute information represents the color and brightness of that hair color. How the color attribute information is expressed depends on the chosen color space. For example, in the RGB color space it may be expressed as [R, G, B, Y], where R represents red, G represents green, B represents blue, and Y represents the preset brightness of the hair color. With color attribute information configured this way, a virtual dyeing effect for any hair color can be achieved simply by configuring and acquiring that hair color's [R, G, B, Y], which makes it easy to enrich the range of available hair colors. The preset brightness Y can be obtained by converting the R, G and B information of the hair color; specifically, converting the color from the RGB color space to the YUV color space yields the corresponding preset brightness Y. It is understood that the color attribute information can also be represented in other color spaces, such as the YUV color space or the HSV color space.
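As a small illustration, one such configuration entry might look like this; the BT.601 luma weights used to derive Y from RGB are an assumption, since the patent only states that Y comes from an RGB-to-YUV conversion:

    def make_color_attribute(r, g, b):
        # [R, G, B, Y]: Y derived via RGB -> YUV; BT.601 luma assumed here
        y = 0.299 * r + 0.587 * g + 0.114 * b
        return [r, g, b, y]

    # e.g. a hypothetical deep-red target hair color
    target_hair_color = make_color_attribute(150, 40, 40)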
As a possible implementation manner, the acquiring brightness information of the hair region of the user image and the color information of the target hair color includes:
and acquiring brightness information of a hair area of the user image in the HSV color space and color information of the target hair color.
In the embodiment of the application, in order to optimize the virtual dyeing effect, the brightness information of the hair region of the user image and the color information of the target hair color can be acquired in the HSV color space.
As one possible implementation, when the brightness information of the hair region of the user image and the color information of the target hair color are not HSV color space types, the color space types may be converted into HSV color spaces.
For example, when the original color of the hair region in the user image is a YUV color space and the target color is an RGB color space, the color spaces of the original color and the target color of the hair region in the user image may be converted into an HSV color space, so that the brightness information of the hair region and the color information of the target color of the user image in the HSV color space may be obtained.
Specifically, the representation of the target hair color in the HSV color space can be calculated from the [R, G, B] of the target hair color using the following formula (1).
[h,s,v]=RGB2HSV(R,G,B); (1)
Wherein, RGB2HSV () represents the conversion of RGB color space to HSV color space, h is the hue of the target color development, s is the saturation of the target color development, and v is the lightness of the target color development.
The color space of the original hair color of the hair region in the user image is the YUV color space, and the original hair color is converted from the YUV color space to the RGB color space. Such as by using the following equation (2).
[srcR,srcG,srcB]=YUV2RGB(srcY,srcU,srcV); (2)
Wherein YUV2RGB () represents the conversion of the YUV color space to the RGB color space, srcY is the luminance of the original color development, srcU and srcV are the chrominance of the original color development, srcR is the red channel information of the original color development, srcG is the green channel information of the original color development, and srcB is the blue channel information of the original color development.
After the original color is converted from YUV color space to RGB color space, the original color is converted from RGB color space to HSV color space. Such as using the following equation (3).
[h1Tmp,s1Tmp,v1Tmp]=RGB2HSV(srcR,srcG,srcB); (3)
Wherein RGB2HSV () represents the conversion of the RGB color space into the HSV color space, h1Tmp is the hue of the original color, s1Tmp is the saturation of the original color development, v1Tmp is the lightness of the original color, srcR is the red channel information of the original color, srcG is the green channel information of the original color, and srcB is the blue channel information of the original color.
In this case, the image processing apparatus may adjust the lightness of the original hair color using the brightness adjustment coefficient by the following formula (4), and use the adjusted lightness of the original hair color as the brightness information of the hair region of the user image.
v2Tmp=k×v1Tmp; (4)
where v2Tmp represents the adjusted lightness of the original hair color in the hair region.
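Putting formulas (1), (3) and (4) together, a per-pixel sketch using Python's standard colorsys module; the [0, 1] normalization and the clipping of the adjusted lightness are assumptions of this sketch, since colorsys works on normalized values:

    import colorsys

    def first_hair_color_hsv(target_rgb, src_rgb, k):
        # formula (1): hue and saturation of the target hair color from its [R, G, B]
        h, s, _ = colorsys.rgb_to_hsv(*(c / 255.0 for c in target_rgb))
        # formula (3): original hair color (already YUV -> RGB via formula (2)) into HSV
        _, _, v1_tmp = colorsys.rgb_to_hsv(*(c / 255.0 for c in src_rgb))
        # formula (4): adjust the original lightness with the brightness coefficient k
        v2_tmp = min(k * v1_tmp, 1.0)
        return h, s, v2_tmp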
And step S103, obtaining a first hair color according to the brightness information of the hair region of the user image and the color information of the target hair color.
In the present embodiment, after obtaining the brightness information of the hair region of the user image and the color information of the target hair color, the image processing apparatus can combine the two to form the first hair color. In this way, the brightness of the first hair color is the brightness of the original hair color of the hair region after adjustment according to the preset brightness of the target hair color, which makes the brightness of the first hair color more real and natural.
As a possible implementation manner, in order to ensure the effect of the first hair color obtained in the subsequent step, the brightness information of the hair region of the user image and the color information of the target hair color can be obtained in the HSV color space. For example, in the HSV color space, the brightness information of the hair region of the user image is obtained from the lightness of the hair region after brightness and color separation, the color information of the target hair color is obtained from the color (hue H and saturation S) of the target hair color after brightness and color separation, and the first hair color is obtained from the brightness information of the hair region of the user image and the color information of the target hair color.
It should be noted that, if the original hair color and the target hair color of the hair region of the obtained user image are not in the HSV color space, they may first be converted into the HSV color space; the brightness information of the hair region of the user image and the color information of the target hair color are then obtained, and the first hair color is obtained from them.
It is understood that other situations may be included, and are not illustrated here.
As one possible implementation manner, obtaining the first hair color according to the brightness information of the hair region of the user image and the color information of the target hair color includes:
obtaining a first hair color in the HSV color space according to the brightness information of the hair region of the user image in the HSV color space and the color information of the target hair color in the HSV color space; and converting the first hair color in the HSV color space into a first hair color in the red, green and blue (RGB) color space.
In this embodiment of the application, after the brightness information of the hair region of the user image in the HSV color space and the color information of the target hair color in the HSV color space are obtained in S102, a first hair color can be obtained in the HSV color space from them; that is, the first hair color is obtained from the adjusted lightness of the original hair color of the hair region in the HSV color space and the hue and saturation of the target hair color in the HSV color space. For better display to the user in the image processing apparatus, the first hair color in the HSV color space is then converted into the RGB color space.
For example, the first hair color is obtained by the following formula (5).
[dstR,dstG,dstB]=HSV2RGB(h,s,v2Tmp); (5)
where HSV2RGB() denotes converting the HSV color space into the RGB color space, dstR is the red channel information of the first hair color, dstG is the green channel information of the first hair color, dstB is the blue channel information of the first hair color, h is the hue of the target hair color, s is the saturation of the target hair color, and v2Tmp is the brightness information of the hair region in the user image, i.e., the lightness of the original hair color after adjustment with the brightness adjustment coefficient.
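Continuing the colorsys-based sketch above, formula (5) then combines the target hair color's hue and saturation with the adjusted lightness (the rescaling back to 0-255 is an assumption):

    import colorsys

    def first_hair_color_rgb(h, s, v2_tmp):
        # formula (5): [dstR, dstG, dstB] = HSV2RGB(h, s, v2Tmp)
        r, g, b = colorsys.hsv_to_rgb(h, s, v2_tmp)
        return r * 255.0, g * 255.0, b * 255.0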
As a possible implementation manner, when both the original hair color of the hair region of the user image and the target hair color are in the YUV color space, the first hair color can be obtained from the brightness information of the hair region in the YUV color space, i.e., the adjusted luminance Y of the original hair color, and the color information (UV) of the target hair color in the YUV color space.
In a possible implementation manner, when the original hair color of the hair region of the user image adopts the YUV color space and the target hair color adopts another type of color space such as the RGB or HSV color space, the color space adopted by the target hair color is first converted into the YUV color space; the first hair color is then obtained from the adjusted luminance Y of the original hair color and the color information (UV) of the target hair color in the YUV color space.
In some possible implementation manners, when both the original hair color of the hair region of the user image and the target hair color adopt the HSV color space, the first hair color is obtained from the brightness information of the hair region in the HSV color space, i.e., the adjusted lightness V of the original hair color, and the color information of the target hair color in the HSV color space, i.e., the hue H and the saturation S.
It is understood that other color spaces are possible for the original hair color as well as the target hair color of the hair region of the user image, which are not illustrated here one by one. Processing follows these rules: if the color space of the original hair color of the hair region of the user image is the same as that of the target hair color, and this initial color space separates brightness from color, the first hair color can be obtained in the initial color space from the brightness information of the hair region (i.e., the adjusted brightness of the original hair color) and the color information of the target hair color; if the two color spaces differ, one or both of them are converted into a color space that separates brightness from color, and the first hair color is obtained in the converted color space from the brightness information of the hair region (i.e., the adjusted brightness of the original hair color) and the color information of the target hair color.
And S104, acquiring a hair mask of the user image according to the user image, and fusing the first hair color and the original hair color of the hair region according to the hair mask to obtain a target hair dyeing image.
Wherein the hair mask is a mask for a hair region.
In the embodiment of the application, after the first hair color is obtained, it needs to be fused with the original hair color in the user image; that is, the most realistic dyeing effect is obtained by applying the target hair color on top of the original hair color of the user image. Therefore, the first hair color and the original hair color of the hair region must be fused. In order to fuse the first hair color with the original hair color of the hair region in the user image more accurately, the hair mask needs to be calculated first.
As a possible implementation, acquiring a hair mask of a user image from the user image includes: and carrying out hair segmentation processing on the user image by adopting a pre-trained hair segmentation model to obtain a hair mask image of the user image.
In the embodiment of the present application, the hair segmentation model is a pre-trained neural network model for implementing hair segmentation on an image. For example, a training sample set may be obtained, where the training sample set includes a plurality of sample images. And for each sample image, carrying out hair marking on the sample image, and further carrying out training of a neural network model according to the marked sample image to obtain a hair segmentation model.
As a possible implementation manner, the hair segmentation model may be a lightweight neural network model, for example BiSeNet (Bilateral Segmentation Network), ShuffleNet V2, BiSeNet V2, or the like, so as to improve the real-time performance of the image processing apparatus and output the hair mask from the user image quickly.
After acquiring the pre-trained hair segmentation model, the image processing apparatus may feed the user image into the hair segmentation model as its input and let the model process it, as shown in fig. 2.
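Purely as an illustration, inference with such a model might look as follows; the ONNX export, file name, input size, and pre/post-processing are all assumptions, as the patent does not specify them:

    import cv2
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("hair_seg_bisenet.onnx")  # hypothetical exported model

    def predict_hair_mask(user_image_bgr):
        x = cv2.resize(user_image_bgr, (256, 256)).astype(np.float32) / 255.0
        x = x.transpose(2, 0, 1)[None]                       # HWC -> NCHW
        prob = session.run(None, {session.get_inputs()[0].name: x})[0]
        mask = (prob[0, 0] * 255.0).astype(np.uint8)         # hair probability -> 8-bit mask
        return cv2.resize(mask, (user_image_bgr.shape[1], user_image_bgr.shape[0]))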
As a possible implementation manner, in order to make the edges of the hair region smoother and more natural and to avoid an overly abrupt dyeing effect when the virtual dyeing image is displayed, edge feathering may be performed on the hair mask image; that is, the edges of the hair region in the user image are blurred by feathering the hair mask image.
At this time, the hair segmentation processing is performed on the user image by adopting a pre-trained hair segmentation model to obtain a hair mask image of the user image, and the hair mask image comprises:
carrying out hair segmentation processing on the user image by adopting a pre-trained hair segmentation model to obtain an initial hair mask image of the user image;
and performing edge feathering treatment on the initial hair mask image of the user image to obtain the hair mask image of the user image.
In the embodiment of the present application, the image processing apparatus may implement the edge feathering and edge smoothing with fast guided filtering. That is, after acquiring the initial hair mask from the hair segmentation model, the image processing apparatus may apply fast guided filtering to it to obtain the hair mask of the user image. Fast guided filtering prevents edge jumps in the hair region, so the hair edges in the resulting mask fit the user image more closely, which in turn improves the hair color fusion effect in the resulting target hair dyeing image.
When applying fast guided filtering to the initial hair mask to obtain the hair mask of the user image, the luminance channel of the user image can be used as the guide image.
As a possible implementation manner, the window of the fast guided filter may be set to 1; of course, other values may also be used according to actual requirements, which is not limited in this application.
It should be noted that, in the embodiment of the present application, the initial hair mask may also be subjected to an edge feathering process, such as bilateral filtering, and the like, which is not limited in the application.
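For example, with the guided filter from the opencv-contrib package (cv2.ximgproc), using the luminance channel as the guide image as described above; the eps value here is an assumption:

    import cv2

    def feather_hair_mask(initial_mask, user_image_bgr, radius=1, eps=50.0):
        # the luminance (Y) channel of the user image serves as the guide image;
        # radius=1 matches the window size suggested above
        guide = cv2.cvtColor(user_image_bgr, cv2.COLOR_BGR2YUV)[:, :, 0]
        return cv2.ximgproc.guidedFilter(guide, initial_mask, radius, eps)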
After the image processing device acquires the hair mask, the first hair color and the original hair color of the hair area can be fused according to the hair mask to obtain a target hair-dyeing image.
As described above, the first hair color is obtained from the brightness information of the hair region in the user image and the color information of the target hair color, and the first hair color is fused with the original hair color of the hair region in the user image according to the hair mask to obtain the target hair dyeing image. Because the first hair color is based on the brightness information of the hair region in the user image, it shows the color of the target hair color while retaining the texture information of the hair region. The target hair dyeing image obtained by fusing the first hair color and the original hair color of the hair region therefore preserves the texture of the hair, takes into account the real hair colors of different people, and simulates the target hair color being applied to the hair more truly and naturally. This reduces the gap between the dyeing effect in the target hair dyeing image and the real effect of dyeing the hair with the target color, improving both the realism and the naturalness of the result.
As a possible implementation manner, the fusing the first hair color and the original hair color of the hair region according to the hair mask to obtain the target hair-dyeing image includes:
and acquiring a first fusion weight of the first color and a second fusion weight of the original color. And fusing the first hair color and the original hair color of the hair area according to the first fusion weight of the first hair color and the second fusion weight of the original hair color to obtain a target hair-dyeing image.
Wherein the first blending weight is associated with the hair mask, i.e. is obtained based on the hair mask.
In the implementation of the present application, virtual dyeing mixes the first hair color into the original hair color, so once the first hair color has been calculated, the respective weights of the original hair color and the first hair color must be determined: the stronger the desired hair dyeing effect, the larger the weight of the first hair color and the smaller the weight of the original hair color. On this basis, the first fusion weight of the first hair color and the second fusion weight of the original hair color can be obtained from the hair mask and the hair dyeing effect intensity selected by the user; that is, the two weights are determined by the intensity of the hair dyeing effect selected by the user. The sum of the first fusion weight and the second fusion weight is a preset maximum weight value, for example 255. After the two weights are obtained, the first hair color is weighted by the first fusion weight to obtain a first weighting result, the original hair color is weighted by the second fusion weight to obtain a second weighting result, and the hair color of the hair region in the target hair dyeing image is obtained from the two weighting results, yielding the target hair dyeing image.
As a possible implementation manner, the first hair color and the original hair color may be subjected to image fusion in a designated color space according to a hair mask to obtain a target hair color image. The designated color space may be an RGB color space, a YUV color space, or another suitable color space.
In a specific implementation, when the color space type of the first color is different from the color space type of the original color, one or both of the color space type of the first color and the color space type of the original color is color space converted so that the color space type of the converted first color and the color space type of the original color are the same.
In one possible implementation, taking fusion of the first hair color and the original hair color of the hair region in the RGB color space according to the hair mask as an example, the target hair-dyeing image can be obtained using the following formulas (6), (7) and (8):
dstR1=(srcR·beta+dstR·alpha)/A; (6)
dstG1=(srcG·beta+dstG·alpha)/A; (7)
dstB1=(srcB·beta+dstB·alpha)/A; (8)
wherein beta represents the second fusion weight and alpha represents the first fusion weight; srcR, srcG and srcB represent the R, G and B channels of the original hair color in the user image, and dstR, dstG and dstB represent the R, G and B channels of the first hair color; dstR1, dstG1 and dstB1 represent the R-, G- and B-channel information of the target hair-dyeing image; and A represents the preset maximum weight value.
When the hair mask expresses the fusion weight of each pixel as a gray-scale value (color depth), an 8-bit gray scale gives each pixel a fusion weight in the range [0, 255], and in this case A = 255. When the hair mask expresses the fusion weight as a coefficient, each pixel's fusion weight lies in [0, 1], and in this case A = 1.
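For illustration, the following is a minimal sketch of formulas (6)–(8) in Python with NumPy (an implementation language assumed here; the patent does not prescribe one). The function and parameter names are illustrative, and the mask weights are assumed to be 8-bit, so A = 255.

```python
import numpy as np

def fuse_rgb(src: np.ndarray, dst: np.ndarray, alpha: np.ndarray,
             A: float = 255.0) -> np.ndarray:
    """Formulas (6)-(8): out = (src*beta + dst*alpha) / A, per RGB channel.

    src   -- original hair color (the user image), H x W x 3, uint8
    dst   -- first hair color image, H x W x 3, uint8
    alpha -- first fusion weight per pixel (e.g. the hair mask scaled by
             the effect strength), H x W, values in [0, A]
    """
    alpha = alpha.astype(np.float32)[..., None]   # broadcast over channels
    beta = A - alpha                              # second fusion weight
    out = (src.astype(np.float32) * beta + dst.astype(np.float32) * alpha) / A
    return np.clip(out, 0, 255).astype(np.uint8)
```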
In one possible implementation, obtaining the first fusion weight of the first hair color and the second fusion weight of the original hair color includes:

acquiring a preset hair dyeing effect intensity coefficient;

determining the first fusion weight of the first hair color according to the preset hair dyeing effect intensity coefficient and the hair mask; and

calculating the second fusion weight of the original hair color according to the first fusion weight of the first hair color and a preset maximum weight value.
The preset maximum weight value is the maximum value of the preset fusion weight.
In the embodiment of the application, the hair dyeing effect intensity coefficient adjusts the effect intensity of the target hair color and can be configured by users as needed, so as to meet the individual requirements of different users. The user can set the coefficient when selecting the target hair color, and the image processing apparatus then acquires the user-set coefficient directly. For example, the user interface may provide an intensity slider or keys for the target hair color, and dragging the slider or operating the keys adjusts the hair dyeing effect intensity coefficient. The larger the coefficient, the more pronounced the dyeing effect of the target hair color; the smaller the coefficient, the less pronounced the effect. When the coefficient is 0, the target hair color produces no dyeing effect.

If the user does not set the hair dyeing effect intensity coefficient, a default coefficient preset in the image processing apparatus can be obtained directly. After the preset hair dyeing effect intensity coefficient is obtained, the first fusion weight of the first hair color can be determined from the coefficient and the hair mask.
After the first fusion weight is determined, a second fusion weight of the original color can be calculated according to the first fusion weight of the first color and a preset maximum weight value.
As a possible implementation manner, determining the first fusion weight of the first hair color according to the preset hair dyeing effect intensity coefficient and the hair mask includes:
According to the preset hair dyeing effect intensity coefficient and the hair mask, the first fusion weight of the first hair color is determined using the formula alpha = hairModel_dst · σ, σ ∈ [0, 1].

Here alpha represents the first fusion weight, hairModel_dst represents the hair mask, and σ represents the preset hair dyeing effect intensity coefficient.
Calculating the second fusion weight of the original hair color according to the first fusion weight of the first hair color and the preset maximum weight value includes:

calculating the second fusion weight of the original hair color from the first fusion weight of the first hair color and the preset maximum weight value using the formula beta = A − alpha.

Here A represents the preset maximum weight value and beta represents the second fusion weight.
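A minimal sketch of these two weight formulas, under the same assumptions as the fusion example above (Python/NumPy, an 8-bit mask so A = 255; the names are illustrative):

```python
import numpy as np

def fusion_weights(hair_mask: np.ndarray, sigma: float, A: float = 255.0):
    """alpha = hairModel_dst * sigma (sigma in [0, 1]); beta = A - alpha."""
    assert 0.0 <= sigma <= 1.0, "effect intensity coefficient lies in [0, 1]"
    alpha = hair_mask.astype(np.float32) * sigma   # first fusion weight
    beta = A - alpha                               # second fusion weight
    return alpha, beta
```

The alpha returned here can feed directly into the per-channel blend sketched above.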
In a specific implementation, the hair mask supplies the fusion weight of each pixel for the first hair color. The fusion weight of pixels outside the hair region may be 0, so that when the first hair color is fused with the original hair color, the non-hair region of the first hair color does not participate in the fusion; that is, the non-hair region of the resulting target hair-dyeing image comes directly from the user image. The fusion weight of pixels inside the hair region is non-zero, so the hair region of the target hair-dyeing image is blended from the first hair color and the original hair color.
In one possible implementation, fusing the first hair color and the user image according to the hair mask to obtain the target hair-dyeing image includes:

acquiring a virtual hair dyeing type; when the virtual hair dyeing type is a first type, fusing the first hair color and the user image according to the hair mask to obtain the target hair-dyeing image; when the virtual hair dyeing type is a second type, acquiring the hair color brightening effect coefficient corresponding to the target hair color, fusing the first hair color and the original hair color of the hair region according to the hair mask to obtain a second hair color, calculating the brightness adjustment amount of the hair region according to the hair color brightening effect coefficient corresponding to the target hair color, the hair mask, the original brightness of the hair region and the average brightness of the hair region, adjusting the brightness of the second hair color according to the brightness adjustment amount, and taking the image with the adjusted second hair color as the target hair-dyeing image.

The hair color brightening effect coefficient adjusts the post-dyeing brightness of the target hair color: the larger the coefficient, the brighter the dyed hair.

In this embodiment, different virtual hair dyeing types may be preset, including a first type and a second type. For the first type, the hair texture is matte: the first hair color and the user image are fused directly according to the hair mask to obtain the target hair-dyeing image, and no post-dyeing brightness adjustment is needed. For the second type, the hair texture is glossy: the post-dyeing brightness is adjusted according to a hair color brightening effect coefficient selected by the user, further increasing the brightness of the displayed hair color. Based on this, the image processing apparatus acquires the virtual hair dyeing type selected by the user. For the first type, it fuses the first hair color and the user image according to the hair mask without any brightness adjustment. For the second type, the user may set the hair color brightening effect coefficient when selecting this type; the image processing apparatus first obtains the coefficient corresponding to the target hair color (set by the user, or a preconfigured default), then fuses the first hair color with the original hair color of the hair region according to the hair mask, i.e. dyes the hair region of the user image with the first hair color, to obtain the second hair color. It then calculates the brightness adjustment amount of the hair region from the hair color brightening effect coefficient corresponding to the target hair color, the hair mask, the original brightness of the hair region and the average brightness of the hair region, adjusts the brightness of the second hair color accordingly, and takes the image with the adjusted second hair color as the target hair-dyeing image.
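As an illustration, the following self-contained sketch (same Python/NumPy assumptions as above; the type identifiers and helper structure are assumptions, not from the patent) dispatches on the two dye types, with the gloss step spelled out by the formula that follows this passage:

```python
import numpy as np

MATTE, GLOSSY = 1, 2   # assumed identifiers for the two virtual dye types

def virtual_dye(user_rgb, first_rgb, hair_mask, dye_type, delta=1.0, A=255.0):
    """Matte: blend only. Glossy: blend, then lift the brightness of hair
    pixels that sit above the hair region's mean luminance."""
    w = (hair_mask.astype(np.float32) / A)[..., None]
    second = first_rgb.astype(np.float32) * w + user_rgb.astype(np.float32) * (1 - w)
    if dye_type == MATTE:
        return np.clip(second, 0, 255).astype(np.uint8)
    y = second.mean(axis=2)                       # crude luminance proxy
    hair = hair_mask > 0
    y_mean = y[hair].mean() if hair.any() else 0.0
    b = delta * np.maximum(y - y_mean, 0.0) * hair_mask.astype(np.float32) / A
    return np.clip(second + b[..., None], 0, 255).astype(np.uint8)
```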
In one possible implementation, calculating the brightness adjustment amount of the hair region according to the hair color brightening effect coefficient corresponding to the target hair color, the hair mask, the original brightness of the hair region, and the original average brightness of the hair region includes:

calculating the brightness adjustment amount of the hair region from the hair color brightening effect coefficient corresponding to the target hair color, the hair mask, the original brightness of the hair region, and the average brightness of the hair region using the formula B = δ · MAX(srcY − srcY_mean, 0) · hairModel_dst / A, δ ∈ (0, m], m > 0.

Here B represents the brightness adjustment amount of the hair region; δ represents the hair color brightening effect coefficient corresponding to the target hair color; srcY represents the original brightness; srcY_mean represents the original average brightness of the hair region; hairModel_dst represents the hair mask; A represents the preset maximum weight value; and m represents the maximum value of the hair color brightening effect coefficient corresponding to the target hair color.
Adjusting the brightness of the second hair color based on the brightness adjustment amount of the hair region includes:

adjusting the brightness of the second hair color according to the brightness adjustment amount of the hair region using the formula Y3 = Y2 + B.

Here Y3 represents the brightness of the second hair color after adjustment, and Y2 represents the brightness of the second hair color before adjustment.
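On the luminance channel these two formulas reduce to a few lines; the sketch below keeps the patent's symbol names (srcY, hairModel_dst, etc.) and again assumes Python/NumPy and an 8-bit mask:

```python
import numpy as np

def lift_gloss(Y2, srcY, srcY_mean, hair_mask, delta, A=255.0):
    """B = delta * MAX(srcY - srcY_mean, 0) * hairModel_dst / A; Y3 = Y2 + B.
    Only pixels brighter than the hair's average brightness are lifted, so
    highlights are strengthened while darker strands keep their shading."""
    B = delta * np.maximum(srcY.astype(np.float32) - srcY_mean, 0.0) \
        * hair_mask.astype(np.float32) / A
    return np.clip(Y2.astype(np.float32) + B, 0, 255)
```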
In the embodiments of the present application, the brightening effect coefficient for each hair color type can be preconfigured. Different hair color types can be assigned corresponding identifiers, with each identifier mapped one-to-one to its brightening effect coefficient. For example, lipType = 1 denotes the first type, which may be the matte type; lipType = 2 denotes the second type, which may be the glossy type. For the first type, where no brightening adjustment is needed, δ may be set to 0; for the second type, where brightening adjustment is needed, the values of δ and m may be set empirically.

In some embodiments, m = 2; it is understood that different brightening requirements may call for other values of m, which is not limited here. In this way, the embodiments of the present application can apply the appropriate brightness adjustment to hair colors of different textures, so that the final dyeing effect of the target hair-dyeing image in the hair region is closer to a real hair dyeing effect, improving the adaptability of virtual hair dyeing.
Referring to fig. 3, a schematic flow chart of another image processing method provided in the embodiment of the present application is shown. As shown in fig. 3, the method includes:
step S301, acquiring a user image.
For details, reference may be made to step S101.
Step S302, detecting the user image and acquiring the face information in the user image.
In the embodiment of the application, the image processing apparatus may detect the user's face contour, key-point coordinates and the like in the user image, and acquire the position and size of the face in the user image, thereby obtaining the face information of the user image.
Step S303, determining whether the user image meets a preset hair dyeing condition according to the face information.
In the embodiment of the present application, if the user's face in a captured image is too small or too large, the hair region will be small, and virtual hair dyeing applied to such an image cannot present a visible effect; images with a face that is too small or too large therefore need to be screened out. To this end, the hair dyeing condition may be preset, for example, as requiring the proportion of the user's face in the user image to lie within a preset proportion range: the size proportion of the face in the user image is determined from the face size, user images whose face proportion falls within the preset range are determined to meet the hair dyeing condition, and those whose face proportion falls outside the range are determined not to meet it. Alternatively, the hair dyeing condition may be preset as requiring the distance between the user and the terminal device to lie within a preset distance range: the distance is estimated from the face size in the user image, and the image is determined to meet the hair dyeing condition if the distance falls within the preset range, and not to meet it otherwise.
It should be noted that the preset hair dyeing condition may also be other conditions, which is not limited in this application.
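A minimal illustration of the face-proportion variant of this check (the thresholds and names are assumptions for the sketch, not values from the patent):

```python
def meets_dyeing_condition(face_w: int, face_h: int,
                           img_w: int, img_h: int,
                           min_ratio: float = 0.02,
                           max_ratio: float = 0.5) -> bool:
    """Accept the image only when the face occupies a proportion of the
    frame inside a preset range, so the hair region is large enough to
    show a virtual dyeing effect but not cropped by an over-close face."""
    ratio = (face_w * face_h) / float(img_w * img_h)
    return min_ratio <= ratio <= max_ratio
```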
Step S304, when the user image meets the preset hair dyeing condition, acquiring the brightness information of the user image and the color information of the target hair color.
For details, refer to step S102; the description is not repeated here.
Step S305, obtaining a first color according to the brightness information of the user image and the color information of the target color.
For details, refer to step S103; the description is not repeated here.
Step S306, acquiring a hair mask of the user image according to the user image, and fusing the first hair color and the original hair color of the hair region according to the hair mask to obtain a target hair-dyeing image.
Wherein the hair mask is a mask for the hair region.
For details, refer to step S104; the description is not repeated here.
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 4, the image processing apparatus includes:
an acquiring unit 401 is configured to acquire a user image.
The obtaining unit 401 is further configured to obtain luminance information of the user image and color information of the target color.
As a possible implementation manner, the obtaining unit 401 is specifically configured to obtain an original brightness of a hair region of the user image; determining a brightness adjustment coefficient of a hair area according to the original brightness of the hair area of the user image and the preset brightness of the target hair color; and adjusting the original brightness of the hair area of the user image according to the brightness adjustment coefficient of the hair area, and taking the adjusted brightness as the brightness information of the hair area of the user image.
As a possible implementation manner, the obtaining unit 401 is specifically configured to calculate an original average brightness of a hair region of the user image according to an original brightness of the hair region of the user image; and determining the brightness adjustment coefficient of the hair area according to the original average brightness of the hair area of the user image and the preset brightness of the target hair color.
In one possible implementation, the obtaining unit 401 is specifically configured to determine the brightness adjustment coefficient of the hair region according to the original average brightness of the hair region of the user image and the preset brightness of the target hair color, using the formula k = MAX(1.0 · Y1 / (srcY_mean + 1), 1), k ≥ 1.

Here k denotes the brightness adjustment coefficient, MAX() denotes taking the maximum value, Y1 denotes the preset brightness of the target hair color, and srcY_mean denotes the original average brightness of the hair region.
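A one-line sketch of this coefficient (plain Python; the names mirror the symbols above):

```python
def brightness_adjust_coefficient(Y1: float, srcY_mean: float) -> float:
    """k = MAX(1.0 * Y1 / (srcY_mean + 1), 1); k >= 1, so the hair region's
    brightness is only ever scaled up toward the target hair color's preset
    brightness, never darkened."""
    return max(1.0 * Y1 / (srcY_mean + 1.0), 1.0)
```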
As a possible implementation manner, the obtaining unit 401 is specifically configured to obtain brightness information of a user image in the HSV color space and color information of a target color development.
A processing unit 402, configured to obtain a first color according to the brightness information of the user image and the color information of the target color.
In one possible implementation, the processing unit 402 is specifically configured to obtain the first hair color in the HSV color space according to the brightness information of the hair region of the user image in the HSV color space and the color information of the target hair color in the HSV color space, and to convert the first hair color from the HSV color space to the red-green-blue (RGB) color space.
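The HSV-to-RGB conversion at the end of this step can be done with a standard library call; for example, with OpenCV (an assumption here, since the patent does not name a library):

```python
import cv2
import numpy as np

# Illustrative stand-in for the first hair color in OpenCV's HSV layout
# (H in [0, 179], S and V in [0, 255]).
first_hsv = np.full((8, 8, 3), (10, 200, 160), dtype=np.uint8)
first_rgb = cv2.cvtColor(first_hsv, cv2.COLOR_HSV2RGB)   # HSV -> RGB
```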
The processing unit 402 is further configured to obtain a hair mask of the user image according to the user image, and fuse the first hair color and the original hair color of the hair area according to the hair mask to obtain a target hair color image.
Wherein the hair mask is a mask for the hair region.
In one possible implementation, the processing unit 402 is specifically configured to perform hair segmentation processing on the user image by using a pre-trained hair segmentation model to obtain the hair mask image of the user image.

In one possible implementation, the processing unit 402 is specifically configured to perform hair segmentation processing on the user image by using a pre-trained hair segmentation model to obtain an initial hair mask image of the user image, and to perform edge feathering on the initial hair mask image to obtain the hair mask image of the user image.
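The patent does not specify the feathering operator; a Gaussian blur on the mask is one common choice, sketched below (OpenCV assumed, as above; the pre-trained segmentation model itself is taken as given):

```python
import cv2
import numpy as np

def feather_mask(initial_mask: np.ndarray, radius: int = 7) -> np.ndarray:
    """Soften the hard boundary of a segmentation mask so the dyed hair
    blends smoothly into surrounding pixels during fusion."""
    k = 2 * radius + 1                    # Gaussian kernel size must be odd
    return cv2.GaussianBlur(initial_mask, (k, k), 0)
```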
In one possible implementation, the processing unit 402 is specifically configured to obtain a first fusion weight of the first hair color and a second fusion weight of the original hair color, the first fusion weight being associated with the hair mask;
and fusing the first hair and the original hair of the hair area according to the first fusion weight of the first hair and the second fusion weight of the original hair to obtain a target hair dyeing image.
In one possible implementation, the processing unit 402 is specifically configured to obtain a preset hair dyeing effect intensity coefficient;
determining a first fusion weight of the first hair color according to a preset hair dyeing effect intensity coefficient and the hair mask;
calculating a second fusion weight of the original color according to the first fusion weight of the first color and a preset maximum weight value; the preset maximum weight value is the maximum value of the preset fusion weight.
In one possible implementation, the processing unit 402 is specifically configured to determine the first fusion weight of the first hair color according to the preset hair dyeing effect intensity coefficient and the hair mask using the formula alpha = hairModel_dst · σ, σ ∈ [0, 1], and to calculate the second fusion weight of the original hair color from the first fusion weight and the preset maximum weight value using the formula beta = A − alpha.

Here alpha represents the first fusion weight, hairModel_dst represents the hair mask, σ represents the preset hair dyeing effect intensity coefficient, A represents the preset maximum weight value, and beta represents the second fusion weight.
In one possible implementation, the processing unit 402 is specifically configured to acquire a virtual hair dyeing type; when the virtual hair dyeing type is a first type, fuse the first hair color and the user image according to the hair mask to obtain the target hair-dyeing image; and when the virtual hair dyeing type is a second type, acquire the hair color brightening effect coefficient corresponding to the target hair color, fuse the first hair color and the original hair color of the hair region according to the hair mask to obtain a second hair color, calculate the brightness adjustment amount of the hair region according to the hair color brightening effect coefficient corresponding to the target hair color, the hair mask, the original brightness of the hair region and the average brightness of the hair region, adjust the brightness of the second hair color according to the brightness adjustment amount, and take the image with the adjusted second hair color as the target hair-dyeing image.
In one possible implementation, the processing unit 402 is specifically configured to calculate the brightness adjustment amount of the hair region according to the hair color brightening effect coefficient corresponding to the target hair color, the hair mask, the original brightness of the hair region, and the average brightness of the hair region, using the formula B = δ · MAX(srcY − srcY_mean, 0) · hairModel_dst / A, δ ∈ (0, m], m > 0.

Here B represents the brightness adjustment amount of the hair region; δ represents the hair color brightening effect coefficient corresponding to the target hair color; srcY represents the original brightness; srcY_mean represents the original average brightness of the hair region; hairModel_dst represents the hair mask; A represents the preset maximum weight value; and m represents the maximum value of the hair color brightening effect coefficient corresponding to the target hair color.

In one possible implementation, the processing unit 402 is specifically configured to adjust the brightness of the second hair color according to the brightness adjustment amount of the hair region using the formula Y3 = Y2 + B, where Y3 represents the brightness of the second hair color after adjustment and Y2 represents the brightness of the second hair color before adjustment.
Corresponding to the above embodiments, the application further provides an electronic device. Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device 500 may include a processor 501, a memory 502 and a communication unit 503, which communicate over one or more buses. Those skilled in the art will appreciate that the configuration shown in the figure does not limit the embodiments of the present invention: the device may use a bus or star topology, include more or fewer components than shown, combine some components, or arrange the components differently.

The communication unit 503 is configured to establish a communication channel so that the storage device can communicate with other devices, receiving user data sent by other devices or sending user data to them.

The processor 501, as the control center of the storage device, connects the various parts of the electronic device through interfaces and lines, and performs the functions of the electronic device and/or processes data by running or executing the software programs and/or modules stored in the memory 502 and calling the data stored in the memory. The processor may be composed of integrated circuits (ICs), for example a single packaged IC, or several packaged ICs with the same or different functions connected together. For example, the processor 501 may include only a central processing unit (CPU). In the embodiment of the present invention, the CPU may have a single operation core or include multiple operation cores.
The memory 502 is used for storing instructions executed by the processor 501, and the memory 502 may be implemented by any type of volatile or non-volatile storage device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
When the processor 501 executes the instructions stored in the memory 502, the electronic device 500 can perform some or all of the steps of the embodiment shown in fig. 3.
In a specific implementation, the present invention further provides a computer storage medium that can store a program; when executed, the program may perform some or all of the steps of the image processing method embodiments provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented using software plus any required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The same and similar parts in the various embodiments in this specification may be referred to each other. Especially, as for the device embodiment and the terminal embodiment, since they are basically similar to the method embodiment, the description is relatively simple, and the relevant points can be referred to the description in the method embodiment.

Claims (16)

1. An image processing method, comprising:
acquiring a user image;
acquiring brightness information of a hair area of the user image and color information of target hair color;
obtaining a first color according to the brightness information of the hair area of the user image and the color information of the target color;
acquiring a hair mask of the user image according to the user image, and fusing the first hair color and the original hair color of the hair region according to the hair mask to obtain a target hair-dyeing image; wherein the hair mask is a mask for the hair region.
2. The method of claim 1, wherein the obtaining brightness information for the hair region of the user image comprises:
acquiring the original brightness of a hair area of the user image;
determining a brightness adjustment coefficient of the hair area according to the original brightness of the hair area of the user image and the preset brightness of the target hair color;
and adjusting the original brightness of the hair area of the user image according to the brightness adjustment coefficient of the hair area, and taking the adjusted brightness as the brightness information of the hair area of the user image.
3. The method according to claim 2, wherein the determining the brightness adjustment coefficient of the hair region according to the original brightness of the hair region of the user image and the preset brightness of the target hair color comprises:
calculating the original average brightness of the hair area of the user image according to the original brightness of the hair area of the user image;
and determining a brightness adjustment coefficient of the hair area according to the original average brightness of the hair area of the user image and the preset brightness of the target hair color.
4. The method according to claim 3, wherein the determining the brightness adjustment coefficient of the hair region according to the original average brightness of the hair region of the user image and the preset brightness of the target hair color comprises:
according to the original average brightness of the hair region of the user image and the preset brightness of the target hair color, determining the brightness adjustment coefficient of the hair region using the formula k = MAX(1.0 · Y1 / (srcY_mean + 1), 1), k ≥ 1; where k denotes the brightness adjustment coefficient, MAX() denotes taking the maximum value, Y1 denotes the preset brightness of the target hair color, and srcY_mean denotes the original average brightness of the hair region.
5. The method of claim 1, wherein the obtaining a hair mask of the user image from the user image comprises:
and carrying out hair segmentation processing on the user image by adopting a pre-trained hair segmentation model to obtain a hair mask image of the user image.
6. The method according to claim 5, wherein the performing hair segmentation processing on the user image by using a pre-trained hair segmentation model to obtain a hair mask image of the user image comprises:
performing hair segmentation processing on the user image by adopting a pre-trained hair segmentation model to obtain an initial hair mask image of the user image;
and carrying out edge feathering treatment on the initial hair mask image of the user image to obtain the hair mask image of the user image.
7. The method according to claim 1, further comprising, before the obtaining the brightness information of the hair region of the user image and the color information of the target hair color:
detecting the user image, and acquiring face information in the user image;
determining whether the user image meets a preset hair dyeing condition or not according to the facial information;
the acquiring brightness information of the hair region of the user image and color information of the target hair color comprises:

when the user image meets the preset hair dyeing condition, acquiring the brightness information of the hair region of the user image and the color information of the target hair color.
8. The method of any one of claims 1-7, wherein fusing the first hair color and the original hair color of the hair region according to the hair mask to obtain a target hair color image comprises:
acquiring a first fusion weight of the first color and a second fusion weight of the original color; the first blend weight is associated with the hair mask;
and fusing the first hair color and the original hair color of the hair area according to the first fusion weight of the first hair color and the second fusion weight of the original hair color to obtain a target hair-dyeing image.
9. The method of claim 8, wherein obtaining the first fusion weight for the first hair color and the second fusion weight for the original hair color comprises:
acquiring a preset hair dyeing effect intensity coefficient;
determining a first fusion weight of the first hair color according to the preset hair dyeing effect intensity coefficient and the hair mask;
calculating a second fusion weight of the original color according to the first fusion weight of the first color and a preset maximum weight value; the preset maximum weight value is the maximum value of the preset fusion weight.
10. The method of claim 9, wherein determining the first blending weight for the first hair color according to the preset hair coloring effect intensity factor and the hair mask comprises:
according to the preset hair dyeing effect intensity coefficient and the hair mask, determining the first fusion weight of the first hair color using the formula alpha = hairModel_dst · σ, σ ∈ [0, 1]; where alpha represents the first fusion weight, hairModel_dst represents the hair mask, and σ represents the preset hair dyeing effect intensity coefficient;
the calculating a second fusion weight of the original color according to the first fusion weight of the first color and a preset maximum weight value includes:
calculating the second fusion weight of the original hair color according to the first fusion weight of the first hair color and the preset maximum weight value using the formula beta = A − alpha; wherein A represents the preset maximum weight value, and beta represents the second fusion weight.
acquiring brightness information of a hair area of a user image in an HSV color space and color information of target hair color;
the obtaining a first color according to the brightness information of the hair region of the user image and the color information of the target color comprises:
obtaining a first color of the HSV color space according to the brightness information of the hair region of the user image in the HSV color space and the color information of the target color of the HSV color space;
and converting the first color of the HSV color space into a first color of a red, green and blue (RGB) color space.
12. The method of any of claims 2-4, wherein fusing the first hair color and the user image according to the hair mask to obtain the target hair color image comprises:
acquiring a virtual hair dyeing type;
when the virtual hair dyeing type is a first type, fusing the first hair color and the user image according to the hair mask to obtain a target hair dyeing image;
when the virtual hair dyeing type is a second type, acquiring a hair color brightness effect coefficient corresponding to the target hair color;
according to the hair mask, fusing the first hair color and the original hair color of the hair area to obtain a second hair color;
calculating the brightness adjustment quantity of the hair area according to the hair color brightness effect coefficient corresponding to the target hair color, the hair mask, the original brightness of the hair area and the average brightness of the hair area;
and adjusting the brightness of the second hair color according to the brightness adjustment amount of the hair area, and taking the image of the second hair color with the adjusted brightness as the target hair-dyeing image.
13. The method of claim 12, wherein calculating the brightness adjustment for the hair region based on the hair color lightening effect coefficient corresponding to the target hair color, the hair mask, the original brightness of the hair region, and the original average brightness of the hair region comprises:
based on the hair color brightening effect coefficient corresponding to the target hair color, the hair mask, the original brightness of the hair region, and the average brightness of the hair region, calculating the brightness adjustment amount of the hair region using the formula B = δ · MAX(srcY − srcY_mean, 0) · hairModel_dst / A, δ ∈ (0, m], m > 0; wherein B represents the brightness adjustment amount of the hair region; δ represents the hair color brightening effect coefficient corresponding to the target hair color; srcY represents the original brightness; srcY_mean represents the original average brightness of the hair region; hairModel_dst represents the hair mask; A represents the preset maximum weight value; and m represents the maximum value of the hair color brightening effect coefficient corresponding to the target hair color;
the adjusting the brightness of the second hair color according to the brightness adjustment amount of the hair region includes:
according to the brightness adjustment amount of the hair region, adjusting the brightness of the second hair color using the formula Y3 = Y2 + B; wherein Y3 represents the brightness of the second hair color after adjustment, and Y2 represents the brightness of the second hair color before adjustment.
14. An image processing apparatus characterized by comprising:
an acquisition unit configured to acquire a user image;
the acquiring unit is further used for acquiring brightness information of a hair area of the user image and color information of target hair color;
the processing unit is used for obtaining a first color according to the brightness information of the hair area of the user image and the color information of the target color;
the processing unit is further used for acquiring a hair mask of the user image according to the user image, and fusing the first hair color and the original hair color of the hair area according to the hair mask to obtain a target hair-dyeing image; wherein the hair mask is a mask for a hair region.
15. An electronic device, characterized in that the electronic device comprises a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method steps of any of claims 1-13.
16. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to carry out the method according to any one of claims 1-13.
CN202210328130.2A 2022-03-30 2022-03-30 Image processing method, device, equipment and storage medium Pending CN114663549A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210328130.2A CN114663549A (en) 2022-03-30 2022-03-30 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210328130.2A CN114663549A (en) 2022-03-30 2022-03-30 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114663549A true CN114663549A (en) 2022-06-24

Family

ID=82032490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210328130.2A Pending CN114663549A (en) 2022-03-30 2022-03-30 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114663549A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024169838A1 (en) * 2023-02-15 2024-08-22 北京字跳网络技术有限公司 Image processing method and apparatus, and computer device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination