CN112102196A - Image hairdressing processing method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN112102196A
Authority
CN
China
Prior art keywords
image
hair
face
face image
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010975513.XA
Other languages
Chinese (zh)
Inventor
华路延
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd filed Critical Guangzhou Huya Technology Co Ltd
Priority to CN202010975513.XA priority Critical patent/CN112102196A/en
Publication of CN112102196A publication Critical patent/CN112102196A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • G06T 5/77 - Retouching; Inpainting; Scratch removal
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G06T 7/13 - Edge detection
    • G06T 7/90 - Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application provide an image hairdressing processing method, an image hairdressing processing device, an electronic device, and a readable storage medium. Because adjustments to the individual channels of the LAB color space do not affect one another, adjusting the information of each channel in the LAB color space makes it possible to accurately adjust the color and density of the hair in the hair region and to improve the accuracy of hair color and brightness adjustment.

Description

Image hairdressing processing method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image hairdressing processing method and apparatus, an electronic device, and a readable storage medium.
Background
With the development of internet technology and image processing technology, the demand for entertainment functions based on these technologies has become increasingly diverse. For example, in a webcast scene a host may want to adjust the color and density of his or her hair so that the live broadcast looks better, and for a single picture a user may likewise want to change the hair color and density in the picture. Some technical schemes for adjusting hair color and density therefore already exist. However, most of them operate directly in the RGB color space, where the correlation between the color channels is strong: adjusting a single color channel changes the other channels, which affects the final adjustment effect, so the result often fails to meet user requirements.
Disclosure of Invention
Objects of the present application include, for example, providing an image hairdressing processing method and device, an electronic device, and a readable storage medium that can improve the accuracy of hair color and brightness adjustment.
The embodiment of the application can be realized as follows:
in a first aspect, an embodiment of the present application provides an image hairdressing processing method, including:
processing the obtained face image to obtain a hair area image;
converting the color space of the face image into an LAB color space;
changing color channel information in the LAB color space to adjust the color of the face image and/or changing brightness channel information in the LAB color space to adjust the brightness of the face image;
and obtaining an effect picture containing the hair area image after the color and/or brightness adjustment according to the adjusted face image and the hair area image.
In an alternative embodiment, the step of processing the acquired face image to obtain a hair region image includes:
identifying a face region contained in the face image;
and expanding by taking the face area as a reference to obtain a hair area image at the periphery of the face area.
In an optional implementation manner, the step of expanding the face region as a reference to obtain a hair region image located at the periphery of the face region includes:
sequentially expanding the edges of the face area to the periphery by preset widths;
carrying out color recognition processing on an extended area between an extended edge formed after each extension and the edge of the face area;
and determining a hair region image from the expansion region based on the obtained color recognition result of each expansion region.
In an alternative embodiment, the step of determining an image of the hair region from the expansion region based on the obtained color recognition result of each expansion region includes:
dividing each obtained expansion area into a plurality of sub-areas;
judging whether the subarea belongs to a hair area or not according to the color identification result of each subarea;
and all the sub-areas judged to belong to the hair area constitute the hair area image.
In an alternative embodiment, the step of obtaining an effect map including a hair region image with adjusted color and/or brightness according to the adjusted face image and the hair region image includes:
intercepting a non-hair area image in the face image according to a hair area image in the face image before adjustment, and intercepting a hair area image in the face image after adjustment;
and combining the intercepted non-hair area image and the hair area image to obtain an effect image containing the hair area image after color and/or brightness adjustment.
In an optional embodiment, the step of capturing a non-hair region image in the face image according to a hair region image in the face image before adjustment, and capturing a hair region image in the face image after adjustment includes:
constructing a mask according to the information of the hair region image contained in the face image before adjustment;
masking the face image before adjustment by using the constructed mask, and intercepting a non-hair region image in the face image before adjustment after masking;
and performing mask processing on the adjusted face image by using the constructed mask, and intercepting a hair region image in the adjusted face image after the mask processing.
In an alternative embodiment, the method further comprises:
performing mean fuzzy processing on a plurality of pixel points contained in the effect graph aiming at a color channel and a brightness channel contained in the LAB color space;
and converting the processed effect graph from an LAB color space to an RGBA color space.
In an optional embodiment, when the obtained face images are a plurality of continuous face images, the step of processing the obtained face images to obtain a hair region image includes:
detecting whether a hair region image of a preset face image in front of a current face image to be processed is obtained or not;
if the hair region image of the previous preset face image is obtained, obtaining displacement information of the current face image relative to a face region in the previous preset face image;
and determining the hair region image of the current face image according to the hair region image of the previous preset face image and the displacement information.
In a second aspect, an embodiment of the present application provides an image hairdressing processing device, including:
the processing module is used for processing the acquired face image to obtain a hair area image;
the conversion module is used for converting the color space of the face image into an LAB color space;
an adjusting module, configured to change color channel information in the LAB color space to adjust a color of the face image, and/or change luminance channel information in the LAB color space to adjust a luminance of the face image;
and the obtaining module is used for obtaining an effect picture containing the hair area image after the color and/or brightness adjustment according to the adjusted face image and the hair area image.
In a third aspect, embodiments of the present application provide an electronic device, which includes one or more storage media and one or more processors in communication with the storage media, where the one or more storage media store machine-executable instructions executable by the processors, and when the electronic device is running, the processors execute the machine-executable instructions to perform the image hairdressing processing method described in any one of the foregoing embodiments.
In a fourth aspect, the present application provides a computer-readable storage medium storing machine-executable instructions, which when executed, implement the image hairdressing processing method according to any one of the foregoing embodiments.
The beneficial effects of the embodiment of the application include, for example:
according to the image hairdressing processing method, the image hairdressing processing device, the electronic equipment and the readable storage medium, firstly, a face image is processed to obtain a hair region image, then, a color space of the face image is converted into an LAB color space, the color and/or the brightness of the face image are respectively adjusted by changing color channel information and/or brightness channel information of the LAB color space, and finally, an effect image containing the hair region image after color and/or brightness adjustment is obtained according to the adjusted face image and the hair region image. In the scheme, because the adjustment among the channels in the LAB color space does not influence each other, the aims of accurately adjusting the hair color and the density of the hair in the hair area and improving the accuracy of adjusting the hair color and the brightness are achieved by adjusting the information of each channel in the LAB color space.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario of an image hairdressing processing method provided in an embodiment of the present application;
fig. 2 is a flowchart of an image hairdressing processing method provided by an embodiment of the application;
fig. 3 is a schematic diagram of a face image before brightness adjustment according to an embodiment of the present application;
fig. 4 is a schematic view of a face image after brightness adjustment according to an embodiment of the present application;
FIG. 5 is a flowchart of sub-steps included in step S210 of FIG. 2;
FIG. 6 is a flowchart of sub-steps included in step S212 of FIG. 5;
fig. 7 is a flowchart of sub-steps included in step S2123 in fig. 6;
FIG. 8 is a flowchart of sub-steps included in step S240 in FIG. 2;
FIG. 9 is a schematic view of a color adjustment operation interface provided in an embodiment of the present application;
fig. 10 is a schematic view of a brightness adjustment operation interface provided in the embodiment of the present application;
fig. 11 is a block diagram of an electronic device according to an embodiment of the present application;
fig. 12 is a functional block diagram of an image hairdressing processing device according to an embodiment of the present application.
Reference numerals: 100 - live broadcast providing terminal; 200 - live broadcast server; 300 - live broadcast receiving terminal; 110 - storage medium; 120 - processor; 130 - image hairdressing processing device; 131 - processing module; 132 - conversion module; 133 - adjustment module; 134 - obtaining module; 140 - communication interface.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it should be noted that the features in the embodiments of the present application may be combined with each other without conflict.
Referring to fig. 1, a schematic view of a possible application scenario of the image hairdressing processing method provided in the embodiment of the present application is shown, where the scenario includes a live broadcast providing terminal 100, a live broadcast server 200, and a live broadcast receiving terminal 300. The live broadcast server 200 is in communication connection with the live broadcast providing terminal 100 and the live broadcast receiving terminal 300, respectively, and is configured to provide live broadcast services for the live broadcast providing terminal 100 and the live broadcast receiving terminal 300. For example, the live broadcast providing terminal 100 may transmit a live video stream to the live broadcast server 200, and the viewer may access the live broadcast server 200 through the live broadcast receiving terminal 300 to view the live video.
The live video stream pushed by the live server 200 may be a video stream currently live in a live platform or a complete video stream formed after the live broadcast is completed.
It is understood that the scenario shown in fig. 1 is only one possible example, and in other possible embodiments, the scenario may include only a part of the components shown in fig. 1 or may also include other components.
In this embodiment, the live broadcast providing terminal 100 and the live broadcast receiving terminal 300 may be, but are not limited to, a smart phone, a personal digital assistant, a tablet computer, a personal computer, a notebook computer, a virtual reality terminal device, an augmented reality terminal device, and the like.
The live broadcast providing terminal 100 and the live broadcast receiving terminal 300 may have internet products installed for providing internet live broadcast services; for example, the internet products may be applications (APPs), web pages, applets, and the like related to internet live broadcast services and used on a computer or a smartphone.
In this embodiment, a video capture device for capturing the anchor video frame may be further included in the scene, and the video capture device may be, but is not limited to, a camera, a lens of a digital camera, a monitoring camera, a webcam, or the like.
The video capture device may be directly installed or integrated in the live broadcast providing terminal 100. For example, the video capture device may be a camera configured on the live broadcast providing terminal 100, and other modules or components in the live broadcast providing terminal 100 may receive videos and images transmitted from the video capture device via the internal bus. Alternatively, the video capture device may be independent of the live broadcast providing terminal 100, and the two may communicate with each other in a wired or wireless manner.
It should be noted that the foregoing is only one possible implementation scenario of the image hairdressing processing method provided in the present application, and furthermore, the image hairdressing processing method can also be used for processing a single captured picture.
Fig. 2 is a flowchart illustrating an image hairdressing processing method provided in an embodiment of the present application, which can be executed by the live broadcast providing terminal 100 or the live broadcast server 200 illustrated in fig. 1. It should be understood that in other embodiments, the order of some steps in the image hairdressing processing method of the present embodiment can be interchanged according to actual needs, or some steps can be omitted or deleted. The detailed steps of the image hairdressing processing method are described as follows.
And step S210, processing the acquired face image to obtain a hair region image.
Step S220, converting the color space of the face image into an LAB color space.
And step S230, changing color channel information in the LAB color space to adjust the color of the face image, and/or changing brightness channel information in the LAB color space to adjust the brightness of the face image.
And step S240, obtaining an effect image containing the hair area image after the color and/or brightness adjustment according to the adjusted face image and the hair area image.
The face image acquired in this embodiment may be the face image of a host who is streaming live video, a face image in a video frame of a live video stream acquired after the broadcast has finished, or a face image acquired in another scene. It may be any face image that needs hairdressing processing, obtained from any of a number of different application scenes; this embodiment does not specifically limit it.
In view of the fact that the hair styling effect of the hair region needs to be obtained finally, in the present embodiment, the obtained face image may be processed first to obtain the hair region image.
The color space of the acquired face image is generally the RGB (Red, Green, Blue) color space that the device can render and display. However, when the color, brightness, and the like of an image are adjusted in the RGB color space, color and brightness information are coupled across the channels, so adjusting the information of any one channel changes the other channels and thereby affects the final adjustment effect.
The LAB color space is a device-independent color model based on human visual physiology. Adjustments to its channels do not influence one another, which greatly reduces the problem of other channels' information changing while one channel is adjusted. Therefore, in this embodiment the color space of the acquired face image is converted into the LAB color space; in this step, the face image is generally converted from the RGB color space to the LAB color space.
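Purely as an illustrative sketch (the embodiment does not prescribe any particular library), this conversion step could be written with OpenCV in Python; the 8-bit BGR input ordering below is an assumption about how the face image was captured, and the function names are illustrative:

    import cv2
    import numpy as np

    def face_to_lab(face_bgr: np.ndarray) -> np.ndarray:
        # Convert an 8-bit BGR face image (OpenCV's default channel order)
        # to the LAB color space for channel-independent adjustment.
        return cv2.cvtColor(face_bgr, cv2.COLOR_BGR2Lab)

    def lab_to_face(face_lab: np.ndarray) -> np.ndarray:
        # Convert back to BGR once the adjustment is finished.
        return cv2.cvtColor(face_lab, cv2.COLOR_Lab2BGR)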
The LAB color space includes three channels: a luminance channel (the L channel) and two color channels (the a channel and the b channel). The a channel covers colors from dark green through gray to bright pink, and the b channel covers colors from bright blue through gray to yellow, so different mixtures of the channels produce different color effects. The brightness of the picture can be changed by adjusting the luminance channel information: when the brightness is low, the colors in the picture are darker and, for the hair region, the hair appears denser in visual effect; when the brightness is high, the colors are brighter and the hair appears sparser. For example, fig. 3 shows the display effect at high brightness and fig. 4 the display effect at low brightness; the hair region in fig. 4 appears denser than that in fig. 3.
Therefore, in this embodiment, the color of the face image can be adjusted by changing the color channel information in the LAB color space, and the brightness channel information in the LAB color space can be changed to adjust the brightness of the face image. In the step, the color and the brightness of the face image are adjusted, and meanwhile, the color and the brightness of the hair area image are adjusted.
In this embodiment, the brightness channel information may be changed in response to an adjustment operation triggered by the user. The adjustment interface may include an adjustment bar, and the user may drag the slider on the bar to trigger the brightness adjustment (that is, the adjustment of how dense the hair looks). The user's operation on the slider corresponds to a hair-density strength parameter a', which enters the brightness adjustment formula; for example, the formula may be as follows:
L′ = L^(a′),  a′ ≥ 1.0

where L′ is the brightness after adjustment, L is the brightness before adjustment, L′ and L are decimals between 0 and 1, and a′ is the hair-density strength parameter.
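A minimal sketch of this brightness adjustment, assuming the L channel has already been normalized to a decimal in [0, 1] as stated above (the function and variable names are illustrative):

    import numpy as np

    def adjust_brightness(lab: np.ndarray, a_prime: float) -> np.ndarray:
        # Apply L' = L ** a' to the luminance channel; with L in [0, 1]
        # and a' >= 1.0, a larger a' darkens the image, so the hair
        # appears denser in visual effect.
        assert a_prime >= 1.0
        out = lab.copy()
        out[..., 0] = np.power(out[..., 0], a_prime)
        return out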
In practice, the user may want only the hair color adjusted, only the hair brightness (density) adjusted, or both. Accordingly, only the color channel information in the LAB color space may be changed to adjust the color of the face image, only the brightness channel information may be changed to adjust the brightness of the face image, or both may be changed at the same time, according to what the user selects.
In this embodiment, the color and brightness adjustment is performed on the whole face image. In practice, to prevent the color and brightness adjustment of the hair region image from affecting other regions, such as the face region and the background region, only the result of the adjustment applied to the hair region image needs to be retained. Because the hair region image is obtained by processing the face image, the relative relationship between the hair region image and the face image can be obtained, and the relative relationship between the non-hair region image and the face image can likewise be obtained; the relative relationship may be the position in the face image, the region occupied in the face image, and the like.
After the adjusted face image is obtained, it is processed in combination with the previously obtained hair region image, yielding the effect map that contains the hair region image after color and/or brightness adjustment.
The image hairdressing processing method provided by this embodiment converts the color space of the face image to be processed into the LAB color space and adjusts the color and/or brightness of the face image by changing the color channel information and/or brightness channel information of that space. The color and density of the hair in the hair region image are thereby adjusted, and the adjustment effect better matches actual requirements.
In this embodiment, the processing procedure may be executed in the live broadcast providing terminal 100; alternatively, the live broadcast providing terminal uploads the acquired images and other information to the live broadcast server 200, and after the processing is completed in the live broadcast server, the processing result is fed back to the live broadcast providing terminal 100 for rendering and display.
In this embodiment, the acquired face image includes a face region, a hair region, and a background region. Because the hair regions in different face images differ in shape and lack obvious, easily recognized features, it is difficult to recognize the hair region directly from the acquired face image. In view of this, referring to fig. 5, in this embodiment the hair region image is obtained by processing the face image in the following manner.
Step S211, identifying a face region included in the face image.
And step S212, expanding by taking the face area as a reference to obtain a hair area image at the periphery of the face area.
Facial key points are broadly similar across different face images, face feature recognition technology is mature, and the hair region generally adjoins the face region. Therefore, in this embodiment, the face region contained in the face image can be recognized first: face contour recognition may be performed on the face image to obtain a face contour, and the face region contained within the recognized contour is then determined.
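For illustration only, the face region of this step could be located with any mature detector; the sketch below assumes OpenCV's bundled Haar cascade as one possible choice and returns a bounding box rather than a full contour:

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_face_region(image_bgr):
        # Returns (x, y, w, h) of the first detected face, or None.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return tuple(faces[0]) if len(faces) else None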
The hair region is usually located at the periphery of the face region; for example, a band of a certain width around the upper half of the face region is hair, possibly together with a region of a certain length extending downward from it, and these together form the whole hair region. The hair region can therefore be roughly determined by expanding outward from the face region.

However, the hair regions of different users extend outward from the face region by different widths: for some users the hair lies close to the face, so the hair region is relatively narrow, while other users have fluffy hair, so the hair region is relatively wide. Here the width can be understood as the distance, along a ray diverging outward from the center point of the face region, between the face contour point on that ray and the end point of the hair.

Consequently, when the hair region is determined by expanding the face region, it is difficult to determine it with a single expansion of fixed width. Therefore, referring to fig. 6, in this embodiment the hair region may be determined from the face region expansion through the following steps:
and S2121, sequentially expanding the edges of the face regions to the periphery by preset widths.
And step S2122, performing color recognition processing on the extended area between the extended edge formed after each extension and the edge of the face area.
Step S2123, determining a hair region image from the expanded regions based on the obtained color recognition result of each of the expanded regions.
In this embodiment, the expansion based on the face region may be performed as multiple successive expansions toward the periphery, each by a small preset width. Each expansion forms an expanded edge, and the width between that edge and the edge of the face region is the preset width accumulated up to that expansion; the area between the expanded edge and the edge of the face region is the expansion region.
The face image includes a face region, a hair region, and other regions, which may be background regions. While there is often a color difference between the hair region and the background region, e.g., the hair region tends to be black and the background region tends to be white.
Taking fig. 3 as an example, where the hair region is the peripheral region of the upper half of the face region: while the expansion width is still small, the upper half of the expansion region may contain only hair and therefore appears black. When the expansion continues until the expansion region extends beyond the hair region, the expansion region contains both black and white. With further expansion, the proportion of white in the newly obtained expansion region grows larger and larger while the proportion of black grows smaller and smaller.
Therefore, in the present embodiment, the hair region image can be determined from the expanded region by the color recognition result of the expanded region formed after each expansion. For example, when a white area starts to appear in the expanded area but the white area occupies a small proportion, it can be determined that the expanded area formed by the expansion is a hair area.
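The iterative expansion and color check of steps S2121-S2123 might look like the following sketch, which assumes dark hair on a light background; the step width, darkness threshold, and stopping ratio are illustrative assumptions, not values fixed by the embodiment:

    import cv2
    import numpy as np

    def expand_hair_rings(face_mask, image_gray, step=5, max_steps=20,
                          dark_thresh=80, min_dark_ratio=0.5):
        # face_mask: uint8 mask, 255 inside the face region.
        # Grow outward in fixed-width rings and keep each ring that is
        # still mostly dark (assumed hair); stop at the first ring that
        # is mostly light (assumed background). Returns a uint8 hair mask.
        kernel = cv2.getStructuringElement(
            cv2.MORPH_ELLIPSE, (2 * step + 1, 2 * step + 1))
        hair = np.zeros_like(face_mask)
        prev = face_mask.copy()
        for _ in range(max_steps):
            grown = cv2.dilate(prev, kernel)
            ring = cv2.subtract(grown, prev)      # the newly added ring
            ring_pixels = image_gray[ring > 0]
            if ring_pixels.size == 0:
                break
            dark_ratio = np.mean(ring_pixels < dark_thresh)
            if dark_ratio < min_dark_ratio:       # ring is mostly background
                break
            hair = cv2.bitwise_or(hair, ring)
            prev = grown
        return hair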
In the above manner of determining the hair region by expanding from the face region, the hair region is judged from the color recognition result of each expansion region as a whole, so it can only be determined roughly: because the edge of the hair region is usually irregular, the determined region may include part of the background, or some hair may fall outside it. To further improve the accuracy of the hair region, referring to fig. 7, in this embodiment the hair region image can be determined more precisely in the following manner.
Step S21231 is to divide each of the obtained extension regions into a plurality of sub-regions.
Step S21232, determining whether the sub-region belongs to the hair region according to a ratio between different colors included in the color identification result of each sub-region.
In step S21233, the hair region image is configured by all the sub-regions determined to belong to the hair region.
As described above, one expansion region is formed after each expansion. In this embodiment, the expansion region is divided into a plurality of sub-regions, for example in a grid manner. Each sub-region may consist entirely of hair (for example, a sub-region adjacent to part of the face region), partly of hair and partly of background, or entirely of background.
A color recognition process may be performed for each sub-region, and it is determined whether the sub-region belongs to the hair region based on the color recognition result. For example, taking the hair region as black and the background region as white as an example, if a certain sub-region only contains black pixels, it may be determined that the sub-region belongs to the hair region, and if the sub-region only contains white pixels, it may be determined that the sub-region does not belong to the hair region.
In addition, if the sub-region includes both black pixel points and white pixel points, it can be determined whether the sub-region belongs to a hair region according to a ratio between pixel points of different colors, for example, a ratio between the numbers of different pixel points or a ratio between areas of images formed by different pixel points can be used for determination.
For example, if the ratio of the number of black pixels in the sub-region to the total number of pixels in the sub-region exceeds a preset ratio, for example 0.4 or 0.5, the sub-region may be determined to belong to the hair region; otherwise, it may be determined not to belong to the hair region.
After the sub-regions of the expansion region have been judged in the above manner, all sub-regions belonging to the hair region can be screened out, and together they constitute the hair region. In the division of an expansion region into sub-regions, the greater the number of sub-regions, the higher the accuracy of the finally determined hair region, but also the greater the data processing load; the division granularity can therefore be chosen according to actual requirements, and this embodiment does not specifically limit it.
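A sketch of this sub-region refinement, dividing a ring into grid cells and keeping only cells whose dark-pixel ratio passes the preset threshold (the cell size and thresholds are assumptions for illustration):

    import numpy as np

    def hair_subregions(ring_mask, image_gray, cell=16,
                        dark_thresh=80, min_dark_ratio=0.5):
        # Split the expansion ring into cell x cell sub-regions and keep
        # a sub-region only if its dark-pixel (assumed hair) ratio meets
        # the preset threshold. Returns a refined uint8 hair mask.
        h, w = ring_mask.shape
        out = np.zeros_like(ring_mask)
        for y in range(0, h, cell):
            for x in range(0, w, cell):
                m = ring_mask[y:y + cell, x:x + cell]
                if not m.any():
                    continue
                px = image_gray[y:y + cell, x:x + cell][m > 0]
                if np.mean(px < dark_thresh) >= min_dark_ratio:
                    out[y:y + cell, x:x + cell] = m
        return out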
The hair region image can be determined in the above manner. However, when the method is applied to hairdressing a host's image in a live broadcast scene, the face images to be processed are often a sequence of consecutive frames. To avoid the heavy processing load of determining the hair region image from scratch for every frame, in one possible implementation this embodiment also determines the hair region image of an acquired face image in the following way.
For the current face image to be processed, detect whether the hair region image of the face image a preset number of frames earlier has been obtained. If it has, obtain the displacement information of the face region in the current face image relative to that earlier face image, and determine the hair region image of the current face image from the earlier image's hair region image together with the displacement information.
The preset offset may be one frame, two frames, and so on, without limitation. For example, if the current face image is the second face image in the image sequence to be processed, the previous face image is the first image in the sequence, whose hair region image can be determined in the manner described above.
As noted above, recognizing the face region in a face image is relatively easy, whereas recognizing and determining the hair region image is relatively complicated. If the face images to be processed are consecutive, then when the user moves and the face region is displaced, the hair region is displaced correspondingly, and the displacement of the face region should be consistent with that of the hair region. Therefore, in this embodiment, the displacement information of the face region between adjacent face images may be obtained and used as the displacement of the hair region, so that the hair region image of the current face image is determined from the previously obtained hair region image plus the displacement information. This keeps the hair region image accurate while reducing the data processing load.
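The frame-to-frame reuse described here might be sketched as follows, assuming the face region is represented by an (x, y, w, h) bounding box whose top-left corner supplies the displacement (the representation is an assumption; the embodiment leaves the displacement estimation open):

    import numpy as np

    def propagate_hair_mask(prev_hair_mask, prev_face_box, cur_face_box):
        # Shift the previous frame's hair mask by the face region's
        # displacement instead of re-segmenting the current frame.
        dx = cur_face_box[0] - prev_face_box[0]
        dy = cur_face_box[1] - prev_face_box[1]
        h, w = prev_hair_mask.shape
        shifted = np.zeros_like(prev_hair_mask)
        ys, xs = np.nonzero(prev_hair_mask)
        ys2, xs2 = ys + dy, xs + dx
        keep = (ys2 >= 0) & (ys2 < h) & (xs2 >= 0) & (xs2 < w)
        shifted[ys2[keep], xs2[keep]] = prev_hair_mask[ys[keep], xs[keep]]
        return shifted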
The hair region image can thus be determined from the face region in the face image in the above manner. As noted above, the color and brightness adjustment is applied to the whole face image, while only the adjustment result for the hair region image is actually wanted; the non-hair region must retain the image information of the original image. Therefore, referring to fig. 8, after the adjusted face image is obtained, the final effect map can be produced in combination with the hair region image in the following manner.
And step S241, intercepting a non-hair region image in the face image according to the hair region image in the face image before adjustment, and intercepting the hair region image in the face image after adjustment.
And step S242, combining the intercepted non-hair area image and the hair area image to obtain an effect image containing the hair area image after color and/or brightness adjustment.
In this embodiment, the adjusted hair region image must be obtained while the non-hair region image of the original is retained. From the unprocessed face image, the non-hair region image, comprising the face region image and the background region image, can be intercepted; from the adjusted face image, the hair region image can be intercepted.
Splicing the adjusted hair region image together with the pre-adjustment non-hair region image then yields the required effect map in which only the hair region image has been adjusted.
To make it easy to intercept partial images from the whole image and splice different partial images together, this embodiment uses mask processing. A mask can be constructed from the information of the hair region image contained in the face image, based on the hair region image obtained by recognition; in this step, the mask may be built from the position information of the hair region image within the face image. The purpose of the mask is to distinguish the hair region image from the non-hair region image in the face image.
The mask may be constructed, for example, by covering the face image with a plurality of grids, where each grid corresponds either to the hair region image or to the non-hair region image. Correspondingly, a plurality of mask grids is constructed in the same way on a blank template, in one-to-one correspondence with the grids of the face image. A mask grid corresponding to a grid of the hair region image may be set to 1, and a mask grid corresponding to a grid of the non-hair region image may be set to 0.
In this way, the resulting mask represents the hair region image and the non-hair region image of the face image through the mask grids set to 0 or 1, combined with the position information of those grids. On this basis, the constructed mask is used to mask the face image before adjustment and intercept the non-hair region image from it, and to mask the adjusted face image and intercept the hair region image from it.
In the above steps, the constructed mask is combined with the face image before adjustment, and the portions corresponding to mask grids of 0 are intercepted from the combined image, giving the non-hair region image of the face image before adjustment.

For the adjusted face image, the same mask is combined with the image, and the portions corresponding to mask grids of 1 are intercepted from the combined image, achieving the goal of intercepting the hair region image of the adjusted face image.
On this basis, the intercepted non-hair region image and hair region image are spliced together to obtain the final effect map.
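Per pixel, the 0/1 mask-grid interception and splicing of steps S241-S242 reduce to keeping adjusted values where the mask is 1 and original values where it is 0; for example (names illustrative):

    import numpy as np

    def composite_effect(original_lab, adjusted_lab, hair_mask):
        # hair_mask: nonzero inside the hair region image, 0 elsewhere.
        m = (hair_mask > 0)[..., None]   # broadcast over the 3 LAB channels
        return np.where(m, adjusted_lab, original_lab)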
The junction between the face region and the hair region in the spliced effect map can look unnatural. Therefore, in this embodiment, mean blurring may be applied to the obtained effect map to soften the difference at the junction.
In this embodiment, mean blurring is performed on the pixels of the obtained effect map for the color channels and the luminance channel of the LAB color space. In this step, for each pixel in the effect map, its adjacent pixels, for example its four neighbors, may be obtained; the brightness mean and color means of those neighbors are computed and used as the brightness value and color values of the pixel. This avoids large differences between adjacent pixels at the junction of the face region and the hair region.
On this basis, since the device display needs to render and display based on the RGBA color information of the image, in this embodiment the processed effect map can be converted from the LAB color space to the RGBA color space, and the effect map is then rendered and displayed based on its RGBA color information.
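A sketch of these finishing steps, assuming an 8-bit LAB effect map; the box filter below averages a 3x3 neighborhood, a close stand-in for the four-neighbor mean described above, and the kernel size is an illustrative assumption:

    import cv2

    def finish_effect(effect_lab_u8):
        # Mean-blur all three LAB channels to soften the hair/face seam,
        # then convert to RGBA for device rendering.
        blurred = cv2.blur(effect_lab_u8, (3, 3))   # box (mean) filter
        rgb = cv2.cvtColor(blurred, cv2.COLOR_Lab2RGB)
        return cv2.cvtColor(rgb, cv2.COLOR_RGB2RGBA)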
In this embodiment, when the image hairdressing processing method is applied to a live broadcast providing terminal, a user such as a director can adjust the hair color and the hair brightness based on a relevant adjustment interface in live broadcast software. For example, referring to fig. 9 and 10 in combination, fig. 9 and 10 show an operation interface for performing hair setting adjustment, where the operation interface includes a display area and an operation area. The display area can display the collected face image of the user, and the operation area comprises a plurality of options of different beautifying items, such as style options for performing different color transformation on hair, options for beautifying, filter options and the like.
As shown in fig. 9, a plurality of different hair color options, such as purple hair color, yellow hair color, and the like, are included under the style option. The user can select the required hair color according to the self requirement, and after the user selects the hair color, the live broadcast providing terminal can change the image color according to the hair color selected by the user in the hair color processing mode, so that the aim of adjusting the hair color to the hair color selected by the user is fulfilled.
Further, as shown in fig. 10, an option for adjusting the density of the hair may be included under the beauty option. When the user selects the 'beauty' option, an adjustment bar may be displayed at the upper side of the operation region, and the user may drag the slider to the position corresponding to the desired degree of adjustment. The user's operation on the slider is mapped to the hair-density strength parameter described above; after detecting the adjustment operation, the live broadcast providing terminal converts it into the corresponding parameter value and adjusts the hair brightness by the method above, thereby adjusting how dense the hair looks.
Referring to fig. 11, a schematic diagram of exemplary components of an electronic device provided by an embodiment of the present application is shown; the electronic device may be the live broadcast server 200 or the live broadcast providing terminal 100 shown in fig. 1. The electronic device may include a storage medium 110, a processor 120, an image hairdressing processing device 130, and a communication interface 140. In this embodiment, the storage medium 110 and the processor 120 are both located in the electronic device and are separately disposed. However, it should be understood that the storage medium 110 may also be separate from the electronic device and accessed by the processor 120 through a bus interface. Alternatively, the storage medium 110 may be integrated into the processor 120, for example as a cache and/or general-purpose registers.
The image hairdressing processing device 130 can be understood as the electronic device or the processor 120 of the electronic device, and can also be understood as a software functional module which is independent of the electronic device or the processor 120 and realizes the image hairdressing processing method under the control of the electronic device.
As shown in fig. 12, the image hairdressing processing device 130 may include a processing module 131, a conversion module 132, an adjustment module 133, and an obtaining module 134. The functions of the respective functional modules of the image hairdressing processing device 130 are explained in detail below.
The processing module 131 is configured to process the acquired face image to obtain a hair region image;
it is understood that the processing module 131 can be used to execute the step S210, and for the detailed implementation of the processing module 131, reference can be made to the above-mentioned contents related to the step S210.
A conversion module 132, configured to convert a color space of the face image into an LAB color space;
it is understood that the converting module 132 can be used to execute the step S220, and for the detailed implementation of the converting module 132, reference can be made to the contents related to the step S220.
An adjusting module 133, configured to change color channel information in the LAB color space to adjust a color of the face image, and/or change luminance channel information in the LAB color space to adjust a luminance of the face image;
it is understood that the adjusting module 133 can be used to execute the step S230, and for the detailed implementation of the adjusting module 133, reference can be made to the above-mentioned content related to the step S230.
An obtaining module 134, configured to obtain an effect map including the hair region image with the adjusted color and/or brightness according to the adjusted face image and the hair region image.
It is understood that the obtaining module 134 may be configured to perform the step S240, and for a detailed implementation of the obtaining module 134, reference may be made to the content related to the step S240.
In one possible embodiment, the processing module 131 is configured to obtain the hair region image by:
identifying a face region contained in the face image;
and expanding by taking the face area as a reference to obtain a hair area image at the periphery of the face area.
In one possible embodiment, the processing module 131 is configured to perform expansion based on the face region to obtain an image of a hair region at the periphery of the face region by:
sequentially expanding the edges of the face area to the periphery by preset widths;
carrying out color recognition processing on an extended area between an extended edge formed after each extension and the edge of the face area;
and determining a hair region image from the expansion region based on the obtained color recognition result of each expansion region.
In one possible embodiment, the processing module 131 is configured to determine the hair region image from the extended region based on the obtained color recognition result of each extended region by:
dividing each obtained expansion area into a plurality of sub-areas;
judging whether the subarea belongs to a hair area or not according to the color identification result of each subarea;
and all the sub-areas judged to belong to the hair area constitute the hair area image.
In a possible implementation, the obtaining module 134 is configured to obtain the effect map by:
intercepting a non-hair area image in the face image according to a hair area image in the face image before adjustment, and intercepting a hair area image in the face image after adjustment;
and combining the intercepted non-hair area image and the hair area image to obtain an effect image containing the hair area image after color and/or brightness adjustment.
In one possible implementation, the obtaining module 134 is configured to intercept a non-hair region image in the face image before adjustment and intercept a hair region image in the face image after adjustment by:
constructing a mask according to the information of the hair region image contained in the face image before adjustment;
masking the face image before adjustment by using the constructed mask, and intercepting a non-hair region image in the face image before adjustment after masking;
and performing mask processing on the adjusted face image by using the constructed mask, and intercepting a hair region image in the adjusted face image after the mask processing.
In one possible embodiment, the image hairdressing processing device 130 further includes a mean blurring module for:
performing mean fuzzy processing on a plurality of pixel points contained in the effect graph aiming at a color channel and a brightness channel contained in the LAB color space;
and converting the processed effect graph from an LAB color space to an RGBA color space.
In one possible implementation, the processing module 131 may be configured to obtain the hair region image by:
detecting whether a hair region image of a preset face image in front of a current face image to be processed is obtained or not;
if the hair region image of the previous preset face image is obtained, obtaining displacement information of the current face image relative to a face region in the previous preset face image;
and determining the hair region image of the current face image according to the hair region image of the previous preset face image and the displacement information.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Further, the embodiment of the present application also provides a computer-readable storage medium, where machine-executable instructions are stored, and when the machine-executable instructions are executed, the method for processing image hairdressing provided by the above-mentioned embodiment is implemented.
Specifically, the computer-readable storage medium can be a general-purpose storage medium, such as a removable disk, a hard disk, or the like, and the computer program on the computer-readable storage medium can be executed to execute the image hairdressing processing method. With regard to the processes involved when the executable instructions in the computer-readable storage medium are executed, reference may be made to the related descriptions in the above method embodiments, which are not described in detail herein.
In summary, the embodiments of the present application provide an image hairdressing processing method and device, an electronic device, and a readable storage medium. A face image is processed to obtain a hair region image, the color space of the face image is converted into the LAB color space, the color and brightness of the face image are adjusted by changing the color channel information and brightness channel information of the LAB color space, and finally an effect map containing the color- and brightness-adjusted hair region image is obtained from the adjusted face image and the hair region image. Because the coupling among the channels of the LAB color space is weak, adjustments to the channels do not affect one another; therefore, adjusting the information of each channel in the LAB color space accurately adjusts the color and density of the hair in the hair region and improves the accuracy of hair color and brightness adjustment.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. An image hairdressing processing method characterized by comprising:
processing the obtained face image to obtain a hair area image;
converting the color space of the face image into an LAB color space;
changing color channel information in the LAB color space to adjust the color of the face image and/or changing brightness channel information in the LAB color space to adjust the brightness of the face image;
and obtaining an effect picture containing the hair area image after the color and/or brightness adjustment according to the adjusted face image and the hair area image.
2. The image hairdressing processing method according to claim 1, wherein the step of processing the acquired face image to obtain the hair region image comprises:
identifying a face region contained in the face image;
and expanding by taking the face area as a reference to obtain a hair area image at the periphery of the face area.
3. The image hairdressing processing method according to claim 2, wherein the step of expanding with the face area as a reference to obtain a hair area image at a periphery of the face area includes:
sequentially expanding the edges of the face area to the periphery by preset widths;
carrying out color recognition processing on an extended area between an extended edge formed after each extension and the edge of the face area;
and determining a hair region image from the expansion region based on the obtained color recognition result of each expansion region.
4. The image hairdressing processing method according to claim 3, wherein the step of determining a hair region image from the expanded regions based on the obtained color recognition result of each of the expanded regions includes:
dividing each obtained expansion area into a plurality of sub-areas;
judging whether the subarea belongs to a hair area or not according to the color identification result of each subarea;
and all the sub-areas judged to belong to the hair area constitute the hair area image.
5. The image hairdressing processing method according to claim 1, wherein the step of obtaining an effect map including a color and/or brightness adjusted hair region image based on the adjusted face image and the hair region image includes:
intercepting a non-hair area image in the face image according to a hair area image in the face image before adjustment, and intercepting a hair area image in the face image after adjustment;
and combining the intercepted non-hair area image and the hair area image to obtain an effect image containing the hair area image after color and/or brightness adjustment.
6. The image hairdressing processing method according to claim 5, wherein the step of cutting out the non-hair region image from the face image before adjustment according to the hair region image in the face image before adjustment, and cutting out the hair region image from the adjusted face image comprises:
constructing a mask according to the information of the hair region image contained in the face image before adjustment;
masking the face image before adjustment with the constructed mask, and cutting out the non-hair region image from the masked face image before adjustment;
and masking the adjusted face image with the constructed mask, and cutting out the hair region image from the masked adjusted face image.
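The cut-and-merge sequence of claims 5 and 6 can be pictured with a few OpenCV bitwise operations. The sketch below is only one possible realization and reuses the hypothetical binary `hair_mask` from the earlier sketches as the constructed mask:

```python
import cv2

def composite(original_bgr, adjusted_bgr, hair_mask):
    """Cut the non-hair part from the original image and the hair part
    from the adjusted image, then merge them (claims 5 and 6)."""
    inv = cv2.bitwise_not(hair_mask)                        # non-hair mask
    background = cv2.bitwise_and(original_bgr, original_bgr, mask=inv)
    hair = cv2.bitwise_and(adjusted_bgr, adjusted_bgr, mask=hair_mask)
    # The two cut-outs cover disjoint pixels, so adding them merges them.
    return cv2.add(background, hair)
```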
7. The image hairdressing processing method according to claim 1, wherein the method further comprises:
performing mean blur processing on a plurality of pixel points contained in the effect image, for the color and brightness channels contained in the LAB color space;
and converting the processed effect image from the LAB color space to the RGBA color space.
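Claim 7's post-processing could look roughly as follows. Note one assumption: OpenCV offers no single LAB-to-RGBA conversion code, so this sketch converts to RGB first and then appends an opaque alpha channel.

```python
import cv2

def smooth_and_export(effect_lab, ksize=3):
    """Mean-blur the LAB channels of the effect image, then hand it
    off in RGBA (claim 7).

    effect_lab -- H x W x 3 uint8 image already in LAB space
    """
    # cv2.blur is a box (mean) filter applied to all three LAB channels,
    # softening hard seams left by the mask boundary.
    blurred = cv2.blur(effect_lab, (ksize, ksize))

    rgb = cv2.cvtColor(blurred, cv2.COLOR_LAB2RGB)
    return cv2.cvtColor(rgb, cv2.COLOR_RGB2RGBA)
```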
8. The image hairdressing processing method according to claim 1, wherein when the acquired face images are a plurality of consecutive face images, the step of processing the acquired face images to obtain the hair region image comprises:
detecting whether a hair region image has been obtained for a preset face image preceding the current face image to be processed;
if the hair region image of the preceding preset face image has been obtained, obtaining displacement information of the face region in the current face image relative to the preceding preset face image;
and determining the hair region image of the current face image according to the hair region image of the preceding preset face image and the displacement information.
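For consecutive frames, claim 8 reuses a cached hair region by shifting it with the face's frame-to-frame displacement instead of re-running segmentation on every frame. A minimal sketch, under the assumption that per-frame face positions are available from a detector:

```python
import numpy as np

def propagate_mask(prev_mask, prev_face_xy, cur_face_xy):
    """Shift a cached hair mask by the face displacement (claim 8)."""
    dx = cur_face_xy[0] - prev_face_xy[0]
    dy = cur_face_xy[1] - prev_face_xy[1]
    mask = np.roll(prev_mask, shift=(dy, dx), axis=(0, 1))
    # np.roll wraps around, so zero the wrapped strips to keep stale
    # pixels from reappearing on the opposite edge.
    if dy > 0:
        mask[:dy, :] = 0
    elif dy < 0:
        mask[dy:, :] = 0
    if dx > 0:
        mask[:, :dx] = 0
    elif dx < 0:
        mask[:, dx:] = 0
    return mask
```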
9. An image hairdressing processing device characterized by comprising:
the processing module is used for processing the acquired face image to obtain a hair region image;
the conversion module is used for converting the color space of the face image into an LAB color space;
an adjusting module, configured to change color channel information in the LAB color space to adjust a color of the face image, and/or change luminance channel information in the LAB color space to adjust a luminance of the face image;
and the obtaining module is used for obtaining an effect image containing the color- and/or brightness-adjusted hair region image according to the adjusted face image and the hair region image.
10. An electronic device, comprising one or more storage media and one or more processors in communication with the storage media, the one or more storage media storing machine-executable instructions that, when the electronic device runs, are executed by the one or more processors to perform the image hairdressing processing method of any one of claims 1-8.
11. A computer-readable storage medium storing machine-executable instructions which, when executed, implement the image hairdressing processing method according to any one of claims 1 to 8.
CN202010975513.XA 2020-09-16 2020-09-16 Image hairdressing processing method and device, electronic equipment and readable storage medium Pending CN112102196A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010975513.XA CN112102196A (en) 2020-09-16 2020-09-16 Image hairdressing processing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN112102196A true CN112102196A (en) 2020-12-18

Family

ID=73759281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010975513.XA Pending CN112102196A (en) 2020-09-16 2020-09-16 Image hairdressing processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112102196A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1411282A (en) * 2001-10-08 2003-04-16 Lg电子株式会社 Method for extracting target area
CN1440503A (en) * 2000-05-12 2003-09-03 宝洁公司 Method for analyzing hair and predicting achievable hair dyeing ending colors
CN101458817A (en) * 2008-12-22 2009-06-17 北京中星微电子有限公司 Color analysis system and method
CN102103690A (en) * 2011-03-09 2011-06-22 南京邮电大学 Method for automatically portioning hair area
CN105404846A (en) * 2014-09-15 2016-03-16 中国移动通信集团广东有限公司 Image processing method and apparatus

Similar Documents

Publication Publication Date Title
CN111127591B (en) Image hair dyeing processing method, device, terminal and storage medium
CN109302628B (en) Live broadcast-based face processing method, device, equipment and storage medium
WO2020215861A1 (en) Picture display method, picture display apparatus, electronic device and storage medium
US20020041393A1 (en) Method and apparatus for compressing reproducible color gamut
US20140212037A1 (en) Image processing apparatus, image processing method, and computer readable medium
CN111627076A (en) Face changing method and device and electronic equipment
EP3975043A1 (en) Image processing method, terminal, and storage medium
CN113132696A (en) Image tone mapping method, device, electronic equipment and storage medium
WO2021128593A1 (en) Facial image processing method, apparatus, and system
US9589338B2 (en) Image processing apparatus, image processing system, image processing method, and non-transitory computer readable medium for varied luminance adjustment in different image regions
CN115294055A (en) Image processing method, image processing device, electronic equipment and readable storage medium
US9092889B2 (en) Image processing apparatus, image processing method, and program storage medium
CN112437237B (en) Shooting method and device
CN113709949A (en) Control method and device of lighting equipment, electronic equipment and storage medium
CN112419218A (en) Image processing method and device and electronic equipment
CN112435173A (en) Image processing and live broadcasting method, device, equipment and storage medium
CN111968605A (en) Exposure adjusting method and device
CN112102196A (en) Image hairdressing processing method and device, electronic equipment and readable storage medium
WO2022262848A1 (en) Image processing method and apparatus, and electronic device
CN111652792A (en) Image local processing method, image live broadcasting method, image local processing device, image live broadcasting equipment and storage medium
CN113393391B (en) Image enhancement method, image enhancement device, electronic apparatus, and storage medium
CN114816619A (en) Information processing method and electronic equipment
CN113822784A (en) Image processing method and device
CN113947708A (en) Lighting device lamp efficiency control method, system, device, electronic device and medium
CN113572955A (en) Image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination