CN111275648B - Face image processing method, device, equipment and computer readable storage medium

Info

Publication number
CN111275648B
CN111275648B
Authority
CN
China
Prior art keywords
pixel
face image
pixels
foreground
mask
Prior art date
Legal status
Active
Application number
CN202010073356.3A
Other languages
Chinese (zh)
Other versions
CN111275648A (en)
Inventor
田野 (Tian Ye)
王志斌 (Wang Zhibin)
王梦娜 (Wang Mengna)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010073356.3A
Publication of CN111275648A
Application granted
Publication of CN111275648B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face


Abstract

The embodiment of the application discloses a face image processing method, device, equipment and computer readable storage medium. The method comprises the following steps: determining a first gray threshold for distinguishing background pixels from foreground pixels in a face image according to the gray values of the pixels in the face image; determining pixels in the face image with gray values smaller than the first gray threshold as background pixels, and pixels with gray values larger than the first gray threshold as foreground pixels; generating a mask corresponding to the face image according to the background pixels and the foreground pixels; and fusing the mask and the face image to obtain a target face image. In the target face image, the brightness of the pixel corresponding to a background pixel is the same as the brightness of that background pixel, and the brightness of the pixel corresponding to a foreground pixel is lower than the brightness of that foreground pixel. The method and the device can remove the influence of uneven illumination on the face image as a whole.

Description

Face image processing method, device, equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a face image processing method, device, apparatus, and computer readable storage medium.
Background
With the continuous development of intelligent terminal technology, people can use intelligent terminals to take photographs anytime and anywhere. Users always expect better photographing effects, especially when capturing portraits by self-timer shooting and the like, and many applications dedicated to portrait processing have therefore been derived.
However, in the process of capturing a portrait, uneven light easily causes unbalanced brightness of the face in the captured image, so that the highlight area of the face part strongly affects the subsequent processing of the face, and a better portrait effect cannot be obtained.
To solve this problem, in the prior art, an image area with luminance higher than a preset luminance threshold is determined as a highlight area, and the luminance of this highlight area is then turned down to facilitate subsequent processing. However, when the brightness of the highlight area is reduced, the transition between the image area corresponding to the highlight area and the surrounding image areas is likely to be unnatural.
Therefore, the prior art still suffers from the problem that a better face image effect cannot be obtained due to uneven shooting illumination.
Disclosure of Invention
In order to solve the above technical problems, embodiments of the present application provide a face image processing method, device, apparatus, and computer readable storage medium; the face image processing performed in the embodiments of the present application achieves a better processing effect without the problem of unnatural transition between image areas.
The technical scheme adopted by the application is as follows:
a face image processing method, comprising: determining a first gray threshold value for distinguishing a background pixel from a foreground pixel in a face image according to gray values of pixels in the face image; determining pixels with gray values smaller than the first gray threshold value in the face image as background pixels, and determining pixels with gray values larger than the first gray value as foreground pixels; generating a mask corresponding to the face image according to the background pixels and the foreground pixels; fusing the mask and the face image to obtain a target face image; in the target face image, the brightness of the pixel point corresponding to the background pixel is the same as the brightness of the background pixel, and the brightness of the pixel point corresponding to the foreground pixel is lower than the brightness of the foreground pixel.
A face image processing apparatus comprising: the gray value acquisition module is used for determining a first gray threshold value for distinguishing a background pixel from a foreground pixel in the face image according to the gray value of each pixel in the face image; a front background pixel determining module, configured to determine a pixel in the face image having a gray value smaller than the first gray threshold as the background pixel, and determine a pixel having a gray value larger than the first gray threshold as the foreground pixel; the mask generation module is used for generating a mask corresponding to the face image according to the background pixels and the foreground pixels; the image fusion module is used for fusing the mask and the face image to obtain a target face image; in the target face image, the brightness of the pixel point corresponding to the background pixel is the same as the brightness of the background pixel, and the brightness of the pixel point corresponding to the foreground pixel is lower than the brightness of the foreground pixel.
A face image processing apparatus includes a processor and a memory having stored thereon computer readable instructions which, when executed by the processor, implement a face image processing method as described above.
A computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor of a computer, cause the computer to perform a face image processing method as described above.
In the above technical solution, the first gray threshold is determined according to the gray values of the pixels in the face image, so that the distribution of the foreground pixels obtained by distinguishing according to the first gray threshold can reflect the overall distribution of the higher-brightness and lower-brightness image areas in the face image. In the target face image obtained by fusing the mask and the face image, the brightness of the pixel corresponding to a background pixel is the same as that of the background pixel, and the brightness of the pixel corresponding to a foreground pixel is lower than that of the foreground pixel, so the high-brightness areas of the face image are dimmed as a whole without unnatural transitions between image areas.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art. In the drawings:
FIG. 1 is a flow chart illustrating a face image processing method according to an exemplary embodiment;
FIG. 2 is a flow chart of step 110 in an exemplary embodiment of the embodiment shown in FIG. 1;
FIG. 3 is a flow chart of step 150 in one exemplary embodiment of the embodiment shown in FIG. 1;
FIG. 4 is a flow chart of step 170 in an exemplary embodiment of the embodiment shown in FIG. 1;
FIG. 5 is a flowchart illustrating a face image processing method according to another exemplary embodiment;
FIG. 6 is a flow chart of step 250 in one embodiment of the embodiment shown in FIG. 5;
FIG. 7 is a flow chart of step 250 in another embodiment of the embodiment of FIG. 5;
FIG. 8 is a flow chart of step 250 in another embodiment of the embodiment of FIG. 5;
FIG. 9 is a schematic diagram of a face image shown in accordance with an exemplary embodiment;
FIG. 10 is a schematic illustration of face feature area protection for the face image of FIG. 9;
FIG. 11 is a schematic illustration of background and foreground pixel differentiation of the face image of FIG. 9;
FIG. 12 is a schematic illustration of an exemplary mask generated based on the face image of FIG. 9;
FIG. 13 is a schematic view of a target face image obtained by processing the face image shown in FIG. 9;
FIG. 14 is a schematic diagram showing a face image processing effect according to an exemplary embodiment;
FIG. 15 is a block diagram of a face image processing apparatus according to an exemplary embodiment;
FIG. 16 is a block diagram of a face image processing apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
As described above, in the process of capturing a portrait, uneven light easily causes unbalanced brightness of the face part in the captured image, which affects the subsequent processing of the face part, so that a better portrait processing effect cannot be obtained.
In the prior art implementation, only an image area with the brightness higher than a preset brightness threshold value in the face image is simply determined as a highlight area, and then the brightness of the highlight area is reduced, so that the face image with the brightness adjusted is obtained, and the face image is convenient to process subsequently. However, when the brightness of the highlight region is reduced, the transition between the image region corresponding to the highlight region and the surrounding image region is likely to be unnatural, and a good portrait processing effect cannot be obtained.
Based on this, in order to solve the technical problem that a better face image effect cannot be obtained due to uneven shooting illumination, embodiments of the present application respectively provide a face image processing method, a face image processing device, a face image processing apparatus and a computer readable storage medium. By processing the face image, the brightness of the face image can be reduced as a whole, the problem of unnatural transition between image areas in the face image is avoided, and the subsequent processing of the brightness-adjusted face image can achieve a better effect.
Referring to fig. 1, fig. 1 is a flowchart illustrating a face image processing method according to an exemplary embodiment. As shown in fig. 1, in an exemplary embodiment, the face image processing method at least includes the following steps:
step 110, determining a first gray threshold for distinguishing a background pixel from a foreground pixel in the face image according to gray values of pixels in the face image.
Dimming only the highlight area in the face image easily causes an unnatural transition between the image area corresponding to the highlight area and the surrounding image areas. In this embodiment, the brightness of the higher-brightness image areas in the face image is therefore adjusted as a whole, which avoids the unnatural transition problem; to this end, the higher-brightness image areas in the face image need to be determined first.
The foreground pixels of the face image refer to pixels with larger brightness in the face image, and the background pixels refer to pixels with lower brightness in the face image, so that an image area formed by the foreground pixels is an image area with larger brightness in the face image.
The gray values of the pixels in the face image reflect the brightness distribution of the pixels in the face image, so that a first gray threshold can be determined according to the face image to distinguish the background pixels from the foreground pixels in the face image according to the first gray threshold.
In step 130, pixels in the face image having a gray value less than the first gray threshold are determined as background pixels, and pixels having a gray value greater than the first gray threshold are determined as foreground pixels.
As described above, foreground pixels are the pixels with higher brightness in the face image, and background pixels are the pixels with lower brightness. Accordingly, pixels whose gray values are smaller than the first gray threshold are determined as background pixels, and pixels whose gray values are larger than the first gray threshold are determined as foreground pixels.
Thus, in this embodiment, the pixels in the face image are divided into darker background pixels and brighter foreground pixels. Compared with the highlight region directly determined in the prior art, the image area formed by the foreground pixels in this embodiment includes not only the highlight region in the face image but also all other image areas with higher brightness, for example the image areas around the highlight region; that is, the image area formed by the foreground pixels reflects, as a whole, the high-brightness part of the face image.
Therefore, in the subsequent face image processing, the overall brightness of the face image can be adjusted based on the determined background pixels and foreground pixels, and other image areas with larger brightness in the face image can be further dimmed while the highlight areas in the face image are removed, so that the problem that the transition between the highlight areas after processing and surrounding image areas is unnatural can be effectively avoided.
And step 150, generating a mask corresponding to the face image according to the background pixels and the foreground pixels.
A mask is a specific image used to partially cover an image to be processed. In the field of digital image processing, a mask is expressed as a two-dimensional matrix array, each element of which corresponds to the pixel value of one pixel in the specific image.
In this embodiment, the generated mask is used to cover all or part of the face image, for example, only the face region in the face image may be covered.
The mask comprises a background mask pixel corresponding to the background pixel and a foreground mask pixel corresponding to the foreground pixel, wherein the background mask pixel is used for covering the corresponding background pixel in the face image, and the foreground mask pixel is used for covering the corresponding foreground pixel in the face image.
Step 170, fusing the mask and the face image to obtain a target face image, wherein in the target face image, the brightness of the pixel point corresponding to the background pixel is the same as the brightness of the corresponding background pixel, and the brightness of the pixel point corresponding to the foreground pixel is lower than the brightness of the corresponding foreground pixel.
Fusing the mask with the face image means covering the face image with the mask to obtain the target face image. Each pixel in the target face image therefore corresponds to the pixel at the same position in the face image.
The fusion is performed on the pixel values of the two pixels located at the same image position in the mask and the face image.
Because, in the target face image, the brightness of the pixel corresponding to a background pixel is the same as that of the corresponding background pixel, while the brightness of the pixel corresponding to a foreground pixel is lower than that of the corresponding foreground pixel, this embodiment performs, through the fusion of the mask and the face image, an overall brightness-dimming process on the high-brightness image areas of the face image, so that the brightness adjustment effect is more natural and the problem of unnatural transition between image areas is avoided.
The target face image obtained in this embodiment can be used for subsequent retouching of the face part, such as whitening, skin smoothing and face swapping. Since the highlight region has been removed from the target face image and the brightness of the other high-brightness image areas has been reduced correspondingly, the influence of unbalanced brightness on the processing effect is largely avoided, and a better face image effect can be obtained.
FIG. 2 is a flow chart of step 110 in an exemplary embodiment of the embodiment shown in FIG. 1. As shown in fig. 2, in an exemplary embodiment, determining a first gray threshold for distinguishing a background pixel from a foreground pixel in a face image according to gray values of pixels in the face image includes at least the following steps:
and step 111, distinguishing background pixels and foreground pixels in the face image according to a preset gray threshold.
Considering that when the first gray threshold is optimal, the degree of distinction between background pixels and foreground pixels in the face image is maximized, and the high-brightness area of the face image determined from the foreground pixels is more accurate, the optimal first gray threshold needs to be obtained.
In this embodiment, an initial gray threshold is preset, and gray thresholds are traversed starting from this initial value under the condition that the degree of distinction between background and foreground pixels in the face image is maximized; the gray threshold for which the degree of distinction is maximal is the optimal one, and this optimal gray threshold is determined as the first gray threshold.
Similar to the distinction made according to the first gray threshold in step 130, distinguishing the background pixels and foreground pixels of the face image according to a preset gray threshold means that pixels whose gray values are smaller than the preset gray threshold are determined as background pixels, and pixels whose gray values are larger than the preset gray threshold are determined as foreground pixels.
Step 113, calculating a background pixel ratio, a foreground pixel ratio, an average background pixel gray value and an average foreground pixel gray value corresponding to the face image according to the number of background pixels and the number of foreground pixels, the gray value of the background pixels and the gray value of the foreground pixels contained in the face image.
The background pixel ratio corresponding to the face image is the ratio of the number of background pixels to the total number of pixels in the face image, and the foreground pixel ratio is the ratio of the number of foreground pixels to the total number of pixels. The background pixel average gray value is the average of the gray values of all background pixels in the face image, and the foreground pixel average gray value is the average of the gray values of all foreground pixels.
If N₁ is used to represent the number of background pixels in the face image, N₂ the number of foreground pixels, Sum the total number of pixels in the face image, ω₁ the background pixel ratio, ω₂ the foreground pixel ratio, μ₁ the average gray value of the background pixels, and μ₂ the average gray value of the foreground pixels, then the background pixel ratio and foreground pixel ratio corresponding to the face image are calculated as

ω₁ = N₁ / Sum,  ω₂ = N₂ / Sum,

and the average gray values of the background and foreground pixels are calculated as

μ₁ = μ(t) / N₁,  μ₂ = (μ − μ(t)) / N₂,

where μ represents the sum of the gray values of all pixels in the face image, and μ(t) represents the sum of the gray values of all background pixels under the current gray threshold t.
Step 115, calculating the square value of the difference between the average gray value of the background pixel and the average gray value of the foreground pixel, and calculating the product among the foreground pixel duty ratio, the background pixel duty ratio and the square value to obtain the inter-class variance of the face image.
Firstly, it should be noted that the inter-class variance of the face image is used to represent the degree of distinction between the background pixel and the foreground pixel in the face image, and the larger the inter-class variance corresponding to the face image is, the larger the degree of distinction between the background pixel and the foreground pixel in the face image is.
If g denotes the inter-class variance of the face image, it is calculated by the following formula:

g = ω₁ · ω₂ · (μ₁ − μ₂)²
step 117, traversing the gray threshold under the condition of maximizing the inter-class variance, and determining the gray threshold corresponding to the maximum inter-class variance as the first gray threshold.
Steps 111 to 115 are performed iteratively; each execution of steps 111 to 115 is referred to as one traversal of the gray threshold.
In each traversal, the gray threshold value used for distinguishing the background pixel and the foreground pixel of the face image in step 111 needs to be adjusted, so that the inter-class variances respectively corresponding to different gray threshold values are obtained through multiple traversals, and therefore the gray threshold value corresponding to the maximum inter-class variance can be determined to be the first gray threshold value.
Therefore, this embodiment measures the degree of distinction between the background pixels and foreground pixels of the face image by the inter-class variance. Among the gray thresholds traversed under the condition of maximizing the inter-class variance, the gray threshold corresponding to the maximum inter-class variance is determined as the optimal one, so that the subsequent distinction between background and foreground pixels made with this first gray threshold reflects the brightness distribution of the face image to the greatest extent, which facilitates the subsequent overall brightness adjustment of the face image and yields a better processing effect.
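The traversal described in steps 111 to 117 is the classical maximum inter-class variance (Otsu) method. As an illustration, the following is a minimal Python sketch of it, assuming an 8-bit grayscale image; the function and variable names are illustrative rather than taken from the patent:

```python
import cv2
import numpy as np

def first_gray_threshold(gray: np.ndarray) -> int:
    """Traverse all gray thresholds and return the one that maximizes
    the inter-class variance g = w1 * w2 * (mu1 - mu2) ** 2."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    total = gray.size
    levels = np.arange(256)
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        n1 = hist[:t].sum()                # background pixels (gray < t)
        n2 = total - n1                    # foreground pixels (gray >= t)
        if n1 == 0 or n2 == 0:
            continue
        w1, w2 = n1 / total, n2 / total    # background / foreground ratios
        mu1 = (hist[:t] * levels[:t]).sum() / n1   # background mean gray
        mu2 = (hist[t:] * levels[t:]).sum() / n2   # foreground mean gray
        g = w1 * w2 * (mu1 - mu2) ** 2     # inter-class variance
        if g > best_g:
            best_t, best_g = t, g
    return best_t
```

OpenCV's built-in cv2.threshold with the cv2.THRESH_OTSU flag computes the same threshold in a single call.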
FIG. 3 is a flow chart of step 150 in an exemplary embodiment of the embodiment shown in FIG. 1. As shown in fig. 3, generating a mask corresponding to a face image at least includes the following steps:
In step 151, a blank mask is generated based on the face image, where the blank mask includes background mask pixels corresponding to the background pixels and foreground mask pixels corresponding to the foreground pixels.
It should be noted that, in the blank mask generated based on the face image, the image size of the blank mask is the same as the image size of the face image, and the pixel value corresponding to each pixel is zero.
The blank mask contains background mask pixels corresponding to the background pixels and foreground mask pixels corresponding to the foreground pixels, so that the distribution of the background mask pixels and the foreground mask pixels in the blank mask corresponds to the distribution of the background pixels and the foreground pixels in the face image.
Step 153, determining half of the highest-order channel value in the color mode corresponding to the face image as the channel value of the background mask pixel in each color channel, and determining the channel value of the foreground mask pixel in each color channel according to the channel value of the foreground pixel in each color channel, wherein the channel value of the foreground mask pixel in each color channel is lower than half of the highest-order channel value.
The color mode corresponding to the face image includes the RGB mode, a color standard in which colors are obtained by varying the brightness of the three color channels of red, green and blue and superposing them; the channel value of each color channel reflects the color intensity of that channel.
In the RGB mode, each pixel of the face image contains three color channels, the colors corresponding to the pixels are obtained by superposition of the channel values of the color channels, the channel values corresponding to the color channels are set to 0-255 levels, and the higher the level is, the higher the color intensity of the corresponding color channel is. Therefore, the highest-order channel value in the color mode corresponding to the face image is 255.
In order to ensure that the brightness of the background pixels in the target face image is not changed relative to the brightness of the corresponding background pixels in the face image, the pixel values obtained by fusing the background mask pixels in the mask and the corresponding background pixels in the face image are required to be identical to the pixel values of the corresponding background pixels in the face image. It should be understood that the pixel values corresponding to the pixels in the mask and the face image are the channel values of the pixels in the color channels.
For example, in the process of fusing the mask and the face image, fusion calculation needs to be performed on the channel values of the background mask pixel and the corresponding background pixel in each color channel, and when the channel value of the background mask pixel in each color channel is half of the highest-order channel value 255, the channel value of each color channel obtained by fusion calculation is the same as the channel value of the corresponding background pixel in the face image in the corresponding color channel. Therefore, in this embodiment, half of the highest-order channel value in the color mode corresponding to the face image needs to be determined as the channel value of the background mask pixel in each color channel.
In the process of respectively carrying out fusion calculation on the channel values of the foreground mask pixels and the corresponding foreground pixels in each color channel, when the channel value of the foreground mask pixels in each color channel is lower than half of the highest-order channel value, the channel value of each color channel obtained by fusion calculation is smaller than the channel value of the corresponding background pixels in the face image in the corresponding color channel, so that the effect of reducing brightness can be achieved.
In this embodiment, foreground pixels of different brightness in the face image are darkened with different strengths so as to obtain a more natural dimming effect; therefore, the channel values of the foreground mask pixels in the mask in each color channel need to be determined according to the channel values of the foreground pixels in the face image in each color channel.
In one embodiment, the channel value of a foreground mask pixel in each color channel is calculated according to the channel value of the corresponding foreground pixel in each color channel, the gray value of the foreground pixel corresponding to the foreground mask pixel, the highest-order channel value and the first gray threshold.
If b denotes the ratio of the channel value of any foreground mask pixel in a given color channel to the highest-order channel value, Gray denotes the gray value of the foreground pixel corresponding to that foreground mask pixel, and Gray′ denotes the first gray threshold, then:

b = 0.5 − 0.5 · log₂(1 + (Gray − Gray′) / (255 − Gray′))

After b is calculated by this formula, the channel value of the foreground mask pixel in the corresponding color channel is obtained as the product of b and the highest-order channel value.
It can be seen that the ratio of the channel value of a foreground mask pixel in each color channel to the highest-order channel value calculated in this way is less than 0.5; that is, the channel value of the foreground mask pixel in each color channel is less than half of the highest-order channel value.
And 155, filling channel values of the background mask pixels and the foreground mask pixels in the color channels into blank masks correspondingly to obtain masks corresponding to the face images.
The filling of the channel values of the background mask pixel and the foreground mask pixel in the respective color channels into the blank mask means that the channel values of the background mask pixel and the foreground mask pixel in the respective color channels determined in step 153 are given to the respective color channels of the corresponding pixels in the blank mask, so that the corresponding pixels in the blank mask are colored.
Therefore, in this embodiment, a blank mask is generated based on the face image, the channel values of the background mask pixels and of the foreground mask pixels in each color channel are determined, and these channel values are filled into the blank mask correspondingly. After the resulting mask is fused with the face image, it is effectively ensured that, in the target face image, the brightness of the pixel corresponding to a background pixel is the same as that of the corresponding background pixel in the face image, while the brightness of the pixel corresponding to a foreground pixel is lower than that of the corresponding foreground pixel, thereby reducing the brightness as a whole.
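As an illustration, a minimal sketch of this mask generation under the same assumptions (gray is the face gray map and thresh the first gray threshold from the previous sketch; storing b·255 as uint8 rounds the background value of exactly 0.5 to one gray level of precision):

```python
import cv2
import numpy as np

def build_mask(gray: np.ndarray, thresh: int) -> np.ndarray:
    """Build a mask whose channel ratio b is 0.5 for background pixels
    (brightness preserved) and below 0.5 for foreground pixels."""
    g = gray.astype(np.float64)
    b = np.full(g.shape, 0.5)              # background mask pixels: b = 0.5
    fg = g > thresh                        # foreground pixels
    # b = 0.5 - 0.5 * log2(1 + (Gray - Gray') / (255 - Gray'))
    b[fg] = 0.5 - 0.5 * np.log2(1.0 + (g[fg] - thresh) / (255.0 - thresh))
    channel = np.rint(b * 255.0).astype(np.uint8)  # b times the highest-order value
    return cv2.merge([channel, channel, channel])  # same value in every channel
```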
FIG. 4 is a flow chart of step 170 of FIG. 1 in an exemplary embodiment. As shown in fig. 4, in an exemplary embodiment, the mask is fused with the face image to obtain the target face image, which at least includes the following steps:
and 171, respectively acquiring normalized channel values of each pixel in the face image and the mask in different color channels, wherein the normalized channel values are the ratio of the channel value to the highest-order channel value.
In this embodiment, it is necessary to obtain channel values of each pixel in the face image and the mask in different color channels, and then calculate a ratio of each channel value to the highest-order channel value, so as to obtain normalized channel values of each pixel in the face image and the mask in different color channels.
In the subsequent fusion of the face image and the mask, the fusion calculation is carried out on the normalized channel values of each pixel in the face image and the mask in different color channels, so that the fusion calculation process is more convenient.
And as can be seen from the foregoing, the normalized channel value of each background mask pixel in the mask at the different color channels is 0.5.
Step 173, for the target pixels in the same position in the face image and the mask, fusion calculating the normalized channel value of each color channel, and obtaining the fusion channel value of the target pixels in each color channel.
Firstly, it should be noted that the target pixels at the same position in the face image and the mask essentially form a pixel pair: one pixel in the face image and the pixel at the same position in the mask, each of which has a channel value for every color channel.
Illustratively, the formula for fusing the normalized channel values of the target pixel in each color channel is:

f(a, b) = 2·a·b + a²·(1 − 2·b)

wherein f(a, b) represents the fusion channel value of the target pixel in a single color channel, a represents the normalized channel value of the target pixel in the face image for the corresponding color channel, b represents the normalized channel value of the target pixel in the mask for the corresponding color channel, and a and b are both values between 0 and 1.
When b < 0.5, it means that the target pixel is a foreground pixel in the face image, and when b=0.5, the target pixel is a background pixel in the face image.
It can be seen that when b < 0.5, the calculated fusion channel value is smaller than a; that is, the channel value of the color channel reflected by the fusion channel value is lower than the channel value of the corresponding foreground pixel in the face image, and the brightness obtained by superposing the channel values of the color channels is lower than the brightness of that foreground pixel in the face image.
And when b=0.5, the calculated fusion channel value is equal to a, that is, the channel value of the color channel reflected by the fusion channel value is the same as the channel value of the background pixel in the face image in the corresponding color channel, and the brightness obtained by superposition of the channel values of the color channels is consistent with the brightness of the background pixel in the face image.
Step 175, updating the channel value of each color channel of the target pixel in the face image according to the fusion channel value of the target pixel in each color channel, and obtaining the target face image.
In this embodiment, the channel value of each color channel of the target pixel in the face image is updated by the fusion channel value of the target pixel in each color channel, that is, the fusion channel value of each color channel is determined as the channel value of the corresponding pixel in each color channel in the face image.
Therefore, in this embodiment, the brightness of the pixel corresponding to a background pixel in the target face image is the same as the brightness of the corresponding background pixel, and the brightness of the pixel corresponding to a foreground pixel is lower than the brightness of the corresponding foreground pixel, realizing an overall brightness-dimming process on the high-brightness image areas of the face image and making the brightness adjustment effect more natural.
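A minimal sketch of the fusion step, under the same assumptions and using the blend formula reproduced above (names are illustrative):

```python
import numpy as np

def fuse(img_bgr: np.ndarray, mask_bgr: np.ndarray) -> np.ndarray:
    """Fuse mask and face image channel-wise: f(a, b) = 2ab + a^2 (1 - 2b).
    b = 0.5 leaves a pixel unchanged; b < 0.5 darkens it."""
    a = img_bgr.astype(np.float64) / 255.0   # normalized face image channels
    b = mask_bgr.astype(np.float64) / 255.0  # normalized mask channels
    f = 2.0 * a * b + a * a * (1.0 - 2.0 * b)
    return np.clip(f * 255.0, 0, 255).astype(np.uint8)
```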
As shown in fig. 5, in another exemplary embodiment, before step 110, the face image processing method further includes the steps of:
step 210, converting the face image into a face gray scale map.
According to the above calculation formula for the fusion channel value of a target pixel in a single color channel, if the channel value of a pixel in some color channel of the face image is large, the value of a is close to 1, and no matter how the value of b is adjusted, the brightness-reduction effect on the corresponding pixel in the target face image is not obvious.
Therefore, for the highlight region in the face region of the face image, no matter how the channel values of the corresponding mask pixels in each color channel are adjusted during the fusion of the mask and the face image, the brightness of the highlight region cannot be effectively reduced in the finally obtained target face image.
To solve this problem, this embodiment eliminates the highlight region of the face region in advance and then performs the overall brightness-reduction process on the face image, thereby obtaining a better brightness-reduction effect. Accordingly, the face image needs to be converted into a face gray map first, so that the highlight region in the face region can be determined from the face gray map.
And 230, determining a target area with the gray value larger than a preset second gray threshold value in a face area of the face gray map, and determining a highlight area corresponding to the target area in the face image.
Firstly, face recognition can be performed on a face image in advance to obtain a face region in the face image, and after the face image is converted into a face gray level image, the face region in the face gray level image can be determined correspondingly.
The second gray level threshold is a preset minimum boundary value of the highlight region, for example, 220 may be used, and after determining that the gray level value in the face region of the face gray level map is greater than the target region of the second gray level threshold, the highlight region corresponding to the target region in the face image may be determined accordingly.
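As an illustration, a minimal sketch of this highlight-region detection, assuming the face region is available as a binary mask from a face-recognition step (the threshold value of 220 follows the example above; all names are illustrative):

```python
import cv2
import numpy as np

SECOND_GRAY_THRESHOLD = 220  # preset lower bound of the highlight region

def highlight_mask(img_bgr: np.ndarray, face_region: np.ndarray) -> np.ndarray:
    """Binary mask of the pixels inside the face region whose gray value
    exceeds the second gray threshold (the target/highlight region)."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)          # face gray map
    bright = ((gray > SECOND_GRAY_THRESHOLD) * 255).astype(np.uint8)
    return cv2.bitwise_and(bright, face_region)               # restrict to face
```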
Step 250 fills the highlight region with surrounding pixels of the highlight region.
For a highlight region in the face region, the pixels around it are lower in brightness than the highlight region, but not excessively low. Therefore, filling the highlight region with its surrounding pixels effectively reduces the brightness of the highlight region, while ensuring that the brightness of the filled highlight region does not transition unnaturally into the surrounding image areas.
In one exemplary embodiment, as shown in fig. 6, filling the highlight region with surrounding pixels of the highlight region may include the steps of:
step 2511, determining an isocenter line corresponding to a boundary of the highlight region;
at step 2513, surrounding pixels of the highlight region are transferred to the highlight region along the direction of the isocenter to fill the highlight region.
In this embodiment, the Navier-Stokes method (based on the equations that describe the momentum conservation of incompressible fluids in fluid mechanics) is used to fill the highlight region; it is essentially a gradient-transition method that fills the highlight region with its surrounding pixels.
An isophote is a curve connecting all points of equal illuminance on the illuminated surface, which in this embodiment is taken at the boundary of the highlight region.
In the Navier-Stokes method, surrounding pixel information is propagated into the region to be repaired (the highlight region in this embodiment) along the direction of the isophote, thereby filling the region to be repaired. Illustratively, by solving the Navier-Stokes equations over the whole region to be repaired, the image function of the repaired region is obtained; the detailed calculation process is not repeated here.
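OpenCV ships an implementation of this Navier-Stokes inpainting, so the filling can be sketched in one call, assuming img_bgr and the binary highlight mask from the sketch above (the inpaint radius of 5 is an illustrative choice):

```python
import cv2

# Non-zero pixels of 'highlight' mark the region to be repaired; pixel
# information is propagated inward along the isophote directions.
repaired = cv2.inpaint(img_bgr, highlight, inpaintRadius=5, flags=cv2.INPAINT_NS)
```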
In another exemplary embodiment, as shown in fig. 7, filling the highlight region with surrounding pixels of the highlight region may include the steps of:
step 2521, respectively taking each pixel in the highlight region as a central point, and determining a weight matrix corresponding to a pixel set positioned in a preset radius of each central point;
step 2523, calculating the pixel value of the center point according to the pixel value set corresponding to the pixel set and the weight matrix;
step 2525, padding the pixel values to the center point.
In this embodiment, surrounding pixels of the highlight region are filled into the highlight region by a gaussian blur method.
Each pixel in the highlight region is taken as a center point in turn, and the pixel set within a preset radius of each center point is determined; this preset radius is also called the blur radius of the center point, and the pixel set contains the pixel values of all pixels within the preset radius. The larger the preset radius, the more pixels the pixel set contains, and the better the blurring effect on the center point.
Because the pixels in an image are continuous, pixels closer to the center point are more closely related to it, so the weight corresponding to each pixel in the pixel set also needs to be determined, yielding the weight matrix corresponding to the pixel set. In one embodiment, the weights corresponding to the pixels in the pixel set are normally distributed based on the distance of each pixel from the center point.
Thus, the pixel value of the center point can be obtained by performing weighted sum operation on the pixel value of each pixel in the pixel set and the weight corresponding to each pixel. Filling of the highlight region is achieved by filling this pixel value to the corresponding center point.
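A minimal sketch of this Gaussian-blur-based filling under the same assumptions (the blur radius of 7 is illustrative; the kernel size must be odd, hence 2·radius + 1):

```python
import cv2
import numpy as np

def fill_with_gaussian_blur(img_bgr, highlight, radius=7):
    """Replace each highlight pixel by the weighted sum of the pixels
    within the blur radius, with normally distributed weights."""
    k = 2 * radius + 1
    blurred = cv2.GaussianBlur(img_bgr, (k, k), 0)   # sigma derived from kernel
    out = img_bgr.copy()
    out[highlight > 0] = blurred[highlight > 0]      # fill only the highlight region
    return out
```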
In another exemplary embodiment, as shown in fig. 8, filling the highlight region with surrounding pixels of the highlight region may include the following steps:
step 2531, determining a pixel to be repaired on the boundary of the highlight region;
step 2533, determining a pixel corresponding to the minimum pixel value square error as a sampling pixel matched with the pixel to be repaired by calculating the pixel value square error of the pixel to be repaired and each pixel in the face image;
step 2535, filling the pixel value of the sampling pixel into the pixel to be repaired, removing the pixel to be repaired from the boundary of the highlight region, and redefining the boundary of the highlight region;
in step 2537, the pixel value filling of the pixels to be repaired in the boundary of the highlight region and the updating of the boundary of the highlight region are iteratively performed until the filling of all pixels in the highlight region is completed.
In this embodiment, a sampling filling method is adopted, a certain point is selected on the boundary of the highlight region to determine a pixel to be repaired, and then a sampling pixel matched with the pixel to be repaired is searched in the face image to copy the sampling pixel to the pixel to be repaired, so that the pixel value filling of the region to be repaired is completed.
The pixel to be repaired may be any pixel on the boundary of the highlight region, or may be determined as the pixel to be repaired according to the priority corresponding to each pixel in the boundary.
And calculating the square error of the pixel value between the pixel to be repaired and each pixel in the face image, searching the pixel corresponding to the minimum square error of the pixel value in the face image, and determining the pixel corresponding to the minimum square error of the pixel value as a sampling pixel. Wherein the pixel value squared error may be the spatial distance between the two pixels in the corresponding color space.
After the pixel value of the pixel to be repaired is filled, the boundary of the highlight region is updated; that is, the pixel to be repaired is removed from the boundary of the highlight region to obtain the updated boundary. This repair process is repeated until the entire highlight region is repaired.
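A deliberately naive sketch of this sampling-based filling under the same assumptions; to keep it short, each pixel to be repaired is matched against candidate pixels using the mean of its already-known neighbors as the matching target (a simplification of the matching described above), no priority ordering is applied, and the highlight region is assumed to lie strictly inside the image:

```python
import cv2
import numpy as np

def fill_by_sampling(img_bgr, highlight):
    """Iteratively fill boundary pixels of the highlight region with the
    sampling pixel of minimum squared pixel-value error, then shrink the
    boundary, until the whole region is repaired."""
    out = img_bgr.astype(np.int64)
    mask = (highlight > 0).astype(np.uint8)
    src = out[mask == 0].reshape(-1, 3)               # candidate sampling pixels
    while mask.any():
        # boundary: unrepaired pixels with at least one known neighbor
        eroded = cv2.erode(mask, np.ones((3, 3), np.uint8))
        for y, x in np.argwhere((mask == 1) & (eroded == 0)):
            ys, xs = slice(max(y - 1, 0), y + 2), slice(max(x - 1, 0), x + 2)
            target = out[ys, xs][mask[ys, xs] == 0].mean(axis=0)
            d = ((src - target) ** 2).sum(axis=1)     # squared pixel-value error
            out[y, x] = src[d.argmin()]               # copy best-matching pixel
            mask[y, x] = 0                            # remove from the boundary
    return out.astype(np.uint8)
```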
Therefore, the brightness of the highlight region can be effectively reduced by filling the highlight region with the surrounding pixels of the highlight region, and meanwhile, the brightness of the highlight region after filling is ensured not to have the problem of unnatural transition with the surrounding image region.
In another exemplary embodiment, before converting the face image into the face gray map, face recognition is further performed on the face image to obtain a face region and a face feature region in the face image, where the face feature region includes at least an eye region, an eyebrow region, and a mouth region, and the face region in the face image corresponds to the face region in the face gray map.
Considering that if the highlight region contains a face feature region, filling the highlight region with its surrounding pixels easily changes the appearance of the portrait greatly and severely affects the processing effect of the face image, the face feature regions in the face image need to be protected.
Protecting the face feature areas means that, when determining the target area whose gray value is greater than the second gray threshold in the face region of the face gray map, the face feature areas in the face region of the face gray map are first located according to the face region and face feature areas recognized in the face image, and the target area is then determined only from the image area of the face region excluding those facial feature areas.
Therefore, the face feature area in the face image is protected, so that the influence of the face image processing on the portrait appearance can be reduced to the greatest extent.
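A minimal sketch of this protection step, reusing the highlight_mask sketch above and assuming the face-recognition step also yields a binary mask of the feature regions (eyes, eyebrows, mouth); names are illustrative:

```python
import cv2

def protected_target_mask(img_bgr, face_region, feature_regions):
    """Target area for filling: highlight pixels in the face region,
    with the protected facial feature regions excluded."""
    target = highlight_mask(img_bgr, face_region)   # from the earlier sketch
    return cv2.bitwise_and(target, cv2.bitwise_not(feature_regions))
```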
In one exemplary application scenario, the procedure for face image processing is as follows:
as shown in fig. 9, fig. 9 is a face image to be processed, in which the brightness of the forehead, nose wings, cheek, etc. of the face area is large, if the face area is directly subjected to the decoration processing such as whitening, skin abrasion, face changing, etc., the area with large brightness will very affect the decoration effect, so that the overall brightness dimming processing of the face image needs to be performed in advance.
For the face feature area in the face area, such as the eyebrow, eyes, mouth and other areas, the surrounding pixels are used for filling the corresponding areas, so that the appearance of the human image is greatly influenced, the processing effect of the face image is further influenced, and therefore the face feature area of the face image is required to be protected. That is, when the highlight region in the face region is processed later, the protected face feature region will not be processed. As shown in fig. 10, a black region other than the white region represents an image region other than the face region in the face image, and a black region in the white region represents a protected face feature region.
In the process of executing the fusion of the mask and the face image, the brightness of the highlight region in the face region still cannot be effectively reduced in the final obtained target face image no matter how the channel values of the mask pixels corresponding to the highlight region in the mask in each color channel are adjusted. Therefore, the highlight region in the face region needs to be repaired, and the surrounding pixels of the highlight region are used for filling the highlight region, so that the brightness of the highlight region is effectively reduced, and the influence of the highlight region on the subsequent overall brightness adjustment is avoided.
Next, an image area to be subjected to overall brightness adjustment in the face image needs to be determined, and illustratively, in a face gray scale image corresponding to the face image, the foreground pixel and the background pixel are distinguished according to a first gray scale threshold value, that is, the image area formed by the foreground pixels is used as the image area to be subjected to overall brightness reduction. As shown in fig. 11, after the distinction between the foreground pixel and the background pixel is performed on the face gray scale map, the white area is determined as the area to be adjusted.
Then, a mask corresponding to the face image needs to be generated, and the mask contains background mask pixels corresponding to background pixels in the face image and foreground mask pixels corresponding to foreground pixels in the face image, that is, each pixel in the mask corresponds to each pixel in the face image. Fig. 12 is a schematic diagram of a mask shown in an exemplary embodiment.
Fusing the foreground pixels in the face image with the foreground mask pixels in the mask reduces the brightness values of the foreground pixels, while fusing the background pixels in the face image with the background mask pixels leaves the brightness values of the background pixels unchanged, so the brightness of the face region is dimmed over the face image as a whole, meeting the requirements of the subsequent retouching of the face image.
FIG. 13 is a schematic view of the target face image obtained by processing the face image shown in FIG. 9 as described above. It can be seen that in the face region shown in FIG. 13, the brightness of the forehead, nose wings and cheek areas is obviously reduced, no unnatural transition with the surrounding image areas appears, and the appearance of the portrait is not changed, so the problem of unbalanced brightness in the face image is effectively solved.
It should also be mentioned that the brightness processing of a 2K-resolution face image in the embodiments of the present application can take less than 0.5 seconds, which is a short processing time.
In another exemplary application scenario, as shown in fig. 14, fig. 14A is a face image to be processed, fig. 14B is a mask generated based on fig. 14A, and fig. 14C is a target face image obtained by fusing fig. 14A and 14B.
In the application scene, a mask generated based on the face image corresponds to a face region in the face image, and the face region is covered by the mask so as to carry out overall brightness adjustment on the face region. As can be seen from fig. 14C, the brightness of the face region in the target face image is reduced as a whole, thereby facilitating the subsequent retouching process for the target face image.
Fig. 15 is a block diagram illustrating a face image processing apparatus according to an exemplary embodiment. As shown in fig. 15, in an exemplary embodiment, the apparatus includes a gray value acquisition module 310, a front background pixel determination module 330, a mask generation module 350, and an image fusion module 370.
The gray value obtaining module 310 is configured to determine a first gray threshold for distinguishing a background pixel from a foreground pixel in the face image according to gray values of pixels in the face image.
The foreground-background pixel determining module 330 is configured to determine pixels in the face image having a gray value smaller than the first gray threshold value as background pixels, and pixels having a gray value larger than the first gray threshold value as foreground pixels.
The mask generation module 350 is configured to generate a mask corresponding to the face image according to the background pixel and the foreground pixel.
The image fusion module 370 is configured to fuse the mask with the face image to obtain a target face image, where the brightness of the pixel corresponding to the background pixel is the same as the brightness of the corresponding background pixel, and the brightness of the pixel corresponding to the foreground pixel is lower than the brightness of the corresponding foreground pixel.
In another exemplary embodiment, the mask generation module 350 includes a blank mask generation unit, a mask pixel value determination unit, and a mask pixel value filling unit.
The blank mask generating unit is used for generating a blank mask based on the face image, wherein the blank mask contains background mask pixels corresponding to background pixels and foreground mask pixels corresponding to foreground pixels.
The mask pixel value determining unit is used for determining the channel value of the background mask pixel in each color channel and determining the channel value of the foreground mask pixel in each color channel according to the channel value of the foreground pixel in each color channel.
And the mask pixel value filling unit is used for correspondingly filling channel values of the background mask pixels and the foreground mask pixels in each color channel into the blank mask to obtain a mask corresponding to the face image.
In another exemplary embodiment, the channel value of the background mask pixel in each color channel is half of the highest order channel value in the color mode corresponding to the face image, and the channel value of the foreground mask pixel in each color channel is lower than half of the highest order channel value.
In another exemplary embodiment, the mask pixel value determining unit includes a foreground mask pixel determining subunit, configured to calculate the channel value of the foreground mask pixel in each color channel according to the channel value of the corresponding foreground pixel in each color channel, the gray value of the foreground pixel corresponding to the foreground mask pixel, the highest-order channel value and the first gray threshold.
In another exemplary embodiment, the image fusion module 370 includes a normalization processing unit, a fusion calculation unit, and a channel value update unit.
The normalization processing unit is used for respectively acquiring normalized channel values of each pixel in the face image and the mask in different color channels, wherein the normalized channel values are the ratio of the channel value to the highest-order channel value.
The fusion calculation unit is used for fusion calculating the normalized channel value of each color channel aiming at the target pixels which are respectively positioned at the same position in the face image and the mask, and obtaining the fusion channel value of the target pixels in each color channel.
The channel value updating unit is used for updating the channel value of each color channel of the target pixel in the face image according to the fusion channel value of the target pixel in each color channel, and obtaining the target face image.
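The fusion rule itself is not spelled out in this excerpt; doubling the product of the normalized face and mask values is one choice consistent with the stated behavior, since a background mask value of half the highest-order value (0.5 after normalization) then leaves brightness unchanged while smaller foreground mask values darken. A minimal sketch under that assumption:

```python
import numpy as np

def fuse(face: np.ndarray, mask: np.ndarray, max_channel: int = 255) -> np.ndarray:
    """Per-channel fusion on normalized channel values. The 2 * f * m
    blend is an assumption: a normalized mask value of 0.5 preserves the
    pixel, while smaller values darken it."""
    f = face.astype(np.float64) / max_channel      # normalized channel values
    m = mask.astype(np.float64) / max_channel
    fused = np.clip(2.0 * f * m, 0.0, 1.0)         # fusion channel values
    return (fused * max_channel).astype(np.uint8)  # updated target face image
```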
In another exemplary embodiment, the gray value acquisition module 310 includes a pixel discriminating unit, an information calculating unit, an inter-class variance calculating unit, and a traversing unit.
The pixel distinguishing unit is used for distinguishing background pixels and foreground pixels in the face image according to a preset gray threshold value.
The information calculating unit is used for calculating the background pixel ratio, the foreground pixel ratio, the average gray value of the background pixels and the average gray value of the foreground pixels corresponding to the face image, according to the numbers of background pixels and foreground pixels contained in the face image and the gray values of the background pixels and the foreground pixels.
The inter-class variance calculating unit is used for calculating the square value of the difference between the average gray value of the background pixels and the average gray value of the foreground pixels, and calculating the product of the foreground pixel ratio, the background pixel ratio and the square value to obtain the inter-class variance of the face image.
The traversing unit is used for traversing the gray threshold, and determining the gray threshold corresponding to the maximum inter-class variance as the first gray threshold.
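This threshold search is the classical Otsu procedure. A minimal NumPy sketch for 8-bit gray values (the function name is illustrative):

```python
import numpy as np

def first_gray_threshold(gray: np.ndarray) -> int:
    """Traverse candidate thresholds and return the one maximizing the
    inter-class variance between background and foreground pixels."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()                          # gray-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w_bg, w_fg = prob[:t].sum(), prob[t:].sum()   # pixel ratios
        if w_bg == 0.0 or w_fg == 0.0:
            continue
        mu_bg = (np.arange(t) * prob[:t]).sum() / w_bg        # mean background gray
        mu_fg = (np.arange(t, 256) * prob[t:]).sum() / w_fg   # mean foreground gray
        var = w_bg * w_fg * (mu_bg - mu_fg) ** 2              # inter-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```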
In another exemplary embodiment, the face image processing apparatus further includes a gray map conversion module, a highlight region determination module, and a highlight region filling module.
The gray map conversion module is used for converting the face image into a face gray map.
The highlight region determining module is used for determining a target region with a gray value larger than a preset second gray threshold value in a face region of the face gray map and determining a highlight region corresponding to the target region in the face image.
The highlight region filling module is used for filling the highlight region according to surrounding pixels of the highlight region.
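As a sketch of the detection step, the following uses OpenCV to build the gray map and mark pixels above the second gray threshold; the threshold value 220 and the function name are illustrative assumptions:

```python
import cv2
import numpy as np

def highlight_mask(face_bgr: np.ndarray, t2: int = 220) -> np.ndarray:
    """Convert the face image to a gray map and mark pixels whose gray
    value exceeds the preset second gray threshold t2 as the highlight
    region (255 inside the region, 0 elsewhere)."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    return (gray > t2).astype(np.uint8) * 255
```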
In another exemplary embodiment, the highlight region filling module includes an isophote determining unit and a surrounding pixel transmission unit.
The isophote determining unit is used for determining the isophotes corresponding to the boundary of the highlight region.
The surrounding pixel transmission unit is used for transmitting surrounding pixels of the highlight region into the highlight region along the isophote directions, so as to fill the highlight region.
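Propagating surrounding pixels along isophote directions is the behavior of fast-marching inpainting; OpenCV's cv2.inpaint with the INPAINT_TELEA flag is one readily available realization (the 3-pixel radius is an illustrative choice):

```python
import cv2
import numpy as np

def fill_by_isophote(face_bgr: np.ndarray, highlight: np.ndarray) -> np.ndarray:
    """Fill the highlight region by transmitting surrounding pixels into
    it along isophote directions (Telea fast-marching inpainting).
    `highlight` is an 8-bit single-channel mask, non-zero inside the region."""
    return cv2.inpaint(face_bgr, highlight, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)
```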
In another exemplary embodiment, the highlight region filling module includes a pixel set acquisition unit, a blurred pixel value calculation unit, and a blurred pixel value filling unit.
The pixel set acquisition unit is used for taking each pixel in the highlight region in turn as a center point, and determining the set of pixels located within a preset radius of each center point together with the weight matrix corresponding to the pixel set.
The blurred pixel value calculation unit is used for calculating the blurred pixel value of the center point according to the pixel values of the pixel set and the weight matrix.
The blurred pixel value filling unit is used for filling the blurred pixel value into the center point.
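A Gaussian kernel is a natural (assumed) choice for the weight matrix. The sketch below blurs the image and writes the blurred values back only inside the highlight region; a strict implementation would weight only the neighbors within each center point's radius, so this is an approximation:

```python
import cv2
import numpy as np

def fill_by_weighted_blur(face_bgr: np.ndarray, highlight: np.ndarray,
                          radius: int = 7, sigma: float = 3.0) -> np.ndarray:
    """Replace each highlight pixel with a weighted average of the pixels
    within a preset radius, using a Gaussian weight matrix (assumed)."""
    k = 2 * radius + 1                            # kernel spans the preset radius
    blurred = cv2.GaussianBlur(face_bgr, (k, k), sigma)
    out = face_bgr.copy()
    region = highlight > 0
    out[region] = blurred[region]                 # fill only the highlight pixels
    return out
```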
In another exemplary embodiment, the highlight region filling module includes a to-be-repaired pixel determining unit, a sampling pixel determining unit, a boundary updating unit, and an iterative executing unit.
The pixel to be repaired determining unit is used for determining the pixel to be repaired on the boundary of the highlight region.
The sampling pixel determining unit is used for calculating the squared pixel-value error between the pixel to be repaired and each pixel in the face image, and determining the pixel corresponding to the minimum squared error as the sampling pixel matched with the pixel to be repaired.
The boundary updating unit is used for filling the pixel value of the sampling pixel into the pixel to be repaired, removing the pixel to be repaired from the boundary and redefining the boundary of the highlight region.
The iteration execution unit is used for iteratively executing pixel value filling of pixels to be repaired in the boundary of the highlight region and updating of the boundary of the highlight region until all pixels in the highlight region are filled.
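This is an exemplar-style repair: pixels on the boundary are matched by minimum squared pixel-value error to sampling pixels elsewhere in the image, filled, and the boundary is shrunk until the region is closed. The sketch below simplifies the matching by comparing the mean of each boundary pixel's known neighbors against all pixels outside the highlight region, and assumes a 3-channel image:

```python
import numpy as np

def fill_by_sampling(face: np.ndarray, highlight: np.ndarray) -> np.ndarray:
    """Iteratively fill the highlight region from its boundary inward by
    copying, for each pixel to be repaired, the sampling pixel with the
    minimum squared pixel-value error (simplified matching criterion)."""
    out = face.astype(np.float64)
    unknown = highlight > 0
    samples = out[~unknown]                     # candidate sampling pixels, (N, 3)
    if samples.size == 0:
        return face                             # nothing known to sample from
    h, w = unknown.shape
    while unknown.any():
        ys, xs = np.where(unknown)
        for y, x in zip(ys, xs):
            nbrs = [(y + dy, x + dx)
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < h and 0 <= x + dx < w
                    and not unknown[y + dy, x + dx]]
            if not nbrs:                        # interior pixel, handled in a later pass
                continue
            target = np.mean([out[p] for p in nbrs], axis=0)
            d2 = ((samples - target) ** 2).sum(axis=1)   # squared pixel-value errors
            out[y, x] = samples[np.argmin(d2)]           # copy best-matching sample
            unknown[y, x] = False                        # boundary shrinks inward
    return out.astype(face.dtype)
```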
In another exemplary embodiment, the face image processing apparatus further includes a face recognition module, configured to perform face recognition on the face image to obtain a face region and a face feature region in the face image, where the face feature region includes at least an eye region, an eyebrow region, and a mouth region, and the face region in the face image corresponds to the face region in the face gray map.
In another exemplary embodiment, the highlight region determining module includes a face feature region determining unit and a target region determining unit.
The face feature area determining unit is used for determining the corresponding face feature area in the face region of the face gray map according to the face region and the face feature region in the face image.
The target area determining unit is used for determining, within the face region of the face gray map and excluding the face feature area, a target area whose gray values are greater than the second gray threshold.
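A sketch of this exclusion step, assuming the face region and feature regions arrive as (x, y, w, h) rectangles from the upstream face recognition (the names and the rectangle convention are assumptions):

```python
import numpy as np

def target_region(gray: np.ndarray, face_box, feature_boxes, t2: int) -> np.ndarray:
    """Threshold only inside the face region of the gray map, then clear
    the facial feature rectangles (eyes, eyebrows, mouth) so that
    naturally bright features are not mistaken for highlights."""
    mask = np.zeros(gray.shape, dtype=bool)
    x, y, w, h = face_box
    mask[y:y + h, x:x + w] = gray[y:y + h, x:x + w] > t2
    for fx, fy, fw, fh in feature_boxes:        # drop eye/eyebrow/mouth areas
        mask[fy:fy + fh, fx:fx + fw] = False
    return mask
```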
It should be noted that the apparatus provided in the foregoing embodiment and the method provided in the foregoing embodiments belong to the same concept; the specific manner in which each module and unit performs its operations has been described in detail in the method embodiments and is not repeated here.
Another aspect of the present application also provides a face image processing apparatus, including a processor and a memory, wherein the memory stores computer readable instructions which, when executed by the processor, implement the face image processing method described above.
Fig. 16 is a schematic diagram showing a structure of a face image processing apparatus according to an exemplary embodiment.
It should be noted that the face image processing apparatus is merely an example adapted to the present application and should not be construed as limiting the scope of use of the present application in any way. Nor should the face image processing apparatus be interpreted as needing to rely on, or necessarily include, one or more of the components of the exemplary face image processing apparatus shown in fig. 16.
As shown in fig. 16, in an exemplary embodiment, the face image processing apparatus includes a processing component 401, a memory 402, a power supply component 403, a multimedia component 404, an audio component 405, a sensor component 407, and a communication component 408. Not all of these components are required; the face image processing apparatus may add other components or omit some of them according to its own functional requirements, which is not limited in this embodiment.
The processing component 401 generally controls the overall operations of the face image processing apparatus, such as operations associated with display, data communication, and log data processing. The processing component 401 may include one or more processors 409 to execute instructions so as to perform all or part of the steps of the operations described above. Further, the processing component 401 may include one or more modules to facilitate interaction between the processing component 401 and other components. For example, the processing component 401 may include a multimedia module to facilitate interaction between the multimedia component 404 and the processing component 401.
The memory 402 is configured to store various types of data to support operation of the face image processing apparatus; examples of such data include instructions for any application or method operating on the apparatus. The memory 402 stores one or more modules configured to be executed by the one or more processors 409 to perform all or part of the steps of the face image processing method described in the above embodiments.
The power supply component 403 supplies power to the various components of the face image processing apparatus. The power supply component 403 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the face image processing apparatus.
The multimedia component 404 includes a screen that provides an output interface between the face image processing apparatus and the user. In some embodiments, the screen may include a TP (Touch Panel) and an LCD (Liquid Crystal Display). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation.
The audio component 405 is configured to output and/or input audio signals. For example, the audio component 405 includes a microphone configured to receive external audio signals when the face image processing apparatus is in an operational mode, such as a call mode, a recording mode, or a speech recognition mode. In some embodiments, the audio component 405 also includes a speaker for outputting audio signals.
The sensor component 407 includes one or more sensors for providing status assessments of various aspects of the face image processing apparatus. For example, the sensor component 407 may detect the on/off state of the face image processing apparatus, and may also detect temperature changes of the face image processing apparatus.
The communication component 408 is configured to facilitate wired or wireless communication between the face image processing apparatus and other devices. The face image processing apparatus may access a wireless network based on a communication standard, such as Wi-Fi (Wireless Fidelity).
It will be appreciated that the configuration shown in fig. 16 is merely illustrative, and the face image processing apparatus may include more or fewer components than those shown in fig. 16, or components different from those shown in fig. 16. Each of the components shown in fig. 16 may be implemented in hardware, software, or a combination of the two.
Another aspect of the present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a face image processing method as described above. The computer-readable storage medium may be contained in the face image processing apparatus described in the above embodiment or may exist alone without being incorporated in the face image processing apparatus.
The foregoing is merely a preferred exemplary embodiment of the present application and is not intended to limit the embodiments of the present application. Those skilled in the art may make various changes and modifications according to the main concept and spirit of the present application, and the protection scope of the present application shall therefore be subject to the protection scope of the claims.

Claims (13)

1. A face image processing method, comprising:
determining a first gray threshold value for distinguishing a background pixel from a foreground pixel in a face image according to gray values of pixels in the face image;
determining pixels with gray values smaller than the first gray threshold value in the face image as background pixels, and determining pixels with gray values larger than the first gray threshold value as foreground pixels;
generating a mask corresponding to the face image according to the background pixels and the foreground pixels;
fusing the mask and the face image to obtain a target face image;
in the target face image, the brightness of the pixel point corresponding to the background pixel is the same as the brightness of the background pixel, and the brightness of the pixel point corresponding to the foreground pixel is lower than the brightness of the foreground pixel;
the generating the mask corresponding to the face image according to the background pixel and the foreground pixel includes:
generating a blank mask based on the face image, wherein the blank mask contains background mask pixels corresponding to the background pixels and foreground mask pixels corresponding to the foreground pixels;
determining channel values of the background mask pixels and the foreground mask pixels in each color channel, wherein the channel value of the background mask pixels in each color channel is half of the highest-order channel value under the color mode corresponding to the face image, and the channel value of the foreground mask pixels in each color channel is lower than half of the highest-order channel value;
and filling channel values of the background mask pixels and the foreground mask pixels in each color channel into the blank mask correspondingly to obtain a mask corresponding to the face image.
2. The method of claim 1, wherein determining channel values for the foreground mask pixels for each color channel comprises:
and calculating the channel value of the foreground mask pixel in each color channel according to the channel value of the foreground pixel in each color channel, the gray value of the foreground pixel corresponding to the foreground mask pixel, the highest-order channel value and the first gray threshold value.
3. The method according to claim 1 or 2, wherein the fusing the mask with the face image to obtain a target face image comprises:
respectively acquiring normalized channel values of each pixel in the face image and the mask in different color channels, wherein the normalized channel value is the ratio of the channel value to the highest-order channel value;
for target pixels located at the same position in the face image and the mask respectively, performing fusion calculation on the normalized channel values of each color channel to obtain the fusion channel value of the target pixels in each color channel;
updating the channel value of each color channel of the target pixel in the face image according to the fusion channel value of each color channel of the target pixel, and obtaining the target face image.
4. The method of claim 1, wherein determining a first gray threshold for distinguishing between background pixels and foreground pixels in the face image based on gray values of respective pixels in the face image comprises:
distinguishing background pixels and foreground pixels in the face image according to a preset gray threshold;
according to the numbers of background pixels and foreground pixels contained in the face image and the gray values of the background pixels and the foreground pixels, calculating the background pixel ratio, the foreground pixel ratio, the average gray value of the background pixels and the average gray value of the foreground pixels corresponding to the face image;
calculating the square value of the difference between the average gray value of the background pixels and the average gray value of the foreground pixels, and calculating the product of the foreground pixel ratio, the background pixel ratio and the square value to obtain the inter-class variance of the face image;
and traversing the gray threshold, and determining the gray threshold corresponding to the maximum inter-class variance as the first gray threshold.
5. The method of claim 1, wherein prior to determining a first gray threshold value for distinguishing a background pixel from a foreground pixel in a face image based on gray values of individual pixels in the face image, the method further comprises:
converting the face image into a face gray map;
determining a target area with a gray value larger than a preset second gray threshold value in a face area of the face gray map, and determining a highlight area corresponding to the target area in the face image;
the highlight region is filled with surrounding pixels of the highlight region.
6. The method of claim 5, wherein the filling the highlight region with surrounding pixels of the highlight region comprises:
determining the isophotes corresponding to the boundary of the highlight region;
and transmitting surrounding pixels of the highlight region into the highlight region along the isophote directions, so as to fill the highlight region.
7. The method of claim 5, wherein the filling the highlight region with surrounding pixels of the highlight region comprises:
respectively taking each pixel in the highlight region as a central point, and determining a pixel set positioned in a preset radius of each central point and a weight matrix corresponding to the pixel set;
calculating the pixel value of the central point according to the pixel value set corresponding to the pixel set and the weight matrix;
and filling the pixel value into the center point.
8. The method of claim 5, wherein the filling the highlight region with surrounding pixels of the highlight region comprises:
determining pixels to be repaired on the boundary of the highlight region;
calculating the squared pixel-value error between the pixel to be repaired and each pixel in the face image, and determining the pixel corresponding to the minimum squared error as the sampling pixel matched with the pixel to be repaired;
filling the pixel value of the sampling pixel into the pixel to be repaired, removing the pixel to be repaired from the boundary, and redefining the boundary of the highlight region;
and iteratively executing the pixel value filling of pixels to be repaired on the boundary of the highlight region and the updating of the boundary of the highlight region, until the filling of all the pixels in the highlight region is completed.
9. The method of claim 5, wherein prior to converting the face image to a face gray scale map, the method further comprises:
and carrying out face recognition on the face image to obtain a face region and a face feature region in the face image, wherein the face feature region at least comprises an eye region, an eyebrow region and a mouth region, and the face region in the face image corresponds to the face region in the face gray map.
10. The method according to claim 9, wherein determining a target area having a gray value greater than a preset second gray threshold value in the face area of the face gray map includes:
according to the face region and the face feature region in the face image, determining the corresponding face feature region in the face region of the face gray map;
and determining, for the image area of the face region of the face gray map other than the face feature region, a target area with a gray value greater than the second gray threshold.
11. A face image processing apparatus, comprising:
the gray value acquisition module is used for determining a first gray threshold value for distinguishing a background pixel from a foreground pixel in the face image according to the gray value of each pixel in the face image;
a foreground-background pixel determining module, configured to determine pixels in the face image having a gray value smaller than the first gray threshold as the background pixels, and pixels having a gray value larger than the first gray threshold as the foreground pixels;
the mask generation module is used for generating a mask corresponding to the face image according to the background pixels and the foreground pixels;
the image fusion module is used for fusing the mask and the face image to obtain a target face image; in the target face image, the brightness of the pixel point corresponding to the background pixel is the same as the brightness of the background pixel, and the brightness of the pixel point corresponding to the foreground pixel is lower than the brightness of the foreground pixel;
the mask generating module is further configured to perform the following steps:
generating a blank mask based on the face image, wherein the blank mask contains background mask pixels corresponding to the background pixels and foreground mask pixels corresponding to the foreground pixels;
determining channel values of the background mask pixels and the foreground mask pixels in each color channel, wherein the channel value of the background mask pixels in each color channel is half of the highest-order channel value under the color mode corresponding to the face image, and the channel value of the foreground mask pixels in each color channel is lower than half of the highest-order channel value;
and filling channel values of the background mask pixels and the foreground mask pixels in each color channel into the blank mask correspondingly to obtain a mask corresponding to the face image.
12. A face image processing device, comprising:
a memory storing computer readable instructions;
a processor that reads the computer readable instructions stored in the memory to perform the method of any one of claims 1-10.
13. A computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor of a computer, cause the computer to perform the method of any of claims 1-10.
CN202010073356.3A 2020-01-21 2020-01-21 Face image processing method, device, equipment and computer readable storage medium Active CN111275648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010073356.3A CN111275648B (en) 2020-01-21 2020-01-21 Face image processing method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111275648A CN111275648A (en) 2020-06-12
CN111275648B (en) 2024-02-09

Family

ID=71001178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010073356.3A Active CN111275648B (en) 2020-01-21 2020-01-21 Face image processing method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111275648B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822806B (en) * 2020-06-19 2023-10-03 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN112053389A (en) * 2020-07-28 2020-12-08 北京迈格威科技有限公司 Portrait processing method and device, electronic equipment and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7970206B2 (en) * 2006-12-13 2011-06-28 Adobe Systems Incorporated Method and system for dynamic, luminance-based color contrasting in a region of interest in a graphic image

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102013006A (en) * 2009-09-07 2011-04-13 泉州市铁通电子设备有限公司 Method for automatically detecting and identifying face on the basis of backlight environment
CN108875759A (en) * 2017-05-10 2018-11-23 华为技术有限公司 A kind of image processing method, device and server
CN107301405A (en) * 2017-07-04 2017-10-27 上海应用技术大学 Method for traffic sign detection under natural scene
CN107454315A (en) * 2017-07-10 2017-12-08 广东欧珀移动通信有限公司 The human face region treating method and apparatus of backlight scene
CN107194900A (en) * 2017-07-27 2017-09-22 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and mobile terminal
CN107451969A (en) * 2017-07-27 2017-12-08 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN107633485A (en) * 2017-08-07 2018-01-26 百度在线网络技术(北京)有限公司 Face's luminance regulating method, device, equipment and storage medium
CN107845080A (en) * 2017-11-24 2018-03-27 信雅达系统工程股份有限公司 Card image enhancement method
CN108898546A (en) * 2018-06-15 2018-11-27 北京小米移动软件有限公司 Face image processing process, device and equipment, readable storage medium storing program for executing
CN109191410A (en) * 2018-08-06 2019-01-11 腾讯科技(深圳)有限公司 A kind of facial image fusion method, device and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Intensity-based gain adaptive unsharp masking for image contrast enhancement; N. M. Kwok et al.; 2012 5th International Congress on Image and Signal Processing; pp. 529-533 *
Research on illumination compensation methods in face recognition and FPGA implementation; Xu Feng; China Master's Theses Full-text Database (Information Science and Technology); I138-1859 *
Face segmentation technology in color images with complex backgrounds; Guo Hongjian et al.; Computer Engineering and Applications (No. 35); pp. 73-76 *

Also Published As

Publication number Publication date
CN111275648A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
US11948282B2 (en) Image processing apparatus, image processing method, and storage medium for lighting processing on image using model data
CN111369644A (en) Face image makeup trial processing method and device, computer equipment and storage medium
JP2020530920A (en) Image lighting methods, devices, electronics and storage media
JP6576083B2 (en) Image processing apparatus, image processing method, and program
EP3772038A1 (en) Augmented reality display method of simulated lip makeup
CN113610723B (en) Image processing method and related device
US10565741B2 (en) System and method for light field correction of colored surfaces in an image
CN111275648B (en) Face image processing method, device, equipment and computer readable storage medium
CN109919866A (en) Image processing method, device, medium and electronic equipment
CN112950499A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110677557B (en) Image processing method, image processing device, storage medium and electronic equipment
CN113240760A (en) Image processing method and device, computer equipment and storage medium
CN111127367A (en) Method, device and system for processing face image
CN114359021A (en) Processing method and device for rendered picture, electronic equipment and medium
CN112489144B (en) Image processing method, image processing device, terminal device and storage medium
JP6896811B2 (en) Image processing equipment, image processing methods, and programs
CN116055895B (en) Image processing method and device, chip system and storage medium
US20170163852A1 (en) Method and electronic device for dynamically adjusting gamma parameter
CN114005168A (en) Physical world confrontation sample generation method and device, electronic equipment and storage medium
WO2021069282A1 (en) Perceptually improved color display in image sequences on physical displays
CN113703881A (en) Display method, display device and storage medium
CN111866407A (en) Image processing method and device based on motion digital camera
CN112740264A (en) Design for processing infrared images
CN114565506B (en) Image color migration method, device, equipment and storage medium
KR102563621B1 (en) Artificial intelligence-based tooth shade diagnostic system and operation method therefor

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40024356
Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant