CN111275648A - Face image processing method, device and equipment and computer readable storage medium


Info

Publication number
CN111275648A
Authority
CN
China
Prior art keywords
pixel
face image
pixels
value
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010073356.3A
Other languages
Chinese (zh)
Other versions
CN111275648B (en)
Inventor
田野
王志斌
王梦娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010073356.3A
Publication of CN111275648A
Application granted
Publication of CN111275648B
Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application discloses a face image processing method, apparatus, device and computer-readable storage medium. The method comprises the following steps: determining a first gray threshold for distinguishing background pixels from foreground pixels in a face image according to the gray value of each pixel in the face image; determining pixels in the face image with gray values smaller than the first gray threshold as the background pixels, and pixels with gray values larger than the first gray threshold as the foreground pixels; generating a mask corresponding to the face image according to the background pixels and the foreground pixels; and fusing the mask with the face image to obtain a target face image. In the target face image, the brightness of each pixel corresponding to a background pixel is the same as that of the background pixel, and the brightness of each pixel corresponding to a foreground pixel is lower than that of the foreground pixel. The method and apparatus can remove the influence of uneven illumination across the whole face image.

Description

Face image processing method, device and equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for processing a face image.
Background
With the continuous development of intelligent terminal technology, people can use intelligent terminals to shoot anytime and anywhere. Users increasingly expect better shooting results, especially for portraits such as selfies, and many applications dedicated to portrait processing have therefore emerged.
However, during portrait shooting, uneven lighting easily causes uneven brightness across the face in the captured image, and highlight areas on the face strongly affect subsequent processing of the face, so a good portrait effect cannot be obtained.
To solve this problem, prior-art implementations determine an image area whose brightness exceeds a preset brightness threshold as a highlight area and then lower the brightness of that area to facilitate subsequent processing. However, after the brightness of the highlight area is lowered, the transition between the image area corresponding to the highlight area and the surrounding image area tends to be unnatural.
The prior art therefore still fails to obtain a good face image effect when shooting illumination is uneven.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present application provide a method, an apparatus, a device, and a computer-readable storage medium for processing a face image, so that the problem of unnatural transition of an image region does not occur in the face image processing performed according to the embodiments of the present application, and the processing effect is better.
The technical solutions adopted by the present application are as follows:
a face image processing method comprises the following steps: determining a first gray threshold value for distinguishing background pixels and foreground pixels in a face image according to the gray value of each pixel in the face image; determining pixels with gray values smaller than the first gray threshold value in the face image as the background pixels, and determining pixels with gray values larger than the first gray value as the foreground pixels; generating a mask corresponding to the face image according to the background pixels and the foreground pixels; fusing the mask and the face image to obtain a target face image; in the target face image, the brightness of the pixel point corresponding to the background pixel is the same as that of the background pixel, and the brightness of the pixel point corresponding to the foreground pixel is lower than that of the foreground pixel.
A face image processing apparatus comprises: a gray value acquisition module, used for determining a first gray threshold for distinguishing background pixels and foreground pixels in the face image according to the gray value of each pixel in the face image; a foreground-background pixel determination module, configured to determine, as the background pixels, pixels in the face image whose gray value is smaller than the first gray threshold, and to determine, as the foreground pixels, pixels whose gray value is larger than the first gray threshold; a mask generating module, used for generating a mask corresponding to the face image according to the background pixels and the foreground pixels; and an image fusion module, used for fusing the mask and the face image to obtain a target face image; in the target face image, the brightness of the pixel point corresponding to the background pixel is the same as that of the background pixel, and the brightness of the pixel point corresponding to the foreground pixel is lower than that of the foreground pixel.
A facial image processing apparatus comprising a processor and a memory, the memory having stored thereon computer readable instructions which, when executed by the processor, implement a facial image processing method as described above.
A computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to execute the face image processing method as described above.
In the above technical solution, the first gray threshold is determined according to the gray value of each pixel in the face image, so that the distribution of the foreground pixels distinguished according to the first gray threshold reflects the overall distribution of the brighter and darker image areas in the face image. In the target face image obtained by fusing the mask and the face image, the brightness of the pixel points corresponding to the background pixels is the same as that of the background pixels, and the brightness of the pixel points corresponding to the foreground pixels is lower than that of the foreground pixels, so the brighter areas are dimmed as a whole.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a flow diagram illustrating a method of facial image processing according to an exemplary embodiment;
FIG. 2 is a flow chart of step 110 in one exemplary embodiment of the embodiment shown in FIG. 1;
FIG. 3 is a flow chart of step 150 in one exemplary embodiment of the embodiment shown in FIG. 1;
FIG. 4 is a flow chart of an exemplary embodiment of step 170 in the embodiment of FIG. 1;
FIG. 5 is a flow chart illustrating a method of facial image processing according to another exemplary embodiment;
FIG. 6 is a flow chart of one embodiment of step 250 in the embodiment shown in FIG. 5;
FIG. 7 is a flow chart of step 250 in another embodiment of the embodiment of FIG. 5;
FIG. 8 is a flow chart of step 250 in another embodiment of the embodiment of FIG. 5;
FIG. 9 is a schematic diagram illustrating a face image according to an exemplary embodiment;
FIG. 10 is a schematic diagram of face feature region protection for the face image shown in FIG. 9;
FIG. 11 is a schematic diagram of the face image of FIG. 9 illustrating the differentiation between background pixels and foreground pixels;
FIG. 12 is a schematic diagram of an exemplary mask generated based on the face image shown in FIG. 9;
FIG. 13 is a schematic diagram of a target face image obtained by processing the face image shown in FIG. 9;
FIG. 14 is a schematic illustration of the effect of facial image processing shown in accordance with an exemplary embodiment;
FIG. 15 is a block diagram illustrating a face image processing apparatus according to an exemplary embodiment;
FIG. 16 is a block diagram illustrating a face image processing apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
As mentioned above, in the process of portrait shooting, the problem of uneven brightness of the face part in the shot image is easily caused by uneven light, which greatly affects the subsequent processing of the face part, and thus a better portrait processing effect cannot be obtained.
In the prior art, only an image area of a face image with a brightness higher than a preset brightness threshold is simply determined as a highlight area, and then the brightness of the highlight area is adjusted to be low, so as to obtain the face image with the adjusted brightness, thereby facilitating subsequent processing of the face image. However, when the brightness of the highlight region is adjusted to be low, the transition between the image region corresponding to the highlight region and the surrounding image region tends to be unnatural, and a good human image processing effect cannot be obtained.
Based on this, in order to solve the technical problem that a better face image effect cannot be obtained due to uneven shooting illumination, embodiments of the present application respectively provide a face image processing method, a face image processing apparatus, a face image processing device, and a computer-readable storage medium, by processing a face image, the luminance of the face image can be reduced on the whole, the problem that the transition of an image region in the face image is unnatural is avoided, and the subsequent processing of the face image after luminance adjustment can obtain a better effect.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for processing a face image according to an exemplary embodiment. As shown in fig. 1, in an exemplary embodiment, the face image processing method at least includes the following steps:
step 110, determining a first gray threshold for distinguishing background pixels and foreground pixels in the face image according to the gray value of each pixel in the face image.
In the present embodiment, dimming only the highlight area of a face image easily makes the transition between the image area corresponding to the highlight area and the surrounding image area unnatural. The brightness of the brighter image areas in the face image is therefore adjusted as a whole, which avoids the unnatural-transition problem; to do so, the brighter image areas in the face image must first be determined.
The foreground pixels of the face image refer to pixels with higher brightness in the face image, and the background pixels refer to pixels with lower brightness in the face image, so that an image area formed by the foreground pixels is an image area with higher brightness in the face image.
The gray value of each pixel in the face image reflects the brightness distribution of each pixel in the face image, so that the first gray threshold value can be determined according to the face image, and the background pixel and the foreground pixel in the face image can be distinguished according to the first gray threshold value.
Step 130, determining the pixels with the gray values smaller than the first gray threshold value in the face image as background pixels, and determining the pixels with the gray values larger than the first gray threshold value as foreground pixels.
As described above, the foreground pixels are the pixels with higher brightness in the face image and the background pixels are the pixels with lower brightness. Accordingly, pixels in the face image whose gray values are smaller than the first gray threshold are determined as the background pixels, and pixels whose gray values are larger than the first gray threshold are determined as the foreground pixels.
In this way, the pixels in the face image are divided into darker background pixels and brighter foreground pixels. Compared with the highlight region determined directly in the prior art, the image region composed of foreground pixels in this embodiment includes not only the highlight region of the face image but also all other brighter image regions, for example the regions surrounding the highlight region. The image region composed of foreground pixels therefore reflects the brighter parts of the face image as a whole.
Therefore, in the subsequent face image processing, the brightness of the face image can be adjusted on the basis of the determined background and foreground pixels: the highlight area in the face image is removed while the other brighter image areas are dimmed, and unnatural transitions between the processed highlight area and the surrounding image areas are effectively avoided.
And 150, generating a mask corresponding to the face image according to the background pixels and the foreground pixels.
The mask is a specific image used for partially covering an image to be processed, and in the field of digital image processing, the mask is represented as a two-dimensional matrix array, and the two-dimensional matrix array corresponds to a pixel value corresponding to each pixel in the specific image.
In this embodiment, the generated mask is used to cover all or part of the face image, for example, only the face region in the face image may be covered.
The mask comprises background mask pixels corresponding to the background pixels and foreground mask pixels corresponding to the foreground pixels, wherein the background mask pixels are used for covering the corresponding background pixels in the face image, and the foreground mask pixels are used for covering the corresponding foreground pixels in the face image.
Step 170, the mask is fused with the face image to obtain a target face image, in which the brightness of the pixel point corresponding to the background pixel is the same as the brightness of the corresponding background pixel, and the brightness of the pixel point corresponding to the foreground pixel is lower than the brightness of the corresponding foreground pixel.
The mask is fused with the face image, namely the mask is covered on the face image, so that the target face image is obtained. Therefore, each pixel point in the target face image corresponds to each pixel point with the same position in the face image.
For the pixels corresponding to the same image position in the mask and the face image respectively, the fusion is to fuse the pixel values of the two pixels corresponding to the image position.
In the target face image, the brightness of the pixel points corresponding to the background pixels is the same as that of the corresponding background pixels, and the brightness of the pixel points corresponding to the foreground pixels is lower than that of the corresponding foreground pixels. The fusion of the mask and the face image therefore amounts to an overall brightness dimming of the brighter image areas in the face image, so the brightness adjustment looks more natural and the problem of unnatural transitions between image areas is avoided.
The target face image obtained by this embodiment can be used for subsequent retouching such as whitening, skin smoothing and face swapping. Because the highlight area has been removed from the target face image and the brightness of the other brighter image areas has been correspondingly reduced, the influence of unbalanced brightness on the face image processing effect is largely avoided, and a good face image effect can be obtained.
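As a high-level illustration of steps 110 to 170, the following minimal Python/OpenCV sketch chains the three stages together. The helper names otsu_first_gray_threshold, build_mask and fuse are hypothetical; they are defined in the per-step sketches after the discussions of steps 110, 150 and 170 below, and the whole pipeline is an assumption-laden reading of the embodiment rather than the patented implementation itself.

```python
import cv2
import numpy as np

def process_face_image(image_bgr: np.ndarray) -> np.ndarray:
    # Step 110: determine the first gray threshold from the gray values.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    t = otsu_first_gray_threshold(gray)
    # Steps 130-150: split background/foreground pixels and build the mask.
    mask = build_mask(gray, t)
    # Step 170: fuse the mask and the face image into the target face image.
    return fuse(image_bgr, mask)
```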
FIG. 2 is a flow chart of step 110 in one exemplary embodiment of the embodiment shown in FIG. 1. As shown in fig. 2, in an exemplary embodiment, determining a first gray threshold for distinguishing a background pixel from a foreground pixel in a face image according to a gray value of each pixel in the face image includes at least the following steps:
and step 111, distinguishing background pixels and foreground pixels in the face image according to a preset gray threshold.
It should be noted that when the first gray threshold is optimal, the degree of distinction between the background pixels and the foreground pixels in the face image is greatest, and the brighter region determined from the foreground pixels is most accurate; it is therefore necessary to obtain the optimal first gray threshold.
In this embodiment, an initial gray threshold is preset and candidate thresholds are then traversed; the candidate that maximizes the degree of distinction between background pixels and foreground pixels in the face image is the optimal one, and that optimal gray threshold is determined as the first gray threshold.
Similar to the distinction made in step 130 according to the first gray threshold, distinguishing background pixels and foreground pixels according to a preset gray threshold means determining the pixels of the face image whose gray values are smaller than the preset gray threshold as background pixels, and the pixels whose gray values are larger than the preset gray threshold as foreground pixels.
And step 113, calculating a background pixel proportion, a foreground pixel proportion, a background pixel average gray value and a foreground pixel average gray value corresponding to the face image according to the number of the background pixels and the number of the foreground pixels contained in the face image, and the gray value of the background pixels and the gray value of the foreground pixels.
The background pixel proportion corresponding to the face image is the ratio of the number of background pixels to the total number of pixels in the face image, and the foreground pixel proportion is the ratio of the number of foreground pixels to the total number of pixels. The background pixel average gray value is the mean of the gray values of all background pixels in the face image, and the foreground pixel average gray value is the mean of the gray values of all foreground pixels.
Let N1 denote the number of background pixels in the face image, N2 the number of foreground pixels, Sum the total number of pixels in the face image, ω1 the background pixel proportion, ω2 the foreground pixel proportion, μ1 the background pixel average gray value and μ2 the foreground pixel average gray value. The background pixel proportion corresponding to the face image is then calculated by the formula

ω1 = N1 / Sum,

the foreground pixel proportion by the formula

ω2 = N2 / Sum,

the background pixel average gray value by the formula

μ1 = μ(t) / N1,

and the foreground pixel average gray value by the formula

μ2 = (μ − μ(t)) / N2,

where μ denotes the sum of the gray values of all pixels in the face image and μ(t) denotes the sum of the gray values of all background pixels under the current threshold.
And step 115, calculating a square value of the difference between the average gray value of the background pixels and the average gray value of the foreground pixels, and calculating the product of the foreground pixel ratio, the background pixel ratio and the square value to obtain the inter-class variance of the face image.
It should be noted that the inter-class variance of the face image is used to indicate the degree of distinction between the background pixels and the foreground pixels in the face image, and the larger the inter-class variance corresponding to the face image is, the larger the degree of distinction between the background pixels and the foreground pixels in the face image is.
If the inter-class variance of the face image is denoted by g, it is calculated by the following formula:

g = ω1 · ω2 · (μ1 − μ2)²
and step 117, traversing the gray threshold under the condition of maximizing the inter-class variance, and determining the gray threshold corresponding to the maximum inter-class variance as the first gray threshold.
Traversing the gray threshold under the condition of maximizing the inter-class variance means iteratively executing steps 111 to 115 with the aim of finding the maximum inter-class variance; each execution of steps 111 to 115 is called one traversal of the gray threshold.
In each traversal, the gray threshold used for distinguishing the background pixel and the foreground pixel of the face image in step 111 needs to be adjusted, so that the inter-class variances corresponding to different gray thresholds are obtained through multiple traversals, and therefore the corresponding gray threshold when the inter-class variance is maximum can be determined to be the first gray threshold.
Thus, in this embodiment the degree of distinction between background and foreground pixels is quantified by the inter-class variance. By traversing the gray thresholds under the condition of maximizing the inter-class variance, the gray threshold corresponding to the maximum inter-class variance among all traversed candidates is determined as the optimal first gray threshold. Distinguishing background and foreground pixels with this first gray threshold then reflects the brightness distribution of the face image to the greatest extent, which facilitates the subsequent overall brightness adjustment of the face image and yields a better processing effect.
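The traversal of steps 111 to 117 is, in effect, Otsu's method. A minimal sketch, assuming an 8-bit grayscale input and the convention above that gray values below the threshold are background:

```python
import numpy as np

def otsu_first_gray_threshold(gray: np.ndarray) -> int:
    """Return the gray threshold maximizing g = w1 * w2 * (mu1 - mu2)^2."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = float(gray.size)
    levels = np.arange(256, dtype=np.float64)
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        n1 = hist[:t].sum()            # background: gray value < t
        n2 = total - n1                # foreground: gray value >= t
        if n1 == 0.0 or n2 == 0.0:
            continue
        w1, w2 = n1 / total, n2 / total
        mu1 = (levels[:t] * hist[:t]).sum() / n1   # background average gray value
        mu2 = (levels[t:] * hist[t:]).sum() / n2   # foreground average gray value
        g = w1 * w2 * (mu1 - mu2) ** 2             # inter-class variance
        if g > best_g:
            best_g, best_t = g, t
    return best_t
```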
FIG. 3 is a flow chart of an exemplary embodiment of step 150 in the embodiment shown in FIG. 1. As shown in fig. 3, generating a mask corresponding to a face image at least includes the following steps:
and 151, generating a blank mask based on the face image, wherein the blank mask comprises background mask pixels corresponding to the background pixels and foreground mask pixels corresponding to the foreground pixels.
It should be noted that, in the blank mask generated based on the face image, the image size of the blank mask is the same as the image size of the face image, and the pixel value corresponding to each pixel is zero.
The blank mask contains background mask pixels corresponding to the background pixels and foreground mask pixels corresponding to the foreground pixels, so that the distribution of the background mask pixels and the foreground mask pixels in the blank mask corresponds to the distribution of the background pixels and the foreground pixels in the face image.
Step 153, determining half of the highest-order channel value of the face image in the color mode corresponding to the face image as the channel value of the background mask pixel in each color channel, and determining the channel value of the foreground mask pixel in each color channel according to the channel value of the foreground pixel in each color channel, wherein the channel value of the foreground mask pixel in each color channel is lower than half of the highest-order channel value.
The color mode corresponding to the face image includes an RGB mode, which is a color standard for obtaining colors by brightness change of three color channels of red, green, and blue and their superposition, and the channel value of each color channel reflects the color intensity of each color channel.
In the RGB mode, each pixel of the face image comprises three color channels, the color corresponding to each pixel is obtained by superposing the channel values of the color channels, the size of the channel value corresponding to each color channel is set to be 0-255 levels, and the higher the level is, the higher the color intensity of the corresponding color channel is. Therefore, the highest-order channel value in the color mode corresponding to the face image is 255.
In order to prevent the brightness of the background pixel in the target face image from changing relative to the brightness of the corresponding background pixel in the face image, it is necessary to ensure that the pixel value obtained by fusing the background mask pixel in the mask and the corresponding background pixel in the face image is the same as the pixel value of the corresponding background pixel in the face image. It should be understood that the pixel values corresponding to the pixels in the mask and face images are the channel values of the pixels in the color channels.
Illustratively, in the process of fusing the mask and the face image, it is necessary to perform fusion calculation on the channel values of the background mask pixel and the corresponding background pixel in each color channel, and when the channel value of the background mask pixel in each color channel is half of the highest-order channel value 255, the channel value of each color channel obtained by fusion calculation is the same as the channel value of the corresponding background pixel in the face image in the corresponding color channel. Therefore, in this embodiment, half of the highest-order channel value in the color mode corresponding to the face image needs to be determined as the channel value of the background mask pixel in each color channel.
In the process of fusing a foreground mask pixel with the corresponding foreground pixel over the channel values of the respective color channels, when the channel value of the foreground mask pixel in each color channel is lower than half of the highest-order channel value, the channel value of each color channel obtained through fusion calculation is smaller than the channel value of the corresponding foreground pixel in the face image in that color channel, which achieves the brightness reduction.
In this embodiment, the intensities of the brightness reduction performed on the foreground pixels with different brightness in the face image are different from each other to obtain a more natural brightness dimming effect, so that the channel values of the corresponding foreground mask pixels in the mask in each color channel need to be determined according to the channel values of the foreground pixels in each color channel in the face image.
In one embodiment, the channel value of a foreground mask pixel in each color channel is calculated according to the gray value of the foreground pixel corresponding to that foreground mask pixel, the highest-order channel value, and the first gray threshold.
If b represents the ratio of the channel value of a foreground mask pixel in a color channel to the highest-order channel value, Gray represents the gray value of the foreground pixel corresponding to that foreground mask pixel, and Gray′ represents the first gray threshold, then the calculation formula is:

b = 0.5 − 0.5 · log2(1 + (Gray − Gray′) / (255 − Gray′))
after the value of b is calculated according to the formula, the channel value of the foreground mask pixel in the corresponding color channel can be correspondingly determined by calculating the product of b and the highest-order channel value.
Therefore, the ratio of the channel value of the foreground mask pixel in each color channel to the highest-order channel value calculated in the embodiment is less than 0.5, that is, the channel value of the foreground mask pixel in each color channel is less than half of the highest-order channel value.
Step 155, correspondingly filling the channel values of the background mask pixels and the foreground mask pixels in each color channel into the blank mask, and obtaining a mask corresponding to the face image.
The step of filling the channel values of the background mask pixels and the foreground mask pixels in each color channel into the blank mask correspondingly means that the corresponding channel values are assigned to each color channel of the corresponding pixels in the blank mask according to the channel values of the background mask pixels and the foreground mask pixels in each color channel determined in the step 153, so that the corresponding pixels in the blank mask are colored.
Thus, in this embodiment a blank mask is generated based on the face image, the channel values of each background mask pixel and each foreground mask pixel in each color channel are determined, and these channel values are correspondingly filled into the blank mask. After the resulting mask is fused with the face image, the brightness of each pixel corresponding to a background pixel in the target face image is guaranteed to equal that of the corresponding background pixel in the face image, while the brightness of each pixel corresponding to a foreground pixel is lower than that of the corresponding foreground pixel, achieving the goal of lowering the brightness as a whole.
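A minimal sketch of this mask construction, assuming the mask channel values depend only on each pixel's gray value (so a single-channel mask replicated across the color channels suffices) and that the threshold t is strictly below 255; build_mask is a hypothetical name:

```python
import numpy as np

def build_mask(gray: np.ndarray, t: int) -> np.ndarray:
    """Mask with 255/2 at background pixels and 255*b at foreground pixels,
    where b = 0.5 - 0.5*log2(1 + (Gray - t)/(255 - t)) < 0.5."""
    gray_f = gray.astype(np.float64)
    mask = np.full(gray.shape, 255.0 / 2.0)   # blank mask, background value
    fg = gray_f > t                           # foreground: gray value > threshold
    b = 0.5 - 0.5 * np.log2(1.0 + (gray_f[fg] - t) / (255.0 - t))
    mask[fg] = 255.0 * b                      # b in [0, 0.5), so value < 255/2
    return mask                               # kept float64 so b = 0.5 is exact
```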
FIG. 4 is a flowchart of an exemplary embodiment of step 170 shown in FIG. 1. As shown in fig. 4, in an exemplary embodiment, fusing the mask and the face image to obtain the target face image, includes at least the following steps:
step 171, obtaining the normalized channel values of each pixel in the face image and the mask in different color channels, respectively, where the normalized channel value is the ratio of the channel value to the highest-order channel value.
In this embodiment, it is necessary to obtain channel values of each pixel in the face image and the mask in different color channels, and then calculate a ratio of each channel value to the highest-order channel value to obtain a normalized channel value of each pixel in the face image and the mask in different color channels.
In the subsequent fusion of the face image and the mask, the fusion calculation process is more convenient by performing the fusion calculation on the normalized channel values of each pixel in the face image and the mask in different color channels.
And as can be seen from the foregoing, the normalized channel value of each background mask pixel in the mask at different color channels is 0.5.
Step 173, calculating the normalized channel value of each color channel by fusing the target pixels at the same positions in the face image and the mask, and obtaining the fused channel value of the target pixel in each color channel.
It should be noted that the target pixels located at the same position in the face image and in the mask are essentially a pixel pair: one pixel in the face image and one in the mask at the same pixel position, each having a channel value for every color channel.
Illustratively, the formula for fusion computing the normalized channel value of the target pixel at each color channel is:
f(a, b) = 2ab + a²(1 − 2b), for b ≤ 0.5
wherein f (a, b) represents the fusion channel value of the target pixel in a single color channel, a represents the normalized channel value of the corresponding color channel of the target pixel in the face image, b represents the normalized channel value of the corresponding color channel of the target pixel in the mask, and a and b are both values between 0 and 1.
When b < 0.5, it means that the target pixel is a foreground pixel in the face image, and when b is 0.5, the target pixel is a background pixel in the face image.
It can be seen that when b is less than 0.5, the calculated fusion channel value is less than a; that is, the channel value reflected by the fusion channel value is lower than the channel value of the corresponding foreground pixel in the face image in that color channel, and, after the channel values of the respective color channels are superposed, the obtained brightness is lower than the brightness of the foreground pixel in the face image.
And when b is equal to 0.5, the calculated fusion channel value is equal to a, that is, the channel value of the color channel reflected by the fusion channel value is the same as the channel value of the background pixel in the face image in the corresponding color channel, and the obtained brightness is consistent with the brightness of the background pixel in the face image based on the superposition of the channel values of the color channels.
And step 175, updating the channel value of each color channel of the target pixel in the face image according to the fusion channel value of the target pixel in each color channel, so as to obtain the target face image.
In this embodiment, the channel value of each color channel of the target pixel in the face image is updated according to the fused channel value of the target pixel in each color channel, that is, the fused channel value of each color channel is determined as the channel value of the corresponding pixel in each color channel in the target face image.
Therefore, in this embodiment, the brightness of each pixel corresponding to a background pixel in the target face image is the same as that of the corresponding background pixel, and the brightness of each pixel corresponding to a foreground pixel is lower than that of the corresponding foreground pixel, so the brighter image areas of the face image are dimmed as a whole and the brightness adjustment looks more natural.
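A minimal sketch of this fusion step. The blend formula is the one given above, whose exact form is reconstructed here from its stated properties (f(a, 0.5) = a, f(a, b) < a for b < 0.5, and f ≈ 1 whenever a ≈ 1, as discussed in the next section); it coincides with a soft-light-style blend, and fuse is a hypothetical name:

```python
import numpy as np

def fuse(image_bgr: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Per-channel fusion f(a, b) = 2ab + a^2 (1 - 2b) on normalized values."""
    a = image_bgr.astype(np.float64) / 255.0   # normalized image channel values
    b = mask.astype(np.float64) / 255.0        # normalized mask channel values
    if b.ndim == 2 and a.ndim == 3:
        b = b[..., None]                       # broadcast single-channel mask
    f = 2.0 * a * b + a * a * (1.0 - 2.0 * b)  # f = a at b = 0.5; f < a at b < 0.5
    return np.clip(f * 255.0, 0.0, 255.0).astype(np.uint8)
```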
As shown in fig. 5, in another exemplary embodiment, before step 110, the face image processing method further includes the steps of:
step 210, converting the face image into a face gray image.
Based on the above formula for the fusion channel value in a single color channel, if the channel value of a pixel of the face image in some color channel is large, a is very close to 1, and no matter how the corresponding value of b is adjusted, the resulting fusion channel value is also very close to 1. The brightness-reduction effect on the corresponding pixel of the target face image is then not obvious.
Consequently, for a highlight area within the face region of the face image, no matter how the channel values of the corresponding mask pixels are adjusted in each color channel, the brightness of that highlight area cannot be effectively reduced in the final target face image by fusing the mask with the face image.
To solve this problem, the present embodiment eliminates the highlight area of the face area in advance, and then performs the overall brightness reduction processing on the face image to obtain a better brightness reduction effect. Therefore, in the present embodiment, the face image needs to be converted into a face gray scale image to determine the highlight area in the face area according to the face gray scale image.
Step 230, determining a target region with a gray value larger than a preset second gray threshold value in the face region of the face gray image, and determining a highlight region corresponding to the target region in the face image.
Firstly, it should be noted that, the face image may be subjected to face recognition in advance to obtain a face region in the face image, and after the face image is converted into a face grayscale image, the face region in the face grayscale image may be determined accordingly.
The second gray threshold is a preset lower bound of the highlight region and may be, for example, 220. After the target region whose gray values exceed the second gray threshold is determined in the face region of the face grayscale image, the corresponding highlight region in the face image can be determined accordingly.
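A minimal sketch of steps 210 and 230, assuming an 8-bit BGR input, the example threshold of 220, and a face_region_mask (255 inside the face region, 0 elsewhere) produced by the separate face-recognition step described above:

```python
import cv2
import numpy as np

SECOND_GRAY_THRESHOLD = 220  # example lower bound of the highlight region

def highlight_mask(image_bgr: np.ndarray, face_region_mask: np.ndarray) -> np.ndarray:
    """Binary mask (255/0) of the highlight region inside the face region."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)    # face grayscale image
    _, bright = cv2.threshold(gray, SECOND_GRAY_THRESHOLD, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(bright, face_region_mask)      # limit to the face region
```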
Step 250, filling the highlight area with surrounding pixels of the highlight area.
Within the face area, the pixels around a highlight region are darker than the highlight region itself, but not excessively dark. This embodiment therefore fills the highlight region with its surrounding pixels, which effectively reduces the brightness of the highlight region while ensuring that the filled region does not transition unnaturally into the surrounding image.
In one exemplary embodiment, as shown in FIG. 6, filling the highlight region with surrounding pixels of the highlight region may include the steps of:
step 2511, determining an isolux line corresponding to the boundary of the highlight region;
in step 2513, the surrounding pixels of the highlight region are propagated into the highlight region along the direction of the isolux line to fill the highlight region.
In this embodiment, the Navier-Stokes method (named after the fluid-mechanics equations describing momentum conservation of incompressible fluids) is used to fill the highlight region; it is essentially a gradient-transition method that fills the highlight region with its surrounding pixels.
An isolux line is a curve connecting all points of equal illuminance on the illuminated surface; in this embodiment it lies on the boundary of the highlight region.
In the Navier-Stokes method, surrounding pixel information is propagated into the region to be repaired (here, the highlight region) along the isolux-line direction, thereby filling it. Illustratively, the image function over the region to be repaired is obtained by solving the Navier-Stokes equations over that whole region; the detailed calculation is not repeated here.
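OpenCV ships a Navier-Stokes-based inpainting routine, so this variant can be sketched directly; the highlight mask comes from the earlier step, and the inpainting radius value is an assumption:

```python
import cv2
import numpy as np

def fill_highlight_ns(image_bgr: np.ndarray, highlight: np.ndarray) -> np.ndarray:
    """Fill the highlight region (8-bit single-channel mask) from its surroundings."""
    return cv2.inpaint(image_bgr, highlight, inpaintRadius=5, flags=cv2.INPAINT_NS)
```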
In another exemplary embodiment, as shown in FIG. 7, filling the highlight region with surrounding pixels of the highlight region may include the steps of:
step 2521, determining a weight matrix corresponding to a set of pixels located within a preset radius of each central point, with each pixel in the highlight area as a central point;
step 2523, calculating a pixel value of the central point according to the pixel value set and the weight matrix corresponding to the pixel set;
at step 2525, the pixel values are filled into the center point.
In this embodiment, the highlight area is filled from its surrounding pixels using the Gaussian blur method.
And respectively taking each pixel in the highlight area as a central point, and determining a pixel set located within a preset radius of each central point, wherein the preset radius is also called as a fuzzy radius of the central point, and the pixel set contains pixel values of each pixel located within the preset radius of each central point. The larger the preset radius is, the larger the number of pixels contained in the pixel set is, and the better the blurring effect on the central point is.
Because image pixels are continuous, the pixels nearer to the central point are more strongly associated with it; the weight of each pixel in the pixel set must therefore be determined to obtain the weight matrix corresponding to the pixel set. In one embodiment, the weights of the pixels in the pixel set follow a normal distribution over the distance of each pixel from the central point.
Thus, the pixel value of the central point can be obtained by performing a weighted sum operation on the pixel value of each pixel in the pixel set and the weight corresponding to each pixel. Filling of the highlight region is achieved by filling this pixel value to the corresponding center point.
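A minimal sketch of this variant under two assumptions: the normally distributed weight matrix is realized with a Gaussian kernel, and highlight pixels are excluded from each weighted sum via a normalized masked blur so the fill really comes from the surrounding pixels. Kernel size and sigma (standing in for the preset radius) are assumptions:

```python
import cv2
import numpy as np

def fill_highlight_gaussian(image_bgr: np.ndarray, highlight: np.ndarray,
                            ksize: int = 21, sigma: float = 7.0) -> np.ndarray:
    """Replace each highlight pixel by a Gaussian-weighted sum of nearby
    non-highlight pixels (weights normally distributed with distance)."""
    valid = (highlight == 0).astype(np.float64)            # surrounding pixels only
    img = image_bgr.astype(np.float64)
    num = cv2.GaussianBlur(img * valid[..., None], (ksize, ksize), sigma)
    den = cv2.GaussianBlur(valid, (ksize, ksize), sigma)[..., None]
    filled = num / np.maximum(den, 1e-6)                   # normalized weighted sum
    out = image_bgr.copy()
    sel = highlight > 0
    out[sel] = np.clip(filled, 0.0, 255.0).astype(np.uint8)[sel]
    return out
```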
In another exemplary embodiment, as shown in fig. 8, filling the highlight region with surrounding pixels of the highlight region may further include the steps of:
2531, determining pixels to be repaired on the boundary of the highlight area;
step 2533, determining the pixel corresponding to the minimum squared pixel-value error as the sampling pixel matched with the pixel to be repaired, by calculating the squared pixel-value error between the pixel to be repaired and each pixel in the face image;
step 2535, filling the pixel value of the sampling pixel into the pixel to be repaired, removing the pixel to be repaired from the boundary of the highlight area, and re-determining the boundary of the highlight area;
step 2537, iteratively performing pixel value filling of the pixel to be repaired in the boundary of the highlight area and updating of the boundary of the highlight area until all pixels in the highlight area are filled.
In this embodiment, a sampling-and-filling method is adopted: a pixel on the boundary of the highlight area is selected as the pixel to be repaired, a sampling pixel matching it is then searched for in the face image, and the sampling pixel is copied onto the pixel to be repaired, completing the pixel-value filling of that point.
The pixel to be repaired may be any one of pixels on the boundary of the highlight region, or the pixel corresponding to the highest priority may be determined as the pixel to be repaired according to the priority corresponding to each pixel in the boundary.
The sampling pixel is determined by calculating the squared pixel-value error between the pixel to be repaired and each pixel in the face image and taking the pixel with the minimum error. The squared pixel-value error may be the squared distance between the two pixels in the corresponding color space.
After the pixel value of the pixel to be repaired is filled, the boundary of the highlight area is updated, i.e., the pixel to be repaired is removed from the boundary to obtain an updated boundary. The repair process is repeated until the whole highlight area is repaired.
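A very small sketch of this sampling fill, for illustration only. It repairs the highlight in onion-peel order over a 3-channel image and, since the true value of a pixel to be repaired is unknown, matches against the mean of its already-known 4-neighbors as a stand-in target, an assumption not spelled out above; a practical implementation would match whole patches, use the priority ordering mentioned earlier, and avoid this brute-force search, which is far too slow for full-size images:

```python
import numpy as np

def sampling_fill(image: np.ndarray, highlight: np.ndarray) -> np.ndarray:
    """Fill highlight pixels by copying the best-matching known pixel."""
    img = image.astype(np.int64)
    todo = highlight > 0
    known = ~todo
    src_vals = img[known]          # (N, 3) candidate sampling pixels
    h, w = todo.shape
    progress = True
    while todo.any() and progress:
        progress = False
        for y, x in zip(*np.nonzero(todo)):
            nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            vals = [img[j, i] for j, i in nbrs
                    if 0 <= j < h and 0 <= i < w and known[j, i]]
            if not vals:                    # not yet on the boundary; skip for now
                continue
            target = np.mean(vals, axis=0)  # stand-in for the unknown pixel value
            d = ((src_vals - target) ** 2).sum(axis=1)
            img[y, x] = src_vals[np.argmin(d)]   # copy the matched sampling pixel
            todo[y, x] = False
            known[y, x] = True              # boundary shrinks inward
            progress = True
    return img.astype(np.uint8)
```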
Thus, in this embodiment the highlight region is filled with its surrounding pixels, which effectively lowers the brightness of the highlight region while avoiding an unnatural transition between the filled region and the surrounding image.
In another exemplary embodiment, before the face image is converted into the face grayscale image, face recognition needs to be performed on the face image to obtain a face region and a face feature region in the face image, where the face feature region at least includes an eye region, an eyebrow region, and a mouth region, and the face region in the face image corresponds to the face region in the face grayscale image.
Considering that if the highlight region contains the face feature region, the appearance of the portrait in the face image is likely to change greatly after the highlight region is filled with the surrounding pixels of the highlight region, which greatly affects the processing effect of the face image, so that it is necessary to protect the face feature region in the face image.
Protecting the face feature regions in the face image means that, in the process of determining the target region whose gray values exceed the second gray threshold within the face region of the face grayscale image, the corresponding face feature regions in that face region are first determined according to the face region and face feature regions of the face image, and the target region is then determined only over the image areas of the face region other than the face feature regions.
Therefore, the embodiment protects the face feature region in the face image, and can reduce the influence of the face image processing on the face appearance to the greatest extent.
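A minimal sketch of this protection step, assuming the face feature regions are available as landmark polygons (eyes, eyebrows, mouth) from whatever face-recognition step precedes it; the function and variable names are hypothetical:

```python
import cv2
import numpy as np

def protect_features(highlight: np.ndarray, feature_polygons) -> np.ndarray:
    """Remove eye/eyebrow/mouth polygons from the highlight mask so the
    protected face feature regions are never filled."""
    protect = np.zeros_like(highlight)
    for poly in feature_polygons:   # each polygon: list of (x, y) landmark points
        cv2.fillPoly(protect, [np.asarray(poly, dtype=np.int32)], 255)
    return cv2.bitwise_and(highlight, cv2.bitwise_not(protect))
```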
In an exemplary application scenario, the process for processing a face image is as follows:
as shown in fig. 9, fig. 9 is a face image to be processed, in which the brightness of the forehead, the nose wing, the cheek, and other regions of the face region is relatively high, and if the face region is directly subjected to modification processing such as whitening, peeling, and changing the face of a person, these regions with relatively high brightness will greatly affect the modification effect, so that the whole brightness dimming processing needs to be performed on the face image in advance.
If the face feature regions within the face region, such as the eyebrows, eyes and mouth, were filled with their surrounding pixels, the appearance of the face would change greatly, which in turn would affect the face image processing effect; the face feature regions of the face image therefore need to be protected. That is, when the highlight region in the face region is subsequently processed, the protected face feature regions are not processed. As shown in fig. 10, the black region outside the white region represents the image area of the face image other than the face region, and the black regions inside the white region represent the protected face feature regions.
In the process of fusing the mask and the face image, no matter how the channel values of the mask pixels corresponding to the highlight area in the mask in each color channel are adjusted, the brightness of the highlight area in the face area in the finally obtained target face image cannot be effectively reduced. Therefore, it is next necessary to repair the highlight region in the face region, and exemplarily, the highlight region is filled with the surrounding pixels of the highlight region to effectively reduce the brightness of the highlight region and avoid the influence of the highlight region on the subsequent overall brightness adjustment.
Next, the image area of the face image whose brightness is to be adjusted as a whole must be determined. Illustratively, in the face grayscale image corresponding to the face image, foreground and background pixels are distinguished according to the first gray threshold, and the image area formed by the foreground pixels is taken as the area whose brightness is to be lowered as a whole. As shown in fig. 11, after foreground and background pixels are distinguished in the face grayscale image, the white area is determined as the area to be adjusted.
Then, a mask corresponding to the face image needs to be generated, where the mask includes background mask pixels corresponding to the background pixels in the face image and foreground mask pixels corresponding to the foreground pixels in the face image, that is, each pixel in the mask corresponds to each pixel in the face image. FIG. 12 is a schematic diagram of a mask, shown in an exemplary embodiment.
The brightness value of the foreground pixel in the face image is reduced through the fusion of the foreground pixel in the face image and the foreground mask pixel in the mask, the brightness value of the background pixel in the face image is not changed through the fusion of the background pixel in the face image and the background mask pixel in the mask, and therefore the brightness of the face area is dimmed on the whole face image, and the follow-up requirement for modification treatment of the face image can be met.
FIG. 13 is a schematic diagram of a target face image resulting from processing the face image shown in FIG. 9 according to the above. It can be seen that, in the face region of the face image shown in fig. 13, the brightness of the forehead, the nasal ala and the cheek is significantly reduced, and the problem of unnatural transition with the surrounding image region does not occur, and the appearance of the portrait is not changed, so that the problem of uneven brightness in the face image can be effectively solved.
It should also be mentioned that the speed of brightness processing for a 2K resolution face image in the embodiment of the present application may be within 0.5 second, and the processing time is short.
In another exemplary application scenario, as shown in fig. 14, fig. 14A is a face image to be processed, fig. 14B is a mask generated based on fig. 14A, and fig. 14C is a target face image obtained by fusing fig. 14A and 14B.
In the application scene, the mask generated based on the face image corresponds to the face area in the face image, and the mask covers the face area so as to perform overall brightness adjustment on the face area. As can be seen from fig. 14C, the brightness of the face region in the target face image is reduced as a whole, so that the subsequent modification processing on the target face image is facilitated.
Fig. 15 is a block diagram illustrating a face image processing apparatus according to an exemplary embodiment. As shown in fig. 15, in an exemplary embodiment, the apparatus includes a gray value acquisition module 310, a foreground-background pixel determination module 330, a mask generation module 350, and an image fusion module 370.
The gray value obtaining module 310 is configured to determine a first gray threshold for distinguishing a background pixel from a foreground pixel in the face image according to a gray value of each pixel in the face image.
The foreground-background pixel determination module 330 is configured to determine pixels in the face image with a gray value smaller than the first gray threshold as background pixels, and to determine pixels with a gray value larger than the first gray threshold as foreground pixels.
The mask generating module 350 is configured to generate a mask corresponding to the face image according to the background pixels and the foreground pixels.
The image fusion module 370 is configured to fuse the mask with the face image to obtain a target face image, where in the target face image, the luminance of a pixel corresponding to a background pixel is the same as the luminance of a corresponding background pixel, and the luminance of a pixel corresponding to a foreground pixel is lower than the luminance of a corresponding foreground pixel.
In another exemplary embodiment, the mask generation module 350 includes a blank mask generation unit, a mask pixel value determination unit, and a mask pixel value filling unit.
The blank mask generating unit is used for generating a blank mask based on the face image, and the blank mask comprises background mask pixels corresponding to the background pixels and foreground mask pixels corresponding to the foreground pixels.
The mask pixel value determining unit is used for determining the channel values of the background mask pixels in each color channel and determining the channel values of the foreground mask pixels in each color channel according to the channel values of the foreground pixels in each color channel.
The mask pixel value filling unit is used for correspondingly filling the channel values of the background mask pixels and the foreground mask pixels in each color channel into the blank mask to obtain the mask corresponding to the face image.
In another exemplary embodiment, the channel value of the background mask pixel in each color channel is half of the highest-order channel value in the color mode corresponding to the face image, and the channel value of the foreground mask pixel in each color channel is lower than half of the highest-order channel value.
In another exemplary embodiment, the mask pixel value determination unit includes a foreground mask pixel determination subunit, configured to calculate the channel value of each foreground mask pixel in each color channel according to the channel value of the background mask pixel in each color channel, the gray value of the foreground pixel corresponding to the foreground mask pixel, the highest-order channel value, and the first gray threshold.
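The exact formula is not disclosed, so the sketch below is only one plausible reading: the foreground mask value starts at half the highest-order channel value for a pixel exactly at the first gray threshold (continuous with the background mask value described above) and ramps linearly down toward zero for the brightest pixels, so brighter foreground pixels receive darker mask values. Every detail of this ramp is an assumption.

import numpy as np

def foreground_mask_value(gray: np.ndarray, threshold: float,
                          max_channel: int = 255) -> np.ndarray:
    # Assumed linear ramp: gray == threshold -> max_channel / 2,
    # gray == max_channel -> 0.
    half = max_channel / 2.0
    ramp = (gray.astype(np.float64) - threshold) / max(max_channel - threshold, 1e-9)
    return half * (1.0 - np.clip(ramp, 0.0, 1.0))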
In another exemplary embodiment, the image fusion module 370 includes a normalization processing unit, a fusion calculation unit, and a channel value update unit.
The normalization processing unit is used for acquiring, for each pixel in the face image and in the mask, the normalized channel value in each color channel, the normalized channel value being the ratio of the channel value to the highest-order channel value.
The fusion calculation unit is used for performing, for target pixels located at the same position in the face image and in the mask, a fusion calculation on their normalized channel values in each color channel, to obtain the fused channel value of the target pixel in each color channel.
The channel value update unit is used for updating the channel value of the target pixel in each color channel of the face image according to the fused channel value of the target pixel in each color channel, thereby obtaining the target face image.
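These three units amount to a per-channel blend in normalized [0, 1] space. The patent does not name the blend mode; the sketch below uses out = 2 x base x mask, one simple rule under which a normalized mask value of 0.5 (the background mask value) leaves a pixel unchanged and smaller values darken it, which matches the behavior required of the target face image. The choice of this rule is an assumption.

import numpy as np

def fuse(image: np.ndarray, mask: np.ndarray,
         max_channel: int = 255) -> np.ndarray:
    # Normalized channel values: ratio of each channel value to the
    # highest-order channel value.
    base = image.astype(np.float64) / max_channel
    m = mask.astype(np.float64) / max_channel
    if m.ndim < base.ndim:          # broadcast a single-channel mask
        m = m[..., np.newaxis]
    # Assumed blend: identity at m == 0.5, darkening below it.
    fused = np.clip(2.0 * base * m, 0.0, 1.0)
    return (fused * max_channel).astype(np.uint8)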
In another exemplary embodiment, the gray value acquisition module 310 includes a pixel distinguishing unit, an information calculation unit, an inter-class variance calculation unit, and a traversal unit.
The pixel distinguishing unit is used for distinguishing background pixels and foreground pixels in the face image according to a preset gray threshold.
The information calculation unit is used for calculating, for the face image, the background pixel proportion, the foreground pixel proportion, the average gray value of the background pixels, and the average gray value of the foreground pixels, according to the numbers of background and foreground pixels contained in the face image and their gray values.
The inter-class variance calculation unit is used for calculating the squared difference between the average gray value of the background pixels and the average gray value of the foreground pixels, and for calculating the product of the foreground pixel proportion, the background pixel proportion, and the squared difference, to obtain the inter-class variance of the face image.
The traversal unit is used for traversing the candidate gray thresholds and determining the gray threshold corresponding to the maximum inter-class variance as the first gray threshold.
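These four units together describe Otsu's method, and the text fixes every step (class proportions, class mean gray values, inter-class variance, exhaustive threshold traversal), so the sketch below follows it directly; only the Python packaging is an addition.

import numpy as np

def first_gray_threshold(gray: np.ndarray) -> int:
    # Histogram of an 8-bit grayscale image, normalized to probabilities.
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    levels = np.arange(256, dtype=np.float64)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w_bg = prob[:t].sum()            # background pixel proportion
        w_fg = 1.0 - w_bg                # foreground pixel proportion
        if w_bg == 0.0 or w_fg == 0.0:
            continue
        mu_bg = (levels[:t] * prob[:t]).sum() / w_bg   # background mean gray
        mu_fg = (levels[t:] * prob[t:]).sum() / w_fg   # foreground mean gray
        var = w_fg * w_bg * (mu_bg - mu_fg) ** 2       # inter-class variance
        if var > best_var:               # keep the maximizing threshold
            best_var, best_t = var, t
    return best_t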
In another exemplary embodiment, the face image processing apparatus further includes a grayscale map conversion module, a highlight region determination module, and a highlight region filling module.
The grayscale map conversion module is used for converting the face image into a face grayscale map.
The highlight region determination module is used for determining, in the face region of the face grayscale map, a target region whose gray value is greater than a preset second gray threshold, and for determining the highlight region corresponding to the target region in the face image.
The highlight region filling module is used for filling the highlight region according to the surrounding pixels of the highlight region.
In another exemplary embodiment, the highlight region filling module includes an isolux line determination unit and a surrounding pixel transmission unit.
The isolux line determination unit is used for determining the isolux line corresponding to the boundary of the highlight region.
The surrounding pixel transmission unit is used for transmitting the surrounding pixels of the highlight region into the highlight region along the direction of the isolux line, so as to fill the highlight region.
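Transporting surrounding pixels into a region along isolux (isophote) lines is the principle behind Navier-Stokes-style image inpainting. The sketch below delegates to OpenCV's implementation of that family of methods; whether the patent's unit uses this exact routine, or the inpainting radius chosen here, is an assumption.

import cv2
import numpy as np

def fill_highlight_isophote(image: np.ndarray,
                            highlight_mask: np.ndarray) -> np.ndarray:
    # highlight_mask: non-zero inside the highlight region. cv2.inpaint
    # expects an 8-bit single-channel mask and propagates surrounding
    # image content into the masked region along isophote directions.
    mask = (highlight_mask > 0).astype(np.uint8)
    return cv2.inpaint(image, mask, 3, cv2.INPAINT_NS)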
In another exemplary embodiment, the highlight region filling module includes a pixel set acquisition unit, a blurred pixel value calculation unit, and a blurred pixel value filling unit.
The pixel set acquisition unit is used for taking each pixel in the highlight region as a center point, and for determining the pixel set located within a preset radius of each center point and the weight matrix corresponding to that pixel set.
The blurred pixel value calculation unit is used for calculating the blurred pixel value of the center point according to the set of pixel values corresponding to the pixel set and the weight matrix.
The blurred pixel value filling unit is used for filling the blurred pixel value into the center point.
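A weighted average over the pixel set inside a preset radius is, in effect, a blur restricted to the highlight region. The sketch below assumes a Gaussian kernel for the weight matrix and an arbitrary radius; neither value is fixed by the text.

import cv2
import numpy as np

def fill_highlight_blur(image: np.ndarray, highlight_mask: np.ndarray,
                        radius: int = 7, sigma: float = 3.0) -> np.ndarray:
    # Each center point receives the Gaussian-weighted average of the
    # pixel set within `radius`; only highlight pixels are overwritten.
    ksize = 2 * radius + 1
    blurred = cv2.GaussianBlur(image, (ksize, ksize), sigma)
    out = image.copy()
    inside = highlight_mask > 0
    out[inside] = blurred[inside]
    return out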
In another exemplary embodiment, the highlight region filling module includes a to-be-repaired pixel determination unit, a sampling pixel determination unit, a boundary update unit, and an iteration execution unit.
The to-be-repaired pixel determination unit is used for determining the pixels to be repaired on the boundary of the highlight region.
The sampling pixel determination unit is used for calculating the squared pixel-value error between the pixel to be repaired and each pixel in the face image, and for determining the pixel with the minimum squared error as the sampling pixel matching the pixel to be repaired.
The boundary update unit is used for filling the pixel value of the sampling pixel into the pixel to be repaired, removing the pixel to be repaired from the boundary, and re-determining the boundary of the highlight region.
The iteration execution unit is used for iteratively performing the filling of the pixels to be repaired on the boundary of the highlight region and the updating of that boundary, until all pixels in the highlight region are filled.
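The loop below is a minimal, unoptimized sketch of this unit for a three-channel image. It matches each boundary pixel against all pixels outside the highlight region by squared pixel-value error; the matching target is assumed to be the mean of the pixel's already-known neighbours, and a practical implementation might compare whole patches instead.

import cv2
import numpy as np

def fill_highlight_sampling(image: np.ndarray,
                            highlight_mask: np.ndarray) -> np.ndarray:
    out = image.astype(np.float64)                 # H x W x 3
    mask = (highlight_mask > 0).astype(np.uint8)   # 1 inside the region
    kernel = np.ones((3, 3), np.uint8)
    # Candidate sampling pixels: everything outside the highlight region.
    valid = out[mask == 0]
    while mask.any():
        # Pixels to be repaired: masked pixels with an unmasked neighbour.
        boundary = np.argwhere((mask == 1) & (cv2.erode(mask, kernel) == 0))
        if boundary.size == 0:
            break
        for y, x in boundary:
            patch = out[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            known = mask[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2] == 0
            if not known.any():
                continue
            target = patch[known].mean(axis=0)
            # Sampling pixel: minimum squared pixel-value error vs target.
            errors = ((valid - target) ** 2).sum(axis=1)
            out[y, x] = valid[np.argmin(errors)]
            mask[y, x] = 0   # filled: drop it from the boundary
    return out.astype(np.uint8)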
In another exemplary embodiment, the face image processing apparatus further includes a face recognition module configured to perform face recognition on the face image to obtain the face region and the face feature region in the face image, where the face feature region includes at least an eye region, an eyebrow region, and a mouth region, and the face region in the face image corresponds to the face region in the face grayscale map.
In another exemplary embodiment, the highlight region determination module includes a face feature region determination unit and a target region determination unit.
The face feature region determination unit is used for determining the corresponding face feature region in the face region of the face grayscale map according to the face region and the face feature region in the face image.
The target region determination unit is used for determining, in the image region of the face region of the face grayscale map other than the face feature region, a target region whose gray value is greater than the second gray threshold.
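A short sketch of this search logic: over-bright pixels are looked for only inside the face region and outside the face feature regions. The mask representation and the example value of the second gray threshold are assumptions.

import numpy as np

def target_region(gray: np.ndarray, face_mask: np.ndarray,
                  feature_mask: np.ndarray,
                  second_gray_threshold: int = 230) -> np.ndarray:
    # face_mask / feature_mask: non-zero inside the face region and the
    # eye/eyebrow/mouth regions respectively. Returns a boolean map of
    # the target region.
    searchable = (face_mask > 0) & (feature_mask == 0)
    return searchable & (gray > second_gray_threshold)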
It should be noted that the apparatus provided in the foregoing embodiments and the method provided in the foregoing embodiments belong to the same concept; the specific manner in which each module and unit performs its operations has been described in detail in the method embodiments and is not repeated here.
Another aspect of the present application provides a face image processing device, which includes a processor and a memory, where the memory stores computer readable instructions that, when executed by the processor, implement the face image processing method described above.
FIG. 16 is a schematic structural diagram of a face image processing device according to an exemplary embodiment.
It should be noted that the face image processing device is merely an example adapted to the present application and should not be considered to limit the scope of use of the application in any way. Nor can the face image processing device be interpreted as needing to rely on, or having to include, one or more components of the exemplary face image processing device shown in FIG. 16.
As shown in FIG. 16, in an exemplary embodiment, the face image processing device includes a processing component 401, a memory 402, a power component 403, a multimedia component 404, an audio component 405, a sensor component 407, and a communication component 408. Not all of these components are necessary; the face image processing device may add or omit components according to its own functional requirements, which is not limited in this embodiment.
The processing component 401 generally controls the overall operation of the face image processing device, such as operations associated with display, data communication, and log data processing. The processing component 401 may include one or more processors 409 to execute instructions so as to perform all or part of the operations described above. Furthermore, the processing component 401 may include one or more modules that facilitate interaction between it and the other components; for example, it may include a multimedia module to facilitate interaction with the multimedia component 404.
The memory 402 is configured to store various types of data to support operation of the face image processing device, such as instructions for any application or method operating on the device. The memory 402 stores one or more modules configured to be executed by the one or more processors 409 to perform all or part of the steps of the face image processing method described in the foregoing embodiments.
The power component 403 supplies power to the various components of the face image processing device. The power component 403 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device.
The multimedia component 404 includes a screen that provides an output interface between the face image processing device and the user. In some embodiments, the screen may include a TP (Touch Panel) and an LCD (Liquid Crystal Display). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the panel; the touch sensors may sense not only the boundary of a touch or slide action but also its duration and pressure.
The audio component 405 is configured to output and/or input audio signals. For example, the audio component 405 includes a microphone configured to receive external audio signals when the face image processing device is in an operational mode such as a call mode, a recording mode, or a speech recognition mode. In some embodiments, the audio component 405 also includes a speaker for outputting audio signals.
The sensor component 407 includes one or more sensors that provide state assessments of various aspects of the face image processing device. For example, the sensor component 407 may detect the on/off state of the device, and may also detect temperature changes of the device.
The communication component 408 is configured to facilitate wired or wireless communication between the face image processing device and other devices. The face image processing device can access a wireless network based on a communication standard, such as Wi-Fi (Wireless Fidelity).
It will be appreciated that the arrangement shown in FIG. 16 is merely illustrative, and that the face image processing device may include more or fewer components than shown in FIG. 16, or components different from those shown. Each component shown in FIG. 16 may be implemented in hardware, software, or a combination of the two.
Another aspect of the present application provides a computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the face image processing method described above. The computer-readable storage medium may be included in the face image processing device described in the foregoing embodiments, or it may exist separately without being incorporated into that device.
The above description covers only preferred exemplary embodiments of the present application and is not intended to limit its embodiments. Those skilled in the art can readily make changes and modifications within the main concept and spirit of the present application, so the protection scope of the application shall be that defined by the claims.

Claims (15)

1. A face image processing method is characterized by comprising the following steps:
determining a first gray threshold value for distinguishing background pixels and foreground pixels in a face image according to the gray value of each pixel in the face image;
determining pixels with gray values smaller than the first gray threshold value in the face image as the background pixels, and determining pixels with gray values larger than the first gray threshold value as the foreground pixels;
generating a mask corresponding to the face image according to the background pixels and the foreground pixels;
fusing the mask and the face image to obtain a target face image;
in the target face image, the brightness of the pixel point corresponding to the background pixel is the same as that of the background pixel, and the brightness of the pixel point corresponding to the foreground pixel is lower than that of the foreground pixel.
2. The method according to claim 1, wherein the generating a mask corresponding to the face image comprises:
generating a blank mask based on the face image, wherein the blank mask comprises background mask pixels corresponding to the background pixels and foreground mask pixels corresponding to the foreground pixels;
determining channel values of the background mask pixels and the foreground mask pixels in each color channel;
and correspondingly filling the channel values of the background mask pixels and the foreground mask pixels in each color channel into the blank mask to obtain a mask corresponding to the face image.
3. The method according to claim 2, wherein the channel value of the background mask pixel in each color channel is half of the highest-order channel value in the color mode corresponding to the face image, and the channel value of the foreground mask pixel in each color channel is lower than half of the highest-order channel value.
4. The method of claim 3, wherein determining the channel values of the foreground mask pixels in the respective color channels comprises:
and calculating the channel value of the foreground mask pixel in each color channel according to the channel value of the background mask pixel in each color channel, the gray value of the foreground pixel corresponding to the foreground mask pixel, the highest-order channel value and the first gray threshold value.
5. The method according to claim 3 or 4, wherein the fusing the mask and the face image to obtain a target face image comprises:
respectively acquiring normalized channel values of each pixel in the face image and the mask in different color channels, wherein the normalized channel values are the ratio of the channel values to the highest-order channel values;
performing fusion calculation on the normalized channel values, in each color channel, of target pixels located at the same position in the face image and in the mask, to obtain the fused channel value of the target pixel in each color channel;
and updating the channel value of the target pixel in each color channel in the face image according to the fusion channel value of the target pixel in each color channel to obtain the target face image.
6. The method of claim 1, wherein determining a first gray threshold for distinguishing background pixels from foreground pixels in the face image according to gray values of respective pixels in the face image comprises:
distinguishing background pixels and foreground pixels in the face image according to a preset gray threshold;
calculating a background pixel ratio, a foreground pixel ratio, a background pixel average gray value and a foreground pixel average gray value corresponding to the face image according to the number of background pixels and the number of foreground pixels contained in the face image, and the gray value of the background pixels and the gray value of the foreground pixels;
calculating a square value of the difference between the average gray value of the background pixels and the average gray value of the foreground pixels, and calculating the product of the occupation ratio of the foreground pixels, the occupation ratio of the background pixels and the square value to obtain the inter-class variance of the face image;
and traversing the gray threshold, and determining the gray threshold corresponding to the maximum inter-class variance as the first gray threshold.
7. The method of claim 1, wherein before determining the first gray threshold for distinguishing background pixels from foreground pixels in the face image according to gray values of respective pixels in the face image, the method further comprises:
converting the face image into a face gray image;
determining a target region with a gray value larger than a preset second gray threshold value in a face region of the face gray image, and determining a highlight region corresponding to the target region in the face image;
filling the highlight region with surrounding pixels of the highlight region.
8. The method of claim 7, wherein the filling the highlight region with surrounding pixels of the highlight region comprises:
determining an isolux line corresponding to the boundary of the highlight region;
and transmitting the surrounding pixels of the highlight region into the highlight region along the direction of the isolux line, so as to fill the highlight region.
9. The method of claim 7, wherein the filling the highlight region with surrounding pixels of the highlight region comprises:
respectively taking each pixel in the highlight area as a central point, and determining a pixel set located in a preset radius of each central point and a weight matrix corresponding to the pixel set;
calculating the pixel value of the central point according to the pixel value set corresponding to the pixel set and the weight matrix;
filling the pixel values to the center point.
10. The method of claim 7, wherein the filling the highlight region with surrounding pixels of the highlight region comprises:
determining pixels to be repaired on the boundary of the highlight area;
determining a pixel corresponding to the minimum pixel value square error as a sampling pixel matched with the pixel to be repaired by calculating the pixel value square error of the pixel to be repaired and each pixel in the face image;
filling the pixel value of the sampling pixel into the pixel to be repaired, removing the pixel to be repaired from the boundary, and re-determining the boundary of the highlight area;
and iteratively executing pixel value filling of the pixel to be repaired in the boundary of the highlight area and updating of the boundary of the highlight area until all pixels in the highlight area are filled.
11. The method of claim 7, wherein prior to converting the face image into a face grayscale image, the method further comprises:
and carrying out face recognition on the face image to obtain a face region and a face characteristic region in the face image, wherein the face characteristic region at least comprises an eye region, an eyebrow region and a mouth region, and the face region in the face image corresponds to the face region in the face gray image.
12. The method according to claim 11, wherein the determining a target region with a gray value greater than a preset second gray threshold value in the face region of the face gray map comprises:
determining a corresponding face characteristic region in the face region of the face gray level image according to the face region and the face characteristic region in the face image;
and determining a target area with a gray value larger than the second gray threshold value aiming at the image area except the face characteristic area in the face area of the face gray image.
13. A face image processing apparatus, comprising:
the gray value acquisition module is used for determining a first gray threshold value for distinguishing background pixels and foreground pixels in the face image according to the gray value of each pixel in the face image;
a foreground-background pixel determination module, configured to determine, as the background pixel, a pixel in the face image whose grayscale value is smaller than the first grayscale threshold, and determine, as the foreground pixel, a pixel in the face image whose grayscale value is larger than the first grayscale threshold;
the mask generating module is used for generating a mask corresponding to the face image according to the background pixels and the foreground pixels;
the image fusion module is used for fusing the mask and the face image to obtain a target face image; in the target face image, the brightness of the pixel point corresponding to the background pixel is the same as that of the background pixel, and the brightness of the pixel point corresponding to the foreground pixel is lower than that of the foreground pixel.
14. A face image processing device, comprising:
a memory storing computer readable instructions;
a processor configured to read the computer readable instructions stored in the memory to perform the method of any one of claims 1-12.
15. A computer-readable storage medium having computer-readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the method of any one of claims 1-12.
CN202010073356.3A 2020-01-21 2020-01-21 Face image processing method, device, equipment and computer readable storage medium Active CN111275648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010073356.3A CN111275648B (en) 2020-01-21 2020-01-21 Face image processing method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111275648A true CN111275648A (en) 2020-06-12
CN111275648B CN111275648B (en) 2024-02-09

Family

ID=71001178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010073356.3A Active CN111275648B (en) 2020-01-21 2020-01-21 Face image processing method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111275648B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080143739A1 (en) * 2006-12-13 2008-06-19 Harris Jerry G Method and System for Dynamic, Luminance-Based Color Contrasting in a Region of Interest in a Graphic Image
CN102013006A (en) * 2009-09-07 2011-04-13 泉州市铁通电子设备有限公司 Method for automatically detecting and identifying face on the basis of backlight environment
CN108875759A (en) * 2017-05-10 2018-11-23 华为技术有限公司 A kind of image processing method, device and server
CN107301405A (en) * 2017-07-04 2017-10-27 上海应用技术大学 Method for traffic sign detection under natural scene
CN107454315A (en) * 2017-07-10 2017-12-08 广东欧珀移动通信有限公司 The human face region treating method and apparatus of backlight scene
CN107194900A (en) * 2017-07-27 2017-09-22 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and mobile terminal
CN107451969A (en) * 2017-07-27 2017-12-08 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN107633485A (en) * 2017-08-07 2018-01-26 百度在线网络技术(北京)有限公司 Face's luminance regulating method, device, equipment and storage medium
CN107845080A (en) * 2017-11-24 2018-03-27 信雅达系统工程股份有限公司 Card image enhancement method
CN108898546A (en) * 2018-06-15 2018-11-27 北京小米移动软件有限公司 Face image processing process, device and equipment, readable storage medium storing program for executing
US20190385290A1 (en) * 2018-06-15 2019-12-19 Beijing Xiaomi Mobile Software Co., Ltd. Face image processing method, device and apparatus, and computer-readable storage medium
CN109191410A (en) * 2018-08-06 2019-01-11 腾讯科技(深圳)有限公司 A kind of facial image fusion method, device and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
N. M. KWOK et al.: "Intensity-based gain adaptive unsharp masking for image contrast enhancement", 2012 5th International Congress on Image and Signal Processing, pages 529-533.
XU Feng: "Research on Illumination Compensation Methods in Face Recognition and Their FPGA Implementation", China Master's Theses Full-text Database, Information Science and Technology, pages 138-1859.
GUO Hongjian et al.: "Face segmentation technology in color images with complex backgrounds", Computer Engineering and Applications, no. 35, pages 73-76.

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822806A (en) * 2020-06-19 2021-12-21 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113822806B (en) * 2020-06-19 2023-10-03 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN112053389A (en) * 2020-07-28 2020-12-08 北京迈格威科技有限公司 Portrait processing method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN111275648B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
US11948282B2 (en) Image processing apparatus, image processing method, and storage medium for lighting processing on image using model data
CN111369644A (en) Face image makeup trial processing method and device, computer equipment and storage medium
CN110443747A (en) Image processing method, device, terminal and computer readable storage medium
JP6576083B2 (en) Image processing apparatus, image processing method, and program
CN105141841B (en) Picture pick-up device and its method
CN107396079B (en) White balance adjustment method and device
CN113610723B (en) Image processing method and related device
CN111275648B (en) Face image processing method, device, equipment and computer readable storage medium
CN109005343A (en) Control method, device, imaging device, electronic equipment and readable storage medium storing program for executing
WO2021128593A1 (en) Facial image processing method, apparatus, and system
CN112686820A (en) Virtual makeup method and device and electronic equipment
US10621769B2 (en) Simplified lighting compositing
CN111597963B (en) Light supplementing method, system and medium for face in image and electronic equipment
CN111901519B (en) Screen light supplement method and device and electronic equipment
JP2004133919A (en) Device and method for generating pseudo three-dimensional image, and program and recording medium therefor
JP6896811B2 (en) Image processing equipment, image processing methods, and programs
CN110473156B (en) Image information processing method and device, storage medium and electronic equipment
CN116055895B (en) Image processing method and device, chip system and storage medium
CN107105167B (en) Method and device for shooting picture during scanning question and terminal equipment
CN115439577A (en) Image rendering method and device, terminal equipment and storage medium
CN114359021A (en) Processing method and device for rendered picture, electronic equipment and medium
CN113592753A (en) Image processing method and device based on industrial camera shooting and computer equipment
WO2021069282A1 (en) Perceptually improved color display in image sequences on physical displays
JP2002010283A (en) Display method and processor for face image
Shih et al. Enhancement and speedup of photometric compensation for projectors by reducing inter-pixel coupling and calibration patterns

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40024356)
SE01 Entry into force of request for substantive examination
GR01 Patent grant