CN114596214A - Face whitening image processing method, device, medium and equipment

Face whitening image processing method, device, medium and equipment

Info

Publication number
CN114596214A
CN114596214A
Authority
CN
China
Prior art keywords
image
color
lookup table
color value
target image
Prior art date
Legal status
Pending
Application number
CN202111470958.3A
Other languages
Chinese (zh)
Inventor
李晓帆
徐子昱
李美娜
娄心怡
宋莹莹
Current Assignee
Soyoung Technology Beijing Co Ltd
Original Assignee
Soyoung Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Soyoung Technology Beijing Co Ltd filed Critical Soyoung Technology Beijing Co Ltd
Publication of CN114596214A

Classifications

    • G06T5/90
    • G06T5/77
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The present disclosure relates to the field of image processing technologies, and in particular to a face whitening image processing method, apparatus, device, and medium. The method includes: extracting a target image; searching for corresponding color values in a lookup table according to the color values of the pixel points in the target image, and replacing the color values of the pixel points in the target image with the found color values; performing at least one fusion of the original color values of the pixel points in the target image with the replaced color values; and whitening the target image according to the fused target image and a preset fine-tuning weight. By applying LUT (lookup table) technology to adjust the brightness of the facial skin image and replace its colors, the method and apparatus achieve a more realistic whitening effect. Local whitening optimization is also performed on the user's teeth, so that the user's whitening needs are better met.

Description

Face whitening image processing method, device, medium and equipment
The present application claims priority to the Chinese patent application entitled "A method, apparatus, medium, and device for processing a face-whitening image", filed with the Chinese Patent Office on December 3, 2020 with application number 202011396247.1, which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to the field of image processing technologies, and more particularly, to a method, an apparatus, a medium, and a device for processing a face-whitening image.
Background
Portrait whitening is a retouching operation frequently performed by beauty lovers. Manual retouching requires first masking out the skin area of the portrait, then adjusting the skin color, and finally smoothing the masked edge area so that the transition between the skin area and other areas is coordinated and natural. Such retouching requires a certain level of skill and consumes a great deal of time, which is unfriendly to most beauty lovers.
At present, images are generally whitened with a filter, that is, by adjusting the color of the entire image. However, this approach is difficult to tailor to the skin area, affects the color of the image background, produces a strong filter look, and rarely achieves the desired retouching effect.
Disclosure of Invention
The present disclosure aims to solve the technical problems in the prior art that whitening the entire image produces a strong filter look and a poor whitening effect. To this end, the present disclosure provides a face whitening image processing method, including:
extracting a target image;
searching corresponding color values in a lookup table according to the color values of the pixel points in the target image, and replacing the color values of the pixel points in the target image with the searched color values;
performing at least one fusion processing on the original color values of the pixel points in the target image and the replaced color values;
and performing whitening treatment on the target image according to the target image subjected to the fusion treatment and a preset fine tuning weight.
Further, the target image data is a facial skin image, the searching for the corresponding color value in the lookup table according to the color value of the pixel point in the target image, and replacing the color value of the pixel point in the target image with the searched color value includes:
and performing light and shade replacement on the facial skin image through a first lookup table to obtain replaced image data, wherein the first lookup table is a one-dimensional color lookup table gradually changing from black to white.
Further, the performing a shading replacement on the facial skin image through the first lookup table to obtain replaced image data includes:
acquiring color values of pixel points from the facial skin image;
determining the color value as a one-dimensional coordinate to be searched;
looking up a color value at the one-dimensional coordinate in the first lookup table;
and replacing the color value of the pixel point in the facial skin image with the color value at the one-dimensional coordinate.
Further, the at least one fusion processing is performed on the original color values of the pixel points in the target image and the color values after replacement, including:
and carrying out weighted fusion on the original color values of the pixel points in the facial skin image and the color values after replacement.
Further, the whitening processing is performed on the target image according to the target image after the fusion processing and a preset fine tuning weight, and includes:
and according to a preset fine tuning weight, carrying out weighted fusion on the color values of the pixel points in the fused facial skin image and the color values of the pixel points in the original facial skin image.
Further, after performing whitening processing on the target image according to the target image after the fusion processing and a preset fine tuning weight, the method further includes:
and carrying out color adjustment on the whitened facial skin image by utilizing a second lookup table, wherein the second lookup table is a two-dimensional color lookup table formed by splicing a plurality of color images, and the color images in the second lookup table are arranged in a sequence of gradually changing from dark to bright from left to right and from top to bottom.
Further, the performing color adjustment on the whitened facial skin image by using a second lookup table includes:
acquiring RGB three-channel color values of pixel points from the face skin image after whitening treatment;
according to the RGB three-channel color value of the pixel point, inquiring the corresponding three-channel color value in the second lookup table;
and replacing the three-channel color values of the pixel points in the face skin image after the whitening treatment with the inquired three-channel color values.
Further, the method further comprises:
acquiring a face image to be processed, and identifying and positioning key point data in the face image;
determining a mouth region from the face image according to the key point data;
acquiring a template material, and fitting the template material into a blank texture in an index mode according to the pixel point coordinates of the mouth area;
and carrying out tooth whitening treatment on the mouth region in the face image corresponding to the template material based on the texture data of the template material.
Further, the tooth whitening processing is performed on the mouth region in the face image corresponding to the template material based on the texture data of the template material, and the tooth whitening processing includes:
based on the texture data of the template material, inquiring a color value corresponding to each pixel point in the mouth area in a third lookup table;
and respectively replacing the current color value of each pixel point of the mouth area in the face image with the color value corresponding to each searched pixel point.
Further, the querying, in a third lookup table, a color value corresponding to each pixel point in the mouth region based on the texture data of the template material includes:
acquiring a channel value of a preset color channel of a first pixel point from the blank texture attached with the template material, wherein the first pixel point is any one pixel point in the template material;
calculating a first coordinate and a second coordinate corresponding to the first pixel point according to the acquired channel value;
determining a first color value of an RGB three-channel corresponding to the first coordinate and a second color value of the RGB three-channel corresponding to the second coordinate;
respectively inquiring a third color value corresponding to the first color value and a fourth color value corresponding to the second color value in a third lookup table;
and determining the found average color value of the third color value and the fourth color value as the whitening processed color value corresponding to the first pixel point.
To achieve the above technical object, the present disclosure also provides a face whitening image processing apparatus including:
the data extraction module is used for extracting a target image;
the data adjusting module is used for searching corresponding color values in a lookup table according to the color values of the pixel points in the target image, and replacing the color values of the pixel points in the target image with the searched color values;
the fusion adjusting module is used for performing at least one fusion processing on the original color values of the pixel points in the target image and the replaced color values;
and the fine tuning processing module is used for carrying out whitening processing on the target image according to the target image subjected to the fusion processing and a preset fine tuning weight.
To achieve the above technical objects, the present disclosure also provides a computer storage medium having a computer program stored thereon, the computer program being executed by a processor to implement the steps of the above-mentioned face whitening image processing method.
To achieve the above technical objective, the present disclosure also provides an electronic device including a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the above-mentioned face whitening image processing method when executing the computer program.
The beneficial effects of the present disclosure are as follows:
the present disclosure provides a method, an apparatus, a medium, and a device for processing a face-whitening image, which brightens a face skin image through a first lookup table, and performs color adjustment on the face skin image through a second lookup table, thereby realizing only whitening of a face skin region without affecting the color of the background of the entire image and changing the color of eyes, hair, eyebrows, lips, and other parts. And extracting a mouth region from the face image, and performing tooth whitening treatment on the mouth region only through the third lookup table without influencing other regions of the face. The whitening treatment of the skin area and the tooth area is realized, so that the whitened image has no filter sense, the whitening effect is good, and the vision is more natural.
Starting from users' actual needs, LUT technology is applied to adjust the brightness and replace the colors of facial images taken by users, achieving a more realistic whitening effect. At the same time, local whitening optimization is performed on the user's teeth, better meeting the user's whitening needs.
Drawings
Fig. 1 illustrates a flowchart of a method for processing a face-whitening image according to an embodiment of the present disclosure;
fig. 2 is another flow chart diagram illustrating a face whitening image processing method provided by an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a first lookup table provided by an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a second lookup table as provided by embodiments of the present disclosure;
FIG. 5 shows a schematic diagram of color lookup in a second lookup table provided by an embodiment of the present disclosure;
fig. 6 shows a schematic flow chart of a tooth whitening treatment provided by an embodiment of the present disclosure;
FIG. 7 illustrates a schematic diagram of facial keypoint data provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram illustrating the fitting of template material to a blank texture map by way of indexing according to an embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of a third lookup table provided by an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram illustrating a face whitening image processing apparatus provided by an embodiment of the present disclosure;
fig. 11 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
Various structural schematics according to embodiments of the present disclosure are shown in the figures. The figures are not drawn to scale, wherein certain details are exaggerated and possibly omitted for clarity of presentation. The shapes of the various regions, layers and their relative sizes, positional relationships are shown in the drawings as examples only, and in practice deviations due to manufacturing tolerances or technical limitations are possible, and a person skilled in the art may additionally design regions/layers with different shapes, sizes, relative positions according to the actual needs.
The present disclosure relates to the interpretation of terms:
LUT: LUT stands for Look-Up Table, which is essentially a RAM. Data is written into the RAM in advance; each time a signal is input, it is equivalent to inputting an address, looking up the table, finding the content corresponding to that address, and outputting it.
LUTs have a wide range of applications. For example, an LUT can serve as a mapping table for pixel gray values: the actually sampled gray value is transformed into another corresponding gray value through a certain transformation, such as thresholding, inversion, binarization, contrast adjustment, or a linear transformation, so as to highlight the useful information in the image and enhance its optical contrast. Many PC acquisition cards have 8/10/12/16 or even 32-bit LUTs, and exactly which transformations are performed in the LUT is defined by software. An important use of an LUT is to work with the high-bit-depth display functions of an ordinary display, so that a wide color gamut (generally meaning wider than sRGB) that the display cannot natively show can be simulated on it as closely as possible. The effect simulated by an LUT is only a reference, a rough impression used while retouching; the best and most complete presentation still requires highly consistent profiles for the display, the graphics card, and the material itself, the so-called "hard decoding".
The full name of LUT is Look-Up Table; it is equivalent to a discrete function: given an input value, an output value is obtained by table lookup. Lookup tables are used in many places. In color grading they are called color lookup tables, with three input components, denoted Rin, Gin, Bin, and three output components, denoted Rout, Gout, Bout. Inputs and outputs correspond one to one, and for each input value the system finds the output value according to the correspondence recorded in the lookup table, thereby completing the color conversion.
By lookup method, LUTs can be divided into 1D LUTs and 3D LUTs; the conversion formulas are as follows:
1.1 1D LUT:
Rout=FLUT(Rin)
Gout=FLUT(Gin)
Bout=FLUT(Bin)
As the formulas show, the R, G, and B data in a 1D LUT are independent of one another: a 1D LUT can only affect the brightness of each of R, G, and B individually, and can control the gamma value to balance the RGB colors.
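For illustration, a 1D LUT applied independently to each channel can be sketched as follows (assuming 8-bit channels and a numpy array representation; the function name, variable names, and the gamma curve are illustrative only):

```python
import numpy as np

def apply_1d_lut(image_rgb: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Apply a 1D LUT independently to each channel.

    image_rgb: uint8 array of shape (H, W, 3).
    lut: uint8 array of shape (256,) mapping an input level to an output level.
    """
    return lut[image_rgb]  # fancy indexing performs the per-value lookup

# Illustrative 1D LUT: a gamma curve that adjusts per-channel brightness.
gamma = 0.8
lut = (np.power(np.arange(256) / 255.0, gamma) * 255.0).astype(np.uint8)
# brightened = apply_1d_lut(image, lut)
```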
For a system with a color depth of 10 bits, the amount of data a 1D LUT contains is small.
1.2 3D LUT:
Rout = FLUT(Rin, Gin, Bin)
Gout = FLUT(Rin, Gin, Bin)
Bout = FLUT(Rin, Gin, Bin)
As the formulas show, the influence of a 3D LUT is far broader: each RGB color component depends on all three input components, so a 3D LUT can affect hue, saturation, brightness, and nearly every other aspect of the picture.
For a system with a color depth of 10 bits, the amount of data involved is huge, so a 3D LUT is usually stored by enumerating nodes, with the values between two nodes computed by interpolation; the number of nodes is therefore a measure of a 3D LUT's accuracy. The maximum number of nodes supported by current color management systems is 257.
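A minimal sketch of looking up a node-enumerated 3D LUT is given below (nearest-node lookup only; real systems interpolate between nodes, e.g. trilinearly, and the array layout and names here are assumptions):

```python
import numpy as np

def apply_3d_lut(image_rgb: np.ndarray, lut3d: np.ndarray) -> np.ndarray:
    """Look up each pixel in an enumerated 3D LUT (nearest node, no interpolation).

    image_rgb: uint8 array (H, W, 3).
    lut3d: float array (N, N, N, 3) with node values normalized to [0, 1];
           lut3d[r, g, b] holds the output color for that node.
    """
    n = lut3d.shape[0]
    idx = np.round(image_rgb.astype(np.float32) / 255.0 * (n - 1)).astype(int)
    out = lut3d[idx[..., 0], idx[..., 1], idx[..., 2]]
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```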
Example one:
as shown in fig. 1, the present disclosure provides a face whitening image processing method, including:
S101: extracting a target image;
S102: searching corresponding color values in a lookup table according to the color values of the pixel points in the target image, and replacing the color values of the pixel points in the target image with the searched color values;
S103: performing at least one fusion processing on the original color values of the pixel points in the target image and the replaced color values;
S104: performing whitening treatment on the target image according to the target image subjected to the fusion treatment and a preset fine tuning weight.
The target image is specifically a facial skin image, and in step S101, a face image to be processed may be first acquired, all face key points in the face image are identified, and the facial skin image is extracted from the face image according to the identified face key points.
As shown in fig. 2, the S102 specifically includes S1021: and performing light and shade replacement on the facial skin image through a first lookup table to obtain replaced image data.
It should be noted that, the first lookup table referred to in this disclosure is a one-dimensional color lookup table gradually changing from black to white, as shown in fig. 3.
Specifically, for each pixel point in the facial skin image extracted in S101, a color value of the pixel point is obtained from the facial skin image, and the color value of the pixel point is a numerical value between 0 and 255. The color value is determined as a one-dimensional coordinate to be looked up. The color value at the one-dimensional coordinate is looked up in a first look-up table. And replacing the color value of the pixel point in the facial skin image with the found color value at the one-dimensional coordinate.
For example, if the color value of a pixel in the facial skin image is x, the corresponding one-dimensional coordinate is (x, 1.0). Look up the color value y at coordinate (x, 1.0) in the first lookup table. And replacing the color value of the pixel point in the facial skin image by y from x.
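A minimal sketch of this light-and-shade replacement, assuming the first lookup table is stored as a one-pixel-high black-to-white gradient strip and the color value serves as the normalized horizontal coordinate (names are illustrative):

```python
import numpy as np

def replace_with_first_lut(skin: np.ndarray, first_lut_strip: np.ndarray) -> np.ndarray:
    """Replace each color value with the value sampled from the black-to-white strip.

    skin: uint8 array holding the facial-skin color values, shape (H, W) or (H, W, 3).
    first_lut_strip: uint8 array of shape (W_lut,), left end black, right end white.
    """
    w = first_lut_strip.shape[0]
    # A color value x corresponds to the one-dimensional coordinate (x, 1.0),
    # i.e. the normalized horizontal position along the strip.
    coords = np.round(skin.astype(np.float32) / 255.0 * (w - 1)).astype(int)
    return first_lut_strip[coords]
```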
Further, the S103 specifically includes S1031: and carrying out weighted fusion on the original color values of the pixel points in the facial skin image and the replaced color values.
Specifically, the weighted fusion is performed according to the following formula:
C2 = C × a + C1 × (1 - a), where C2 represents the color value of the fused pixel point, C1 represents the replaced color value of the pixel point in the facial skin image, namely the color value found for that pixel point in the first lookup table, and C represents the original color value of the pixel point in the facial skin image. a is a weighting coefficient and may take values such as 0.5 or 0.6.
In the above manner, the color value of each pixel point in the facial skin image is replaced with the corresponding color value found in the first lookup table. The replaced facial skin image is then fused with the original facial skin image according to the weighted-fusion formula, so that the color value of each pixel point in the fused facial skin image is the weighted sum of the color value after replacement and the color value before replacement. Replacing the light and shade of the facial skin image through the first lookup table thus raises its brightness and achieves the effect of brightening the facial skin.
After the above fusion processing, the fused facial skin image is further fused according to a fine-tuning weight. S104 specifically includes S1041: according to the preset fine-tuning weight, performing weighted fusion of the color values of the pixel points in the fused facial skin image with the color values of the pixel points in the original facial skin image.
Specifically, the fusion is performed according to the following formula:
C3 = C × (1.0 - D) + C2 × D, where D represents the preset fine-tuning weight, C2 represents the color value of the pixel point fused in S103, and C3 represents the color value of the pixel point after the fusion in S104.
After the facial skin image is brightened through the first lookup table, the colors of the original facial skin image and the brightened colors are fused according to the preset fine-tuning weight, so that the facial skin image is whitened to a controllable degree. The preset fine-tuning weight is equivalent to the whitening degree: its value ranges from 0 to 1, where 0 means no whitening and 1 means maximum whitening.
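The two fusion steps above can be sketched together as follows (a simplified illustration of the formulas C2 = C × a + C1 × (1 - a) and C3 = C × (1.0 - D) + C2 × D; names and default values are illustrative):

```python
import numpy as np

def fuse_and_fine_tune(original: np.ndarray, replaced: np.ndarray,
                       a: float = 0.5, d: float = 0.8) -> np.ndarray:
    """Weighted fusion followed by fine-tuning, per the formulas above.

    original: C,  the original facial-skin color values (float, range 0-255).
    replaced: C1, the color values after replacement via the first lookup table.
    a: weighting coefficient of the first fusion (e.g. 0.5 or 0.6).
    d: preset fine-tuning weight in [0, 1]; 0 = no whitening, 1 = maximum whitening.
    """
    c2 = original * a + replaced * (1.0 - a)   # C2 = C*a + C1*(1 - a)
    c3 = original * (1.0 - d) + c2 * d         # C3 = C*(1.0 - D) + C2*D
    return np.clip(c3, 0.0, 255.0)
```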
After the facial skin image is whitened according to the preset fine-tuning weight, as shown in fig. 2, the method further includes S105: performing color adjustment on the whitened facial skin image using the second lookup table. It should be noted that the second lookup table referred to in the present disclosure is a two-dimensional color lookup table formed by splicing multiple color images, for example a lookup table formed by color images arranged four by four. The upper-left vertex of the color image in the upper-left corner of the second lookup table is black, the lower-right vertex of the color image in the lower-right corner is white, and the color images are arranged in the second lookup table from dark to bright, from left to right and from top to bottom, as shown in fig. 4.
For each pixel point in the whitened facial skin image, the RGB three-channel color values of the pixel point are acquired from the whitened facial skin image, and the corresponding three-channel color values are queried in the second lookup table according to those RGB values. That is, the corresponding R-channel value is looked up in the second lookup table according to the pixel point's R-channel value, the corresponding G-channel value according to its G-channel value, and the corresponding B-channel value according to its B-channel value.
As shown in fig. 5, RGB_output = LUT(R_input, G_input, B_input). For input RGB three-channel values (50, 50, 50), the output three-channel values are (70, 70, 70); for input (50, 70, 50), the output is (85, 90, 70); and for input (50, 70, 60), the output is (90, 95, 80).
For each pixel point in the face skin image after whitening processing, a corresponding three-channel value is inquired in the second lookup table through the method, and then the current three-channel color value of the pixel point in the face skin image after whitening processing is replaced by the inquired three-channel color value.
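One common way to realize such a spliced two-dimensional color lookup table is to unpack the tiled image into a color cube once and then index the cube per pixel. The sketch below assumes a square LUT image whose slices are laid out row-major from dark to bright (the layout details and names are assumptions, not necessarily the exact scheme of fig. 4):

```python
import numpy as np

def tiled_lut_to_cube(lut_image: np.ndarray, tiles: int = 4) -> np.ndarray:
    """Unpack a (tiles x tiles) grid of slices into a color cube.

    lut_image: uint8 array (H, W, 3); the blue level selects a slice and the
               red/green levels index within a slice (row-major slice order assumed).
    Returns an array indexed as cube[b, g, r] -> output RGB.
    """
    h, w, _ = lut_image.shape
    th, tw = h // tiles, w // tiles
    cube = np.zeros((tiles * tiles, th, tw, 3), dtype=lut_image.dtype)
    for b in range(tiles * tiles):
        row, col = divmod(b, tiles)
        cube[b] = lut_image[row * th:(row + 1) * th, col * tw:(col + 1) * tw]
    return cube

def apply_cube(image: np.ndarray, cube: np.ndarray) -> np.ndarray:
    """Replace each pixel's RGB three-channel values with the looked-up values."""
    nb, ng, nr = cube.shape[:3]
    r = np.round(image[..., 0] / 255.0 * (nr - 1)).astype(int)
    g = np.round(image[..., 1] / 255.0 * (ng - 1)).astype(int)
    b = np.round(image[..., 2] / 255.0 * (nb - 1)).astype(int)
    return cube[b, g, r]
```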
Replacing the color values of the pixel points in the whitened facial skin image through the second lookup table further whitens the facial skin image. In addition, the color images arranged in the second lookup table can adjust reds and yellows in the facial skin image, so the whitening effect is better for facial skin that appears reddish or yellowish.
Through the above steps S101-S104 and the color replacement through the second lookup table in S105, the whitening of the facial skin image is completed. The finally processed facial skin image is then merged back with the face image from which it was previously segmented, yielding a face image with whitened facial skin. In this way, only the facial skin is whitened, and the colors of regions such as the eyes, eyebrows, lips, and hair are not changed, making the facial whitening more targeted and its effect more natural.
Example two:
In view of users' growing demand for tooth whitening, and because a large color difference between the tooth area and the facial skin after the above whitening degrades the overall visual effect of the face image, the facial whitening image processing method provided by the present disclosure can also whiten the tooth region. As shown in fig. 6, the tooth whitening treatment is performed through the following steps:
s201: and acquiring a face image to be processed, and identifying and positioning key point data in the face image.
As shown in fig. 7, the key point data in the face image may be key points of the face detected by a preset face key point detection model, including key points of regions such as face contour, eyes, nose, mouth, eyebrows, and the like.
S202: and determining a mouth region from the face image according to the identified key point data.
A mouth region is determined from the face image according to the facial key points located at the mouth among the recognized key point data; the mouth region may or may not contain an image of teeth.
Therefore, before the subsequent tooth whitening is performed, this embodiment of the application may further identify, through a preset image classification model, whether the mouth region in the face image contains an image of teeth. If it does, the subsequent tooth whitening continues; if it does not, the subsequent tooth whitening is skipped, avoiding applying tooth whitening to an image that contains no teeth.
S203: and acquiring a template material, and fitting the template material into the blank texture in an indexing mode according to the pixel point coordinates of the mouth region.
The template material is a mouth image with preset colors, and the preset colors can be pink, red and the like. According to the pixel point coordinates of the mouth region in the face image, the template material is attached to the blank texture in an indexing manner, as shown in fig. 8, so that the position of the template material in the blank texture image is the same as the position of the mouth region in the face image.
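For illustration, the pasting step can be sketched as follows (the bounding-box representation, the RGBA blank texture, and the names are assumptions made for brevity; the patent indexes per-pixel coordinates of the mouth region):

```python
import numpy as np

def fit_template_to_blank(template: np.ndarray, top_left: tuple,
                          frame_shape: tuple) -> np.ndarray:
    """Place the mouth template into a blank texture aligned with the face image.

    template: uint8 array (h, w, 3), the preset-color mouth material.
    top_left: (y0, x0) of the mouth region's bounding box in the face image.
    frame_shape: (H, W) of the face image.
    """
    H, W = frame_shape
    blank = np.zeros((H, W, 4), dtype=np.uint8)     # RGBA blank texture
    y0, x0 = top_left
    h, w = template.shape[:2]
    blank[y0:y0 + h, x0:x0 + w, :3] = template
    blank[y0:y0 + h, x0:x0 + w, 3] = 255            # alpha marks covered pixels
    return blank
```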
And S204, carrying out tooth whitening treatment on the mouth region in the face image corresponding to the template material based on the texture data of the template material.
First, based on the texture data of the template material, the color value corresponding to each pixel point in the mouth region is looked up in the third lookup table. And then, respectively replacing the current color value of each pixel point of the mouth region in the face image with the color value corresponding to each inquired pixel point.
Specifically, a channel value of a preset color channel of a first pixel point is obtained from a blank texture attached with the template material, and the first pixel point is any one pixel point in the template material. The preset color channel may be any one of R, G, B three channels. And calculating a first coordinate and a second coordinate corresponding to the first pixel point according to the acquired channel value. And determining a first color value of the RGB three channels corresponding to the first coordinate and a second color value of the RGB three channels corresponding to the second coordinate. And respectively querying a third color value corresponding to the first color value and a fourth color value corresponding to the second color value in a third lookup table. And determining the average color value of the found third color value and fourth color value as the color value after whitening treatment corresponding to the first pixel point.
And respectively determining the color value after whitening treatment corresponding to each pixel point of the template material in the blank texture map by the above method. And then, for each pixel point of the mouth region of the face image, replacing the current pixel value of the pixel point with a color value after whitening treatment corresponding to the pixel point with the same coordinate in the template material.
To facilitate understanding of the above process of looking up the color value corresponding to a pixel point through the third lookup table, a specific example is described below. Assume the preset color channel is the B channel. The B channel value of the first pixel point is acquired from the blank texture map to which the template material is attached, and the first coordinate and the second coordinate corresponding to the first pixel point are calculated from this B channel value. First, B1 = B/255 × 15 is calculated; then Y = floor(B1 × 0.25) and X = floor(B1) - Y × 4 are calculated, and V = (X, Y) is the first coordinate corresponding to the first pixel point. Y1 = floor(ceil(B1) × 0.25), X1 = ceil(B1) - Y1 × 4, and V1 = (X1, Y1) is the second coordinate corresponding to the first pixel point. The three-channel color value (R, G, B) of the first pixel point is taken as the color value at the first coordinate (X, Y). R1 = R × 0.234375 + 0.0078125 and G1 = G × 0.234375 + 0.0078125 are calculated, and R1, G1, and B1 are taken as the color values at the second coordinate (X1, Y1). Let RG = (R1, G1); then texpos1.xy = V × 0.25 + RG and texpos2.x1y1 = V1 × 0.25 + RG. With texpos1.xy and texpos2.x1y1 as the coordinates to be queried, the color value COLOR at texpos1.xy is queried in the third lookup table as RGBA(texpos1.xy), and the color value COLOR1 at texpos2.x1y1 is queried as RGBA(texpos2.x1y1). (COLOR + COLOR1)/2 is then calculated and taken as the whitened color value corresponding to the first pixel point. Finally, the color value of the pixel point in the mouth region of the face image that has the same coordinates as the first pixel point is modified to (COLOR + COLOR1)/2.
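The worked example above can be sketched in code as follows (assuming shader-style channel values normalized to [0, 1] before the 0.234375/0.0078125 offsets are applied, a square third-lookup-table image, and illustrative names):

```python
import numpy as np
from math import floor, ceil

def third_lut_color(r: int, g: int, b: int, third_lut: np.ndarray) -> np.ndarray:
    """Average of two tile lookups, following the B-channel example above.

    r, g, b: RGB channel values of the first pixel point, each in [0, 255].
    third_lut: uint8 array (S, S, 3) holding the third lookup-table image.
    """
    s = third_lut.shape[0]
    b1 = b / 255.0 * 15.0
    # First and second tile coordinates V = (X, Y) and V1 = (X1, Y1).
    y = floor(b1 * 0.25)
    x = floor(b1) - y * 4
    y1 = floor(ceil(b1) * 0.25)
    x1 = ceil(b1) - y1 * 4
    # In-tile offsets derived from the (normalized) R and G channels.
    r1 = (r / 255.0) * 0.234375 + 0.0078125
    g1 = (g / 255.0) * 0.234375 + 0.0078125
    texpos1 = (x * 0.25 + r1, y * 0.25 + g1)
    texpos2 = (x1 * 0.25 + r1, y1 * 0.25 + g1)

    def sample(tex):
        px = min(int(tex[0] * s), s - 1)
        py = min(int(tex[1] * s), s - 1)
        return third_lut[py, px].astype(np.float32)

    color = sample(texpos1)    # COLOR  = RGBA(texpos1.xy)
    color1 = sample(texpos2)   # COLOR1 = RGBA(texpos2.x1y1)
    return ((color + color1) / 2.0).astype(np.uint8)  # (COLOR + COLOR1) / 2
```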
Based on the facial features of the human face, the mouth region is extracted separately, the extracted mouth region is tinted with the template material, the extracted mouth image is matted out, and color adjustment is performed on the matted mouth image data through the third lookup table. Whitening the tooth area on its own makes the facial whitening more targeted, prevents the tooth whitening from affecting the colors of other facial regions, and improves the facial whitening effect.
It should be noted that the third lookup table of the present disclosure is arranged in the same manner as the second lookup table; for example, both may be lookup tables formed by multiple color images arranged four by four, but the color images in the third lookup table are not the same as those in the second lookup table, as shown in fig. 9. If the third lookup table were applied to the facial whitening scheme described above, the whole picture would appear white and the face could no longer be recognized.
For the face image to be processed, the facial skin is whitened in the first embodiment, and the teeth of the mouth area are whitened in the second embodiment, so that the facial whitening effect of the face image is improved.
The face skin image is brightened through the first lookup table in the application, and the color of the face skin image is adjusted through the second lookup table, so that the whitening treatment is only carried out on the face skin area, the color of the background of the whole image cannot be influenced, and the colors of eyes, hair, eyebrows, lips and other parts cannot be changed. And extracting a mouth region from the face image, and performing tooth whitening treatment on the mouth region only through the third lookup table without influencing other regions of the face. The whitening treatment of the skin area and the tooth area is realized, so that the whitened image has no filter sense, the whitening effect is good, and the vision is more natural.
Example three:
as shown in fig. 10, the present disclosure also provides a face whitening image processing apparatus for performing the face whitening image processing method according to the above embodiments, the apparatus comprising:
a data extraction module 301, configured to extract a target image;
the data adjusting module 302 is configured to search a corresponding color value in a lookup table according to a color value of a pixel point in the target image, and replace the color value of the pixel point in the target image with the searched color value;
the fusion adjusting module 303 is configured to perform at least one fusion processing on the original color values and the replaced color values of the pixels in the target image;
and the fine tuning processing module 304 is configured to perform whitening processing on the target image according to the fused target image and a preset fine tuning weight.
The data extraction module 301 is connected to the data adjustment module 302, the fusion adjustment module 303, and the fine adjustment processing module 304 in sequence.
Further, the target image is a facial skin image, and the data adjusting module 302 specifically includes:
and the light and shade replacement module is used for performing light and shade replacement on the facial skin image through a first lookup table to obtain replaced image data, and the first lookup table is a one-dimensional color lookup table gradually changing from black to white.
The bright-dark replacement module is specifically used for acquiring color values of pixel points from the facial skin image; determining the color value as a one-dimensional coordinate to be searched; querying a color value at a one-dimensional coordinate in a first lookup table; and replacing the color values of the pixel points in the facial skin image with the color values at the one-dimensional coordinates.
And the fusion adjusting module 303 is configured to perform weighted fusion on the original color values of the pixels in the facial skin image and the replaced color values.
And the fine tuning processing module 304 is configured to perform weighted fusion on the color values of the pixels in the fused facial skin image and the color values of the pixels in the original facial skin image according to a preset fine tuning weight.
The device also includes: and the color adjusting module is used for adjusting the color of the face skin image after the whitening treatment by utilizing a second lookup table, the second lookup table is a two-dimensional color lookup table formed by splicing a plurality of colored images, and the colored images in the second lookup table are arranged in a sequence of gradually changing the tone from dark to bright from left to right and from top to bottom.
The color adjusting module is specifically used for acquiring RGB three-channel color values of pixel points from the face skin image after whitening treatment; according to the RGB three-channel color values of the pixel points, inquiring the corresponding three-channel color values in a second lookup table; and replacing the three-channel color values of the pixel points in the face skin image after the whitening treatment with the inquired three-channel color values.
The device also includes: the tooth whitening module is used for acquiring a face image to be processed, and identifying and positioning key point data in the face image; determining a mouth region from the face image according to the key point data; acquiring a template material, and fitting the template material into the blank texture in an indexing mode according to the pixel point coordinates of the mouth region; and carrying out tooth whitening treatment on the mouth region in the face image corresponding to the template material based on the texture data of the template material.
The tooth whitening module is used for inquiring a color value corresponding to each pixel point in the mouth area in a third lookup table based on texture data of the template material; and respectively replacing the current color value of each pixel point of the mouth area in the face image with the color value corresponding to each searched pixel point.
The tooth whitening module is used for acquiring a channel value of a preset color channel of a first pixel point from the blank texture attached with the template material, wherein the first pixel point is any one pixel point in the template material; calculating a first coordinate and a second coordinate corresponding to the first pixel point according to the acquired channel value; determining a first color value of an RGB three channel corresponding to the first coordinate and a second color value of the RGB three channel corresponding to the second coordinate; respectively inquiring a third color value corresponding to the first color value and a fourth color value corresponding to the second color value in a third lookup table; and determining the average color value of the found third color value and fourth color value as the color value after whitening treatment corresponding to the first pixel point.
The face whitening image processing apparatus provided by the embodiments of the present application is based on the same inventive concept as the face whitening image processing method and has the same beneficial effects as the method it adopts, runs, or implements.
Example four:
the present disclosure can also provide a computer storage medium having stored thereon a computer program for implementing the steps of the above-described face whitening image processing method when executed by a processor.
The computer storage medium of the present disclosure may be implemented with a semiconductor memory, a magnetic core memory, a magnetic drum memory, or a magnetic disk memory.
Semiconductor memories are mainly used as the memory elements of computers and come in two types, MOS and bipolar. MOS devices offer high integration and a simple process but lower speed; bipolar devices involve a complex process, high power consumption, and low integration but are fast. With the introduction of NMOS and CMOS, MOS memory came to dominate semiconductor memory. NMOS is fast, e.g., 45 ns for Intel's 1K-bit SRAM; CMOS has low power consumption, and a 4K-bit CMOS static memory has an access time of 300 ns. The semiconductor memories described above are random access memories (RAM), which can be read and written at will during operation. Semiconductor read-only memory (ROM) can be read randomly but cannot be written during operation and is used to store solidified programs and data. ROM is classified into non-rewritable fuse-type ROM and PROM, and rewritable EPROM.
The magnetic core memory has the characteristics of low cost and high reliability, and has more than 20 years of practical use experience. Magnetic core memories were widely used as main memories before the mid 70's. The storage capacity can reach more than 10 bits, and the access time is 300ns at the fastest speed. The typical international magnetic core memory has a capacity of 4 MS-8 MB and an access cycle of 1.0-1.5 mus. After semiconductor memory is rapidly developed to replace magnetic core memory as a main memory location, magnetic core memory can still be applied as a large-capacity expansion memory.
Drum memory, an external memory for magnetic recording. Because of its fast information access speed and stable and reliable operation, although its capacity is smaller and is gradually replaced by disk memory, it is still used as external memory for real-time process control computers and medium and large computers. In order to meet the needs of small and micro computers, subminiature magnetic drums have emerged, which are small, lightweight, highly reliable, and convenient to use.
Magnetic disk memory, an external memory for magnetic recording. It combines the advantages of drum and tape storage, i.e. its storage capacity is larger than that of drum, its access speed is faster than that of tape storage, and it can be stored off-line, so that the magnetic disk is widely used as large-capacity external storage in various computer systems. Magnetic disks are generally classified into two main categories, hard disks and floppy disk memories.
Hard disk memories are of a wide variety. The structure is divided into a replaceable type and a fixed type. The replaceable disk is replaceable and the fixed disk is fixed. The replaceable and fixed magnetic disks have both multi-disk combinations and single-chip structures, and are divided into fixed head types and movable head types. The fixed head type magnetic disk has a small capacity, a low recording density, a high access speed, and a high cost. The movable head type magnetic disk has a high recording density (up to 1000 to 6250 bits/inch) and thus a large capacity, but has a low access speed compared with a fixed head magnetic disk. The storage capacity of a magnetic disk product can reach several hundred megabytes with a bit density of 6250 bits per inch and a track density of 475 tracks per inch. The disk set of the multiple replaceable disk memory can be replaced, so that the disk set has large off-body capacity, large capacity and high speed, can store large-capacity information data, and is widely applied to an online information retrieval system and a database management system.
Example five:
the present disclosure also provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the above-mentioned face whitening image processing method when executing the computer program.
Fig. 11 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 11, the electronic device includes a processor, a storage medium, a memory, and a network interface connected through a system bus. The storage medium of the computer device stores an operating system, a database, and computer-readable instructions; the database may store sequences of control information, and the computer-readable instructions, when executed by the processor, cause the processor to implement a face whitening image processing method. The processor of the electronic device provides computing and control capability and supports the operation of the entire computer device. The memory of the computer device may store computer-readable instructions that, when executed by the processor, cause the processor to perform the face whitening image processing method. The network interface of the computer device is used for connecting and communicating with a terminal. Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of some of the structures associated with the disclosed solution and does not limit the computing devices to which the disclosed solution applies; a particular computing device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The electronic device includes, but is not limited to, a smartphone, a computer, a tablet, a wearable smart device, an artificial intelligence device, a mobile power source, and the like.
The processor may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor is a Control Unit of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (for example, executing remote data reading and writing programs, etc.) stored in the memory and calling data stored in the memory.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connected communication between the memory and at least one processor or the like.
Fig. 11 shows only an electronic device having components, and those skilled in the art will appreciate that the structure shown in fig. 11 does not constitute a limitation of the electronic device, and may include fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor through a power management device, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used to establish a communication connection between the electronic device and another electronic device.
Optionally, the electronic device may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
Further, the computer-usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
The present disclosure provides a method, an apparatus, a medium, and a device for processing a facial whitening image, which perform brightness adjustment and color replacement on a facial image photographed by a user by applying an LUT lookup table technique based on actual needs of the user, thereby achieving a more realistic whitening effect. Meanwhile, local whitening optimization is carried out on the teeth of the user, and the whitening requirement of the user is well met.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (13)

1. A face whitening image processing method, characterized by comprising:
extracting a target image;
searching corresponding color values in a lookup table according to the color values of the pixel points in the target image, and replacing the color values of the pixel points in the target image with the searched color values;
performing at least one fusion processing on the original color values of the pixel points in the target image and the replaced color values;
and performing whitening treatment on the target image according to the target image subjected to the fusion treatment and a preset fine tuning weight.
2. The method of claim 1, wherein the target image is a facial skin image, and the searching for a corresponding color value in a lookup table according to a color value of a pixel in the target image and replacing the color value of the pixel in the target image with the searched color value comprises:
and carrying out light and shade replacement on the facial skin image through a first lookup table to obtain replaced image data, wherein the first lookup table is a one-dimensional color lookup table gradually changing from black to white.
3. The method according to claim 2, wherein the performing shading replacement on the facial skin image by using the first lookup table to obtain replaced image data comprises:
acquiring color values of pixel points from the facial skin image;
determining the color value as a one-dimensional coordinate to be searched;
looking up a color value at the one-dimensional coordinate in the first lookup table;
and replacing the color value of the pixel point in the facial skin image with the color value at the one-dimensional coordinate.
4. The method of claim 2, wherein the at least one fusion process of the original color values of the pixels in the target image and the replaced color values comprises:
and carrying out weighted fusion on the original color values of the pixel points in the facial skin image and the color values after replacement.
5. The method according to claim 2, wherein the whitening of the target image according to the fused target image and a preset fine-tuning weight comprises:
and according to a preset fine tuning weight, carrying out weighted fusion on the color values of the pixel points in the fused facial skin image and the color values of the pixel points in the original facial skin image.
6. The method according to claim 2, wherein after performing whitening processing on the target image according to the target image after the fusion processing and a preset fine tuning weight, the method further comprises:
and carrying out color adjustment on the whitened facial skin image by utilizing a second lookup table, wherein the second lookup table is a two-dimensional color lookup table formed by splicing a plurality of color images, and the color images in the second lookup table are arranged in a sequence of gradually changing from dark to bright from left to right and from top to bottom.
7. The method of claim 6, wherein the color adjusting the whitened facial skin image using a second lookup table comprises:
acquiring RGB three-channel color values of pixel points from the whitened facial skin image;
according to the RGB three-channel color value of the pixel point, inquiring the corresponding three-channel color value in the second lookup table;
and replacing the three-channel color values of the pixel points in the face skin image after the whitening treatment with the inquired three-channel color values.
8. The method according to any one of claims 1-7, further comprising:
acquiring a face image to be processed, and identifying and positioning key point data in the face image;
determining a mouth region from the face image according to the key point data;
acquiring a template material, and fitting the template material into a blank texture in an index mode according to the pixel point coordinates of the mouth area;
and carrying out tooth whitening treatment on the mouth region in the face image corresponding to the template material based on the texture data of the template material.
9. The method according to claim 8, wherein the tooth whitening treatment is performed on the mouth region in the face image corresponding to the template material based on the texture data of the template material, and comprises:
based on the texture data of the template material, inquiring a color value corresponding to each pixel point in the mouth area in a third lookup table;
and respectively replacing the current color value of each pixel point of the mouth area in the face image with the color value corresponding to each searched pixel point.
10. The method of claim 9, wherein the querying a third lookup table for a color value corresponding to each pixel point in the mouth region based on texture data of the template material comprises:
acquiring a channel value of a preset color channel of a first pixel point from the blank texture attached with the template material, wherein the first pixel point is any one pixel point in the template material;
calculating a first coordinate and a second coordinate corresponding to the first pixel point according to the acquired channel value;
determining a first color value of an RGB three-channel corresponding to the first coordinate and a second color value of the RGB three-channel corresponding to the second coordinate;
respectively inquiring a third color value corresponding to the first color value and a fourth color value corresponding to the second color value in a third lookup table;
and determining the found average color value of the third color value and the fourth color value as the whitening processed color value corresponding to the first pixel point.
11. A face whitening image processing apparatus, characterized by comprising:
the data extraction module is used for extracting a target image;
the data adjusting module is used for searching corresponding color values in a lookup table according to the color values of the pixel points in the target image, and replacing the color values of the pixel points in the target image with the searched color values;
the fusion adjusting module is used for performing at least one fusion processing on the original color values of the pixel points in the target image and the replaced color values;
and the fine tuning processing module is used for carrying out whitening processing on the target image according to the target image subjected to fusion processing and a preset fine tuning weight.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the computer program implements the steps of the face whitening image processing method as claimed in any one of claims 1 to 10.
13. A computer storage medium having computer program instructions stored thereon, wherein the program instructions, when executed by a processor, are for implementing the steps corresponding to the face whitening image processing method of any of claims 1 to 10.
CN202111470958.3A 2020-12-03 2021-12-03 Face whitening image processing method, device, medium and equipment Pending CN114596214A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011396247 2020-12-03
CN2020113962471 2020-12-03

Publications (1)

Publication Number Publication Date
CN114596214A true CN114596214A (en) 2022-06-07

Family

ID=81804192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111470958.3A Pending CN114596214A (en) 2020-12-03 2021-12-03 Face whitening image processing method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN114596214A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117547248A (en) * 2024-01-12 2024-02-13 深圳市宗匠科技有限公司 Skin whitening degree analysis method, apparatus, computer device and storage medium
CN117547248B (en) * 2024-01-12 2024-04-19 深圳市宗匠科技有限公司 Skin whitening degree analysis method, apparatus, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination