CN114202482A - Method for removing oil and luster from face image - Google Patents
Method for removing oil and luster from face image
- Publication number: CN114202482A (Application CN202111537764.0A)
- Authority: CN (China)
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06T5/77
- G06T3/04
- G06T5/20 — Image enhancement or restoration by the use of local operators
- G06T5/90
- G06T7/90 — Image analysis; Determination of colour characteristics
- G06T2207/10024 — Color image
- G06T2207/20028 — Bilateral filtering
- G06T2207/30201 — Face
Abstract
The invention relates to a method for removing oil gloss from a face image, comprising the following steps: S1, obtaining a face original image S, performing skin type classification and determining its oil gloss grade; S2, calculating σ_max at each pixel point of S and storing the result as grayscale image I; S3, calculating λ_max at each pixel point of S and storing the result as grayscale image II; S4, filtering grayscale image I using grayscale image II as the guide image to obtain a filtered image; S5, for each pixel of the filtered image, taking the larger of the filtered value and σ_max; S6, repeating steps S4 and S5 until every pixel point converges, then executing the next step; S7, reconstructing a color image from the updated σ_max values to obtain the preprocessed image; and S8, when the oil gloss grade of the preprocessed image is lower than that of S and not more than a preset oil gloss grade threshold, outputting the preprocessed image as image D, otherwise returning to S2 and updating S. Experiments show that the method has an obvious oil-removal effect.
Description
Technical Field
The invention relates to a beautifying method, in particular to a method for removing oil gloss from a face image.
Background
With the rapid iteration of hardware technology in recent years, the photographing functions of both digital cameras and smartphones have improved continuously: imaging sensors have more and more pixels, and captured images are sharper and sharper, so skin flaws such as spots, acne marks and wrinkles are also captured. For this reason, in the field of portrait beautification the face skin needs to be smoothed, so that noise, dark spots and flaws in portrait photos are effectively removed and the face becomes smooth, well-textured and soft; dark and yellowish skin needs to be whitened to make the skin fair and ruddy.
Existing methods for removing highlights from color pictures fall roughly into two types: one is based on multiple pictures and removes the specular reflection component using multi-view strategies; the other is based on a single picture and removes it using methods such as spatial-domain or color-space analysis. In practice it is often difficult to obtain multiple pictures, so single-image highlight removal algorithms are particularly important. Tan et al. proposed the specular-free (SF) image and successfully separated the diffuse and specular reflection components, but the chromaticity of pixel points can change during separation, causing color distortion. Shen et al. improved on Tan by first clustering the image pixels and computing the specular component of the image from the intensity contrast between maxima and ranges, but this again loses image texture features. Yang et al. directly applied a low-pass filter, processing the maximum chromaticity map of the image with a bilateral filter whose range term is the estimated maximum diffuse reflection chromaticity; lacking global information, this can weaken the texture features of the image. Kim et al. proposed a MAP optimization framework to separate the diffuse reflection component of a single image; although the method is stable and robust, texture features are still hard to preserve when processing images with complex texture.
Disclosure of Invention
In view of the problems in the prior art, the technical problem to be solved by the invention is: how to provide an oil-gloss removal method that is highly robust and yields a natural-looking image.
In order to solve the technical problems, the invention adopts the following technical scheme: a method for removing oil and luster of a face image comprises the following steps:
s1, acquiring a face original image S, performing skin type classification on the face original image S, and determining the oil light level of the face original image S;
the oil gloss grades are classified into a fourth-level oil gloss, a third-level oil gloss, a second-level oil gloss and a first-level oil gloss, and each oil gloss grade is assigned with a value in sequence;
S2, calculating the maximum chroma σ_max at each pixel point of the face original image S, and storing the result as grayscale image I;

S3, calculating the maximum value λ_max of the approximate diffuse reflection chromaticity at each pixel point of the face original image S, and storing the result as grayscale image II;

S4, using grayscale image II as the guide image, applying a joint bilateral filter to grayscale image I, and storing the filtered image as the preprocessed image;

S5, calculating, for each pixel point p in the preprocessed image, the filtered value σ_max^F(p), comparing it with σ_max(p) and taking the maximum, as shown in equation 2-17:

σ_max(p) = max(σ_max(p), σ_max^F(p)) (2-17);

S6, repeating steps S4 and S5 until σ_max(p) no longer changes at any pixel point, then executing the next step;

S7, determining, for each pixel point p, the RGB channel from which σ_max(p) was selected; the pixel of the selected channel of each pixel point p is set to σ_max(p) × 255, and the pixels of the two unselected channels of each pixel point p are updated together with the pixel of the selected channel to obtain the preprocessed image;
and S8, performing skin type classification on the preprocessed image to determine the oil light level, outputting the preprocessed image as an image D when the oil light level of the preprocessed image is lower than the oil light level of the original face image S in the S1 and not more than a preset oil light level threshold, otherwise, returning to the step S2, and updating the original face image S by using the preprocessed image.
As an improvement, in S2 the maximum chroma σ_max at each pixel point of the face original image S is calculated as follows:
The reflected light color J in RGB color space is represented as a linear combination of the diffuse reflection value J^D and the specular reflection value J^S, as in equation 2-5:
J=JD+JS (2-5);
The chromaticity is defined as the color component σ_c, equation 2-6:

σ_c = J_c / (J_r + J_g + J_b) (2-6);

wherein c ∈ {r, g, b}, and J_c represents the reflected light color of channel c;
The diffuse reflection chromaticity Λ_c and the illumination chromaticity Γ_c are defined by equations 2-7 and 2-8 as follows:

Λ_c = J_c^D / (J_r^D + J_g^D + J_b^D) (2-7);

Γ_c = J_c^S / (J_r^S + J_g^S + J_b^S) (2-8);

wherein J_c^D represents the diffuse reflection component and J_c^S represents the specular reflection component;
According to the above formulas, the reflected light color J_c is defined as equation 2-9:

J_c = Λ_c Σ_u J_u^D + Γ_c Σ_u J_u^S (2-9);

wherein u denotes a layer and can be the r, g or b layer, Σ_u J_u^D represents the total diffuse reflection component over the layers, and Σ_u J_u^S represents the total specular reflection component over the layers;
The input face original image S is normalized by white estimation using the illumination chromaticity, so that Γ_r = Γ_g = Γ_b = 1/3, wherein Γ_r, Γ_g and Γ_b respectively represent the illumination chromaticity of the r, g and b layers, and J_r^S, J_g^S and J_b^S respectively represent the specular reflection values of the r, g and b layers;
Then the diffuse reflection component according to the previous formula is as shown in equation 2-10:

J_c^D = Λ_c Σ_u J_u^D (2-10);

wherein J_c^D represents the diffuse reflection value of the c-th layer;
the maximum chroma is defined by equations 2-11:
σmax=max(σr,σg,σb) (2-11);
wherein σ_r, σ_g and σ_b respectively represent the color components (chromaticities) of the r, g and b layers;
the maximum diffuse reflectance chromaticity is defined as equation 2-12:
Λmax=max(Λr,Λg,Λb) (2-12);
wherein Λ_r, Λ_g and Λ_b respectively represent the diffuse reflection chromaticity of the r, g and b layers;
the diffuse reflection component may be ΛmaxExpressed as equations 2-13:
As an improvement, in S3 the maximum value λ_max of the approximate diffuse reflection chromaticity at each pixel point of the face original image S is calculated as follows:
Let σ_min = min(σ_r, σ_g, σ_b); λ_c is used to estimate Λ_c and is calculated by equation 2-14:

λ_c = (σ_c − σ_min) / (1 − 3σ_min) (2-14);

wherein λ_c is an intermediate variable with no actual physical meaning;
The relationship between the approximate diffuse reflection chromaticity λ_c and the true diffuse reflection chromaticity Λ_c is described by 1) and 2):

1) For any two pixels p and q, if Λ_c(p) = Λ_c(q), then λ_c(p) = λ_c(q);

2) For any two pixels p and q, if λ_c(p) = λ_c(q), then Λ_c(p) = Λ_c(q) only if Λ_min(p) = Λ_min(q).
The maximum value of the approximate diffuse reflection chromaticity is given by equation 2-15:

λ_max = max(λ_r, λ_g, λ_b) = (σ_max − σ_min) / (1 − 3σ_min) (2-15);

wherein λ_r, λ_g and λ_b are the calculated values for the r, g and b layers, respectively, and have no actual physical meaning;
Using the approximate maximum diffuse reflection chromaticity as the smoothing parameter, the filtered maximum chroma σ_max^F is calculated by equation 2-16:

σ_max^F(p) = Σ_{q∈Ω} w_s(p, q) w_r(λ_max(p), λ_max(q)) σ_max(q) / Σ_{q∈Ω} w_s(p, q) w_r(λ_max(p), λ_max(q)) (2-16);

wherein σ_max^F(p) is the filtered value at pixel point p (an intermediate quantity with no physical meaning), Ω is the filtering window, and w_s and w_r are Gaussian spatial and range (intensity-distance) weighting functions.
As an improvement, the process of applying the joint bilateral filter to grayscale image I, using grayscale image II of S4 as the guide image, is as follows:

I_D(i, j) = Σ_{(k,l)} I(k, l) w(i, j, k, l) / Σ_{(k,l)} w(i, j, k, l);

wherein I_D(i, j) represents the pixel value of the pixel point at coordinate (i, j) after joint bilateral filtering, (k, l) represents the pixel coordinates of the other points in the filtering window, I(i, j) represents the pixel value of the center point, I(k, l) represents the pixel values of the remaining points, and w(i, j, k, l) is the product of a Gaussian spatial function and a Gaussian function of pixel-intensity similarity;
the joint bilateral filter is defined as follows:
is thatThis part is related only to the coordinates of the pixel points p (i, j) and q (k, l),by substituting into the formulamax(q) is equal to the portion of I (k, l) in the bilateral filter, representing qThe pixel value of the dot.
Compared with the prior art, the invention has at least the following advantages:
1. The method is simple to implement: only a few iterations are needed to obtain the highlight-free image. In addition, compared with other single-image highlight removal algorithms, the method has a small calculation amount, strong algorithmic stability, good portability and a good effect in practical application.

2. The method alleviates the color fading and texture loss that occur at pixel points with similar three-channel values in bilateral-filtering-based highlight removal, and provides a new maximum diffuse reflection chromaticity estimate, which improves the texture characteristics of the image and yields a clear, natural, highlight-free result. The algorithm is simple, easy to implement and highly robust, and can effectively restore image edge and color information.
Drawings
FIG. 1 is a schematic flow diagram of the oil-gloss removal method.
Fig. 2 is a schematic diagram of a polygonal outer frame of a face and a region of interest.
Fig. 3 is a schematic diagram of skin tone grading.
FIG. 4 is a schematic diagram of oil-gloss grading.
Fig. 5 is a schematic view of wrinkle classification.
Fig. 6 is a schematic illustration of pore grading.
Fig. 7 shows a comparison of the effects before and after oil-gloss removal on the face original image, where fig. 7a shows the original image and fig. 7b shows the image after oil-gloss removal.
Detailed Description
The present invention is described in further detail below.
Specifically, the process by which the oil-removal operator removes oil gloss from the face original image S is as follows:
s1, acquiring a face original image S, performing skin type classification on the face original image S, and determining the oil light level of the face original image S; the gloss grades are classified into four-level gloss, three-level gloss, two-level gloss and one-level gloss, and each gloss grade is assigned in sequence.
S2, calculating the maximum chroma σ_max at each pixel point of the face original image S, and storing the result as grayscale image I.
S3, calculating the maximum value λ_max of the approximate diffuse reflection chromaticity at each pixel point of the face original image S, and storing the result as grayscale image II. For example: in general, a color image has three layers, R, G and B, each with pixel values in (0–255). Suppose a certain point has the value (1, 2, 5); its maximum chroma is σ_max = 5/(1 + 2 + 5) = 0.625, and this value is stored as the corresponding pixel of the grayscale map (a grayscale map has only one layer, so the value can be regarded as the value of one pixel), which yields grayscale image I. In the same way, according to the formula for λ_max, for the above point σ_min = 1/8 and λ_max = (0.625 − 0.125)/(1 − 3 × 0.125) = 0.8, and this value is stored as the corresponding pixel point of grayscale image II.
And S4, using grayscale image II as the guide image, applying a joint bilateral filter to grayscale image I, and storing the filtered image as the preprocessed image.
S5, calculating, for each pixel point p in the preprocessed image, the filtered value σ_max^F(p), comparing it with σ_max(p) and taking the maximum, as shown in equation 2-17:

σ_max(p) = max(σ_max(p), σ_max^F(p)) (2-17);

S6, repeating steps S4 and S5 until σ_max(p) no longer changes at any pixel point.
S7, determining, for each pixel point p in the preprocessed image, the RGB channel from which σ_max(p) was selected; the pixel of the selected channel of each pixel point p is set to σ_max(p) × 255, and the pixels of the two unselected channels of each pixel point p are then updated together with the pixel of the selected channel to obtain the preprocessed image. The image substituted into the calculation is a grayscale image; the preprocessed image is a three-layer RGB color image formed from the iteratively updated values σ_max(p) × 255 together with the pixels of the other two channels. For example, if σ_max(p) of pixel point p was selected from the R channel, the pixel of point p in the R channel is set to σ_max(p) × 255, and the pixels of point p in the G channel and B channel are updated together with the R-channel pixel σ_max(p) × 255; repeating this operation for all pixel points p yields the three-layer RGB color image, i.e. the preprocessed image.
And S8, performing skin type classification on the preprocessed image to determine the oil light level, outputting the preprocessed image as an image D when the oil light level of the preprocessed image is lower than the oil light level of the original face image S in the S1 and not more than a preset oil light level threshold, otherwise, returning to the step S2, and updating the original face image S by using the preprocessed image.
Specifically, in step S2, the maximum chroma σ_max at each pixel point of the face original image S is calculated as follows:
The reflected light color J in RGB color space is represented as a linear combination of the diffuse reflection value J^D and the specular reflection value J^S, as in equation 2-5:
J=JD+JS (2-5);
The chromaticity is defined as the color component σ_c, equation 2-6:

σ_c = J_c / (J_r + J_g + J_b) (2-6);

wherein c ∈ {r, g, b}, and J_c represents the reflected light color of channel c;
The diffuse reflection chromaticity Λ_c and the illumination chromaticity Γ_c are defined by equations 2-7 and 2-8 as follows:

Λ_c = J_c^D / (J_r^D + J_g^D + J_b^D) (2-7);

Γ_c = J_c^S / (J_r^S + J_g^S + J_b^S) (2-8);

wherein J_c^D represents the diffuse reflection component and J_c^S represents the specular reflection component;
According to the above formulas, the reflected light color J_c is defined as equation 2-9:

J_c = Λ_c Σ_u J_u^D + Γ_c Σ_u J_u^S (2-9);

wherein u denotes a layer and can be the r, g or b layer, Σ_u J_u^D represents the total diffuse reflection component over the layers, and Σ_u J_u^S represents the total specular reflection component over the layers;
The input face original image S is normalized by white estimation using the illumination chromaticity, so that Γ_r = Γ_g = Γ_b = 1/3, wherein Γ_r, Γ_g and Γ_b respectively represent the illumination chromaticity of the r, g and b layers, and J_r^S, J_g^S and J_b^S respectively represent the specular reflection values of the r, g and b layers;
Then the diffuse reflection component according to the previous formula is as shown in equation 2-10:

J_c^D = Λ_c Σ_u J_u^D (2-10);

wherein J_c^D represents the diffuse reflection value of the c-th layer;
the maximum chroma is defined by equations 2-11:
σmax=max(σr,σg,σb) (2-11);
wherein σ_r, σ_g and σ_b respectively represent the color components (chromaticities) of the r, g and b layers;
the maximum diffuse reflectance chromaticity is defined as equation 2-12:
Λmax=max(Λr,Λg,Λb) (2-12);
wherein Λ_r, Λ_g and Λ_b respectively represent the diffuse reflection chromaticity of the r, g and b layers;
the diffuse reflection component may be ΛmaxExpressed as equations 2-13:
Specifically, in S3 the maximum value λ_max of the approximate diffuse reflection chromaticity at each pixel point of the face original image S is calculated as follows:

Let σ_min = min(σ_r, σ_g, σ_b); λ_c is used to estimate Λ_c and is calculated by equation 2-14:

λ_c = (σ_c − σ_min) / (1 − 3σ_min) (2-14);

wherein λ_c is an intermediate variable with no actual physical meaning;
The relationship between the approximate diffuse reflection chromaticity λ_c and the true diffuse reflection chromaticity Λ_c is described by 1) and 2):

1) For any two pixels p and q, if Λ_c(p) = Λ_c(q), then λ_c(p) = λ_c(q);

2) For any two pixels p and q, if λ_c(p) = λ_c(q), then Λ_c(p) = Λ_c(q) only if Λ_min(p) = Λ_min(q).
The maximum value of the approximate diffuse reflection chromaticity is given by equation 2-15:

λ_max = max(λ_r, λ_g, λ_b) = (σ_max − σ_min) / (1 − 3σ_min) (2-15);

wherein λ_r, λ_g and λ_b are the calculated values for the r, g and b layers, respectively, and have no actual physical meaning;
Using the approximate maximum diffuse reflection chromaticity as the smoothing parameter, the filtered maximum chroma σ_max^F is calculated by equation 2-16:

σ_max^F(p) = Σ_{q∈Ω} w_s(p, q) w_r(λ_max(p), λ_max(q)) σ_max(q) / Σ_{q∈Ω} w_s(p, q) w_r(λ_max(p), λ_max(q)) (2-16);

wherein σ_max^F(p) is the filtered value at pixel point p (an intermediate quantity with no physical meaning), Ω is the filtering window, and w_s and w_r are Gaussian spatial and range (intensity-distance) weighting functions.
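Equations 2-14 and 2-15 can likewise be sketched for a single pixel (a hedged illustration; the achromatic special case r = g = b is handled here by a convention of my own, which the patent does not specify):

```python
def approx_diffuse_chroma_max(r, g, b):
    """lambda_max = (sigma_max - sigma_min) / (1 - 3 * sigma_min), eq. 2-15."""
    total = r + g + b
    sigma = (r / total, g / total, b / total)      # eq. 2-6
    s_max, s_min = max(sigma), min(sigma)
    if 1 - 3 * s_min <= 0:                         # achromatic pixel (r == g == b)
        return 1 / 3                               # convention: pure diffuse grey
    return (s_max - s_min) / (1 - 3 * s_min)       # eq. 2-15
```

For the example pixel (1, 2, 5): σ_min = 1/8, σ_max = 5/8, so λ_max = (0.625 − 0.125)/(1 − 0.375) = 0.8, matching the worked example in the description.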
Specifically, the process of applying the joint bilateral filter to grayscale image I, using grayscale image II of S4 as the guide image, is as follows:

I_D(i, j) = Σ_{(k,l)} I(k, l) w(i, j, k, l) / Σ_{(k,l)} w(i, j, k, l);

wherein I_D(i, j) represents the pixel value of the pixel point at coordinate (i, j) after joint bilateral filtering, (k, l) represents the pixel coordinates of the other points in the filtering window, I(i, j) represents the pixel value of the center point, I(k, l) represents the pixel values of the remaining points, and w(i, j, k, l) is the product of a Gaussian spatial function and a Gaussian function of pixel-intensity similarity.
The joint bilateral filter is defined as follows:

w(i, j, k, l) = exp(−((i − k)² + (j − l)²) / (2σ_d²) − (λ_max(i, j) − λ_max(k, l))² / (2σ_r²));

the first (spatial) term is related only to the coordinates of the pixel points p(i, j) and q(k, l); in the second (range) term, λ_max(q) is substituted into the formula in place of I(k, l) of an ordinary bilateral filter, i.e. the intensity at point q is read from the guide image.
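A direct, unoptimized pure-Python sketch of this joint bilateral filter (grayscale image I as `img`, grayscale image II as `guide`; the window radius and the two Gaussian widths are illustrative choices, not values from the patent):

```python
import math

def joint_bilateral_filter(img, guide, radius=1, sigma_d=1.0, sigma_r=0.1):
    """Smooth `img` while taking the range (intensity) weights from `guide`.

    img, guide: equal-sized 2-D lists of floats (grayscale images I and II).
    Weight: w = exp(-spatial_dist^2 / (2*sigma_d^2)
                    - guide_diff^2   / (2*sigma_r^2)).
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            num = den = 0.0
            for k in range(max(0, i - radius), min(h, i + radius + 1)):
                for l in range(max(0, j - radius), min(w, j + radius + 1)):
                    wt = math.exp(
                        -((i - k) ** 2 + (j - l) ** 2) / (2 * sigma_d ** 2)
                        - (guide[i][j] - guide[k][l]) ** 2 / (2 * sigma_r ** 2))
                    num += wt * img[k][l]
                    den += wt
            out[i][j] = num / den
    return out
```

Because the range term looks at `guide` rather than `img`, edges present in grayscale image II are preserved in the filtered σ_max map, which is exactly the role of the guide image in step S4.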
The joint bilateral filter is applied to the σ_max grayscale map, which is continuously updated over successive iterations of the algorithm.
The algorithm flow finally yields a grayscale map of σ_max values, defined by σ_max = max(σ_r, σ_g, σ_b). Each σ_max is a fraction between 0 and 1, taken from whichever of the R, G and B channels has the largest chromaticity ratio; since the RGB pixel values differ from point to point, the channel selected can differ across the image. After the iterations, the grayscale map composed of σ_max values is obtained, and multiplying σ_max by 255 and combining the result with the pixels of the other two channels yields the highlight-free image D.
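The iterate-and-take-maximum loop of steps S4–S6 and the final × 255 quantization of S7 can be sketched as follows; `smooth` stands in for the guided filtering of S4 and is passed in as a function (a hypothetical simplification of the patented flow, operating on a single σ_max map):

```python
def iterate_sigma_max(sigma_max, lambda_max, smooth, max_iter=10, eps=1e-4):
    """Repeat: filter sigma_max guided by lambda_max, then take the pixel-wise
    maximum of the old and filtered values (eq. 2-17), until nothing changes."""
    for _ in range(max_iter):
        filtered = smooth(sigma_max, lambda_max)                 # step S4
        updated = [[max(a, b) for a, b in zip(ra, rb)]           # eq. 2-17
                   for ra, rb in zip(sigma_max, filtered)]
        change = max(abs(a - b)                                  # step S6 check
                     for ra, rb in zip(sigma_max, updated)
                     for a, b in zip(ra, rb))
        sigma_max = updated
        if change < eps:
            break
    # step S7: the selected channel of each pixel becomes sigma_max * 255
    return [[round(v * 255) for v in row] for row in sigma_max]
```

With an identity `smooth`, the loop converges immediately and the output is just the 0–255 quantization of the input map.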
A face image is taken as an effect test; its skin oil-gloss grading is grade three, so the oil-removal parameter is set to 2. The face original image and the effect after processing by the oil-removal operator are shown in fig. 7: the left image is the original and the right image is the image after the oil-removal operator. As can be seen from fig. 7, the oil-removal method provided by the invention has a significant effect in removing skin reflection and oil shine in the forehead and left cheek regions.
The method for classifying the skin types of the original human face image comprises the following steps:
S31: defining a plurality of feature points in the face original image and connecting all the feature points in sequence to form a polygon; the resulting mask is the complete face area, defined as M_points. The mask of the whole-body skin region of the human body is M_human, and the mask of the face skin area is M_face:

M_face = M_points ∩ M_human (3-3);
The 81 aligned feature points are obtained using the TensorFlow-based deep-neural-network face detection algorithm provided by OpenCV and the face alignment algorithm proposed by Adrian Bulat. The points of the outermost frame of the face are connected in sequence to form a polygon; the resulting mask is the complete face area, defined as M_points, as shown by the outer-frame polygon of fig. 2.
The human face is affected by factors such as hair, glasses, ornaments, light shadow and the like, so that the skin type classification is inaccurate, and therefore, on the basis of key point positioning segmentation, intersection needs to be obtained with the result of whole-body skin segmentation to obtain the final human face skin area.
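Equation 3-3 is a plain intersection of binary masks. A toy sketch (the masks here are made-up 0/1 grids, not output of the OpenCV pipeline):

```python
def face_skin_mask(m_points, m_human):
    """M_face = M_points ∩ M_human (eq. 3-3): a pixel is face skin only if it
    lies inside both the landmark polygon and the whole-body skin mask."""
    return [[int(p and h) for p, h in zip(rp, rh)]
            for rp, rh in zip(m_points, m_human)]

m_points = [[1, 1, 0],
            [1, 1, 1]]   # polygon of the aligned landmarks (toy values)
m_human  = [[1, 0, 0],
            [1, 1, 1]]   # whole-body skin segmentation (toy values)
```

The intersection discards polygon pixels that the body-skin segmentation rejects (hair, glasses, shadow), which is exactly why the patent takes the intersection rather than using the polygon alone.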
S32: performing four-dimensional classification of the face skin area mask image according to skin color, oil gloss, wrinkles and pores, as follows.
The skin types of human skin are various and can be divided into a plurality of types according to four dimensions of skin color, gloss, wrinkles and pores. In the beauty task, firstly, the skin type is judged, and then parameters of an algorithm for processing different flaws are determined.
Skin color: at present, research related to human skin color mainly focuses on the fields of medical diagnosis, face comparison, expression recognition and the like, and the grade subdivision of the skin color provided by the invention is to better determine parameters of a beautifying algorithm and is different from a standard skin color grading standard. In the portrait photography, the skin color of the same person can present different results due to differences of illumination, shooting equipment, shooting parameters and the like. The invention thus classifies skin tones based on the shade and color of the image reflection, rather than the human body itself.
The skin color grades are divided into four classes of four, three, two and one, and each skin color grade is assigned with 1,2,3 and 0 in sequence. The four-level skin color is dark skin color or dark skin color caused by light shadow during shooting, the three-level skin color is yellow skin color caused by yellow skin color, ambient light or white balance setting and the like, the two-level skin color is white skin color caused by white skin color or shooting overexposure and the like, and the one-level skin color is normal skin color type which does not need to be adjusted, as shown in fig. 3.
The gloss grades are classified into four-level gloss, three-level gloss, two-level gloss and one-level gloss, and each gloss grade is assigned with 1,2,3 and 0 in sequence.
In portrait photography, the face highlight region is the region with the highest mean L value in the Lab color space. The degree of exposure of the photograph can be determined from the L value of the highlight region and is generally classified into underexposure, normal exposure and overexposure. In the later retouching process, under-exposed and over-exposed photos need to be brightened and darkened, respectively.
Because oily skin secretes grease, which reflects light during imaging, reflection appears in the highlight region of the face; oil gloss therefore often appears together with the highlight region. The parameters of the oil-removal algorithm are determined through classification of the oil-gloss grade.
Grade-four oil gloss means that much grease is secreted and the degree of reflection in the portrait is high; grade-one oil gloss means the skin secretes only a small amount of grease and the portrait shows no reflection, as shown in fig. 4.
The wrinkle grades are divided into four grades, three grades, two grades and one grade, and each wrinkle grade is sequentially assigned with 1,2,3 and 0.
Wrinkles appear in different grades because people are at different age stages. Many computer-vision-based quantitative wrinkle measurement methods have been proposed at home and abroad, but they are strongly affected by illumination, shadow and resolution at capture time, and their detection results are unstable. The emphasis of the dermabrasion algorithm is on wrinkles in the skin, so the accuracy of wrinkle grading directly determines the effectiveness of the dermabrasion algorithm. The fourth level characterizes the grade with the most wrinkles and the deepest texture, and the first level characterizes the grade with few, very light wrinkles, as shown in fig. 5.
Pore grades are divided into four grades, three grades, two grades and one grade, and each pore grade is sequentially assigned with 1,2,3 and 0.
Rough skin is also key content treated by the dermabrasion algorithm. The size and number of pores in the skin reflect whether the skin is smooth and fine. Skin condition varies greatly from person to person, and the skin is divided into grades four, three, two and one according to the degree of roughness. The fourth level represents rough skin with prominent pores, and the first level represents smooth, fine skin, as shown in fig. 6.
S33: in the face original image, the forehead, left cheek, right cheek and chin of the face are selected as the regions of interest; a weight is set for each region in each of the four dimensions (skin color, oil gloss, wrinkles and pores), and the grade assignment of each dimension is then calculated by the following weighted sum, whose value σ equals the grade assignment:

σ = w_forehead·s_forehead + w_left·s_left + w_right·s_right + w_chin·s_chin;

wherein w_forehead, w_left, w_right and w_chin respectively represent the weights of the given dimension (skin color, oil gloss, wrinkles or pores) in the forehead, left cheek, right cheek and chin regions, and s_forehead, s_left, s_right and s_chin are the grade assignments of those regions.
In the portrait photo, after the face bounding rectangle is detected and the face key points are aligned, the regions of interest are selected, and the parameters of the beautifying algorithm are finally determined according to the skin-type classification indexes.
When the skin is classified by these indexes, different regions of the face carry different weights: the forehead highlight region usually has heavy oil shine and bright skin tone, the cheek regions usually have heavy oil shine and heavy wrinkles, and the chin region usually has light oil shine and light wrinkles. To always select skin regions little affected by illumination shadow, shooting angle and similar factors, the forehead, left cheek, right cheek and chin are chosen as regions of interest, and the weight matrix shown in Table 1 below is set empirically for the index calculation over the four regions.
TABLE 1 Weight table of the skin-type indexes in the face regions of interest
| Forehead | Left cheek | Right cheek | Chin |
---|---|---|---|---|
Skin tone | 0.35 | 0.25 | 0.25 | 0.15 |
Oil shine | 0.4 | 0.2 | 0.2 | 0.1 |
Wrinkle | 0.2 | 0.3 | 0.3 | 0.2 |
Pore | 0.2 | 0.3 | 0.3 | 0.2 |
The forehead, left cheek, right cheek and chin regions of interest can be extracted as follows:
the face key points are expressed as Loci = (xi, yi), i = 1, 2, …, 81, wherein xi and yi are the horizontal and vertical coordinates of point i; the specific regions are shown in table 8 below.
TABLE 8 Face regions corresponding to the key points
Key-point range | Face region |
---|---|
Loc1 ~ Loc17 | Cheek edge |
Loc18 ~ Loc22 | Left eyebrow |
Loc23 ~ Loc27 | Right eyebrow |
Loc28 ~ Loc36 | Nose |
Loc37 ~ Loc42 | Left eye |
Loc43 ~ Loc48 | Right eye |
Loc49 ~ Loc68 | Mouth |
Loc69 ~ Loc81 | Forehead |
In the face skin classification task, taking the whole face region as input would be disturbed by pose, shadow and the like, so four regions of interest (ROI) are divided; a schematic diagram is shown in fig. 2. Let (Rectilx, Rectily) and (Rectirx, Rectiry), i = 1, 2, 3, 4, denote the upper-left and lower-right corners of the forehead, left cheek, right cheek and chin rectangles respectively.
The upper-left and lower-right key-point positions of the forehead region are (Rect1lx, Rect1ly) = (x21, max(y71, y72, y81)) and (Rect1rx, Rect1ry) = (x24, min(y21, y24)).
The upper-left and lower-right key-point positions of the left cheek region are (Rect2lx, Rect2ly) = (x37, y29) and (Rect2rx, Rect2ry) = (x32, y32).
The upper-left and lower-right key-point positions of the right cheek region are (Rect3lx, Rect3ly) = (x36, y29) and (Rect3rx, Rect3ry) = (x46, y32).
The upper-left and lower-right key-point positions of the chin region are (Rect4lx, Rect4ly) = (x8, max(y57, y58, y59)) and (Rect4rx, Rect4ry) = (x10, min(y8, y9, y10)).
The schematic of the four regions is shown in the inner frame rectangle of fig. 2.
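The four rectangles can be computed directly from the landmark coordinates. A sketch assuming the 81 landmarks are stored 1-indexed as (x, y) tuples (the function name and storage format are our own assumptions):

```python
def roi_rects(loc):
    """Return corner coordinates (left, top, right, bottom) for the forehead,
    left cheek, right cheek and chin ROIs, following the key-point formulas
    in the text. loc maps a 1-based keypoint index to an (x, y) tuple."""
    x = lambda i: loc[i][0]
    y = lambda i: loc[i][1]
    forehead    = (x(21), max(y(71), y(72), y(81)), x(24), min(y(21), y(24)))
    left_cheek  = (x(37), y(29), x(32), y(32))
    right_cheek = (x(36), y(29), x(46), y(32))
    chin        = (x(8), max(y(57), y(58), y(59)), x(10), min(y(8), y(9), y(10)))
    return forehead, left_cheek, right_cheek, chin
```

Any 81-point landmark detector that follows the layout of Table 8 could feed this function; only the index convention matters.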
Finally, the above embodiments only illustrate the technical solutions of the present invention and do not limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such modifications are covered by the claims of the present invention.
Claims (4)
1. A method for removing oil and luster from a face image, characterized by comprising the following steps:
S1: acquiring an original face image S, performing skin-type classification on it, and determining the oil-shine grade of the original face image S;
the oil-shine grades are divided into fourth-grade, third-grade, second-grade and first-grade oil shine, and each grade is assigned a value in turn;
S2: calculating the maximum chroma σmax of each pixel point of the original face image S and storing it as grayscale image I;
S3: calculating the maximum approximate diffuse reflection chroma λmax of each pixel point of the original face image S and storing it as grayscale image II;
S4: using grayscale image II as the guide image, applying a joint bilateral filter to grayscale image I, and storing the filtered image as the preprocessed image;
S5: calculating σmax(p) for each pixel p in the preprocessed image, comparing the filtered value with σmax(p), and taking the maximum, as shown in equations 2-17:
S7: determining, for each pixel point p in the preprocessed image, which RGB channel σmax(p) selects; the pixel value of the selected channel of pixel point p is σmax(p) multiplied by 255, and the pixel values of the two unselected channels of each pixel point p are then iterated together with that of the selected channel to obtain the preprocessed image;
S8: performing skin-type classification on the preprocessed image to determine its oil-shine grade; when the oil-shine grade of the preprocessed image is lower than the oil-shine grade of the original face image S from S1 and does not exceed a preset oil-shine grade threshold, outputting the preprocessed image as image D; otherwise, returning to step S2 and updating the original face image S with the preprocessed image.
2. The method for removing oily light from a human face image as claimed in claim 1, characterized in that: in step S2, the process of calculating the maximum chroma σmax of each pixel point of the original face image S is as follows:
the reflected light color J in RGB color space is represented as a linear combination of the diffuse reflection value JD and the specular reflection value JS, formula 2-5:
J=JD+JS (2-5);
defining chroma as the color component σc, formula 2-6:

σc = Jc/(Jr + Jg + Jb) (2-6);

wherein c ∈ {r, g, b}, and Jc represents the reflected light color of layer c;
the diffuse reflection chroma Λc and the illumination chroma Γc are defined by formulas 2-7 and 2-8:

Λc = JcD/(JrD + JgD + JbD) (2-7);

Γc = JcS/(JrS + JgS + JbS) (2-8);

wherein JcD represents the diffuse reflection component and JcS represents the specular reflection component;
according to the above formulas, the reflected light color Jc is defined by formula 2-9:

Jc = ΛcΣuJuD + ΓcΣuJuS (2-9);

wherein u denotes a layer and may be the r, g or b layer, JuD represents the diffuse reflection component of layer u, and JuS represents the specular reflection component of layer u;
the input original face image S is normalized with the illumination chroma to a pure-white estimate, so that Γr = Γg = Γb = 1/3, wherein Γr, Γg and Γb respectively represent the illumination chroma of the r, g and b layers, and JrS, JgS and JbS respectively represent the specular reflection values of the r, g and b layers;
the reflected light color then follows from the previous formula as shown in formula 2-10:

Jc = ΛcΣuJuD + (1/3)ΣuJuS (2-10);

wherein JcD = ΛcΣuJuD represents the diffuse reflection value of the c-th layer;
the maximum chroma is defined by equations 2-11:
σmax=max(σr,σg,σb) (2-11);
wherein σr, σg and σb represent the chroma of the r, g and b layers respectively;
the maximum diffuse reflectance chromaticity is defined as equation 2-12:
Λmax=max(Λr,Λg,Λb) (2-12);
wherein Λr, Λg and Λb represent the diffuse reflection chroma of the r, g and b layers respectively;
the diffuse reflection component may be expressed by Λmax as formula 2-13:

JcD = Jc − (Jmax − ΛmaxΣuJu)/(1 − 3Λmax) (2-13);

wherein Jmax = max(Jr, Jg, Jb).
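One consistent numerical reading of formulas 2-6, 2-11 and 2-13 (chromaticity-based specular separation under the white-illumination assumption; the function names and numeric guards are our own, not the patent's) is:

```python
import numpy as np

def max_chroma(img):
    """sigma_max per pixel (formula 2-11); img is a float RGB array (H, W, 3)."""
    total = np.maximum(img.sum(axis=2, keepdims=True), 1e-8)
    sigma = img / total              # formula 2-6: sigma_c = J_c / sum_u J_u
    return sigma.max(axis=2)         # formula 2-11

def diffuse_component(img, lam_max):
    """Diffuse image via formula 2-13, given the maximum diffuse chroma map."""
    j_max = img.max(axis=2)
    j_sum = img.sum(axis=2)
    denom = 1.0 - 3.0 * lam_max      # <= 0 except for achromatic pixels
    safe = np.where(np.abs(denom) < 1e-6, 1.0, denom)
    spec = np.where(np.abs(denom) < 1e-6, 0.0, (j_max - lam_max * j_sum) / safe)
    return np.clip(img - spec[..., None], 0.0, None)
```

Adding a uniform (white) specular term to a pixel leaves its true diffuse chroma unchanged, and formula 2-13 then subtracts exactly that term, which is what makes the separation work.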
3. The method for removing oily light from a human face image as claimed in claim 1 or 2, characterized in that: in step S3, the process of calculating the maximum approximate diffuse reflection chroma λmax of each pixel point of the original face image S is as follows:
let σmin = min(σr, σg, σb); λc is used to estimate Λc, calculated by formula 2-14:

λc = (σc − σmin)/(1 − 3σmin) (2-14);

wherein λc is an intermediate variable with no physical meaning;
the relationship between the approximate diffuse reflection chroma λc and the true diffuse reflection chroma Λc is described by 1) and 2):
1) for any two pixels p and q, if Λc(p) = Λc(q), then λc(p) = λc(q);
2) for any two pixels p and q with λc(p) = λc(q), Λc(p) = Λc(q) holds only when Λmin(p) = Λmin(q);
the maximum approximate diffuse reflection chroma is given by formula 2-15:

λmax = max(λr, λg, λb) (2-15);

wherein λr, λg and λb represent the intermediate variables of the r, g and b layers respectively and have no physical meaning;
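The invariance that motivates formulas 2-14 and 2-15 (λmax does not change when white specular light is added to a pixel) is easy to check numerically; a sketch with our own function name:

```python
import numpy as np

def approx_max_diffuse_chroma(img):
    """lambda_max per pixel (formulas 2-14 and 2-15); img is float RGB (H, W, 3)."""
    total = np.maximum(img.sum(axis=2, keepdims=True), 1e-8)
    sigma = img / total                       # formula 2-6
    sig_min = sigma.min(axis=2, keepdims=True)
    # formula 2-14; 1 - 3*sigma_min >= 0 since sigma_min <= 1/3
    lam = (sigma - sig_min) / np.maximum(1.0 - 3.0 * sig_min, 1e-8)
    return lam.max(axis=2)                    # formula 2-15
```

Because λmax depends only on the diffuse chroma (given equal Λmin), it makes a stable guide image for filtering the specular-contaminated σmax map.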
the maximum chroma σmax is filtered using the maximum approximate diffuse reflection chroma as the smoothing parameter, calculated by formula 2-16:
4. The method for removing oily light from a human face image as claimed in claim 3, characterized in that: in S4, grayscale image II is used as the guide image and a joint bilateral filter is applied to grayscale image I as follows:

ID(i, j) = Σ(k,l)I(k, l)w(i, j, k, l)/Σ(k,l)w(i, j, k, l);

wherein ID(i, j) represents the pixel value at coordinate (i, j) after joint bilateral filtering, (k, l) represents the pixel coordinates of the other points in the filtering window, I(i, j) represents the pixel value of the center point, I(k, l) represents the pixel values of the remaining points, and w(i, j, k, l) is the product of a Gaussian spatial distribution function and a Gaussian function of pixel-intensity similarity;
the joint bilateral filter is defined as follows:
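For reference, a joint bilateral filter of the kind this claim describes can be sketched in a few lines of numpy. The parameter names and the brute-force O(H·W·k²) loop are our own; the point is that the range kernel is evaluated on the guide image while the averaged values come from the input image:

```python
import numpy as np

def joint_bilateral(I, G, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Filter grayscale image I guided by G: the spatial Gaussian weights
    window distances, the range Gaussian weights guide-value similarity,
    and their product normalizes a weighted average of I."""
    H, W = I.shape
    out = np.empty_like(I, dtype=float)
    for i in range(H):
        for j in range(W):
            k0, k1 = max(i - radius, 0), min(i + radius + 1, H)
            l0, l1 = max(j - radius, 0), min(j + radius + 1, W)
            ii, jj = np.mgrid[k0:k1, l0:l1]
            spatial = np.exp(-((ii - i) ** 2 + (jj - j) ** 2) / (2 * sigma_s ** 2))
            rng = np.exp(-((G[k0:k1, l0:l1] - G[i, j]) ** 2) / (2 * sigma_r ** 2))
            w = spatial * rng
            out[i, j] = (w * I[k0:k1, l0:l1]).sum() / w.sum()
    return out
```

In the method above, I would be the σmax map (grayscale image I) and G the λmax map (grayscale image II), so edges present in the diffuse chroma are preserved while specular noise in σmax is smoothed.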
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111537764.0A CN114202482A (en) | 2021-12-15 | 2021-12-15 | Method for removing oil and luster from face image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114202482A true CN114202482A (en) | 2022-03-18 |
Family
ID=80654184
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111537764.0A Withdrawn CN114202482A (en) | 2021-12-15 | 2021-12-15 | Method for removing oil and luster from face image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114202482A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20220318 |