CN114202483B - Improved additive Lee filtering skin-smoothing method - Google Patents
Improved additive Lee filtering skin-smoothing method
- Publication number: CN114202483B
- Application number: CN202111539879.3A
- Authority: CN (China)
- Prior art keywords: skin, image, grade, processed, face
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 5/00 — Image enhancement or restoration
- G06T 5/70 — Denoising; smoothing
- G06F 18/24 — Pattern recognition; classification techniques
- G06T 2207/20081 — Training; learning
- G06T 2207/20084 — Artificial neural networks [ANN]
- G06T 2207/30088 — Biomedical image processing; skin; dermal
- G06T 2207/30201 — Subject of image: human being; person; face
Abstract
The invention relates to a skin-smoothing method based on improved additive Lee filtering. First, an original face image is acquired and cropped to size N×M to obtain the image to be processed. A sliding window is set, and the parameter k is computed from the σ value and the local variance of the pixel values of all coordinate points of the image to be processed within the sliding window. The pixel value at coordinates (i, j) is then additively denoised with the filter formula, all coordinate points of the image to be processed are traversed to obtain each point's additively denoised pixel value, these denoised values are taken as the smoothed pixel values, and the filtered image is obtained and output. The method smooths the skin while keeping transitions uniform and preserving contrast and texture, so the result looks natural.
Description
Technical Field
The invention relates to a beautification algorithm, and in particular to an improved additive Lee filtering skin-smoothing method.
Background
Portrait skin-smoothing algorithms rest on two main ideas: lightening the color of blemished skin regions, and filtering those regions. On this basis, skin-smoothing algorithms can be divided into general smoothing, channel-based smoothing, detail-superposition smoothing, and so on.
General smoothing is the most basic algorithm, and its style is simple smoothness. Channel-based smoothing originates from the smoothing workflow in Photoshop; it operates on the blue channel of the portrait and works by lightening dark blemish regions so that their color approaches that of the surrounding skin, achieving an approximate smoothing effect when blemishes are small, with a naturally smooth style. Detail-superposition smoothing uses dual-scale edge-preserving filtering to superimpose detail information layer by layer: different levels of detail are overlaid on a large-radius smoothed image to meet the smoothing requirement.
General smoothing, although smooth in effect, loses a great deal of detail and seriously distorts the picture. Channel-based smoothing can reduce acne marks and blemishes and whiten the skin, but its smoothness is mediocre. Detail-superposition smoothing based on a bilateral filter performs well, but when blemishes such as wrinkles and acne marks are severe, the filter parameters must be increased, and overly large strength parameters greatly slow the algorithm.
Disclosure of Invention
In view of the problems in the prior art, the technical problem the invention aims to solve is how to provide a skin-smoothing method that is simple and fast to run while preserving skin smoothness and texture well.
To solve this problem, the invention adopts the following technical scheme. An improved additive Lee filtering skin-smoothing method comprises the following steps:
S1: acquiring an original face image, intercepting the original face image into N x M to obtain an image to be processed, and representing pixel values at coordinates (i, j) of the image to be processed by using x ij;
S2: setting a sliding window with the size of (2 x n+1) x (2 x m+1), and calculating local average values and local variances of pixel values of all coordinate points in the image to be processed in the sliding window;
S3: according to the face original image, skin classification grades are carried out to determine sigma values, each skin grade corresponds to one sigma value, and then according to the sigma values and the local variances of pixel values of all coordinate points in the to-be-processed image in the sliding window obtained by S2, k values are calculated by adopting the following formula:
Wherein sigma represents a parameter of the additive lee filtering, k represents the degree of peeling of an original image, and v ij represents the local variance of an image to be processed in the sliding window;
S4: additive denoising the pixel value at coordinates (i, j) according to the following formula, and using the additive denoised pixel value As the pixel value at the coordinates (i, j);
Wherein, Representing the additively denoised pixel value at coordinates (i, j);
s5: repeating the steps S2-S4, traversing all coordinate points in the image to be processed to obtain pixel values after additive denoising of each coordinate point, taking the pixel values corresponding to all coordinate points after additive denoising as pixel values after peeling of the coordinate points, and obtaining and outputting the filtered image.
As an improvement, the local mean and local variance of the pixel values of all coordinate points of the image to be processed within the sliding window are computed in S2 as:

m_ij = (1 / ((2n+1)(2m+1))) · Σ_{k=i−n}^{i+n} Σ_{l=j−m}^{j+m} x_kl
v_ij = (1 / ((2n+1)(2m+1))) · Σ_{k=i−n}^{i+n} Σ_{l=j−m}^{j+m} (x_kl − m_ij)²

where x_kl is the pixel value at coordinates (k, l) within the sliding window, m_ij is the local mean of the pixel values of all coordinate points of the image to be processed within the window, and v_ij is the local variance.
As an improvement, the skin is graded from the original face image in S3 as follows:
S31: Define a number of feature points in the original face image and connect them in order to form a polygon; the resulting mask is the complete face area, denoted M_points. Let M_human be the mask of the skin region of the whole body; the mask of the face skin region, M_face, is then

M_face = M_points ∩ M_human    (3-2);

S32: Grade the masked face skin region along four dimensions — skin color, gloss, wrinkles and pores — as follows:
skin color is divided into grade four, grade three, grade two and grade one, assigned the values 3, 2, 1, 0 in sequence;
gloss is divided into grade-four gloss, grade-three gloss, grade-two gloss and grade-one gloss, assigned 3, 2, 1, 0 in sequence;
wrinkles are divided into grade four, grade three, grade two and grade one, assigned 3, 2, 1, 0 in sequence;
pores are divided into grade four, grade three, grade two and grade one, assigned 3, 2, 1, 0 in sequence;
S33: In the original face image, select the forehead, left cheek, right cheek and chin as regions of interest, set the weight of each region for skin color, gloss, wrinkles and pores, and compute the overall grade value of the four regions with the following formula; σ is set equal to this value:

σ = Σ_{γ=1}^{4} ( w_γ^c · c_γ + w_γ^g · g_γ + w_γ^r · r_γ + w_γ^p · p_γ )

where w_γ^c, w_γ^g, w_γ^r and w_γ^p (γ = 1, 2, 3, 4) are the weights of skin color, gloss, wrinkles and pores in the forehead, left cheek, right cheek and chin respectively, and c_γ, g_γ, r_γ, p_γ are the corresponding grade assignments in region γ.
Compared with the prior art, the invention has at least the following advantages:
1. Existing edge-preserving filters have a higher per-pixel computational complexity than the improved additive Lee filter; when many images with severe wrinkles, acne marks and other blemishes must be smoothed, their filter parameters have to be increased, and overly large strength parameters greatly slow the algorithm. The improved additive Lee filter avoids this cost.
2. The method computes the smoothing-degree parameter k from the skin grading, so different skin grades influence the final degree of smoothing and the smoothed result retains a realistic texture.
3. The method smooths the skin while keeping transitions uniform and preserving contrast and texture, so the result looks natural.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the face polygon outline and the regions of interest.
Fig. 3 is a skin color grading schematic.
Fig. 4 is a gloss grading schematic.
Fig. 5 is a wrinkle grading schematic.
Fig. 6 is a pore grading schematic.
Fig. 7 shows the original face image and the smoothing effect: fig. 7a is the original face image and fig. 7b is the face image after processing by the smoothing operator.
Detailed Description
The present invention will be described in further detail below.
Referring to fig. 1, a skin-smoothing method based on improved additive Lee filtering comprises the following steps.
S1: Acquire an original face image and crop it to size N×M to obtain the image to be processed; let x_ij denote the pixel value at coordinates (i, j) of the image to be processed.
S2: setting a sliding window with the size of (2 x n+1) x (2 x m+1), and calculating the local variance of pixel values of all coordinate points in the image to be processed in the sliding window; the local mean and local variance of all the coordinate pixel values in the image to be processed inside the sliding window can be calculated by any existing method, and the present invention preferably selects the following method to perform. The following method is preferably adopted:
The method for calculating the local variance of all the coordinate pixel values in the image to be processed in the sliding window in the S2 is as follows:
Where x kl denotes a pixel value at a coordinate (k, l) within the sliding window, the coordinate of the pixel in the image to be processed that is being denoised is (i, j), the coordinate of one of the surrounding pixels used is (k, l), m ijmij denotes a local average value of pixel values of all coordinate points in the image to be processed within the sliding window, and v ij denotes a local variance of pixel values of all coordinates in the image to be processed within the sliding window.
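For concreteness, these windowed statistics can be computed with box filters. The sketch below is a minimal NumPy/SciPy illustration, not the patent's implementation; the function name and the use of the equivalent E[x²] − E[x]² form (which matches the sums above up to boundary handling) are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats(img: np.ndarray, n: int, m: int):
    """Local mean m_ij and variance v_ij over a (2n+1) x (2m+1) sliding window."""
    img = img.astype(np.float64)
    size = (2 * n + 1, 2 * m + 1)
    mean = uniform_filter(img, size=size)            # m_ij: windowed mean
    mean_sq = uniform_filter(img * img, size=size)   # windowed mean of x^2
    var = np.maximum(mean_sq - mean * mean, 0.0)     # v_ij, clipped for stability
    return mean, var
```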
S3: according to the face original image, skin classification grades are carried out to determine sigma values, each skin grade corresponds to one sigma value, and then according to the sigma values and the local variances of pixel values of all coordinate points in the to-be-processed image in the sliding window obtained by S2, k values are calculated by adopting the following formula:
Wherein sigma represents a parameter of the additive lee filtering, k represents the degree of peeling of an original image, and v ij represents the local variance of an image to be processed in the sliding window; for example, when the classification is first order, the value of sigma is 0, and the formula is calculated according to the k value K=1 is available, indicating that skin conditions are better without skin abrasion modification at this time); the additive lee filter is prior art.
Sigma represents a parameter of the additive lee filtering, which can be used to control the degree of filtering, when the classification is four-level, the value of sigma is 3; when the classification is three-level, σ has a value of 2; when the classification is second order, the value of σ is 1; when the classification is one level, the value of σ is 0.
S4: additive denoising the pixel value at coordinates (i, j) according to the following formula, and using the additive denoised pixel valueAs the pixel value at the coordinates (i, j);
Wherein, Representing the additively denoised pixel value at coordinate (i, j).
S5: repeating the steps S2-S4, traversing all coordinate points in the image to be processed to obtain pixel values after additive denoising of each coordinate point, taking the pixel values corresponding to all coordinate points after additive denoising as pixel values after peeling of the coordinate points, and obtaining and outputting the filtered image.
Specifically, the skin is graded from the original face image in step S3 as follows.
S31: Define a number of feature points in the original face image and connect them in order to form a polygon; the resulting mask is the complete face area, denoted M_points. Let M_human be the mask of the skin region of the whole body; the mask of the face skin region is M_face, where

M_face = M_points ∩ M_human    (3-2);
Eighty-one aligned feature points are obtained with the TensorFlow-based deep-neural-network face detector provided by OpenCV and the face-alignment algorithm proposed by Adrian Bulat. The points of the outermost frame of the face are connected in order to form a polygon, and the resulting mask is the complete face area, defined as M_points, as shown by the outer polygon in fig. 2.
Because hair, glasses, ornaments, shadows and similar factors make classification of the raw face area inaccurate, the key-point polygon is intersected with the whole-body skin-segmentation result to obtain the final face skin region.
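As an illustration of S31, the sketch below rasterizes the key-point polygon with OpenCV and intersects it with a whole-body skin mask per formula (3-2). The availability of the key points and of the body mask is assumed (both arguments are placeholders), not produced by this snippet.

```python
import cv2
import numpy as np

def face_skin_mask(image_shape, outer_points, human_skin_mask):
    """M_face = M_points AND M_human, formula (3-2).

    outer_points: ordered (K, 2) int array of the outermost face key points
    (placeholder for the 81-point detector's outer ring); human_skin_mask:
    0/255 whole-body skin mask from any segmenter, assumed already computed.
    """
    m_points = np.zeros(image_shape[:2], dtype=np.uint8)
    pts = np.asarray(outer_points, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(m_points, [pts], 255)                 # rasterize M_points
    return cv2.bitwise_and(m_points, human_skin_mask)  # intersection = M_face
```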
S32: the mask pattern of the skin region of the face is classified into four dimensions according to skin color, gloss, wrinkles and pores, and the mask pattern is as follows
The skin quality of human skin is various and can be divided into a plurality of types according to four dimensions of complexion, gloss, wrinkles and pores. In the task of beautifying, firstly, the skin type is judged, and then the parameters of the algorithm for processing different flaws are determined.
Skin color: research on human skin color currently focuses on medical diagnosis, face comparison, expression recognition and similar fields. Here skin color is graded in order to better determine the parameters of the beautification algorithm, and the grading is distinct from standard skin-color classification scales. In portrait photography the same person's skin color can look different depending on lighting, camera and shooting parameters, so the invention grades skin color by the brightness and color reflected in the image rather than by the body itself.
Skin color is divided into grade four, grade three, grade two and grade one, assigned the values 3, 2, 1, 0 in sequence. Grade-four skin color is dark, from dark skin or shadow at capture; grade-three is yellowish, from yellow skin tone, ambient light or white-balance settings; grade-two is fair, from fair skin or overexposure; grade-one is normal, a skin color type needing no adjustment, as shown in fig. 3.
Gloss is divided into grade-four gloss, grade-three gloss, grade-two gloss and grade-one gloss, assigned 3, 2, 1, 0 in sequence.
In portrait photography, the highlight region of the face is the region with the highest mean L value in the Lab color space. The exposure level of a photo — generally underexposed, normal or overexposed — can be judged from the L value of the highlight region; in post-retouching, underexposed and overexposed photos need brightening and highlight-suppression operations respectively.
Oily skin secretes grease, which reflects light during imaging and produces the reflective highlight on the face, so highlight regions usually coincide with oily regions. Grading the gloss determines the parameters of the de-glossing algorithm.
Grade-four gloss means heavy oil secretion and strong reflection in the portrait; grade-one gloss means little oil secretion and no visible reflection, as shown in fig. 4.
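A small sketch of the gloss cue described above — the mean L value over the brightest pixels in Lab space. The 5% highlight fraction is an assumed illustration value; the text only states that the highlight region is the one with the highest mean L.

```python
import cv2
import numpy as np

def highlight_mean_L(bgr: np.ndarray, top_percent: float = 5.0) -> float:
    """Mean L (Lab) over the brightest pixels, a rough gloss/exposure cue."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    L = lab[..., 0].astype(np.float64)
    thresh = np.percentile(L, 100.0 - top_percent)  # cut-off for the highlight region
    return float(L[L >= thresh].mean())
```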
Wrinkles are divided into grade four, grade three, grade two and grade one, assigned 3, 2, 1, 0 in sequence.
People at different ages fall into different wrinkle grades. Many computer-vision methods for quantifying wrinkles have been proposed at home and abroad, but they are strongly affected by illumination, shadow and resolution at capture, and their detection is unstable. The smoothing algorithm focuses on skin wrinkles, so the accuracy of wrinkle grading directly determines its effectiveness. Grade four denotes the most and deepest wrinkles; grade one denotes few and very light wrinkles, as shown in fig. 5.
Pores are divided into grade four, grade three, grade two and grade one, assigned 3, 2, 1, 0 in sequence.
Rough skin is also a key target of the smoothing algorithm: the number and size of pores reflect whether the skin is smooth and fine. Since skin condition varies greatly between people, skin is divided by roughness into grades four, three, two and one. Grade four denotes rough skin with clearly visible pores; grade one denotes smooth, fine skin, as shown in fig. 6.
S33: in the original face image, four parts of the forehead, the left cheek, the right cheek and the chin of the face are selected as interested areas, the weights of each area divided into skin colors, oily lights, wrinkles and pores are set, then the grade assignment of the four parts is calculated by adopting the following formula, and the value of sigma is equal to the grade assignment:
Wherein, Gamma = 1,2,3,4 represents the weight of the skin tone in the four areas of the forehead, left cheek, right cheek and chin,/>, respectivelyGamma = 1,2,3,4 represents the weight of the shiny in the forehead, left cheek, right cheek and chin, respectively,/>Gamma = 1,2,3,4 indicates the weights of wrinkles in the forehead, left cheek, right cheek and chin, respectively,/>Γ=1, 2,3,4 represent the weights of pores in the four areas of the forehead, left cheek, right cheek, and chin, respectively.
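A sketch of the S33 weighted sum follows. The uniform weights are placeholders — the real values come from table 1, whose entries are not reproduced in this text — chosen to sum to 1 so that σ stays within the grade range [0, 3].

```python
import numpy as np

# Rows gamma = forehead, left cheek, right cheek, chin;
# columns = skin color, gloss, wrinkles, pores.
WEIGHTS = np.full((4, 4), 1.0 / 16.0)  # placeholder for table 1

def sigma_from_grades(grades: np.ndarray) -> float:
    """grades[gamma, s] is the 3/2/1/0 grade assignment of index s in region gamma."""
    return float(np.sum(WEIGHTS * grades))
```

With these uniform weights, a face whose four regions are all graded three on every index (value 2) gives σ = 2, matching the setting used in the effect test described later.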
In a portrait photo, after the face rectangle and the face key points are detected and aligned, the regions of interest are selected, and the parameters of the beautification algorithm are finally determined from the skin grading indexes.
When grading the skin by index, different areas of the face carry different weights: the forehead highlight area usually has heavy gloss and brighter skin color, the cheeks usually have heavy gloss and heavier wrinkles, and the chin usually has lighter gloss and lighter wrinkles. To always select skin areas that are unaffected by illumination, shadow, shooting angle and similar factors and can be graded accurately, the invention takes the forehead, left cheek, right cheek and chin as the regions of interest and, from experience, sets the weight matrix of table 1 for the index calculation over these four regions.
Table 1. Skin index weights of the face regions of interest
The forehead, left cheek, right cheek and chin regions of interest can be extracted as follows.
The face key points are expressed as Loc_i = (x_i, y_i), i = 1, 2, …, 81, where x_i and y_i are the abscissa and ordinate of point i; the regions they cover are listed in table 2.
Table 2. Face regions corresponding to the key points

Key point range | Face region
---|---
Loc_1–Loc_17 | Face contour
Loc_18–Loc_22 | Left eyebrow
Loc_23–Loc_27 | Right eyebrow
Loc_28–Loc_36 | Nose
Loc_37–Loc_42 | Left eye
Loc_43–Loc_48 | Right eye
Loc_49–Loc_68 | Mouth
Loc_69–Loc_81 | Forehead
In face skin grading, using the whole face region as input invites interference from pose, shadow and the like, so four regions of interest (Region of Interest, ROI) are defined; a schematic is given in fig. 2. Let Rect_i^lx, Rect_i^ly, Rect_i^rx and Rect_i^ry, i = 1, 2, 3, 4, be the horizontal and vertical key-point positions of the upper-left corner and the horizontal and vertical key-point positions of the lower-right corner of the i-th ROI rectangle, where i = 1, 2, 3, 4 denote the forehead, left cheek, right cheek and chin respectively.
The upper-left and lower-right corners of the forehead region are (Rect_1^lx, Rect_1^ly) = (x_21, max(y_71, y_72, y_81)) and (Rect_1^rx, Rect_1^ry) = (x_24, min(y_21, y_24)).
The upper-left and lower-right corners of the left-cheek region are (Rect_2^lx, Rect_2^ly) = (x_37, y_29) and (Rect_2^rx, Rect_2^ry) = (x_32, y_32).
The upper-left and lower-right corners of the right-cheek region are (Rect_3^lx, Rect_3^ly) = (x_36, y_29) and (Rect_3^rx, Rect_3^ry) = (x_46, y_32).
The upper-left and lower-right corners of the chin region are (Rect_4^lx, Rect_4^ly) = (x_8, max(y_57, y_58, y_59)) and (Rect_4^rx, Rect_4^ry) = (x_10, min(y_8, y_9, y_10)).
The four regions are shown as the inner rectangles of fig. 2.
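The corner formulas above transcribe directly into code. The sketch below assumes pts is a 1-indexed array of the 81 key points (shape (82, 2), index 0 unused) so the indices match the 1-based formulas, and returns (left, top, right, bottom) tuples.

```python
import numpy as np

def roi_rects(pts: np.ndarray):
    """Four ROI rectangles from the 81 face key points, per the corner formulas."""
    x = lambda i: int(pts[i, 0])
    y = lambda i: int(pts[i, 1])
    forehead    = (x(21), max(y(71), y(72), y(81)), x(24), min(y(21), y(24)))
    left_cheek  = (x(37), y(29), x(32), y(32))
    right_cheek = (x(36), y(29), x(46), y(32))
    chin        = (x(8),  max(y(57), y(58), y(59)), x(10), min(y(8), y(9), y(10)))
    return forehead, left_cheek, right_cheek, chin
```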
Experiments and analysis
① Experimental data
1000 portrait photos were used in this experiment. First, face detection and key-point alignment were run on the pictures, and the forehead, left cheek, right cheek and chin were cut out with the region-of-interest formulas; since some pictures contain several people, 1450 face crops were obtained in total. Second, professional retouchers were invited to label all the crops with skin-index grades according to industry experience. Third, the data set was randomly split into 70% (1015) for training, 20% (290) for validation and 10% (145) for testing, and a ResNet was trained and tested on it. Fourth, for comparison, the same test set was also graded on each index with conventional methods.
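The 70/20/10 split can be reproduced, for example, with scikit-learn; the sketch below is an assumption about tooling, not the authors' actual pipeline.

```python
from sklearn.model_selection import train_test_split

def split_dataset(samples, labels, seed: int = 0):
    """70% train / 20% val / 10% test, matching the 1015/290/145 split above."""
    X_train, X_rest, y_train, y_rest = train_test_split(
        samples, labels, test_size=0.30, random_state=seed, stratify=labels)
    # 2/3 of the remaining 30% -> validation (20% overall), 1/3 -> test (10%).
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=1 / 3, random_state=seed, stratify=y_rest)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```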
② Single-index experimental results
With the above pipeline, the four indexes were classified separately, giving the results in the following four tables.
Table 3. Skin color classification accuracy

Feature | Samples | ResNet (%) | Color thresholding (%)
---|---|---|---
Grade-one skin color | 42 | 95.24 | 90.48
Grade-two skin color | 36 | 94.44 | 86.11
Grade-three skin color | 38 | 92.11 | 76.32
Grade-four skin color | 29 | 86.21 | 65.52
Total | 145 | 92.41 | 80.69
As table 3 shows, on the 145 images of different skin colors the ResNet method achieved 92.41% classification accuracy, higher than the 80.69% of the conventional color-thresholding method.
Table 4. Gloss classification accuracy

Feature | Samples | ResNet (%) | Maximum inter-class variance method (%)
---|---|---|---
Grade-one gloss | 40 | 75.00 | 65.00
Grade-two gloss | 32 | 106.25 | 65.63
Grade-three gloss | 45 | 93.33 | 73.33
Grade-four gloss | 28 | 78.57 | 78.57
Total | 145 | 88.28 | 70.34
As table 4 shows, on the 145 images of different gloss conditions the ResNet method achieved 88.28% classification accuracy, higher than the 70.34% of the maximum inter-class variance (Otsu) method.
Table 5. Wrinkle classification accuracy
As table 5 shows, on the 145 images of different wrinkle conditions the ResNet method achieved 89.66% classification accuracy, higher than the 73.10% of the gray-level co-occurrence matrix method.
Table 6. Pore classification accuracy

Feature | Samples | ResNet (%) | Threshold segmentation (%)
---|---|---|---
Grade-one pores | 47 | 87.23 | 80.85
Grade-two pores | 43 | 90.70 | 86.05
Grade-three pores | 33 | 93.94 | 87.88
Grade-four pores | 22 | 81.82 | 77.27
Total | 145 | 88.97 | 83.45
As table 6 shows, on the 145 images of different pore conditions the ResNet method achieved 88.97% classification accuracy, higher than the 83.45% of the combined thresholding-and-morphology method.
Across the four tables, grading the four skin indexes — skin color, gloss, wrinkles and pores — with deep learning yields an overall accuracy higher than the conventional methods.
③ Multi-index experimental results
The individual results for the four indexes show the effectiveness of the algorithm; one face image was then selected for a multi-index demonstration, extracting the original image and the skin of its four regions of interest.
Grading the four regions by index gives the results of table 7. The skin color of the original is grade three — not fair enough, though not as dark as grade four. The gloss is grade three, with clearly visible shine on the forehead and left cheek. Wrinkles and pores are grade three: the skin is rough and not fine enough.
Table 7. Comprehensive index grading of the face regions of interest

Region | Skin color | Gloss | Wrinkles | Pores
---|---|---|---|---
Forehead | Grade three | Grade three | Grade two | Grade two
Chin | Grade three | Grade two | Grade two | Grade three
Left cheek | Grade three | Grade three | Grade three | Grade three
Right cheek | Grade three | Grade one | Grade two | Grade two
Overall | Grade three | Grade three | Grade three | Grade three
The conclusions of table 7 agree with the reference labels, showing that building a skin-evaluation model by dividing the four skin indexes into four grades over the four face regions of interest is scientific and reasonable, and that automatically determining the parameters of the improved additive Lee filtering skin-smoothing method in this way is feasible.
A face image whose wrinkles and pores are grade three was taken as an effect test, so the smoothing parameter was set to 2; the original face image and the effect after the smoothing operator are shown in fig. 7, where fig. 7a is the original and fig. 7b is the image after the smoothing operator.
As fig. 7 shows, the improved additive Lee filtering skin-smoothing method smooths the skin while keeping transitions uniform and preserving contrast and texture, so the result looks natural.
Finally, it is noted that the above embodiments only illustrate the technical solution of the invention and do not limit it. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications and equivalents may be made without departing from the spirit and scope of the technical solution, all of which are intended to be covered by the claims of the invention.
Claims (2)
1. An improved additive Lee filtering skin-smoothing method, characterized by comprising the following steps:
S1: acquiring an original face image and cropping it to size N×M to obtain the image to be processed, with x_ij denoting the pixel value at coordinates (i, j) of the image to be processed;
S2: setting a sliding window of size (2n+1)×(2m+1), and computing the local mean and local variance of the pixel values of all coordinate points of the image to be processed within the sliding window;
S3: grading the skin according to the original face image to determine the σ value, each skin grade corresponding to one σ value, and then, from σ and the local variance obtained in S2, computing the k value with the following formula:

k_ij = v_ij / (v_ij + σ)

wherein σ is the parameter of the additive Lee filter, k represents the degree of smoothing applied to the original image, and v_ij is the local variance of the image to be processed within the sliding window;
the skin being graded from the original face image as follows:
S31: defining a number of feature points in the original face image and connecting them in order to form a polygon, the resulting mask being the complete face area, denoted M_points; with M_human the mask of the skin region of the whole body, the mask of the face skin region M_face is

M_face = M_points ∩ M_human    (3-2);

S32: grading the masked face skin region along four dimensions — skin color, gloss, wrinkles and pores — as follows:
skin color is divided into grade four, grade three, grade two and grade one, assigned the values 3, 2, 1, 0 in sequence;
gloss is divided into grade-four gloss, grade-three gloss, grade-two gloss and grade-one gloss, assigned 3, 2, 1, 0 in sequence;
wrinkles are divided into grade four, grade three, grade two and grade one, assigned 3, 2, 1, 0 in sequence;
pores are divided into grade four, grade three, grade two and grade one, assigned 3, 2, 1, 0 in sequence;
S33: in the original face image, selecting the forehead, left cheek, right cheek and chin as regions of interest, setting the weight of each region for skin color, gloss, wrinkles and pores, and computing the overall grade value of the four regions with the following formula, σ being set equal to this value:

σ = Σ_{γ=1}^{4} ( w_γ^c · c_γ + w_γ^g · g_γ + w_γ^r · r_γ + w_γ^p · p_γ )

wherein w_γ^c, w_γ^g, w_γ^r and w_γ^p (γ = 1, 2, 3, 4) are the weights of skin color, gloss, wrinkles and pores in the forehead, left cheek, right cheek and chin respectively, and c_γ, g_γ, r_γ, p_γ are the corresponding grade assignments in region γ;
S4: additive denoising the pixel value at coordinates (i, j) according to the following formula, and using the additive denoised pixel value As the pixel value at the coordinates (i, j);
Wherein, Representing the additively denoised pixel value at coordinates (i, j); m ij represents a local average value of pixel values of all coordinate points in the image to be processed in the sliding window;
s5: repeating the steps S2-S4, traversing all coordinate points in the image to be processed to obtain pixel values after additive denoising of each coordinate point, taking the pixel values corresponding to all coordinate points after additive denoising as pixel values after peeling of the coordinate points, and obtaining and outputting the filtered image.
2. The improved additive Lee filtering skin-smoothing method of claim 1, characterized in that the local mean and local variance of the pixel values of all coordinate points of the image to be processed within the sliding window are computed in S2 as:

m_ij = (1 / ((2n+1)(2m+1))) · Σ_{k=i−n}^{i+n} Σ_{l=j−m}^{j+m} x_kl
v_ij = (1 / ((2n+1)(2m+1))) · Σ_{k=i−n}^{i+n} Σ_{l=j−m}^{j+m} (x_kl − m_ij)²

wherein x_kl is the pixel value at coordinates (k, l) within the sliding window, m_ij is the local mean of the pixel values of all coordinate points of the image to be processed within the window, and v_ij is the local variance.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202111539879.3A | 2021-12-15 | 2021-12-15 | Improved additive Lee filtering skin-smoothing method
Publications (2)

Publication Number | Publication Date
---|---
CN114202483A | 2022-03-18
CN114202483B | 2024-05-14
Family
ID=80654344

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202111539879.3A (granted as CN114202483B, active) | Improved additive Lee filtering skin-smoothing method | 2021-12-15 | 2021-12-15

Country Status (1)

Country | Link
---|---
CN | CN114202483B (en)
Citations (5)

Publication number | Priority date | Publication date | Title
---|---|---|---
CN109712095A (en) * | 2018-12-26 | 2019-05-03 | A kind of method for beautifying faces that rapid edge retains
CN110070502A (en) * | 2019-03-25 | 2019-07-30 | The method, apparatus and storage medium of facial image mill skin
CN110232670A (en) * | 2019-06-19 | 2019-09-13 | A method of the image visual effect enhancing based on low-and high-frequency separation
CN112396573A (en) * | 2019-07-30 | 2021-02-23 | Facial skin analysis method and system based on image recognition
CN112784773A (en) * | 2021-01-27 | 2021-05-11 | Image processing method and device, storage medium and terminal
Family Cites Families (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
EP2277141A2 (en) * | 2008-07-30 | 2011-01-26 | Tessera Technologies Ireland Limited | Automatic face and skin beautification using face detection
Non-Patent Citations (3)

- Jong-Sen Lee. Digital Image Enhancement and Noise Filtering by Use of Local Statistics. IEEE Transactions on Pattern Analysis and Machine Intelligence, March 1980.
- Wang Zhiqiang, Miao Xiangyu. Research and implementation of face beautification based on edge-preserving filtering and a skin-color model (基于保边滤波和肤色模型的人脸美颜技术研究与实现). Wireless Internet Technology, 2018(17).
- Design and implementation of a skin-beautification model based on skin segmentation and skin-quality evaluation (基于皮肤分割与肤质评价的美肤模型设计与实现). China Master's Theses Full-text Database, Information Science and Technology, 2023.
Also Published As

Publication number | Publication date
---|---
CN114202483A (en) | 2022-03-18
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant