CN114202483A - Skin-smoothing method based on improved additive Lee filtering - Google Patents

Skin-smoothing method based on improved additive Lee filtering

Info

Publication number
CN114202483A
CN114202483A (application CN202111539879.3A; granted publication CN114202483B)
Authority
CN
China
Prior art keywords
image
skin
coordinate
face
grades
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111539879.3A
Other languages
Chinese (zh)
Other versions
CN114202483B (en)
Inventor
何鑫
杨梦宁
龙超
柴海洋
张欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202111539879.3A priority Critical patent/CN114202483B/en
Publication of CN114202483A publication Critical patent/CN114202483A/en
Application granted granted Critical
Publication of CN114202483B publication Critical patent/CN114202483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30088: Skin; Dermal
    • G06T 2207/30196: Human being; Person
    • G06T 2207/30201: Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention relates to a skin-smoothing method based on improved additive Lee filtering. An original face image is obtained and cropped to N×M to give the image to be processed. A sliding window is then set, and the parameter k is calculated from the σ value and the local variance of the pixel values of all coordinate points of the image to be processed inside the sliding window. The formula x̂_ij = m_ij + k·(x_ij − m_ij) is then used to additively denoise the pixel value at coordinate (i, j). All coordinate points in the image to be processed are traversed to obtain the additively denoised pixel value of each point, these values are taken as the pixel values of the smoothed image, and the filtered image is output. The method smooths the skin while keeping the tonal transition across it uniform and preserving light-dark contrast and texture, so the result looks real.

Description

Skin-smoothing method based on improved additive Lee filtering
Technical Field
The invention relates to a facial-beautification algorithm, in particular to a skin-smoothing method based on improved additive Lee filtering.
Background
The portrait skin-smoothing algorithm rests on two main operations: reducing the color cast of blemished skin areas and filtering those areas. On this basis, portrait smoothing algorithms can be divided into general smoothing, channel smoothing, detail-superposition smoothing, and the like.
General smoothing is the most basic algorithm, and its style is purely smooth. Channel smoothing originates from the corresponding operation in Photoshop; it is computed on the portrait's blue channel, and its principle is to brighten the dark-spot areas of the skin so that small dark spots fade into a uniform tone, giving an approximately smoothed, naturally smooth look. Detail-superposition smoothing uses edge-preserving filtering at two scales and superposes detail information, layer by layer, onto a large-radius smoothed image to achieve the smoothing effect.
Although general smoothing yields a smooth result, it is weak in detail: much of the original image detail is lost and the image is seriously distorted. Channel smoothing can fade acne marks and blemishes while whitening the skin, but its smoothness is mediocre. Detail-superposition smoothing based on a bilateral filter works well, but when skin blemishes such as wrinkles and acne marks are severe, the filter parameters must be increased, and overly large strength parameters greatly slow the algorithm.
Disclosure of Invention
Aiming at the problems in the prior art, the technical problem to be solved by the invention is how to provide a skin-smoothing method that is simple and fast to run while keeping the skin smooth and its texture realistic.
To solve this problem, the invention adopts the following technical scheme. A skin-smoothing method based on improved additive Lee filtering comprises the following steps:
S1: obtaining an original face image, cropping it to N×M to obtain the image to be processed, and using x_ij to denote the pixel value at coordinate (i, j) of the image to be processed;
S2: setting a sliding window of size (2n+1) × (2m+1), and calculating the local mean and local variance of the pixel values of all coordinate points of the image to be processed inside the sliding window;
S3: determining the σ value from the skin-type grade of the original face image, each skin grade corresponding to one σ value, and calculating the k value from σ and the local variance obtained in S2 using the following formula:
k = v_ij / (v_ij + σ²)
where σ denotes the parameter of the additive Lee filter, k denotes the degree of smoothing applied to the original image, and v_ij denotes the local variance of the image to be processed inside the sliding window;
S4: additively denoising the pixel value at coordinate (i, j) according to the following formula, and using the additively denoised value x̂_ij as the pixel value at coordinate (i, j):
x̂_ij = m_ij + k·(x_ij − m_ij)
where x̂_ij denotes the additively denoised pixel value at coordinate (i, j) and m_ij denotes the local mean inside the sliding window;
S5: repeating S2-S4, traversing all coordinate points of the image to be processed to obtain the additively denoised pixel value of each coordinate point; these values are taken as the pixel values of the smoothed image, and the filtered image is output.
As an improvement, the local mean and local variance of all coordinate pixel values of the image to be processed inside the sliding window in S2 are calculated as:
m_ij = (1 / ((2n+1)(2m+1))) · Σ_{k=i−n..i+n} Σ_{l=j−m..j+m} x_kl
v_ij = (1 / ((2n+1)(2m+1))) · Σ_{k=i−n..i+n} Σ_{l=j−m..j+m} (x_kl − m_ij)²
where x_kl denotes the pixel value at coordinate (k, l) inside the sliding window, m_ij denotes the local mean of the pixel values of all coordinate points inside the window, and v_ij denotes their local variance.
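For illustration, the local mean and local variance above can be computed with a vectorized sliding window. In this sketch the function name, the window half-sizes n = m = 2, and the reflect padding at the image border are assumptions not fixed by the text:

```python
import numpy as np

def local_stats(img, n=2, m=2):
    """Local mean m_ij and local variance v_ij of every pixel over a
    (2n+1) x (2m+1) window centred on that pixel."""
    img = np.asarray(img, dtype=np.float64)
    # reflect-pad so every pixel has a full window (border handling is assumed)
    padded = np.pad(img, ((n, n), (m, m)), mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(padded, (2 * n + 1, 2 * m + 1))
    return win.mean(axis=(-1, -2)), win.var(axis=(-1, -2))
```

On a constant image the local mean equals that constant and the local variance is zero, which later makes k = 0 and leaves flat regions untouched.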
As an improvement, in S3 the skin types are classified from the original face image as follows:
S31: defining a number of feature points in the original face image and connecting them in sequence to form a polygon; the resulting mask, the complete face region, is defined as M_points, the mask of the whole-body skin region as M_human, and the mask of the facial skin region as M_face:
M_face = M_points ∩ M_human (3-2);
S32: classifying the facial-skin mask image along four dimensions, namely skin tone, gloss, wrinkles and pores, as follows:
skin-tone grades are divided into level four, level three, level two and level one, assigned the values 3, 2, 1 and 0 respectively;
gloss grades are divided into level-four, level-three, level-two and level-one gloss, assigned the values 3, 2, 1 and 0 respectively;
wrinkle grades are divided into level four, level three, level two and level one, assigned the values 3, 2, 1 and 0 respectively;
pore grades are divided into level four, level three, level two and level one, assigned the values 3, 2, 1 and 0 respectively;
S33: in the original face image, selecting the forehead, left cheek, right cheek and chin as regions of interest, setting the weight of each region for skin tone, gloss, wrinkles and pores, and then calculating the overall grade value with the following formula, the value of σ being equal to the grade value so obtained:
σ = (1/4) · Σ_{r=1..4} ( w_r^color · s_r^color + w_r^gloss · s_r^gloss + w_r^wrinkle · s_r^wrinkle + w_r^pore · s_r^pore ) (3-3)
where w_r^color, w_r^gloss, w_r^wrinkle and w_r^pore (r = 1, ..., 4) denote the skin-tone, gloss, wrinkle and pore weights of the forehead, left cheek, right cheek and chin respectively, and s_r^color, s_r^gloss, s_r^wrinkle and s_r^pore denote the corresponding grade values in region r.
Compared with the prior art, the invention has at least the following advantages:
1. Existing filters have higher per-pixel computational complexity than the improved additive Lee filter. When many images must be smoothed, or when skin blemishes such as wrinkles and acne marks are severe, existing filters need larger parameters, and overly large strength parameters greatly slow the algorithm; the proposed method avoids this.
2. The method calculates the smoothing-degree parameter k from skin-type classification and grading, so different skin grades influence the final degree of smoothing, and the smoothed result retains realistic texture.
3. The method smooths the skin while keeping the tonal transition across it uniform and preserving light-dark contrast and texture, so the result looks real.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of a polygonal outer frame of a face and a region of interest.
Fig. 3 is a schematic diagram of skin tone grading.
FIG. 4 is a schematic diagram of gloss grading.
Fig. 5 is a schematic view of wrinkle classification.
Fig. 6 is a schematic illustration of pore grading.
Fig. 7 is a comparison between the original face image and the smoothing effect, where fig. 7a is the original image and fig. 7b is the image after the smoothing operator.
Detailed Description
The present invention is described in further detail below.
Referring to fig. 1, a skin-smoothing method based on improved additive Lee filtering comprises the following steps:
S1: obtaining an original face image, cropping it to N×M to obtain the image to be processed, and using x_ij to denote the pixel value at coordinate (i, j) of the image to be processed.
S2: setting a sliding window of size (2n+1) × (2m+1), and calculating the local mean and local variance of the pixel values of all coordinate points of the image to be processed inside the sliding window. Any existing method can be used for this calculation; the invention preferably adopts the following one:
m_ij = (1 / ((2n+1)(2m+1))) · Σ_{k=i−n..i+n} Σ_{l=j−m..j+m} x_kl
v_ij = (1 / ((2n+1)(2m+1))) · Σ_{k=i−n..i+n} Σ_{l=j−m..j+m} (x_kl − m_ij)²
where x_kl denotes the pixel value at coordinate (k, l) inside the sliding window centred on the pixel being denoised at (i, j), m_ij denotes the local mean of the pixel values of all coordinate points inside the window, and v_ij denotes their local variance.
S3: determining the σ value from the skin-type grade of the original face image, each skin grade corresponding to one σ value, and calculating the k value from σ and the local variance obtained in S2 using the following formula:
k = v_ij / (v_ij + σ²)
where σ denotes the parameter of the additive Lee filter and controls the degree of filtering, k denotes the degree of smoothing applied to the original image, and v_ij denotes the local variance of the image to be processed inside the sliding window. When the grade is level four, σ = 3; level three, σ = 2; level two, σ = 1; level one, σ = 0. For example, when the grade is level one, σ = 0, and the formula gives k = 1, indicating that the skin condition is good and no smoothing modification is applied. The additive Lee filter itself is prior art.
S4: additively denoising the pixel value at coordinate (i, j) according to the following formula, and using the additively denoised value x̂_ij as the pixel value at coordinate (i, j):
x̂_ij = m_ij + k·(x_ij − m_ij)
where x̂_ij denotes the additively denoised pixel value at coordinate (i, j) and m_ij denotes the local mean inside the sliding window.
S5: repeating S2-S4, traversing all coordinate points of the image to be processed to obtain the additively denoised pixel value of each coordinate point; these values are taken as the pixel values of the smoothed image, and the filtered image is output.
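As an illustration only, steps S2-S5 can be vectorized in one pass rather than an explicit per-pixel loop. The sketch below assumes a single-channel floating-point image, reflect padding at the border, window half-sizes n = m = 2, and a small eps guard against division by zero; none of these choices are fixed by the patent:

```python
import numpy as np

def additive_lee_filter(img, sigma, n=2, m=2):
    """Improved additive Lee smoothing sketch: x_hat = m + k * (x - m),
    with k = v / (v + sigma^2) computed per pixel from the local mean m
    and local variance v inside a (2n+1) x (2m+1) sliding window."""
    img = np.asarray(img, dtype=np.float64)
    padded = np.pad(img, ((n, n), (m, m)), mode="reflect")  # border handling (assumed)
    win = np.lib.stride_tricks.sliding_window_view(padded, (2 * n + 1, 2 * m + 1))
    mean = win.mean(axis=(-1, -2))            # local mean m_ij
    var = win.var(axis=(-1, -2))              # local variance v_ij
    k = var / (var + sigma ** 2 + 1e-12)      # k = v_ij / (v_ij + sigma^2)
    return mean + k * (img - mean)            # additive denoising step
```

With σ = 0 (a level-one skin grade) k is 1 wherever the local variance is non-zero, so the image passes through essentially unmodified; σ = 3 (level four) pulls each pixel most strongly toward its local mean. A color image would be processed per channel.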
Specifically, in S3 the skin types are classified from the original face image as follows:
S31: defining a number of feature points in the original face image and connecting them in sequence to form a polygon; the resulting mask, the complete face region, is defined as M_points, the mask of the whole-body skin region as M_human, and the mask of the facial skin region as M_face:
M_face = M_points ∩ M_human (3-2);
The 81 aligned feature points are obtained with the TensorFlow-based deep-neural-network face-detection algorithm provided by OpenCV and the face-alignment algorithm proposed by Adrian Bulat. The points of the outermost frame of the face are connected in sequence to form a polygon; the resulting mask is the complete face region, defined as M_points, shown as the outer polygon in fig. 2.
Because the face is affected by hair, glasses, ornaments, light and shadow, and similar factors, skin-type classification from the polygon alone would be inaccurate; therefore, on the basis of the key-point segmentation, the intersection with the whole-body skin-segmentation result is taken to obtain the final facial skin region.
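The intersection of formula (3-2) is an element-wise AND of the two boolean masks. A minimal sketch follows; the toy 4 x 4 masks are illustrative only, and producing M_points by rasterizing the 81-point polygon (e.g. with a polygon-fill routine) is assumed to happen beforehand:

```python
import numpy as np

def face_skin_mask(m_points, m_human):
    """M_face = M_points AND M_human: keep only pixels that lie inside the
    landmark polygon and are also classified as skin, so hair, glasses
    and shadow pixels drop out of the face region."""
    return np.logical_and(m_points.astype(bool), m_human.astype(bool))

# toy example: a 12-pixel polygon mask intersected with a skin mask
m_points = np.array([[0, 1, 1, 0],
                     [1, 1, 1, 1],
                     [1, 1, 1, 1],
                     [0, 1, 1, 0]], dtype=bool)
m_human = np.ones((4, 4), dtype=bool)
m_human[0] = False  # e.g. the top row is hair, not skin
m_face = face_skin_mask(m_points, m_human)
```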
S32: classifying the facial-skin mask image along four dimensions, namely skin tone, gloss, wrinkles and pores, as follows.
Human skin comes in many types and can be divided along the four dimensions of skin tone, gloss, wrinkles and pores. In the beautification task, the skin type is judged first, and the parameters of the algorithms that treat the various blemishes are then determined.
Skin tone: current research on human skin color concentrates mainly on medical diagnosis, face comparison, expression recognition and similar fields. The skin-tone grading proposed by the invention serves to better determine the parameters of the beautification algorithm and differs from standard skin-color grading. In portrait photography, the same person's skin tone can appear different because of differences in illumination, shooting equipment and shooting parameters, so the invention grades skin tone by the shade and color reflected in the image rather than by the body itself.
Skin-tone grades are divided into level four, level three, level two and level one, assigned the values 3, 2, 1 and 0 respectively. Level four is dark skin, or skin darkened by shadow during shooting; level three is yellowish skin caused by complexion, ambient light or white-balance settings; level two is fair skin, or skin brightened by overexposure; level one is normal skin tone that needs no adjustment, as shown in fig. 3.
Gloss grades are divided into level-four, level-three, level-two and level-one gloss, assigned the values 3, 2, 1 and 0 respectively.
In portrait photography, the facial highlight region is the region with the highest mean L value in the Lab color space. The degree of exposure of the photograph can be judged from the L value of the highlight region and is generally classified as underexposed, normally exposed or overexposed. In later retouching, underexposed photos need to be brightened and overexposed photos darkened.
Because oily skin secretes grease, and the grease reflects light during imaging, reflections appear in the facial highlight region; gloss therefore usually accompanies the highlight region. The gloss grade determines the parameters of the de-glossing algorithm.
Level-four gloss means heavy grease secretion and strong reflection in the portrait; level-one gloss means the skin secretes little grease and the portrait shows no reflection, as shown in fig. 4.
Wrinkle grades are divided into level four, level three, level two and level one, assigned the values 3, 2, 1 and 0 respectively.
Wrinkles appear in different grades because people are at different stages of life. Many computer-vision methods for quantifying wrinkles have been proposed at home and abroad, but they are strongly affected by illumination, shadow and resolution at shooting time, and their detection is unstable. The smoothing algorithm focuses on skin wrinkles, so the accuracy of wrinkle grading directly determines its effectiveness. Level four is the grade with the most wrinkles and the deepest texture, and level one has few wrinkles and very light texture, as shown in fig. 5.
Pore grades are divided into level four, level three, level two and level one, assigned the values 3, 2, 1 and 0 respectively.
Rough skin is also a key target of the smoothing algorithm. The size of the skin's pores reflects whether the skin is smooth and fine. Skin condition varies greatly between people, and roughness is divided into the four levels above: level four represents rough skin with prominent pores, and level one represents smooth, fine skin, as shown in fig. 6.
S33: in the original face image, selecting the forehead, left cheek, right cheek and chin as regions of interest, setting the weight of each region for skin tone, gloss, wrinkles and pores, and then calculating the overall grade value with the following formula, the value of σ being equal to the grade value so obtained:
σ = (1/4) · Σ_{r=1..4} ( w_r^color · s_r^color + w_r^gloss · s_r^gloss + w_r^wrinkle · s_r^wrinkle + w_r^pore · s_r^pore ) (3-3)
where w_r^color, w_r^gloss, w_r^wrinkle and w_r^pore (r = 1, ..., 4) denote the skin-tone, gloss, wrinkle and pore weights of the forehead, left cheek, right cheek and chin respectively, and s_r^color, s_r^gloss, s_r^wrinkle and s_r^pore denote the corresponding grade values in region r.
In a portrait photo, after the face rectangle is detected and the facial key points are aligned, the regions of interest are selected, and the parameters of the beautification algorithm are finally determined from the skin-type indexes.
The skin-type weights differ between facial regions: the forehead highlight area usually has heavy gloss and bright skin tone, the cheek areas usually have heavy gloss and pronounced wrinkles, and the chin area usually has light gloss and light wrinkles. To always select skin regions little affected by illumination, shadow, shooting angle and similar factors, the forehead, left cheek, right cheek and chin are chosen as regions of interest, and the weight matrix in table 1 below is set empirically for the index calculation over the four regions.
TABLE 1 Weights of the skin-type indexes in the face regions of interest

Index       Forehead   Left cheek   Right cheek   Chin
Skin tone     0.35        0.25         0.25       0.15
Gloss         0.4         0.2          0.2        0.1
Wrinkles      0.2         0.3          0.3        0.2
Pores         0.2         0.3          0.3        0.2
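Table 1 is applied as a weighted sum over the four regions for each index. How the four per-index grades then combine into the single σ grade is not fully spelled out in the text, so the averaging-and-rounding step in this sketch is an assumption, as are the dictionary keys:

```python
import numpy as np

# Table 1 weights, ordered (forehead, left cheek, right cheek, chin)
WEIGHTS = {
    "skin_tone": [0.35, 0.25, 0.25, 0.15],
    "gloss":     [0.40, 0.20, 0.20, 0.10],
    "wrinkle":   [0.20, 0.30, 0.30, 0.20],
    "pore":      [0.20, 0.30, 0.30, 0.20],
}

def sigma_from_grades(grades):
    """grades maps each index to its per-region grade values (0..3) in the
    same region order as WEIGHTS.  Each index is reduced to a weighted
    sum over the regions; the four index grades are then averaged and
    rounded to the nearest integer grade (assumed combination rule)."""
    per_index = [float(np.dot(WEIGHTS[k], grades[k])) for k in WEIGHTS]
    return round(sum(per_index) / len(per_index))
```

A face graded level three (value 2) in every region and index yields σ = 2, matching the level-three σ value used by the smoothing filter.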
The forehead, left cheek, right cheek and chin regions of interest can be extracted as follows.
The facial key points are expressed as Loc_i = (x_i, y_i), i = 1, 2, ..., 81, where x_i and y_i are the horizontal and vertical coordinates of point i; the corresponding face regions are listed in table 2 below.
TABLE 2 Face regions corresponding to the key points

Key-point range    Face region
Loc1 ~ Loc17       Cheek edge
Loc18 ~ Loc22      Left eyebrow
Loc23 ~ Loc27      Right eyebrow
Loc28 ~ Loc36      Nose
Loc37 ~ Loc42      Left eye
Loc43 ~ Loc48      Right eye
Loc49 ~ Loc68      Mouth
Loc69 ~ Loc81      Forehead
In the facial skin-classification task, taking the whole region as input would suffer interference from pose, shadow and the like, so a division into four regions of interest (ROI) is proposed; a schematic is shown in fig. 2. Let Rect_i_lx, Rect_i_ly, Rect_i_rx and Rect_i_ry, i = 1, 2, 3, 4, denote the rectangles of the forehead, left cheek, right cheek and chin respectively.
The top-left and bottom-right key-point positions of the forehead region are (Rect1_lx, Rect1_ly) = (x_21, max(y_71, y_72, y_81)) and (Rect1_rx, Rect1_ry) = (x_24, min(y_21, y_24)).
The top-left and bottom-right key-point positions of the left-cheek region are (Rect2_lx, Rect2_ly) = (x_37, y_29) and (Rect2_rx, Rect2_ry) = (x_32, y_32).
The top-left and bottom-right key-point positions of the right-cheek region are (Rect3_lx, Rect3_ly) = (x_36, y_29) and (Rect3_rx, Rect3_ry) = (x_46, y_32).
The top-left and bottom-right key-point positions of the chin region are (Rect4_lx, Rect4_ly) = (x_8, max(y_57, y_58, y_59)) and (Rect4_rx, Rect4_ry) = (x_10, min(y_8, y_9, y_10)).
The four regions are shown as the inner rectangles of fig. 2.
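The four corner formulas can be transcribed directly. In this sketch, `loc` maps the 1-based landmark index to an (x, y) pair; the image origin at the top left with y increasing downward, and the function and key names, are assumptions:

```python
def roi_rectangles(loc):
    """Top-left and bottom-right corners of the four regions of interest,
    built from the 81 aligned landmarks per the corner formulas above."""
    x = lambda i: loc[i][0]
    y = lambda i: loc[i][1]
    return {
        "forehead":    ((x(21), max(y(71), y(72), y(81))), (x(24), min(y(21), y(24)))),
        "left_cheek":  ((x(37), y(29)), (x(32), y(32))),
        "right_cheek": ((x(36), y(29)), (x(46), y(32))),
        "chin":        ((x(8), max(y(57), y(58), y(59))), (x(10), min(y(8), y(9), y(10)))),
    }
```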
Experiments and analyses
① Experimental data
1000 portrait photos were used for this experiment. First, face recognition and key-point alignment were performed, and the forehead, left cheek, right cheek and chin were segmented according to the region-of-interest formulas; because some photos contain several people, 1450 face images were obtained in total. Second, professional retouchers were invited to label all segmented images with skin-type index grades according to industry experience. Third, the data set was randomly split into 70% (1015) training, 20% (290) validation and 10% (145) test sets, and ResNet152 was used for training and testing. Fourth, for comparison, the different indexes were also computed on the same test set with traditional methods.
② Single-index experimental conclusions
Through the above steps, the four indexes were graded separately, giving the experimental conclusions in the following four tables.
TABLE 3 Skin-tone classification accuracy

Feature            Samples   ResNet (%)   Color-threshold method (%)
Level-one tone        42       95.24            90.48
Level-two tone        36       94.44            86.11
Level-three tone      38       92.11            76.32
Level-four tone       29       86.21            65.52
Total                145       92.41            80.69
As can be seen from table 3, when classifying 145 images of different skin tones, the ResNet method achieves 92.41% accuracy, higher than the 80.69% of the traditional method.
TABLE 4 Gloss classification accuracy

[The per-grade figures of this table were published as an image and are not reproduced here; overall, ResNet reaches 88.28% accuracy versus 70.34% for the maximum between-class-variance method.]
As can be seen from table 4, when classifying 145 images of different gloss conditions, the ResNet method achieves 88.28% accuracy, higher than the 70.34% of the maximum between-class-variance method.
TABLE 5 Wrinkle classification accuracy

Feature               Samples   ResNet (%)   Gray-level co-occurrence matrix method (%)
Level-one wrinkles       35       82.50            82.86
Level-two wrinkles       37       96.88            70.27
Level-three wrinkles     46       91.11            67.39
Level-four wrinkles      27       89.29            74.07
Total                   145       89.66            73.10
As can be seen from table 5, when classifying 145 images of different wrinkle conditions, the ResNet method achieves 89.66% accuracy, higher than the 73.10% of the gray-level co-occurrence matrix method.
TABLE 6 Pore classification accuracy

Feature            Samples   ResNet (%)   Threshold-segmentation method (%)
Level-one pores       47       87.23            80.85
Level-two pores       43       90.70            86.05
Level-three pores     33       93.94            87.88
Level-four pores      22       81.82            77.27
Total                145       88.97            83.45
As can be seen from table 6, when classifying 145 images of different pore conditions, the ResNet method achieves 88.97% accuracy, higher than the 83.45% of the method combining threshold segmentation and morphology.
The four tables show that deep-learning classification of the four skin-type indexes (skin tone, gloss, wrinkles and pores) achieves higher overall accuracy than the traditional methods.
③ Multi-index experimental conclusions
The single-index grading results above demonstrate the effectiveness of the algorithm. A face image was additionally selected for a multi-index demonstration, and index classification was performed on its four regions of interest, giving the results in table 7 below. The skin tone of the original image is level three, neither fair nor dark. The gloss is level three, obvious on the forehead and left cheek. Wrinkles and pores are level three: the skin is rough and not fine enough.
TABLE 7 Comprehensive index classification of the face regions of interest

Region         Skin tone   Gloss     Wrinkles   Pores
Forehead       Level 3     Level 3   Level 2    Level 2
Chin           Level 3     Level 2   Level 2    Level 3
Left cheek     Level 3     Level 3   Level 3    Level 3
Right cheek    Level 3     Level 1   Level 2    Level 2
Overall        Level 3     Level 3   Level 3    Level 3
The conclusions in table 7 agree with the standard labels. Grading the four skin-type indexes into four levels over the four facial regions of interest and building a skin-evaluation model to classify the skin is therefore scientific and reasonable. On this basis, automatic parameter determination for the improved additive-Lee-filtering smoothing method is feasible.
A face image is used for an effect test. The wrinkles and pores of the tested face are all grade three, so the smoothing parameter is set to 2; the original face image and the result after processing by the smoothing operator are shown in Fig. 7. Fig. 7a is the original image, and Fig. 7b is the image after processing.
As can be seen from Fig. 7, the skin-smoothing method based on improved additive Lee filtering provided by the invention smooths the skin while blending the treatment effect evenly across the skin, preserving light-and-dark contrast and skin texture, so the result looks natural.
Finally, the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such modifications should be covered by the claims of the present invention.

Claims (3)

1. A skin-smoothing method based on improved additive Lee filtering, characterized by comprising the following steps:
S1: obtaining an original face image and cropping it to N × M to obtain the image to be processed, with x_ij denoting the pixel value at coordinate (i, j) of the image to be processed;
S2: setting a sliding window of size (2n+1) × (2m+1), and calculating the local mean and the local variance of the pixel values of all coordinate points of the image to be processed within the sliding window;
S3: determining a σ value according to the skin classification grade of the original face image, each skin grade corresponding to one σ value, and calculating the value of k from σ and the local variance obtained in S2 using the following formula:

    k = v_ij / (v_ij + σ²)

where σ denotes the parameter of the additive Lee filter, k denotes the degree of smoothing applied to the original image, and v_ij denotes the local variance of the image to be processed within the sliding window;
S4: additively denoising the pixel value at coordinate (i, j) according to the following formula, and taking the additively denoised pixel value x̂_ij as the new pixel value at coordinate (i, j):

    x̂_ij = m_ij + k · (x_ij − m_ij)

where x̂_ij denotes the additively denoised pixel value at coordinate (i, j), m_ij denotes the local mean within the sliding window, and k is the value calculated in S3;
S5: repeating S2–S4 to traverse all coordinate points in the image to be processed, obtaining the additively denoised pixel value of each coordinate point; taking the additively denoised pixel values of all coordinate points as the pixel values of the smoothed image, thereby obtaining and outputting the filtered image.
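Steps S1–S5 describe a standard additive Lee filter whose smoothing strength is driven by σ. A minimal sketch in Python/NumPy, assuming a single-channel (grayscale) image and edge-replicated padding at the borders (both assumptions, not fixed by the claim):

```python
import numpy as np

def additive_lee_filter(img, sigma, n=1, m=1):
    """Additive Lee filter with a (2n+1) x (2m+1) sliding window.

    sigma (> 0) is the noise parameter; in the patent it is tied to the
    skin grade, and larger sigma smooths more strongly.
    """
    img = np.asarray(img, dtype=np.float64)
    # Pad by edge replication so the window is defined at the borders.
    padded = np.pad(img, ((n, n), (m, m)), mode="edge")
    out = np.empty_like(img)
    height, width = img.shape
    for i in range(height):
        for j in range(width):
            win = padded[i:i + 2 * n + 1, j:j + 2 * m + 1]
            mean = win.mean()                # local mean m_ij
            var = win.var()                  # local variance v_ij
            k = var / (var + sigma ** 2)     # k in [0, 1)
            # Flat areas (small var) pull toward the local mean;
            # high-variance edges keep k near 1 and are preserved.
            out[i, j] = mean + k * (img[i, j] - mean)
    return out
```

With grade assignment 2 (grade-three skin) one would call `additive_lee_filter(img, sigma=2.0)`: larger σ pushes k toward 0 and the output toward the local mean (stronger smoothing), while high-contrast detail keeps k near 1 and survives, which is why the filter smooths skin without flattening edges.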
2. The skin-smoothing method based on improved additive Lee filtering of claim 1, wherein the local mean and local variance of the pixel values of all coordinate points of the image to be processed within the sliding window in S2 are calculated as:

    m_ij = (1 / ((2n+1)(2m+1))) · Σ x_kl
    v_ij = (1 / ((2n+1)(2m+1))) · Σ (x_kl − m_ij)²

where the sums run over all coordinates (k, l) within the sliding window centred at (i, j), x_kl denotes the pixel value at coordinate (k, l) within the sliding window, m_ij denotes the local mean of the pixel values of all coordinate points of the image to be processed within the sliding window, and v_ij denotes the corresponding local variance.
3. The skin-smoothing method based on improved additive Lee filtering of claim 1 or 2, wherein the method for classifying the skin grade of the original face image in S3 is:
S31: defining a plurality of feature points in the original face image and connecting them in sequence to form a polygon; denoting the resulting mask of the complete face region as M_points, the mask of the full-body skin region as M_human, and the mask of the facial skin region as M_face, where
M_face = M_points ∩ M_human (3.2);
S32: four-dimensional classification is carried out on the mask image of the human face skin area according to skin color, oil light, wrinkles and pores, and the four-dimensional classification is as follows:
the skin color grades are divided into four classes of four, three, two and one, and each skin color grade is assigned with 1,2,3 and 0 in sequence;
the oil gloss grades are classified into four-level oil gloss, three-level oil gloss, two-level oil gloss and one-level oil gloss, and each oil gloss grade is sequentially assigned with 1,2,3 and 0;
the wrinkle grades are divided into four grades, three grades, two grades and one grade, and each wrinkle grade is assigned with 1,2,3 and 0 in sequence;
the pore grade is divided into four grades, three grades, two grades and one grade, and each pore grade is sequentially assigned with 1,2,3 and 0;
S33: in the original face image, selecting the forehead, left cheek, right cheek and chin as the four regions of interest; setting, for each region, weights for skin color, gloss, wrinkles and pores; and then calculating the combined grade assignment of the four regions with the following formula, the value of σ being equal to this grade assignment:

    σ = Σ_{γ=1..4} ( w^c_γ · C_γ + w^g_γ · G_γ + w^w_γ · W_γ + w^p_γ · P_γ )

where w^c_γ (γ = 1, 2, 3, 4) denotes the weight of skin color in the forehead, left cheek, right cheek and chin respectively; w^g_γ denotes the corresponding weight of gloss; w^w_γ the weight of wrinkles; w^p_γ the weight of pores; and C_γ, G_γ, W_γ and P_γ denote the grade assignments of the corresponding index in region γ, the weights being normalized so that σ falls on the 0–3 grade-assignment scale.
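The combination step of S33 can be sketched as follows. The exact combination rule is not spelled out in this excerpt, so the sketch assumes a weighted sum of grade assignments per index, averaged over the four indices, with hypothetical uniform weights; the weight values and the averaging are assumptions, not the patent's specified formula:

```python
# Hypothetical sketch of S33: combine per-region grade assignments (S32)
# into one sigma value via region weights. Weight values and the averaging
# over the four indices are assumptions for illustration.
REGIONS = ("forehead", "left_cheek", "right_cheek", "chin")
INDICES = ("skin_color", "gloss", "wrinkle", "pore")

def combined_sigma(grades, weights):
    """grades[index][region]: grade assignment in {0, 1, 2, 3};
    weights[index][region]: region weight for that index, summing to 1
    across the four regions. Returns sigma for the additive Lee filter."""
    per_index = [
        sum(weights[idx][r] * grades[idx][r] for r in REGIONS)
        for idx in INDICES
    ]
    # Average the four weighted index scores and round to the grade scale.
    return round(sum(per_index) / len(INDICES))
```

For the face in Table 7 (every index at grade three, i.e. assignment 2), uniform weights of 0.25 per region yield σ = 2, matching the parameter used for the Fig. 7 test.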
CN202111539879.3A 2021-12-15 2021-12-15 Improved additive lee filtering skin grinding method Active CN114202483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111539879.3A CN114202483B (en) 2021-12-15 2021-12-15 Improved additive lee filtering skin grinding method


Publications (2)

Publication Number Publication Date
CN114202483A true CN114202483A (en) 2022-03-18
CN114202483B CN114202483B (en) 2024-05-14

Family

ID=80654344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111539879.3A Active CN114202483B (en) 2021-12-15 2021-12-15 Improved additive lee filtering skin grinding method

Country Status (1)

Country Link
CN (1) CN114202483B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130229549A1 (en) * 2008-07-30 2013-09-05 DigitalOptics Corporation Europe Limited Automatic Face and Skin Beautification Using Face Detection
CN109712095A (en) * 2018-12-26 2019-05-03 西安工程大学 A kind of method for beautifying faces that rapid edge retains
CN110070502A (en) * 2019-03-25 2019-07-30 成都品果科技有限公司 The method, apparatus and storage medium of facial image mill skin
CN110232670A (en) * 2019-06-19 2019-09-13 重庆大学 A method of the image visual effect enhancing based on low-and high-frequency separation
CN112396573A (en) * 2019-07-30 2021-02-23 纵横在线(广州)网络科技有限公司 Facial skin analysis method and system based on image recognition
CN112784773A (en) * 2021-01-27 2021-05-11 展讯通信(上海)有限公司 Image processing method and device, storage medium and terminal


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Design and Implementation of a Skin-Beautifying Model Based on Skin Segmentation and Skin-Quality Evaluation", China Master's Theses Full-text Database, Information Science and Technology, 15 October 2023 (2023-10-15) *
JONG-SEN LEE: "Digital Image Enhancement and Noise Filtering by Use of Local Statistics", IEEE Transactions on Pattern Analysis and Machine Intelligence, 31 March 1980 (1980-03-31) *
WANG ZHIQIANG; MIAO XIANGYU: "Research and Implementation of Face Beautification Technology Based on Edge-Preserving Filtering and Skin-Color Model", Wireless Internet Technology, no. 17, 3 September 2018 (2018-09-03) *

Also Published As

Publication number Publication date
CN114202483B (en) 2024-05-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant