CN112396573A - Facial skin analysis method and system based on image recognition - Google Patents
- Publication number
- CN112396573A CN112396573A CN201910692391.0A CN201910692391A CN112396573A CN 112396573 A CN112396573 A CN 112396573A CN 201910692391 A CN201910692391 A CN 201910692391A CN 112396573 A CN112396573 A CN 112396573A
- Authority
- CN
- China
- Prior art keywords
- image
- detection
- color
- skin
- space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012 — Image analysis: inspection of images; biomedical image inspection
- G06T7/11 — Segmentation; edge detection: region-based segmentation
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06V40/161 — Human faces: detection; localisation; normalisation
- G06V40/172 — Human faces: classification, e.g. identification
- G06T2207/20081 — Special algorithmic details: training; learning
- G06T2207/30088 — Biomedical image processing: skin; dermal
- G06T2207/30201 — Subject of image: human being; face
Abstract
The invention relates to the technical field of skin detection, in particular to a facial skin analysis method and system based on image recognition. The facial skin analysis method comprises the following steps: S100, remotely acquiring an image to be detected and identifying the geometric outline of the human face; S200, dividing the face feature image into a plurality of detection areas and performing facial skin mechanism analysis on each detection area; S300, carrying out quantitative scoring according to the skin mechanism analysis results. The facial skin analysis system comprises a detection module, a cloud server and a human-computer interaction platform. The human-computer interaction platform transmits the image to be detected to the detection module through the cloud server; the detection module detects and analyzes the image region by region and produces a quantitative score, which is returned to the human-computer interaction platform through the cloud server. The invention remotely obtains the image to be tested online, processes and analyzes it, quantitatively evaluates each of several skin problems, accurately and comprehensively feeds back the health condition of the skin, and provides the user with a skin-care reference or an evaluation of after-makeup effect.
Description
Technical Field
The invention relates to the technical field of skin detection, in particular to a facial skin analysis method and system based on image recognition.
Background
With rising living standards and the rapid development of medical cosmetology, people pay increasing attention to the health of their skin. Caring for skin usually requires a reasonable assessment of the facial skin. In order to recommend cosmetics suited to each person's skin condition, the medical cosmetology field has sought to accurately detect the various skin problems that appear on the face, such as spots, enlarged pores, blackheads and wrinkles.
Facial skin analysis methods in current use fall into bio-impedance analysis and image data analysis. Bio-impedance analysis is mainly used to measure body fat and moisture; it burdens the subject and yields few detection parameters. Image data analysis obtains various skin parameters by photographing and then processing the images, placing very little burden on the subject; it can also exchange information with a cloud server, so a user can run a test at any time from a mobile terminal.
Disclosure of Invention
The invention aims to overcome the defects of existing online facial skin test systems, namely incomplete test parameters and insufficient measurement precision.
In order to achieve the above object, the present invention discloses a facial skin analysis method based on image recognition, comprising the steps of:
S100, remotely acquiring an image to be detected, calling a preset image training set, and carrying out face geometric contour recognition on the image to be detected according to the training images in the image training set to obtain a face feature image;
S200, dividing the face feature image into a plurality of detection areas and carrying out facial skin mechanism analysis on each detection area, comprising the following steps S211 to S215:
S211, carrying out Gabor filtering on the face feature image and converting it into a gray image;
S212, according to the filtering response of the face feature image, binarizing the gray image by thresholding to obtain high-gradient features, then applying eccentricity processing to the high-gradient features to eliminate the spot features and non-curve features mixed in with them;
S213, repeatedly executing step S212, with different threshold parameters on each execution, to obtain a group of binary images;
S214, integrating the output binary images by image dilation and logical OR operations to obtain the wrinkle detection key area;
S215, detecting the adjacent features and crossing features in the wrinkle detection key area and applying geometric constraints to them to obtain the final wrinkle positions and wrinkle count;
S300, carrying out quantitative scoring of the face feature image according to the analysis results of step S200.
Preferably, the step S100 further includes the steps of:
S101, performing feature point calibration on each training image to form 7 regions of interest describing the geometric outline of the human face;
S102, counting the distribution range information of the feature points, and searching the regions of interest of the image to be detected according to that distribution range information.
Preferably, the step S200 further includes the steps of:
S216, establishing a wrinkle evaluation model to score the wrinkle degree of the face feature image; the wrinkle evaluation model is R = β1 + β2·N, where R is the wrinkle degree score, N is the number of wrinkles obtained through steps S211 to S215, and β1 and β2 are evaluation parameters.
Preferably, the step S200 further includes the steps of:
S221, generating a local problem training set, marking local problem features and normal features in its images, and training a local problem classifier for the local problem features, where the local problem features comprise irregular color-block features, dark circular features, black point features and locally red, raised features;
S222, further dividing each detection area into a plurality of detection units, judging with the local problem classifier whether each detection unit contains a local problem feature and of which type, and counting the number of detection units with local problems by an edge detection algorithm.
Preferably, the step S200 further includes the steps of:
S231, converting the face feature image into the HSV color space;
S232, segmenting bright and dark areas in the converted face feature image: acquiring a segmentation threshold for the converted image by the maximum between-class variance method, binarizing the S space and V space of the converted image, and marking the pixels that qualify as bright or dark areas in the S space and V space;
S233, performing a logical operation on the threshold-segmented S space and V space, and calculating the proportion of pixels marked as bright areas relative to the total pixels of the S space and V space.
Preferably, the step S200 further includes the steps of:
S241, converting the face feature image into the L*a*b* color space;
S242, counting the color information of the face feature image in each component space and, after eliminating extreme pixels, calculating the color mean of each component space;
S243, converting the skin color grades of a skin color contrast card into L*a*b* descriptions, calculating the color difference between the color mean and each converted skin color grade, and outputting the grade with the smallest color difference from the color mean as the skin color detection result.
Preferably, the step S200 further includes the steps of:
S251, converting the face feature image into a gray image;
S252, calculating four gray feature values of the co-occurrence matrices of the gray image in the four directions 0°, 45°, 90° and 135°, and averaging each feature value to obtain an average roughness vector describing the average roughness of the face feature image;
the gray feature values comprise angular second moment, entropy, contrast and correlation;
S253, comparing the average roughness vector with a preset roughness grade ruler: calculating the Euclidean distance between the average roughness four-dimensional vector and the four-dimensional vector of each grade on the roughness grade ruler, and outputting the grade with the smallest difference from the average roughness vector as the roughness detection result;
the roughness grade ruler is a set of graded four-dimensional vectors whose dimensions are angular second moment, entropy, contrast and correlation.
The invention also discloses a facial skin analysis system based on image recognition, which comprises a detection module, wherein the detection module comprises:
the first detection module is used for performing binarization processing and geometric constraint on the gray level image converted from the human face feature image by using continuous thresholding according to the Gabor filtering response condition of the human face feature image, determining the position of wrinkles and calculating the quantity of the wrinkles;
the second detection module is used for classifying, screening and counting local skin problems existing in the face characteristic image through a local problem classifier trained by the image training set;
the third detection module is used for describing the face characteristic image through the HSV color space and calculating the pixel point proportion of a bright area detected in the face characteristic image;
the fourth detection module is used for describing the skin color grades of the face feature image and the skin color contrast card in the L*a*b* color space and comparing the color mean of the face feature image against the skin color grades of the contrast card by color difference;
and the fifth detection module is used for describing the human face feature image and the rough grade of the rough grade ruler by a gray level co-occurrence matrix method and comparing the difference value between the average roughness of the human face feature image and the rough grade on the rough grade ruler.
The detection module also comprises a quantitative scoring module which is used for quantitatively scoring each detection result and feeding back the scoring result to the user; the first detection module, the second detection module, the third detection module, the fourth detection module and the fifth detection module respectively carry out quantitative evaluation on respective detection results, and the quantitative scoring module further carries out weighting summation on the quantitative evaluation of each detection result to obtain a final score;
the facial skin analysis system based on image recognition further comprises a cloud server and a human-computer interaction platform, the human-computer interaction platform collects images to be detected in real time and uploads the images to the detection module through the cloud server, and the quantitative scoring module transmits final scores to the human-computer interaction platform through the cloud server.
The invention has the beneficial effects that: the image to be measured is obtained remotely online; through processing and analysis of the image, the number of wrinkles, local skin problems, the oily-skin proportion, skin color and skin roughness are each quantitatively evaluated, the health condition of the skin is fed back accurately and comprehensively, and the user is provided with a skin-care reference or an evaluation of after-makeup effect.
Drawings
FIG. 1: a schematic flow diagram of the facial skin analysis method based on image recognition of the invention.
FIG. 2: a schematic flow diagram of the skin mechanism analysis in the facial skin analysis method based on image recognition of the invention.
FIG. 3: a structural schematic diagram of the facial skin analysis system based on image recognition of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments and the accompanying drawings.
Referring to fig. 1-2, the implementation process of the facial skin analysis method based on image recognition of the present invention includes the following steps:
1. Face feature recognition
S100, obtaining an image to be detected, calling a preset image training set, and carrying out face geometric outline recognition on the image to be detected according to the training images in the image training set to obtain a face characteristic image.
S101, performing feature point calibration on each training image to form 7 interesting regions for describing the geometric outline of the human face;
S102, counting the distribution range information of the feature points, and searching the regions of interest of the image to be detected according to that distribution range information.
In steps S101 to S102, recognition of the geometric contour of the face to obtain the face feature image is implemented by a face feature locator based on the active shape model (ASM) algorithm.
The feature point calibration is performed after the human face features in the training images are identified manually: 67 feature points are calibrated in each training image, forming the 7 regions of interest of the forehead, eyes, nose, lips, chin, left cheek and right cheek. The face feature locator records the feature points of each training image and derives the feature-point distribution of each training image and the acceptable offset range of each feature point. It then searches the regions of interest by determining the number and placement of feature points in the image to be detected, judging whether that image contains complete face information. If it does, the image to be detected is called a face feature image and step S200 is executed; if not, the system feeds back to the user that the image to be detected does not contain complete face information.
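The distribution-range check described above can be sketched in NumPy. This is an illustrative reconstruction, not the patent's ASM implementation: the 3-standard-deviation tolerance, the toy landmark template and the function names are all assumptions.

```python
import numpy as np

def landmark_ranges(train_pts):
    """From training landmark sets of shape (n_images, 67, 2), record each
    point's mean position and an acceptable offset range (3 std devs here)."""
    mean = train_pts.mean(axis=0)             # (67, 2)
    tol = 3.0 * train_pts.std(axis=0) + 1e-6  # (67, 2)
    return mean, tol

def is_complete_face(pts, mean, tol):
    """An image 'contains complete face information' if all 67 detected
    points fall within the learned acceptable offset range."""
    return bool(np.all(np.abs(pts - mean) <= tol))

# toy data: 50 training faces jittered around a template of 67 points
rng = np.random.default_rng(0)
template = rng.uniform(0, 100, size=(67, 2))
train = template + rng.normal(0, 1.0, size=(50, 67, 2))
mean, tol = landmark_ranges(train)

ok = is_complete_face(template, mean, tol)          # near the mean: accepted
bad = is_complete_face(template + 50.0, mean, tol)  # shifted far off: rejected
```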
2. Analysis of skin mechanisms
S200, dividing the face feature image into a plurality of detection areas, and respectively carrying out face skin mechanism analysis on each detection area.
2.1 Wrinkle detection
S211, carrying out Gabor filtering on the face feature image and converting it into a gray image;
S212, according to the filtering response of the face feature image, binarizing the gray image by thresholding to obtain high-gradient features, then applying eccentricity processing to the high-gradient features to eliminate the spot features and non-curve features mixed in with them;
S213, repeatedly executing step S212, with different threshold parameters on each execution, to obtain a group of binary images;
S214, integrating the output binary images by image dilation and logical OR operations to obtain the wrinkle detection key area;
S215, detecting the adjacent features and crossing features in the wrinkle detection key area and applying geometric constraints to each to obtain the final wrinkle positions and wrinkle count;
S216, establishing a wrinkle evaluation model to score the wrinkle degree of the face feature image; the wrinkle evaluation model is R = β1 + β2·N, where R is the wrinkle degree score, N is the number of wrinkles obtained through steps S211 to S215, and β1 and β2 are evaluation parameters.
Steps S211 to S215 rely on several properties of wrinkles in face images: (1) wrinkles are clearly distinguished from the surrounding smooth skin, so the image gradient is higher there; (2) wrinkles are generally curvilinear rather than spots or patches; (3) wrinkles are generally continuous curves, not series of broken curve segments; (4) wrinkles only rarely cross one another densely within a small area.
In steps S211 to S212, the Gabor filter response is used as the image feature for extracting the curve features of wrinkles. The Gabor transform is a windowed Fourier transform that can extract features at different orientations and scales in the frequency domain, so it can be regarded as a line detector with adjustable orientation and scale. Since wrinkles can be treated as "narrow-band signals" along a particular direction, the Gabor filter bank of this embodiment comprises 24 filters covering six orientations (0°, 45°, 60°, 90°, 120°, 135°) and four frequency bands (0.05 to 0.4). Each Gabor filter passes only the texture matching its frequency and orientation; applying a specific threshold to the amplitude response of each filter binarizes the image into background information (pixel value 0) and line information (pixel value 1).
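A single filter of such a bank, applied to a synthetic vertical "wrinkle", can be sketched as follows. The kernel size, sigma and the 0.5·max threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

def gabor_kernel(theta, freq, sigma=2.0, size=9):
    """Real (cosine-carrier) Gabor kernel; theta in radians, freq in cycles/pixel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def filter_response(img, kern):
    """Plain valid-region correlation (no padding); fine for small images."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

# synthetic wrinkle: a vertical line; a 0-degree filter (carrier varying
# along x) responds strongly to vertical structure
img = np.zeros((33, 33))
img[:, 16] = 1.0
resp = np.abs(filter_response(img, gabor_kernel(theta=0.0, freq=0.1)))
binary = resp > 0.5 * resp.max()   # thresholding the amplitude response
```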
The high-gradient features are obtained as follows: extract the connected regions of the binarized output image and calculate the eccentricity of each. When the eccentricity is 0, the connected region is a perfect circle; as the eccentricity approaches 1, the connected region approaches a straight line. In this embodiment, connected regions with eccentricity less than 0.98 are judged to be spot features or non-curve features and are removed. Repeating the above steps over a series of M successive thresholds (n = 1, …, M) yields a group of eccentricity-processed binary images {i1, i2, i3, …, iM}.
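The eccentricity test can be sketched from a region's second moments. Only the 0.98 cutoff comes from the text; the disk and segment toy regions are assumptions.

```python
import numpy as np

def region_eccentricity(coords):
    """Eccentricity of a pixel region from its second moments:
    0 for a perfect circle, approaching 1 as the region degenerates to a line."""
    c = coords - coords.mean(axis=0)
    cov = c.T @ c / len(c)
    lam = np.sort(np.linalg.eigvalsh(cov))   # lam[0] <= lam[1]
    return float(np.sqrt(1.0 - lam[0] / (lam[1] + 1e-12)))

# filled disk of radius 6 -> eccentricity near 0 (kept out as a spot feature);
# one-pixel-thick segment -> eccentricity 1 (kept as a curve feature)
yy, xx = np.mgrid[-6:7, -6:7]
disk = np.argwhere(yy**2 + xx**2 <= 36)
line = np.stack([np.zeros(20), np.arange(20)], axis=1)

ecc_disk = region_eccentricity(disk)
ecc_line = region_eccentricity(line)
```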
Step S214 merges the images using image dilation and logical OR operations. Specifically, we first merge i1 and i2 to obtain an image I{1,2}; the output image I{1,2} is then merged with i3 to obtain I{1,2,3}, and so on, until the output image is merged with the last image iM to give the final image I{1,2,…,M}. The line areas in the final image constitute the wrinkle detection key area.
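The dilation-and-OR folding might look like the following sketch; the 3×3 structuring element and dilating each image before the OR are assumptions about details the text leaves open.

```python
import numpy as np

def dilate3x3(b):
    """Binary dilation with a 3x3 structuring element (a max filter)."""
    p = np.pad(b, 1)
    out = np.zeros_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def merge_binary_stack(images):
    """Fold the per-threshold binary images together with dilation + OR:
    I_{1,2} = dilate(i1) | dilate(i2), then fold in i3, ..., iM."""
    acc = dilate3x3(images[0])
    for im in images[1:]:
        acc = acc | dilate3x3(im)
    return acc

i1 = np.zeros((7, 7), dtype=bool); i1[3, 1] = True
i2 = np.zeros((7, 7), dtype=bool); i2[3, 5] = True
key_area = merge_binary_stack([i1, i2])   # union of two dilated responses
```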
The wrinkle evaluation model is established on the principle of simple (unary) linear regression: 10000 survey images were distributed via an online questionnaire, and each interviewee subjectively scored a number of images (full score 10; the higher the score, the younger the face appears), the survey images being face images of different sexes, ages and degrees of facial aging. After obtaining subjective scores R for the 10000 survey images, wrinkle detection is run on each image to obtain its wrinkle count N; substituting the 10000 pairs of scores R and counts N into the wrinkle evaluation model R = β1 + β2·N determines the values of β1 and β2.
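Fitting β1 and β2 from (N, R) pairs is ordinary least squares; the sketch below uses made-up stand-in data rather than the patent's 10000-image survey.

```python
import numpy as np

# hypothetical (wrinkle count, subjective score) pairs; the survey data in
# the patent would take their place
N = np.array([2, 5, 9, 14, 20, 31], dtype=float)
R = 9.5 - 0.25 * N   # scores lying exactly on a line, for illustration

# least-squares fit of R = beta1 + beta2 * N
A = np.stack([np.ones_like(N), N], axis=1)   # design matrix [1, N]
(beta1, beta2), *_ = np.linalg.lstsq(A, R, rcond=None)

predict = lambda n: beta1 + beta2 * n        # wrinkle degree score for count n
```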
2.2 Local skin problem detection (including spots, enlarged pores, blackheads and inflamed acne)
S221, generating a local problem training set, marking local problem features and normal features in its images, and training a local problem classifier for the local problem features, where the local problem features comprise irregular color-block features, dark circular features, black point features and locally red, raised features;
S222, further dividing each detection area into a plurality of detection units, judging with the local problem classifier whether each detection unit contains a local problem feature and of which type, and counting the number of detection units with local problems by an edge detection algorithm.
Steps S221 to S222 rest on the fact that spots, enlarged pores, blackheads and inflamed acne are all local blemishes distinct from normal smooth skin. Spots (including acne marks) are irregular color-block regions on the facial epidermis whose color differs from that of the surface of normal skin. A pore is the circular surface opening of a sweat gland duct; because of shadowing, an enlarged pore looks darker than the surrounding skin and is recognized as a dark circular shape. A blackhead appears as a black point in a significantly enlarged pore with a blackened tip. Inflamed acne is mainly characterized by a local skin bulge and redness with a clear circular boundary.
The principle of local problem feature recognition is similar to that of face feature recognition. In practical application, local problem features and normal features are marked separately so that the local problem classifier can recognize local problems more accurately. Because the area of a local problem is small, each detection area is further divided, before detection, into detection units of about 4 mm × 4 mm; the local problem classifier is applied to each detection unit to recognize the various local problems, and the number of detection units with local problems is counted through an edge detection algorithm.
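The unit-by-unit counting can be sketched as below. The 8-pixel unit size and the dark-spot stand-in rule (with its threshold) are assumptions; a trained local-problem classifier would take the rule's place.

```python
import numpy as np

def count_problem_units(region, classify, unit=8):
    """Tile a detection region into unit x unit patches (standing in for the
    ~4 mm x 4 mm detection units) and count those the classifier flags."""
    h, w = region.shape
    flagged = 0
    for i in range(0, h - unit + 1, unit):
        for j in range(0, w - unit + 1, unit):
            if classify(region[i:i + unit, j:j + unit]):
                flagged += 1
    return flagged

# stand-in classifier: a dark-spot rule (mean intensity well below skin tone)
dark_spot = lambda patch: patch.mean() < 100

skin = np.full((32, 32), 180, dtype=np.uint8)   # smooth skin background
skin[0:8, 0:8] = 30                             # planted blemish unit
skin[16:24, 8:16] = 30                          # second blemish unit
n = count_problem_units(skin, dark_spot, unit=8)
```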
2.3 Skin type detection (oily, dry and neutral)
S231, converting the face feature image into the HSV color space;
S232, segmenting bright and dark areas in the converted face feature image: acquiring a segmentation threshold for the converted image by the maximum between-class variance method, binarizing the S space and V space of the converted image, and marking the pixels that qualify as bright or dark areas in the S space and V space;
S233, performing a logical operation on the threshold-segmented S space and V space, and calculating the proportion of pixels marked as bright areas relative to the total pixels of the S space and V space.
Human skin can be divided into three types: oily, neutral and dry. The three types differ in how glossy the face appears in an image, oily skin being the brightest; steps S231 to S233 evaluate the proportion of oily area by acquiring the brightness distribution within the face.
It should be noted that the face feature image is converted into the HSV color space because hue (H), saturation (S) and brightness (V) are stored there as independent components; a pixel whose saturation (S) is close to 0 and whose brightness (V) is close to 1 can be understood as belonging to a bright area of the image, and otherwise to a dark area. Thus, when dividing bright and dark areas after binarization, the criteria are: in the S space, pixels with value 0 are bright areas and pixels with value 1 are dark areas; in the V space, pixels with value 1 are bright areas and pixels with value 0 are dark areas.
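The maximum between-class variance (Otsu) thresholding and bright-pixel ratio of steps S232 to S233 can be sketched on a single channel; the synthetic V data below are an assumption for illustration.

```python
import numpy as np

def otsu_threshold(channel):
    """Maximum between-class variance (Otsu) threshold for a uint8 channel."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability
    mu = np.cumsum(p * np.arange(256))     # class-0 mean mass
    mu_t = mu[-1]
    between = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega) + 1e-12)
    return int(np.argmax(between))

# synthetic V channel: 60% dull skin pixels, 40% shiny highlight pixels
v = np.concatenate([np.full(600, 60, dtype=np.uint8),
                    np.full(400, 220, dtype=np.uint8)])
t = otsu_threshold(v)
bright_ratio = float((v > t).mean())   # proportion of bright-area pixels
```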
2.4 skin color detection
S241, converting the face feature image into the L*a*b* color space;
S242, counting the color information of the face feature image in each component space, and calculating the color mean value of each component space after eliminating extreme pixels;
S243, converting the skin color grades of the skin color comparison card into L*a*b* color space descriptions, calculating the color difference between the color mean value and each converted skin color grade, and outputting the skin color grade with the minimum color difference from the color mean value as the skin color detection result.
It should be noted that, unlike device-dependent color models such as RGB, the L*a*b* color space models the output of human visual perception; it better reflects the psychophysical perception of skin color and provides an effective, perception-based measure of color distance.
In practical applications, the L*a*b* color model describes color with three independent values. L represents lightness from black (0) to white (100); the larger the L value, the whiter the skin. a represents the position of the color on the red-green axis, with negative values toward green and positive values toward red; the larger the a value, the redder the skin. b represents the position of the color on the blue-yellow axis, with negative values toward blue and positive values toward yellow; the larger the b value, the yellower the skin.
The RGB color values of the six skin color grades on the skin color comparison card are respectively as follows:
translucent white (249, 229, 217); fair (242, 213, 195); natural (239, 194, 167);
wheat (193, 155, 136); dull (153, 113, 95); dark (104, 74, 66).
The information of each skin color grade is converted into an L*a*b* description and compared in turn with the skin color mean value for color difference; the color grade with the minimum color difference is selected as the output.
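The specific color-difference formula is not reproduced in the text, so the sketch below assumes the standard CIE76 formula, ΔE = √((ΔL)² + (Δa)² + (Δb)²), together with a conventional sRGB (D65) to L*a*b* conversion; the "natural" grade is taken as RGB (239, 194, 167):

```python
import math

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to L*a*b* (D65 white point, CIE standard formulas)."""
    def lin(c):  # inverse sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ, scaled by the D65 reference white
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047
    y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl) / 1.00000
    z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) / 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

GRADES = {  # RGB values from the skin color comparison card
    "translucent white": (249, 229, 217), "fair": (242, 213, 195),
    "natural": (239, 194, 167), "wheat": (193, 155, 136),
    "dull": (153, 113, 95), "dark": (104, 74, 66),
}

def nearest_grade(mean_rgb):
    """Return the grade whose L*a*b* value has minimum CIE76 distance."""
    lab = srgb_to_lab(*mean_rgb)
    return min(GRADES, key=lambda g: math.dist(lab, srgb_to_lab(*GRADES[g])))
```

`mean_rgb` would be the extreme-pixel-trimmed color mean of step S242.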
2.5 roughness measurement
S251, converting the face feature image into a gray image;
S252, calculating four gray-level feature values of the co-occurrence matrix of the gray image in each of the four directions 0°, 45°, 90° and 135°, and averaging each feature value over the four directions to obtain an average roughness vector describing the average roughness of the face feature image; the gray-level feature values comprise angular second moment, entropy, contrast and correlation;
S253, comparing the average roughness vector with a preset roughness grade ruler by calculating the Euclidean distance between the average roughness four-dimensional vector and the four-dimensional vector of each grade of the roughness grade ruler, and outputting the roughness grade with the minimum difference from the average roughness vector as the roughness detection result.
The rough grade ruler is a plurality of graded four-dimensional vectors, and the dimensions of the four-dimensional vectors are angle second moment, entropy, contrast and correlation respectively.
Skin roughness detection is based on analysis of image texture features; this embodiment adopts the gray-level co-occurrence matrix method from texture analysis to quantitatively analyze the roughness of the skin.
It should be noted that the gray-level co-occurrence matrix describes the gray-level correlation of neighboring pixels along one angular direction in the image, reflecting comprehensive information about the gray-level distribution with respect to local neighborhood, direction and variation range. In steps S251 to S253, the four features of the gray-level co-occurrence matrix are the angular second moment, entropy, contrast and correlation. The angular second moment reflects the uniformity of the gray-level distribution; the smaller its value, the rougher the skin. Entropy reflects the randomness of the gray-level distribution; the larger its value, the denser the texture and the rougher the skin. Contrast measures the magnitude of local gray-level variation; the larger its value, the deeper and denser the texture and the rougher the skin. Correlation measures the similarity of local gray levels; the larger its value, the coarser the texture and the rougher the skin.
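The four co-occurrence features can be computed as follows for a single direction (a minimal pure-Python sketch with illustrative gray levels; the directional averaging of step S252 is omitted):

```python
import math
from collections import Counter

def glcm_features(img, dx=1, dy=0):
    """Angular second moment, entropy, contrast and correlation of the
    normalized gray-level co-occurrence matrix for one pixel offset."""
    h, w = len(img), len(img[0])
    pairs = Counter()
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                pairs[(img[y][x], img[y2][x2])] += 1
    total = sum(pairs.values())
    p = {ij: c / total for ij, c in pairs.items()}
    asm = sum(v * v for v in p.values())                  # uniformity
    entropy = -sum(v * math.log2(v) for v in p.values())  # randomness
    contrast = sum(v * (i - j) ** 2 for (i, j), v in p.items())
    mu_i = sum(i * v for (i, j), v in p.items())
    mu_j = sum(j * v for (i, j), v in p.items())
    var_i = sum((i - mu_i) ** 2 * v for (i, j), v in p.items())
    var_j = sum((j - mu_j) ** 2 * v for (i, j), v in p.items())
    if var_i == 0 or var_j == 0:  # flat region: correlation is degenerate
        corr = 1.0
    else:
        corr = sum((i - mu_i) * (j - mu_j) * v
                   for (i, j), v in p.items()) / math.sqrt(var_i * var_j)
    return asm, entropy, contrast, corr
```

A perfectly uniform patch gives angular second moment 1 with zero entropy and contrast, while a fine checkerboard gives the opposite extreme, consistent with the rougher-skin readings described above.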
The coarse scale is constructed as follows:
10,000 frontal 2D face images of different sexes and ages are collected, each face image is divided into a plurality of roughness detection areas, and the average roughness vector of each roughness detection area is calculated according to steps S251 to S252. The values of each dimension are quantified into scores to generate the score distribution of the average roughness vector in each dimension (angular second moment, entropy, contrast and correlation). For the angular second moment distribution, the 10th, 30th, 50th, 70th and 90th percentiles are taken as the grade-one to grade-five angular second moment scale; the lower the grade, the rougher the skin. For the entropy distribution, the 90th, 70th, 50th, 30th and 10th percentiles are taken as the grade-one to grade-five entropy scale; the lower the grade, the rougher the skin. For the contrast distribution, the 90th, 70th, 50th, 30th and 10th percentiles are taken as the grade-one to grade-five contrast scale; the lower the grade, the rougher the skin. For the correlation distribution, the 90th, 70th, 50th, 30th and 10th percentiles are taken as the grade-one to grade-five correlation scale; the lower the grade, the rougher the skin. The scores of the four dimensions on each grade scale are combined to generate the four-dimensional data of each grade, forming the roughness grade ruler based on four-dimensional measurement.
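The percentile-based construction of the roughness grade ruler can be sketched as follows (how the patent interpolates percentiles is unspecified; the inclusive method of Python's `statistics.quantiles` is assumed here):

```python
from statistics import quantiles

def build_scale(samples, descending):
    """Five-grade scale from the 10/30/50/70/90th percentiles of a feature's
    distribution. descending=True orders grade one..five from high to low
    percentile (entropy, contrast, correlation); False is the reverse (ASM)."""
    deciles = quantiles(samples, n=10, method="inclusive")  # 10th..90th pcts
    marks = [deciles[0], deciles[2], deciles[4], deciles[6], deciles[8]]
    return list(reversed(marks)) if descending else marks

def roughness_ruler(asm_s, ent_s, con_s, cor_s):
    """Combine the per-dimension grade marks into five 4-D grade vectors."""
    cols = [build_scale(asm_s, False), build_scale(ent_s, True),
            build_scale(con_s, True), build_scale(cor_s, True)]
    return [tuple(c[g] for c in cols) for g in range(5)]
```

Each of the five returned tuples is one graded four-dimensional vector of the ruler; an observed average roughness vector is then matched to the nearest grade by Euclidean distance as in step S253.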
3. Comprehensive quantitative scoring
S300, quantitatively scoring the face feature image according to the analysis results of step S200.
The quantitative scoring criteria were as follows:
Wrinkle condition score a: the score R of the wrinkle evaluation model is taken;
Local skin problem scores (spot condition b, enlarged pore condition c, blackhead condition d, and red acne condition e): calculated from the proportion of detection units in which the local skin problem falls outside the normal range; for example, the spot condition score b = (1 - number of spot analysis units / total number of analysis units) × 100;
Gloss condition score f: the proportion of pixel points in bright areas multiplied by 100;
Skin color condition score g: the 6 skin color grades are assigned quantitative scores, and the score of the output color grade is selected;
Roughness condition score h: the Euclidean distance between the roughness four-dimensional vector and the four-dimensional vector of each grade of the roughness grade ruler is calculated, and the score of the closest roughness grade is output;
the upper limit of the scores a to h is 100, the comprehensive score is Y, and the comprehensive score Y is the result of weighted summation of the scores a to h (the weighted values are X respectively)1~X8) Namely:
composite score Y ═ a × X1+b*X2+c*X3+d*X4+e*X5+f*X6+g*X7+h*X8。
The facial skin analysis method based on image recognition in the embodiment of the present invention is described above. The facial skin analysis system based on image recognition in the embodiment of the present invention is described below.
Please refer to fig. 3, which is a schematic structural diagram of a facial skin analysis system based on image recognition according to an embodiment of the present invention. The facial skin analysis system based on image recognition in the present embodiment can be used for the facial skin analysis method based on image recognition described above.
The facial skin analysis system based on image recognition comprises a detection module 300, a cloud server 200 and a man-machine interaction platform 100, wherein the detection module 300 is used for detecting various parameters, and the detection module 300 comprises:
the first detection module 301 is configured to perform binarization processing and geometric constraint on a gray level image converted from a face feature image through continuous thresholding according to a Gabor filtering response condition of the face feature image, determine a wrinkle position, and calculate a wrinkle number;
a second detection module 302, configured to sort, screen and count local skin problems existing in the face feature image through a local problem classifier trained by the image training set;
the third detection module 303 is configured to describe the face feature image through the HSV color space, and calculate a pixel ratio of a bright region detected in the face feature image;
a fourth detection module 304, configured to describe the skin color grades of the face feature image and the skin color comparison card through the L*a*b* color space, and compare the color difference between the color mean of the face feature image and the skin color grades on the skin color comparison card;
a fifth detection module 305, configured to describe the facial feature image and the roughness level of the roughness level ruler by using a gray level co-occurrence matrix method, and compare the difference between the average roughness of the facial feature image and the roughness level of the roughness level ruler;
the quantitative scoring module 306 is used for quantitatively scoring each detection result and feeding back the result to the user; the first detection module 301, the second detection module 302, the third detection module 303, the fourth detection module 304 and the fifth detection module 305 respectively perform quantitative evaluation on respective detection results.
The human-computer interaction platform 100 transmits the image to be detected of the user to the detection module 300 at the background through the cloud server 200, the quantitative scoring module 306 further weights and sums the quantitative scores of all the detection results to obtain a final score, and the final score is transmitted to the human-computer interaction platform 100 through the cloud server 200.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, the division into detection modules in the present invention is only a division of logical functions; other division manners are possible in actual implementation. For example, all detection modules may be integrated into one physical unit, each detection module may exist physically on its own, or two or more detection modules may be integrated together; an integrated detection module may be implemented in the form of hardware or in the form of a software functional unit.
Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that various changes, modifications and substitutions can be made without departing from the spirit and scope of the invention as defined by the appended claims. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (9)
1. A facial skin analysis method based on image recognition is characterized by comprising the following steps:
S100, remotely acquiring an image to be detected, calling a preset image training set, and carrying out face geometric contour recognition on the image to be detected according to the training images in the image training set to obtain a face feature image;
S200, dividing the face feature image into a plurality of detection areas, and carrying out facial skin mechanism analysis on each detection area respectively, comprising the following steps S211 to S215:
S211, carrying out Gabor filtering processing on the face feature image and converting the face feature image into a gray image;
S212, according to the filtering response condition of the face feature image, carrying out binarization processing on the gray image through thresholding to obtain high-gradient features, and then carrying out eccentricity processing on the high-gradient features to eliminate spot features and non-curve features mixed in the high-gradient features;
S213, repeatedly executing step S212 with different threshold parameters at each execution, to obtain a group of binary images;
S214, integrating the output binary images by using an image dilation method and a logical OR operation to obtain a wrinkle detection key area;
S215, detecting adjacent features and cross features in the wrinkle detection key area, and performing geometric constraint on the adjacent features and the cross features respectively to obtain the final wrinkle positions and wrinkle number;
and S300, carrying out quantitative scoring on the face characteristic image according to the analysis result in the step S200.
2. The facial skin analysis method based on image recognition as claimed in claim 1, wherein the step S100 further comprises the steps of:
S101, performing feature point calibration on each training image to form 7 regions of interest describing the geometric contour of the human face;
S102, counting the distribution range information of the feature points, and searching the regions of interest of the image to be detected according to the distribution range information of the feature points.
3. The facial skin analysis method based on image recognition as claimed in claim 1, wherein the step S200 further comprises the steps of:
S216, establishing a wrinkle evaluation model to evaluate the wrinkle degree score of the face feature image; the wrinkle evaluation model is R = β1 + β2·N, where R represents the wrinkle degree score, N represents the number of wrinkles obtained through the above steps, and β1 and β2 are evaluation parameters.
4. The facial skin analysis method based on image recognition as claimed in claim 1, wherein the step S200 further comprises the steps of:
S221, generating a local problem training set, carrying out local problem feature marking and normal feature marking on the images of the local problem training set, and training a local problem classifier for the local problem features, wherein the local problem features comprise irregular color block features, dark circular features, black point features and locally red raised features;
S222, further dividing each detection area into a plurality of detection units, judging with the local problem classifier whether a local problem feature exists in each detection unit and of which type, and counting the number of detection units with a local problem by using an edge detection algorithm.
5. The facial skin analysis method based on image recognition as claimed in claim 1, wherein the step S200 further comprises the steps of:
S231, converting the face feature image into the HSV color space;
S232, segmenting bright areas and dark areas in the converted face feature image: acquiring a segmentation threshold of the converted face feature image by using the maximum inter-class variance method, performing binarization processing on the S space and V space of the converted face feature image, and marking the pixel points that belong to a bright area or a dark area in the S space and the V space;
S233, performing a logical operation on the threshold-segmented S space and V space, and calculating the proportion of pixels marked as bright areas to the total pixels in the S space and the V space.
6. The facial skin analysis method based on image recognition as claimed in claim 1, wherein the step S200 further comprises the steps of:
S241, converting the face feature image into the L*a*b* color space;
S242, counting the color information of the face feature image in each component space, and calculating the color mean value of each component space after eliminating extreme pixels;
S243, converting the skin color grades of the skin color comparison card into L*a*b* color space descriptions, calculating the color difference between the color mean value and each converted skin color grade, and outputting the skin color grade with the minimum color difference from the color mean value as the skin color detection result.
7. The facial skin analysis method based on image recognition as claimed in claim 1, wherein the step S200 further comprises the steps of:
S251, converting the face feature image into a gray image;
S252, calculating four gray-level feature values of the co-occurrence matrix of the gray image in each of the four directions 0°, 45°, 90° and 135°, and averaging each feature value over the four directions to obtain an average roughness vector describing the average roughness of the face feature image; the gray-level feature values comprise angular second moment, entropy, contrast and correlation;
S253, comparing the average roughness vector with a preset roughness grade ruler by calculating the Euclidean distance between the average roughness four-dimensional vector and the four-dimensional vector of each grade of the roughness grade ruler, and outputting the roughness grade with the minimum difference from the average roughness vector as the roughness detection result;
the rough grade ruler is a plurality of graded four-dimensional vectors, and the dimensions of the four-dimensional vectors are angle second moment, entropy, contrast and correlation respectively.
8. A facial skin analysis system based on image recognition, comprising a detection module, characterized in that the detection module comprises:
the first detection module is used for performing binarization processing and geometric constraint on the gray level image converted from the human face feature image by using continuous thresholding according to the Gabor filtering response condition of the human face feature image, determining the position of wrinkles and calculating the quantity of the wrinkles;
the second detection module is used for classifying, screening and counting local skin problems existing in the face characteristic image through a local problem classifier trained by the image training set;
the third detection module is used for describing the face characteristic image through the HSV color space and calculating the pixel point proportion of a bright area detected in the face characteristic image;
the fourth detection module is used for describing the skin color grades of the face feature image and the skin color comparison card through the L*a*b* color space, and comparing the color difference between the color mean value of the face feature image and the skin color grades on the skin color comparison card;
and the fifth detection module is used for describing the human face feature image and the rough grade of the rough grade ruler by a gray level co-occurrence matrix method and comparing the difference value between the average roughness of the human face feature image and the rough grade on the rough grade ruler.
9. The image recognition based facial skin analysis system of claim 8,
the detection module also comprises a quantitative scoring module which is used for quantitatively scoring each detection result and feeding back the scoring result to the user; the first detection module, the second detection module, the third detection module, the fourth detection module and the fifth detection module respectively carry out quantitative evaluation on respective detection results, and the quantitative scoring module further carries out weighting summation on the quantitative evaluation of each detection result to obtain a final score;
the facial skin analysis system based on image recognition further comprises a cloud server and a human-computer interaction platform, the human-computer interaction platform collects images to be detected in real time and uploads the images to the detection module through the cloud server, and the quantitative scoring module sends final scores to the human-computer interaction platform through the cloud server.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910692391.0A CN112396573A (en) | 2019-07-30 | 2019-07-30 | Facial skin analysis method and system based on image recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112396573A true CN112396573A (en) | 2021-02-23 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||